NVIDIA GeForce GTX 680M: Kepler GK104 Goes Mobile
by Jarred Walton on June 4, 2012 9:12 PM EST

Origin PC spoiled the GTX 680M launch party a bit with the announcement of their new EON15-S and EON17-S notebooks this morning, but NVIDIA asked us to avoid discussing the particulars of the new mobile GPU juggernaut until the official NDA time. As you’ve probably guessed, that time is now (or 6PM PDT June 4, 2012 if you’re reading this later). NVIDIA also shared some information on upcoming Ultrabooks, which we’ll get to at the end.
NVIDIA has had their fair share of success with Kepler so far, and the GTX 680 desktop cards continue to sell out. Newegg, for example, currently lists 18 GTX 680 cards, but only one is in stock: the EVGA GTX 680 FTW, which comes with a decent overclock and a starting price $70 higher than the standard GTX 680. On the laptop side, we’ve already had a couple of Kepler-based GK107 laptops in for review, and graphics performance has shown a large improvement relative to the previous midrange Fermi cards.
On the high-end notebook front, so far the only Kepler GPU has been a higher clocked GK107, the GTX 660M, but increasing the core clocks will only take you so far. NVIDIA has continued to sell their previous generation GTX 570M and 580M as the GTX 670M and 675M (with a slight increase in core clocks), but clearly there was a hole at the top just waiting for the GTX 680M, and it’s now time to plug it. Below is a rundown of three of NVIDIA’s fastest mobile GPUs to help put the GTX 680M in perspective.
NVIDIA High-End Mobile GPU Specifications

| | GeForce GTX 680M | GeForce GTX 675M | GeForce GTX 660M |
|---|---|---|---|
| GPU and Process | 28nm GK104 | 40nm GF114 | 28nm GK107 |
| CUDA Cores | 1344 | 384 | Up to 384 |
| GPU Clock | 720MHz | 620MHz | 835MHz |
| Shader Clock | - | 1240MHz | - |
| Memory Eff. Clock | 3.6GHz | 3GHz | 4GHz |
| Memory Bus | 256-bit | 256-bit | 128-bit |
| Memory Bandwidth | 115.2GB/s | 96GB/s | 64GB/s |
| Memory | Up to 4GB GDDR5 | Up to 2GB GDDR5 | Up to 2GB GDDR5 |
Just running the raw numbers here, the GTX 680M has up to 20% more memory bandwidth than GTX 675M/580M, thanks to the improved memory controller and higher RAM clocks available with Kepler. The bigger improvement however comes in the computational area: even factoring in the double-speed shader clocks, GTX 680M has potentially 103% more shader performance than its predecessor. NVIDIA gives an estimated performance improvement of up to 80% over GTX 580M, which is a huge jump in generational performance. And while Fermi on the desktop still offers potentially better performance in several compute workloads, there’s a reasonable chance that the gap won’t be quite as large on notebooks—not to mention compute generally isn’t as big of a factor for most notebook users. (And for those that need notebooks with more compute performance, there’s always the Quadro 5010M—likely to be supplemented by a new Quadro in the near future.)
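Both of those figures fall straight out of the spec table. Here’s a quick sketch of the arithmetic in C++ (values hard-coded from the table above; the doubled “hot clock” applies to Fermi’s shaders, while Kepler’s CUDA cores run at the core clock):

```cpp
#include <cstdio>

// Theoretical memory bandwidth: bus width (bits) / 8 * effective clock (GT/s)
double bandwidth_gbs(int bus_bits, double eff_clock_ghz) {
    return bus_bits / 8.0 * eff_clock_ghz;
}

// Peak shader throughput in GFLOPS: cores * shader clock (GHz) * 2 ops (FMA)
double gflops(int cores, double shader_clock_ghz) {
    return cores * shader_clock_ghz * 2.0;
}

int main() {
    double bw680m = bandwidth_gbs(256, 3.6);  // 115.2 GB/s
    double bw675m = bandwidth_gbs(256, 3.0);  // 96 GB/s
    double fl680m = gflops(1344, 0.720);      // Kepler: shaders at core clock
    double fl675m = gflops(384, 1.240);       // Fermi: shaders at 2x core clock
    printf("Bandwidth: +%.0f%%\n", (bw680m / bw675m - 1) * 100); // +20%
    printf("Shaders:   +%.0f%%\n", (fl680m / fl675m - 1) * 100); // +103%
    return 0;
}
```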
Unfortunately, we’ll have to wait a bit longer to do our own in-house investigation of GeForce GTX 680M performance, as we don’t have any hardware in hand. NVIDIA did provide some performance benchmarks with a variety of games, though, and we’re going to pass along that information in the interim. As always, take such information with a grain of salt, as NVIDIA may be picking games/settings that are particularly well suited to the GTX 680M, but for many of the titles there’s a canned benchmark that should allow for “fair” comparisons.
Assuming the above chart uses the built-in benchmarks in the games that support it, we do have a few points of comparison with the Alienware M18x in GTX 580M and HD 6990M configurations. We’ll skip those, however, as the only game where we appear to run at identical settings is DiRT 3 (43.8FPS if you’re wondering). Luckily, NVIDIA has included similar performance tables in previous launches, so we do have some overlap with their GTX 580M information. First, here’s their full benchmarking page from the 580M launch, and then we’ll summarize the points of comparison.
Tentative Gaming Performance Comparison (Using NVIDIA GTX 580M/680M Results)

| Game | GTX 680M FPS (NVIDIA i7-3720QM) | GTX 580M FPS (NVIDIA i7-980X) | Percent Increase |
|---|---|---|---|
| Aliens vs. Predator | 59.7 | 39 | 53% |
| Civilization V | 65.6 | 48 | 37% |
| DiRT 3 | 69.5 | 43 | 62% |
| Far Cry 2 | 115.6 | 79 | 46% |
| Lost Planet 2 | 57.9 | 33 | 75% |
| Metro 2033 | 56.2 | 40 | 41% |
| Stalker: Call of Pripyat | 96.4 | 50 | 93% |
| StoneGiant (DoF Off) | 67 | 46 | 46% |
| StoneGiant (DoF On) | 36 | 25 | 44% |
| Street Fighter IV | 165.5 | 138 | 20% |
| Total War: Shogun 2 | 97.8 | 59 | 66% |
| Witcher 2 High | 43.7 | 26 | 68% |
| Witcher 2 Ultra | 20.1 | 10 | 101% |
| Average Performance | 73.2 | 48.9 | 50% |
Even with the discrepancy between test notebooks (Clevo’s X7200 with an i7-980X compared to an i7-3720QM system), both chips have the same maximum Turbo Boost clock of 3.6GHz, and at these settings we should be largely GPU limited, so the above scores look pretty reasonable. The only games that don’t see a >40% increase are Civilization V (which has proven to be CPU limited in the past) and Street Fighter IV (which is running at >120FPS on both GPUs already). There are a few titles where we see nearly a doubling of performance. We don’t have raw numbers, but NVIDIA is also claiming around a 15-20% average performance advantage over AMD’s Radeon 7970M—hopefully we’ll be able to do our own head-to-head in the near future.
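As a quick sanity check, the average row in the table above follows directly from the per-game numbers. A minimal C++ sketch, with NVIDIA’s figures hard-coded in the same order as the table:

```cpp
#include <cstdio>

int main() {
    // Per-game FPS from NVIDIA's GTX 680M and GTX 580M results, table order
    double gtx680m[] = {59.7, 65.6, 69.5, 115.6, 57.9, 56.2, 96.4,
                        67.0, 36.0, 165.5, 97.8, 43.7, 20.1};
    double gtx580m[] = {39, 48, 43, 79, 33, 40, 50,
                        46, 25, 138, 59, 26, 10};
    const int n = sizeof(gtx680m) / sizeof(gtx680m[0]);

    double sum680 = 0, sum580 = 0;
    for (int i = 0; i < n; ++i) { sum680 += gtx680m[i]; sum580 += gtx580m[i]; }

    printf("GTX 680M average: %.1f FPS\n", sum680 / n);                 // 73.2
    printf("GTX 580M average: %.1f FPS\n", sum580 / n);                 // 48.9
    printf("Overall increase: %.0f%%\n", (sum680 / sum580 - 1) * 100);  // 50%
    return 0;
}
```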
Overall, using NVIDIA’s own numbers it looks like GTX 680M ought to be around 50% faster than GTX 580M. If that doesn’t seem like much, consider that the difference between GTX 480M and GTX 580M was only around 20% (according to NVIDIA and using 3DMark11). A 50% increase in mobile graphics performance within the same power envelope is a huge step; if Kepler manages to reduce power use at all then it will be an even bigger jump. Put another way, a single GTX 680M in the above games using NVIDIA’s own results ends up offering 86% of the performance of GTX 580M SLI, and it will definitely use a lot less power and have fewer headaches than mobile SLI.
As usual, NVIDIA had a wealth of other information to share about their product and software features, and with their latest drivers NVIDIA is adding a few new items. No, we’re not even talking about CUDA or PhysX here (though NVIDIA does at least list those as important features). Optimus also gets a plug, and just as with the 400M and 500M series, all 600M GPUs support Optimus. The difference is that this time around, instead of just Alienware supporting Optimus with their M17x R3, NVIDIA also has MSI and Clevo on board for GTX 680M Optimus.
Briefly covering the other features, Kepler adds TXAA support (temporal anti-aliasing), a new anti-aliasing algorithm that NVIDIA touts as providing quality near the level of 8xMSAA but with a performance hit similar to that of 2xMSAA—or alternately, even better quality for a performance hit similar to 4xMSAA. It sounds like TXAA for now will require application support, and NVIDIA provided the above slide showing some of the upcoming titles that will have native TXAA built into the game. NVIDIA also made mention of FXAA (Fast Approximate Anti-Aliasing), a full scene shader technique that can help remove jaggies with a very minor performance hit (around 4%). New with their latest drivers is the ability to force-enable FXAA on all games.
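The general idea behind post-process AA techniques like FXAA is straightforward: in a single full-screen pass, find high-contrast edges in luminance and blend across them. The CPU-side sketch below is purely illustrative of that idea and is not NVIDIA’s actual FXAA shader, which is considerably more sophisticated about edge direction and search distance:

```cpp
#include <algorithm>
#include <vector>

// Greatly simplified FXAA-style pass: detect high-contrast luma edges and
// blend the pixel with its 4-neighborhood. Illustrative only.
struct Pixel { float r, g, b; };

static float luma(const Pixel& p) {
    // Common perceptual luminance weights
    return 0.299f * p.r + 0.587f * p.g + 0.114f * p.b;
}

void fxaa_like_pass(std::vector<Pixel>& img, int w, int h,
                    float threshold = 0.1f) {
    std::vector<Pixel> src = img; // read from a copy, write in place
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            const Pixel& c = src[y * w + x];
            const Pixel& n = src[(y - 1) * w + x];
            const Pixel& s = src[(y + 1) * w + x];
            const Pixel& e = src[y * w + x + 1];
            const Pixel& wst = src[y * w + x - 1];
            float lmin = std::min({luma(c), luma(n), luma(s), luma(e), luma(wst)});
            float lmax = std::max({luma(c), luma(n), luma(s), luma(e), luma(wst)});
            if (lmax - lmin < threshold) continue; // not an edge: leave alone
            // On an edge: blend toward the neighbor average to soften the jaggy
            Pixel out;
            out.r = 0.5f * c.r + 0.125f * (n.r + s.r + e.r + wst.r);
            out.g = 0.5f * c.g + 0.125f * (n.g + s.g + e.g + wst.g);
            out.b = 0.5f * c.b + 0.125f * (n.b + s.b + e.b + wst.b);
            img[y * w + x] = out;
        }
    }
}
```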
Another newer addition is Adaptive V-Sync, which sounds similar in some ways to Lucid’s Virtu MVP solution. In practice, however, it sounds like NVIDIA is simply enabling/disabling V-Sync based on the current frame rate. If a game is running at more than 60FPS, V-Sync will turn on to prevent tearing, while at <60FPS V-Sync will turn off to improve performance and reduce stuttering.
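NVIDIA hasn’t detailed the driver’s exact heuristic, but the behavior described above is easy to emulate at the application level. A minimal sketch using GLFW (glfwSwapInterval is a real GLFW call; the 60Hz refresh rate and the simple last-frame-time heuristic are our assumptions, not NVIDIA’s implementation):

```cpp
#include <GLFW/glfw3.h>

// Conceptual emulation of adaptive V-Sync at the application level.
// NVIDIA's version lives in the driver; here we just toggle the swap
// interval based on the previous frame's render time.
void render_loop(GLFWwindow* window) {
    const double refresh_interval = 1.0 / 60.0; // assume a 60Hz panel
    bool vsync_on = true;
    glfwSwapInterval(1); // start with V-Sync enabled

    double last = glfwGetTime();
    while (!glfwWindowShouldClose(window)) {
        // ... issue draw calls for the frame here ...

        glfwSwapBuffers(window);
        glfwPollEvents();

        double now = glfwGetTime();
        double frame_time = now - last;
        last = now;

        // Faster than refresh: sync to the display to prevent tearing.
        // Slower than refresh: drop V-Sync so the frame rate doesn't
        // quantize down to 30FPS and stutter.
        // (With V-Sync on, measured frame time is clamped to the refresh
        // interval, which is why a driver-level implementation can rely
        // on better signals than this simple heuristic.)
        bool want_vsync = frame_time <= refresh_interval;
        if (want_vsync != vsync_on) {
            vsync_on = want_vsync;
            glfwSwapInterval(vsync_on ? 1 : 0);
        }
    }
}
```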
Besides GTX 680M, there should be quite a few Ultrabook announcements coming out at Computex with support for NVIDIA GPUs. We’ve already looked at Acer’s TimelineU M3, and we mentioned ASUS’ UX32A/UX32VD and Lenovo’s new U410. Ultrabooks are quickly reaching the point where they’re “fast enough” for the vast majority of users; the one area where they appear deficient is in graphics performance. Ivy Bridge and HD 4000 in a ULV chip simply aren’t able to provide the same sort of performance we find in the higher TDP chips.
That’s where NVIDIA plans on getting a lot of wins with their GT 610M (48 core Fermi) and their GT 620M (96 core Fermi); GT 620M will initially be available as a 40nm and 28nm part, but we're still trying to find out if GT 610M will also have a 28nm variant. For larger laptops, GT 610M wouldn’t make much sense, but in an Ultrabook it may be just what you need. If so, keep your eyes on our Computex 2012 and Ultrabook coverage, as there’s surely more to come.
46 Comments
Iketh - Monday, June 4, 2012 - link
I wish Intel would deliver their chips with the same chip packaging as Nvidia...

Casper42 - Monday, June 4, 2012 - link
Huh? You mean the 2nd picture above with the 2 square chips?
That's a BGA package and is only used when you plan to surface mount a chip to a board, i.e. NON REMOVABLE.
GPUs in laptops are either integrated into the Mainboard or are delivered in a form factor called MXM that uses an edge connector.
Iketh - Wednesday, June 6, 2012 - link
maybe i misspoke, but im talking about the exposed die with the supportive padding instead of an IHS

RaistlinZ - Monday, June 4, 2012 - link
I bet it will cost like $300.00 more than the 7970m too.

Spunjji - Tuesday, June 5, 2012 - link
Based on the last generation, $200 wouldn't be surprising. I hope to be wrong!

raghu78 - Tuesday, June 5, 2012 - link
GTX 680M costs USD 286 more than HD 7970M. close to 300 bucks
http://www.originpc.com/shop/pc/configurePrd.asp?i...
You can check using the origin pc laptop configurator
HD 7970m was found to be close to 50% faster than 675m by notebookcheck.net
http://www.notebookcheck.net/Alienware-M17x-R4-Not...
My guess is GTX 680M will beat the HD 7970M by 10%. But the price is nowhere in relation to perf. so advantage AMD.
Alexvrb - Wednesday, June 6, 2012 - link
Yeah unless money is no object. Nvidia's high-end mobile chips always did carry one heck of a premium, but ouch!

marc1000 - Monday, June 4, 2012 - link
I'm waiting for a 28nm 150W desktop card from Nvidia, to choose between it and the AMD cards. If they take too long to launch it, they will lose a customer.

thunderising - Tuesday, June 5, 2012 - link
What do you think the GTX670 is? A 300W card?

puppies - Tuesday, June 5, 2012 - link
I think what he means is he wants a mid range card with a mid range price tag. Of course, just saying that would have been a lot simpler, and the chance of it happening while 670s and 680s are in short supply and still selling out is somewhere between 0% and 0%.