Micron Begins to Sample GDDR5X Memory, Unveils Specs of Chips
by Anton Shilov on March 29, 2016 11:00 AM EST

This past week Micron quietly added its GDDR5X memory chips to its product catalog and revealed that the DRAM devices are currently sampling to partners. The company also disclosed the specifications of the chips it currently ships to partners, which will potentially be mass-produced later this summer. As it turns out, the first samples, though running at much higher data rates than GDDR5, will not reach the maximum data rates initially laid out in the GDDR5X specification.
The first GDDR5X memory chips from Micron are marked as MT58K256M32JA, feature 8 Gb (1 GB) capacity, and are rated to run at 10 Gb/s, 11 Gb/s and 12 Gb/s in quad data rate (QDR) mode with 16n prefetch. The chips use a 1.35 V supply and I/O voltage as well as a 1.8 V pump voltage (Vpp). Micron's GDDR5X memory devices sport 32-bit interfaces and come in 190-ball BGA packages measuring 14×10 mm. As reported, the GDDR5X DRAMs are manufactured using 20 nm process technology, which Micron has been using for over a year now.
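As a quick back-of-the-envelope check, the peak bandwidth of a single chip follows directly from the rated per-pin data rate and the 32-bit interface. A minimal sketch in Python, using only the figures quoted above:

```python
# Peak bandwidth of one 32-bit GDDR5X chip at Micron's rated data rates.
IO_WIDTH_BITS = 32  # per-chip interface width

for data_rate_gbps in (10, 11, 12):  # rated per-pin data rates in Gb/s
    # bandwidth = per-pin rate * number of pins, converted from Gb/s to GB/s
    bw_gbytes = data_rate_gbps * IO_WIDTH_BITS / 8
    print(f"{data_rate_gbps} Gb/s per pin -> {bw_gbytes:.0f} GB/s per chip")

# Output: 10 Gb/s -> 40 GB/s, 11 Gb/s -> 44 GB/s, 12 Gb/s -> 48 GB/s
```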
The GDDR5X memory standard, as you might remember from our previous reports, is largely based on the GDDR5 specification, but it brings three crucial improvements: significantly higher data rates (up to 14 Gb/s per pin, with the potential for up to 16 Gb/s per pin), higher and more flexible chip capacities (4 Gb, 6 Gb, 8 Gb, 12 Gb and 16 Gb densities are supported) and better energy efficiency thanks to lower supply and I/O voltage.
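To illustrate why the flexible densities matter, here is a sketch of how chip density maps to total card capacity. The 256-bit bus is an illustrative assumption for a typical mid-to-high-end card, not something the specification mandates:

```python
# Total card capacity for a 256-bit bus built from 32-bit GDDR5X chips.
BUS_WIDTH_BITS = 256                  # illustrative bus width (assumption)
chips = BUS_WIDTH_BITS // 32          # each chip drives a 32-bit channel

for density_gbit in (4, 6, 8, 12, 16):  # densities the GDDR5X spec allows
    capacity_gbyte = chips * density_gbit / 8
    print(f"{density_gbit} Gb chips -> {capacity_gbyte:g} GB card")

# Output: 4 GB, 6 GB, 8 GB, 12 GB and 16 GB cards from the same 8-chip layout
```

The practical upshot is that board designers can hit intermediate capacities (e.g. 6 GB or 12 GB) without changing the bus width or chip count.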
The first samples of GDDR5X memory chips fully leverage the key architectural enhancements of the specification, including quad data rate (QDR) signaling, which doubles the amount of data transferred per cycle over the memory bus (compared to GDDR5) and allows the chips to use a wider 16n prefetch architecture, enabling 512-bit (64-byte) read or write accesses per array. However, the maximum data rates of Micron's sample chips are below those initially advertised, possibly because of a conservative approach taken by Micron and its partners.
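The 64-byte access granularity quoted above can be derived directly from the prefetch depth and the interface width; a small sketch using only figures stated in the text:

```python
# Access granularity of GDDR5X's 16n prefetch on a 32-bit chip interface.
PREFETCH_N = 16       # 16n prefetch (GDDR5 uses 8n)
IO_WIDTH_BITS = 32    # per-chip interface width

access_bits = PREFETCH_N * IO_WIDTH_BITS  # bits moved per array access
access_bytes = access_bits // 8
print(f"{access_bits}-bit ({access_bytes}-byte) access per read/write")

# Output: 512-bit (64-byte) access, which happens to match a typical cache line
```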
The addition of GDDR5X samples to Micron's parts catalog has three important implications. First, the initial development of Micron's GDDR5X memory chips is officially complete, and the company has achieved its key goal: increasing performance over GDDR5 without increasing power consumption. Second, one or more of Micron's customers are already testing processors with GDDR5X memory controllers, which means that certain future GPUs from companies like AMD and NVIDIA support GDDR5X and already exist in silicon. Third, the initial GDDR5X lineup from Micron will consist of moderately clocked ICs.
GPU Memory Math

| | AMD Radeon R9 Fury X | AMD Radeon R9 290X | NVIDIA GeForce GTX 980 Ti | NVIDIA GeForce GTX 960 | GDDR5X 256-bit interface (12 Gb/s) | GDDR5X 256-bit interface (10 Gb/s) | GDDR5X 128-bit interface (12 Gb/s) | GDDR5X 128-bit interface (10 Gb/s) |
|---|---|---|---|---|---|---|---|---|
| Total Capacity | 4 GB | 4 GB | 6 GB | 2 GB | 8 GB | 8 GB | 4 GB | 4 GB |
| B/W Per Pin | 1 Gb/s | 5 Gb/s | 7 Gb/s | 7 Gb/s | 12 Gb/s | 10 Gb/s | 12 Gb/s | 10 Gb/s |
| Chip Capacity | 8 Gb | 2 Gb | 4 Gb | 4 Gb | 8 Gb | 8 Gb | 8 Gb | 8 Gb |
| No. of Chips/Stacks | 4 | 16 | 12 | 4 | 8 | 8 | 4 | 4 |
| B/W Per Chip/Stack | 128 GB/s | 20 GB/s | 28 GB/s | 28 GB/s | 48 GB/s | 40 GB/s | 48 GB/s | 40 GB/s |
| Bus Width | 4096-bit | 512-bit | 384-bit | 128-bit | 256-bit | 256-bit | 128-bit | 128-bit |
| Total B/W | 512 GB/s | 320 GB/s | 336 GB/s | 112 GB/s | 384 GB/s | 320 GB/s | 192 GB/s | 160 GB/s |
| Estimated DRAM Power Consumption | 14.6 W | 30 W | 31.5 W | 10 W | 20 W | 20 W | 10 W | 10 W |
Thanks to GDDR5X memory chips with 10 Gb/s to 12 Gb/s data rates, developers of graphics cards will be able to increase the peak bandwidth of 256-bit memory sub-systems to 320 GB/s to 384 GB/s. This is an impressive achievement, as that amount of bandwidth is comparable to what AMD's Radeon R9 290/390 or NVIDIA's GeForce GTX 980 Ti/Titan X graphics adapters achieve with their 512-bit and 384-bit memory interfaces, respectively, which are quite expensive and intricate to implement.
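The arithmetic behind those figures is simple to verify. A minimal sketch, where the bus widths and data rates are the hypothetical GDDR5X configurations from the table above, not announced products:

```python
# Peak bandwidth of a GDDR5X memory subsystem: bus width x per-pin data rate.
def total_bandwidth_gbytes(bus_width_bits: int, rate_gbps: float) -> float:
    """Return peak bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits * rate_gbps / 8  # divide by 8 to convert Gb to GB

for bus, rate in ((256, 10), (256, 12), (128, 10), (128, 12)):
    bw = total_bandwidth_gbytes(bus, rate)
    print(f"{bus}-bit bus @ {rate} Gb/s per pin -> {bw:.0f} GB/s")

# Output: 256-bit yields 320-384 GB/s; 128-bit yields 160-192 GB/s
```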
Micron originally promised to start sampling its GDDR5X memory with customers in Q1, and the company has formally delivered on that promise. What remains to be seen now is when GPU designers plan to roll out their GDDR5X-supporting processors. Micron claims that it is set to start mass production of the new memory this summer, which hopefully means we will be seeing graphics cards featuring GDDR5X before the end of the year.
Source: Micron
37 Comments
mgl888 - Tuesday, March 29, 2016 - link
Has latency improved significantly for GDDR5X?

davidorti - Tuesday, March 29, 2016 - link
Hi, just an innocent question: why didn't you include HBM in the "GPU Memory Math" table? It crushes memory bandwidth per watt: http://www.anandtech.com/show/9390/the-amd-radeon-...

bug77 - Tuesday, March 29, 2016 - link
And yet, the Fury X only uses 10W less than a 980Ti (the Fury X only has 4GB of VRAM, the 980Ti has 6): http://www.techpowerup.com/reviews/Sapphire/R9_Fur...

Don't get too hyped over HBM. It will replace GDDR in a few years, but right now it's just an engineering wonder (as in, no real-life benefits).
Drumsticks - Tuesday, March 29, 2016 - link
That isn't a comparison purely of HBM vs GDDR5, though. The architectural disadvantages of GCN versus Maxwell in performance per watt are pretty well known. HBM definitely provides some pretty reasonable power savings.

Yojimbo - Tuesday, March 29, 2016 - link
Why confuse the issue by comparing two different GPU architectures? HBM has real-life benefits.bug77 - Wednesday, March 30, 2016 - link
Well, the video cards using HBM today don't use less power than something comparable, and they don't really break performance records either. But yeah, they have real-world benefits </sarcasm>

HBM does better in GB/s/W. In raw power draw it doesn't do so well: http://www.anandtech.com/show/9266/amd-hbm-deep-di...
As you can see, 4GB of HBM is estimated to draw about half the power of 4GB of GDDR5. But that's only a 15W reduction when the complete video card draws over 200W on average when gaming. It's still an improvement, but it's not earth-shattering. The added bandwidth is where the real advantage of HBM lies, but first we need GPUs that can put that bandwidth to good use. The current Fury X barely manages to inch ahead of the 980Ti at 4K, without actually being playable at that resolution.
BurntMyBacon - Wednesday, March 30, 2016 - link
@bug77: "And yet, the Fury X only uses 10W less than a 980Ti (Fury X only has 4GB VRAM, 980Ti has 6)"

The link you gave only compares full-card power. AMD has already stated that they used the extra power savings to push performance further on the chip itself. The fact that it comes in any lower in power at all is a minor miracle, given that the Maxwell architecture is undeniably more power efficient (at least for gaming) than its GCN counterparts. Using your link as a point of reference, see the 290X (i.e. a 390X with 4GB of RAM). It is a smaller, lesser part also based on GCN, yet it has higher overall power consumption with the same amount of RAM. That's before you consider the sizable increase in memory bandwidth. I'd say there are real-life benefits to be had.
To your point, the fact that the largest estimated power draw on the chart is only 31.5W should tell you not to expect miracles in overall power consumption. You can at most (theoretically, not realistically) drop what amounts to something like 10% of the card's power consumption. Cost and bandwidth will be larger considerations for the moment. With GDDR5X alleviating the bandwidth concerns for all but the highest-end cards, cost is going to be the major consideration of the day. Eventually, the cost of HBM will come down (mid term), bandwidth needs will go up (long term), and you'll see HBM in larger chunks of the market. Your few-years estimate probably isn't far off.
extide - Tuesday, March 29, 2016 - link
Probably because the table was pretty big as-is.

DanNeely - Tuesday, March 29, 2016 - link
GDDR5X is the evolutionary step that is expected to quickly replace GDDR5 on everything from mid-range cards to sub-flagship models. HBM/HBM2 will remain limited to flagship cards, and possibly compact/mobile models, in the current generation, and will probably need a few years to drop sufficiently in cost to displace GDDR5X across the entire product stack. On the low end we'll probably see DDR4 replace DDR3 over the next year or so: the cost gap is nearly gone, and because those GPUs tend to be bandwidth starved, DDR4 should offer a nice boost. However, I wouldn't expect to see any action there until after the higher-end cards are out. It's possible HBM2 may eventually come to these cards as well (AMD's rumored plans to put HBM2 on future APUs suggest that they think costs will fall enough to make it possible), but sub-$100 cards are so low margin that the interposer would have to drop an order of magnitude in price first; adding a $30-ish component is out of the question on cards where the margin is only a dollar or two.

Lolimaster - Tuesday, March 29, 2016 - link
I think sub-$100 GPUs will disappear, or more precisely, only "legacy/MOBA" cards for older machines will be sold below that price point; DDR4 should suffice for those. Even the most basic Kaveri APU features an iGPU in the same class as a $40-50 discrete one. If a new low-cost machine will run off an APU, there's no point in buying a low-end dGPU.