The SSD Diaries: Crucial's RealSSD C300
by Anand Lal Shimpi on July 13, 2010 12:39 AM EST

AnandTech Storage Bench
The first in our benchmark suite is a light usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running and we use it to check emails, create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet, graphs and save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.
There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.
The recording is played back on all of our drives here today. Remember that we're isolating disk performance; all we're doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB in size, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB. Less than 30% of the operations are strictly sequential in nature. Average queue depth is 6.09 IOs.
The performance results are reported in average I/O Operations per Second (IOPS):
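As a rough illustration of how an average IOPS figure relates to the trace statistics above, here's a minimal sketch of the arithmetic. It assumes a hypothetical replay window of the same ~5 minutes as the original recording; in reality the replay time varies per drive, so the numbers are illustrative only.

```python
# Hypothetical sketch: deriving an average-IOPS figure from a replayed
# disk trace. The 300-second replay window is an assumption, not a
# measured value.
def average_iops(num_reads, num_writes, elapsed_seconds):
    """Average I/O operations per second over the whole replay."""
    return (num_reads + num_writes) / elapsed_seconds

# Using the light workload's published counts (37,501 reads and
# 20,268 writes) over an assumed ~300 second replay:
light = average_iops(37501, 20268, 300.0)
print(f"{light:.1f} IOPS")
```

A faster drive completes the same trace in less wall-clock time, which is what pushes its reported IOPS figure higher.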
In our custom test suite we can finally put to rest concerns about the real world behavior of SandForce’s drives. Our light test case makes it pretty clear: in practice the Vertex 2 performs within 3% of the 256GB C300, and is a good 20% faster than the Intel X25-M G2. Keep in mind that we’re looking at raw IO performance and not total system performance here.
If there's a light usage case there's bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same way they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7's picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.
The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
Our heavy downloading workload involves more heavily compressed files and thus we see the Vertex 2 perform like a 128GB C300. The 256GB C300 just can't be touched.
The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.
Our gaming workload has been SATA bound for the high end SSDs for some time now. If you happen to be baller enough to play games off of an SSD, any of these high end drives will perform the same. Although our custom tests don't work with the 6Gbps Marvell controller, the boost to sequential read speed should be quite visible in gaming workloads with the C300.
51 Comments
fgmg2 - Tuesday, July 13, 2010 - link
I know that I could flip back and forth between the various charts to calculate the performance per watt, but it would be great to get a consolidated chart that graphed the drives based upon write (read?) performance per watt.

Additionally, analyzing drives purely based upon their write/read performance and/or purely based upon their power consumption seems a bit meaningless. It should be very easy to make a drive that consumes almost no power but writes slower than a 3 1/2" floppy. Especially as you see some drives perform more than twice as fast as others.
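The consolidated metric the commenter describes could be sketched like this; the drive names and figures below are invented for illustration, not taken from the review's charts.

```python
# Hypothetical sketch of a consolidated performance-per-watt metric:
# sequential-write throughput divided by active power draw.
# All numbers here are made up for illustration.
drives = {
    "Drive A": {"write_mb_s": 215.0, "active_watts": 3.0},
    "Drive B": {"write_mb_s": 80.0, "active_watts": 1.1},
}

for name, d in drives.items():
    perf_per_watt = d["write_mb_s"] / d["active_watts"]
    print(f"{name}: {perf_per_watt:.1f} MB/s per watt")
```

Note that a drive with much higher absolute power draw can still come out ahead on this ratio if its throughput scales faster than its consumption.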
Just a suggestion.
P.S. It might not be a bad idea to do the same for your other reviews, such as video cards and CPUs.
7Enigma - Tuesday, July 13, 2010 - link
Agreed. In most reviews (video cards for example) performance per watt is somewhat less important, as normally you'll be modeling or gaming and the power draw is going to be pretty stable.

But for a hard drive, especially when many of these will be notebook replacements, it is very important. I have an Intel G2 80GB and love it, and when I look at the power consumption numbers it looks better than the C300. But I'm not naive to the fact that (when TRIM'd) the C300 crushes the G2 in pretty much every benchmark. What that tells me is that in a real-world scenario the C300 would use more power but get the job done in a shorter time, and since we aren't spinning up a traditional platter, the drive behaves very much like a modern Intel CPU and would go idle.
That hurry-up-and-get-idle behavior, I think, would skew those power consumption charts heavily.
What I would design is a benchmark with a fixed set of operations (write 2GB of random data, 2GB of sequential data, read 10GB of data), and then measure the TOTAL energy the drive consumed during that time. Then report that total energy number and use it for future reviews (a static number to rank drives, similar to a PCMark or Vantage score).
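The fixed-workload energy measurement proposed here, and the hurry-up-and-get-idle effect from the previous comment, can be sketched as follows; the power figures and durations are made-up examples, not measurements.

```python
# Sketch of the proposed metric: integrate sampled power draw over a
# fixed workload to get total energy in joules. Sample values are
# invented for illustration.
def total_energy_joules(power_samples_watts, sample_interval_s):
    """Approximate energy as the sum of power * time for each sample."""
    return sum(power_samples_watts) * sample_interval_s

# A fast drive drawing 3 W that finishes in 60 s uses less total energy
# than a slow drive drawing 1.5 W that needs 180 s for the same work:
fast = total_energy_joules([3.0] * 60, 1.0)   # 180 J
slow = total_energy_joules([1.5] * 180, 1.0)  # 270 J
```

This is why a snapshot of power draw alone can mislead: the higher-power drive finishes sooner and returns to idle, so its energy-to-completion can be lower.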
Ernestds - Tuesday, July 13, 2010 - link
I think the best way to calculate efficiency is measuring the total energy used by the drive during the "Typical Workload" test. Of course, if Anand could do the same with the "Heavy Downloading Workload" it would be great. IMO there is no need for the "Gaming Workload" though; usually whoever looks at the power graphs is aiming to improve a notebook's battery life, and frequently those who care about that don't game on battery.

Just a question to Anand: do you feel a difference between two SSDs, say the Nova one and the Crucial?
Keep on the great work!
MrSpadge - Tuesday, July 13, 2010 - link
It's kind of funny to see someone asking for power efficiency for something which is probably the most power efficient thing in the entire PC, especially if you compare it to HDDs. I understand it's interesting and maybe even important for laptops, though.

I'd rather be interested in a more detailed power draw and/or efficiency analysis in cases where the power draw really hurts: GPUs and, to a lesser extent, CPUs. For example: how does the power draw of a GF100 improve under load if you watercool it? Sure, not very relevant for most users... but the difference should be surprisingly large.
shin0bi272 - Tuesday, July 13, 2010 - link
If you want the best overall performance, go SandForce. The only really large advantage the Crucial drive had was in read performance. SYSmark et al. were within a few hundred points of one another. So the overall feel of the system will be identical until you either do some heavy read ops or, conversely, you fill the drive and don't TRIM it. With prices being about the same for the 100GB vs. the 128GB, the better performers are the SandForce drives.

Techdad - Tuesday, July 13, 2010 - link
Really? You'd take the performance tricks and the risk of real data that doesn't fit SandFarce's fancy algorithms over the straight honest performance of the Crucial drive? That's odd.

I've had my Crucial drive since it came out and it's been great. In spite of Anand's corner-case bashing, the first version firmware has been rock solid. I'm debating whether I even want to bother with the firmware upgrade, but I'll probably do it.
bji - Tuesday, July 13, 2010 - link
Relying on TRIM and optimizing for the least stressful case is also a "performance trick", so your implication that SandForce uses such tricks and Micron does not is wrong. Also, your childish pun on the SandForce name says a lot about where you are coming from.

The SandForce and the Micron drives look to have very similar performance in the vast majority of cases, so shin0bi272 is spot-on. And the increased cost of the Crucial drive would seem to be the deciding factor for me.
But you can't go wrong with either offering it would seem, so pick whichever you like best. As for myself, I would pick Sandforce, only because of my extreme aversion to any chance of degraded drive performance, having been bitten by stuttering of early drives. Not saying that the Crucial drive is anything like a JMicron, but I personally value the resiliency of the Sandforce controller very highly, and would pay some peak performance cost happily for the guarantee of better worst-case performance.
Not everyone will, or should, have the same opinion on this; those less averse to the risks of worst-case performance degradation would be well served by the Crucial drive.
hotlips69 - Tuesday, July 13, 2010 - link
Having read this review, I'm considering buying one of the 100GB "OCZ Vertex 2" drives used in this article, but I'm not sure exactly which is the correct drive model as there seem to be numerous "Vertex 2" drives on the OCZ website!

Is it the Pro Series, or the EX Series, or just the standard Vertex 2 series?
hotlips69 - Tuesday, July 13, 2010 - link
...also, why is it listed as 120GB in the chart on page 1 of the article?

Anand Lal Shimpi - Tuesday, July 13, 2010 - link
A standard Vertex 2 120GB drive is all you need. The 100GB capacities will probably be phased out by most SandForce partners over time, as there's no tangible performance benefit for desktop workloads.

I just used the 100GB data we had in the engine, which is why it appears as such in the charts.
Take care,
Anand