While it happens a lot less now than a couple of years ago, I still see the question of why SSDs are worth it every now and then. Rather than give my usual answer, I put together a little graph to illustrate why SSDs are both necessary and incredibly important.

Along the x-axis we have different types of storage in a modern computer. They range from the smallest, fastest storage elements (cache) to main memory and ultimately at the other end of the spectrum we have mechanical storage (your hard drive). The blue portion of the graph indicates typical capacity of these storage structures (e.g. 1024KB L2, 1TB HDD, etc...). The further to the right you go, the larger the structure happens to be.

The red portion of the graph lists performance as a function of access latency. The further right you go, the slower the storage medium becomes.

This is a logarithmic scale so we can actually see what’s going on. While capacity transitions relatively smoothly as you move from left to right, look at what happens to performance. The move from main memory to mechanical storage comes with a steep performance falloff.
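To put rough numbers on that falloff, here’s a small sketch using order-of-magnitude access latencies. These are illustrative ballpark figures I’m assuming for typical hardware of this era, not values read off the graph:

```python
import math

# Rough, order-of-magnitude access latencies in nanoseconds.
# Illustrative ballpark figures only, not measurements.
latency_ns = {
    "L1 cache":   1,
    "L2 cache":   4,
    "L3 cache":   15,
    "DRAM":       100,
    "SSD (NAND)": 100_000,     # ~100 microseconds
    "HDD (7200)": 10_000_000,  # ~10 milliseconds (seek + rotation)
}

for name, ns in latency_ns.items():
    # Show each latency alongside its base-10 exponent (the "log scale" view)
    print(f"{name:11s} ~{ns:>12,} ns  (10^{math.log10(ns):.1f})")

# The jump from DRAM to HDD spans about five orders of magnitude;
# inserting an SSD cuts the first gap down to about three.
gap_hdd = latency_ns["HDD (7200)"] / latency_ns["DRAM"]
gap_ssd = latency_ns["SSD (NAND)"] / latency_ns["DRAM"]
print(f"DRAM->HDD gap: {gap_hdd:,.0f}x, DRAM->SSD gap: {gap_ssd:,.0f}x")
```

Seen this way, the SSD isn’t closing the gap so much as splitting one enormous cliff into two manageable steps.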

We could address this issue by increasing the amount of DRAM in a system. However, DRAM prices are still too high to justify sticking 32 - 64GB of memory in a desktop or notebook. And when we can finally afford that, the applications we'll want to run will just be that much bigger.

Another option would be to improve the performance of mechanical drives. But we’re bound by physics there. Spinning platters faster than 10,000 RPM proves prohibitive in terms of power, noise and reliability. The majority of hard drives still spin at 7200 RPM or less.

Instead, the obvious solution is to stick another level in the memory hierarchy. Just as AMD/Intel have almost fully embraced the idea of a Level 3 cache in their desktop/notebook processors, the storage industry has been working towards using NAND as an intermediary between DRAM and mechanical storage. Let’s look at the same graph if we stick a Solid State Drive (SSD) in there:

Not only have we smoothed out the capacity curve, but we’ve also addressed that sharp falloff in performance. Those of you who read our most recent VelociRaptor VR200M review will remember that we recommend a fast SSD for your OS/applications, and a large HDD for games, media and other large data storage. The role of the SSD in the memory hierarchy today is unfortunately user-managed. You have to manually decide what goes on your NAND vs. mechanical storage, but we’re going to see some solutions later this year that hope to make some of that decision for you.

Why does this matter? If left unchecked, sharp dropoffs in performance in the memory/storage hierarchy can result in poor performance scaling. If your CPU doubles in peak performance, but it has to wait for data the majority of the time, you’ll rarely realize that performance increase. In essence, the transistors that gave your CPU its performance boost will have been wasted die area and power.
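That reasoning is just Amdahl’s law applied to storage: only the fraction of runtime that’s actually CPU-bound benefits from a faster CPU. A minimal sketch (the 70% I/O-wait figure below is a made-up illustration, not a measured workload):

```python
def effective_speedup(cpu_fraction: float, cpu_speedup: float) -> float:
    """Amdahl's law: overall speedup when only the CPU-bound fraction
    of runtime gets faster and the I/O-bound remainder (time spent
    waiting on storage) stays fixed."""
    io_fraction = 1.0 - cpu_fraction
    return 1.0 / (io_fraction + cpu_fraction / cpu_speedup)

# If a workload spends 70% of its time waiting on the disk, doubling
# CPU performance yields only about an 18% overall gain.
print(effective_speedup(cpu_fraction=0.3, cpu_speedup=2.0))
```

The faster the CPU gets relative to storage, the larger that wasted fraction becomes, which is exactly why a new tier in the hierarchy pays off more than more transistors.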

Thankfully we tend to see new levels in the memory/storage hierarchy injected preemptively. We’re not yet at the point where all performance is bound by mass storage, but as applications like virtualization become even more prevalent the I/O bottleneck is only going to get worse.

Motivation for the Addiction

It’s this sharp falloff in performance between main memory and mass storage that makes SSDs so enticing. I’ve gone much deeper into how these things work already, so if you’re curious I’d suggest reading our SSD Relapse.

SSD performance is basically determined by three factors: 1) NAND, 2) firmware and 3) controller. The first point is obvious; SLC is faster (and more expensive) than MLC, but is mostly limited to server use. Firmware is very important to SSD performance: it handles the mapping of data to flash, manages the data that’s written to the drive and ensures that the SSD is always operating as fast as possible. The controller alone is actually less important than you’d think. It’s really the combination of firmware and controller that determines whether or not an SSD is good.

For those of you who haven’t been paying attention, we basically have six major controller manufacturers competing today: Indilinx, Intel, Micron, Samsung, SandForce and Toshiba. Micron uses a Marvell controller, and Toshiba has partnered up with JMicron on some of its latest designs.

Of that list, the highest performing SSDs come from Indilinx, Intel, Micron and SandForce. Micron makes the only 6Gbps controller, while the rest are strictly 3Gbps. Intel is the only manufacturer on our shortlist that we’ve been covering for a while. The rest of the companies are relative newcomers to the high-end SSD market. Micron just recently shipped its first competitive SSD, the RealSSD C300, as did SandForce.

We first met Indilinx a little over a year ago when OCZ introduced a brand new drive called the Vertex. While it didn’t wow us with its performance, OCZ’s Vertex seemed to have the beginnings of a decent alternative to Intel’s X25-M. Over time the Vertex and other Indilinx drives got better, eventually earning the title of Intel alternative. You wouldn’t get the same random IO performance, but you’d get better sequential performance and better pricing.

Several months later OCZ introduced another Indilinx based drive called the Agility. It used the same Indilinx Barefoot controller as the Vertex; the only difference was that the Agility used 50nm Intel or 40nm Toshiba NAND. In some cases this resulted in lower performance than the Vertex, while in others we actually saw it pull ahead.

OCZ released many other derivatives based on Indilinx’s controller. We saw the Vertex EX which used SLC NAND for enterprise customers, as well as the Agility EX. Eventually as more manufacturers started releasing Indilinx based drives, OCZ attempted to differentiate by releasing the Vertex Turbo. The Vertex Turbo used an OCZ exclusive version of the Indilinx firmware that ran the controller and external DRAM at a higher frequency.

Despite a close partnership with Indilinx, earlier this month OCZ announced that its next generation Vertex 2 and Agility 2 drives would not use Indilinx controllers. They’d instead be SandForce based.

OCZ's Agility 2 and the SF-1200
Comments

  • 529th - Thursday, April 22, 2010

    Some people are curious about the Vertex LE 50GB version. Yes, I've read the Vertex LE 100GB review :)

    50g Vertex LE: http://www.newegg.com/Product/Product.aspx?Item=N8...

    Newegg's description says "For enthusiasts w/ up to 15,000 4KB random write IOPS", which would suggest the controller is the SF-1200, whereas the Vertex LE drives are supposed to have the SF-1500 controller, which will do ~30,000 4KB IOPS, so the inconsistency brings up curiosity. To make matters worse, the OCZ website says they use the SF-1500 controller. I vaguely recall someone saying they asked OWC which controller they were using for their OWC Mercury Extreme SSD drives, and OWC's response was that they didn't know....
  • willscary - Thursday, April 22, 2010

    Originally, I was told by Customer Service that he "did not know", but after pressing for an answer and a brief period on hold, he returned and told me that all current and future Mercury Extreme SSDs would utilize the SandForce 1200 controller.

    I was very angry at this "bait and switch" and returned my SSDs. Actually, they were in transit and I had OWC recall them. I have not yet received credit for them, although they arrived back at OWC early Tuesday morning.

    I will also say that OWC did send confirmation of the returns and said that the credit would be processed by the end of the week, so all is not bad.

    I was asked "what the big deal was" on another thread. The way I see it, it would be like ordering an expensive sports car with a V6 and having it arrive with a turbocharged 4 cylinder. The performance may be the same, but there would then be the possibility of added maintenance costs, lesser reliability and a shorter lifespan. Add to that that the dealer would tell me that even though the smaller turbo option was $1,000 less than the V6, I should pay the same because performance really would be nearly identical.

    Just my thoughts.
  • DanNeely - Thursday, April 22, 2010

    15K IOPS is higher than the SF-1200 supports with normal firmware. More likely, I think, is something similar to Intel's 40GB drive, where only half of the controller's flash channels are populated but a full-speed SF controller is used.
  • poeticjustic - Thursday, April 22, 2010

    That was a really helpful and thorough article. Thank you for all this info.
    A few things that I wanted to ask:

    - On the random read/write speed page, and specifically in the 4K-aligned random write test, we don't see the Intel X25-E's performance anywhere. Obviously that's because the X25-E was tested in the past, before this kind of test was performed. Is it safe to assume that the X25-E's performance would be quite close to its result in the regular 4K random write test (around 48MB/s)? It seems it's mostly the new controllers that are affected.

    - Furthermore, at least from what I've seen in eshops around my country, the price of the Z-Drive m84 250GB and 500GB has come closer to that of SATA SSDs. They are still more expensive of course, but wouldn't it be a good idea to start seeing some Z-Drive performance in those tables, for a direct comparison with the SSDs, to see whether the difference in their performance is bigger than the difference in their price? Just a thought.

    The extra remark on the SF controller's performance with already-compressed data, plus the random data performance table, was a pretty important addition and something we should pay attention to.

    Thank you once again for a well built article.
  • eaw999 - Thursday, April 22, 2010

    correct, the x25-e, like -m, isn't affected drastically by alignment. you can expect slightly higher numbers with alignment, but nothing jaw-dropping. on the other hand, the x25-e positively rips at random writes at high queue depths, but that's not something you're likely to see often in desktop usage.
  • krazyderek - Thursday, April 22, 2010

    When does this ever happen? Isn't that sentence an oxymoron? The only thing I can think of is if you were installing several applications at the same time, and it would depend on the applications being installed too, since I think I remember hearing that games are very sequential now.
  • krazyderek - Thursday, April 22, 2010

    I DO see the importance when dealing with pre-compressed files like pictures and videos, and I agree it would be nice to see 0%, 50% and 100% compressed figures to give people a good overview of things. But still, when would you see highly random sequential data?
  • zdzichu - Thursday, April 22, 2010

    "Sequential" describes the data access pattern; "highly random" describes the data itself. It's a perfect description of writing a movie to a disk: you are storing it byte-by-byte, and each byte is probably different from the preceding ones.
  • zdzichu - Thursday, April 22, 2010

    I think you can skip the non-aligned test in the future. It's a corner case, not interesting at all.
  • PubicTheHare - Thursday, April 22, 2010

    I'm pretty sure we're still 18-24 months away from having SSDs priced at a level that the average SSD-seeker is willing to spend.

    There will probably be an attractive differential between current prices and what the same drives will sell for in 12 months, but I really think it'll take a bit over a year to start seeing "attractive" pricing.

    None of this stuff matters to me until Apple supports TRIM or garbage collection (I believe this is "OS-agnostic" TRIM, right?) comes to the drives with the best firmware and price/usable GB.

    I just want a 256-300 GB SSD that I can leave 15% unpartitioned and throw into a MBP. I want it to scream.
