Conclusion

System administrators with high-end, ultra mission-critical applications will still look down their noses at the Intel X25-E Extreme SLC drive: it is not dual-ported (SATA interface) and it lacks a "super capacitor" that would let the controller flush its 16MB cache to the flash array in the event of a sudden power outage. For those people, EMC's Enterprise Flash Drives (EFDs) make sense, with capacities up to 400GB but prices ten times as high as the Intel X25-E SLC drive.

For the rest of us, probably 90% of the market, the Intel X25-E is nothing short of amazing: it offers 3 to 13 times better OLTP performance at less than a tenth of the power consumption of classical SAS drives. Frankly, we no longer see any reason to buy SAS or FC drives for performance-critical OLTP databases unless the database sizes are truly huge. If you are using lots of spindles and most of your hard disk capacity sits empty, Intel's SLC SSDs make a lot more sense.

However, be aware that these ultra-fast storage devices expose bottlenecks higher up in the storage hierarchy. The current storage processors seem to have trouble scaling well from four to eight drives. We have witnessed negative scaling only in some extreme cases, for example 100% random writes in RAID 5, and it is unlikely that you will see this kind of behavior in the real world. Still, the trend is clear: scaling will be poor if you attach 16 or more SLC SSDs to products like the Adaptec 51645 and especially the 52445. Those RAID controllers allow you to attach up to 24 drives, but the storage processor is the same as on our Adaptec 5805 (an IOP348 at 1.2GHz). We think it is best to attach no more than eight SLC drives per IOP348, especially if you plan to use the more processor-intensive RAID levels such as RAID 5 and 6. Intel and others had better come up with faster storage processors soon, because these fast SLC drives make the limits of the current generation of storage processors painfully clear.
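The "no more than eight drives per controller" rule of thumb can be sketched with some back-of-the-envelope arithmetic. The figures below are illustrative assumptions only (roughly 3,500 random-write IOPS per SLC SSD and a hypothetical 30,000 IOPS ceiling for one IOP348-class storage processor), not measured values:

```python
# Toy model: aggregate array IOPS is capped by whichever saturates first,
# the drives or the storage processor. All numbers are assumptions for
# illustration, not benchmark results.

DRIVE_IOPS = 3500           # assumed random-write IOPS per SLC SSD
CONTROLLER_CEILING = 30000  # assumed IOPS one IOP348-class processor can service

def array_iops(drives: int) -> int:
    """Aggregate IOPS, capped by the storage processor."""
    return min(drives * DRIVE_IOPS, CONTROLLER_CEILING)

for n in (4, 8, 16, 24):
    total = array_iops(n)
    print(f"{n:2d} drives: {total:6d} IOPS ({total // n} per drive)")
```

Under these assumed numbers the array scales linearly up to eight drives, but at 16 or 24 drives the per-drive throughput collapses because the controller, not the flash, is the limit.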

Our testing also shows that choosing the "cheaper but more SATA spindles" strategy only makes sense for applications that perform mostly sequential accesses. Once random access comes into play, you need two to three times more SATA drives - and there are limits to how far you can improve performance by adding spindles. Finally, to get the best performance out of your transactional applications, RAID 10 is still king, especially with the Intel X25-E.
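The "RAID 10 is still king" observation for transactional loads follows from the classic write-penalty arithmetic: each front-end random write costs two back-end I/Os in RAID 10 (data plus mirror) but four in RAID 5 (read data, read parity, write data, write parity). Those penalty factors are textbook values; the per-drive IOPS figure below is an assumption for illustration:

```python
# Sketch of the RAID write-penalty arithmetic. Penalty factors are the
# standard textbook values; per-drive IOPS is an illustrative assumption.

WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4}

def random_write_iops(drives: int, per_drive_iops: int, level: str) -> float:
    """Front-end random-write IOPS after applying the RAID write penalty."""
    return drives * per_drive_iops / WRITE_PENALTY[level]

for level in ("RAID 10", "RAID 5"):
    print(f"{level}: {random_write_iops(8, 3500, level):.0f} front-end write IOPS")
```

With the same eight drives, RAID 10 delivers twice the random-write throughput of RAID 5 under this simple model, which matches the qualitative result above.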


Comments

  • marraco - Wednesday, March 25, 2009 - link

    The comparison is not fair, but it can be made fairer:

    If the RAID of SATA/SAS disks is restricted to the same storage capacity as the SSD, limiting the partition to the fastest external tracks/cylinders, the latency is significantly reduced and the average read/write speed is significantly increased, so

    PLEASE, PLEASE, PLEASE

    repeat the benchmarks, but with short stroking for the magnetic disks.
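The short-stroking effect requested above can be sketched with a toy geometric model: if per-track capacity grows roughly linearly with radius, the outermost fraction of capacity occupies an even smaller band of radius, so the actuator sweeps a much shorter arc. The radii and the linear track-capacity assumption are illustrative, not drive-specific:

```python
# Toy model of short stroking: confine a partition to the outermost tracks.
# Assumes per-track capacity is proportional to radius, so capacity is
# proportional to platter area. All figures are illustrative assumptions.

import math

R_INNER, R_OUTER = 0.4, 1.0   # normalized recording radii (assumed)

def band_inner_radius(capacity_fraction: float) -> float:
    """Inner radius of the outermost band holding `capacity_fraction`
    of total capacity, under the capacity-proportional-to-area model."""
    total = R_OUTER**2 - R_INNER**2
    return math.sqrt(R_OUTER**2 - capacity_fraction * total)

for frac in (0.1, 0.25, 0.5):
    r = band_inner_radius(frac)
    print(f"outer {frac:.0%} of capacity spans radii {r:.3f}..1.0 "
          f"({(R_OUTER - r) / (R_OUTER - R_INNER):.0%} of the full seek range)")
```

Under these assumptions, using only the outer 10% of capacity confines seeks to roughly 7% of the actuator's full travel, which is why short-stroked latencies drop so sharply.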
  • JohanAnandtech - Friday, March 27, 2009 - link

    May I ask what the difference is with the fact that we created a relatively small partition across our RAID 5 raidset? Also, you can imagine that our 23 GB database was on the outer tracks of the disks. I have to verify, but that seems logical.

    This kind of testing should give the same effect as short stroking. I personally think short stroking cannot be good for your actuator, while a small partition should be no problem.
  • marraco - Friday, March 27, 2009 - link

    See this link.
    http://www.tomshardware.com/reviews/short-stroking...

    Clearly, your results are orders of magnitude apart from those shown in that benchmark.

    As I understand it, short stroking increases actuator health, because it reduces the physical acceleration of the actuator.

    All that is necessary is to use a small partition on the fastest external tracks.

    You utilized a RAID 0 of 16 disks, with less than 1000 MB/s.

    On Tom's Hardware, a RAID of only 4 disks achieved an average (not maximum) of 1400 to 1600 MB/s. (Of course, the tests are not the same; for that reason, I ask for a new test.)

    About the RAID 5: I would love to see RAID 0.

    I am interested in comparing a fast SSD such as Intel's (or OCZ Vertex/Summit) with what can be achieved at the same cost with magnetic media, if the partition size is restricted to the same total capacity as the SSD.

    Anyway, thanks for the article. Good work.

    So good, I want to see more :)
  • marraco - Sunday, April 5, 2009 - link

    Please, tell me you are preparing such article :)
  • JohanAnandtech - Tuesday, April 7, 2009 - link

    We are investigating the issue. I would like to have some second opinions before I start heavy benchmarking based on a THG article. They tend to be sensational...
  • araczynski - Wednesday, March 25, 2009 - link

    wow, color me impressed. all the more reason to upgrade everything to gigabit and fiber.
  • BailoutBenny - Tuesday, March 24, 2009 - link

    Can we get any updates on the future of chalcogenide glass (phase change) based drive technologies? IBM's Millipede and other MEMS probe storage devices? Any word about Intel and STMicroelectronics' shipments of PRAM samples to customers that happened last year? What do the rumor mills say? Are these technologies proving viable? It is difficult to formulate a coherent picture for these technologies without being an industry insider.
  • Black Jacque - Tuesday, March 24, 2009 - link

    RAID 5 in Action

    ... However, it is rarely if ever used for any serious application.

    You are obviously not a SAN admin, nor do you know much about enterprise-level storage.

    RAID 5 is the mainstay of block-level storage systems by companies like EMC.

    In addition, the article mentions STEC EFDs used by EMC. On the EMC CLARiiON line, those EFDs are provisioned in RAID 5 groups.


  • spikespiegal - Wednesday, March 25, 2009 - link

    [quote]RAID 5 is the mainstay of block-level storage systems by companies like EMC. [/quote]

    Which thus explains why in this day and age I see so many SANs blowing entire volumes and costing days of restoration when the room temperature gets a few degrees above ambient.

    Corrupted RAID 5 arrays have cost me more lost enterprise data than all the non-RAID client-side disks I've ever replaced: iSeries, all brands of x86, etc. EMC has a great script to account for this, in which they always blame the drives first; only when cornered by an enraged CIO will they admit it's their controllers. Been there... done that... for over a decade in many different industries.

    If you haven't been burned by RAID 5, or dare claim a drive controller in RAID 5 mode has a better MTBF than the drives it's hosting, then it's time to quit your day job at the call center in India. RAID 5 saves you the cost of one drive in every four, which was logical in 1998 but not today. At least span across multiple redundant controllers in RAID 10 or something...
  • JohanAnandtech - Tuesday, March 24, 2009 - link

    I fear you misread that sentence:

    "RAID 0 is good way to see how adding more disks scales up your writing and reading performance. However, it is rarely if ever used for any serious application."

    So we are talking about RAID-0 not RAID-5.
    http://it.anandtech.com/IT/showdoc.aspx?i=3532&...
