Conclusion

System administrators with high-end, ultra mission-critical applications will still look down their noses at the Intel X25-E Extreme SLC drive: it is not dual ported (it uses a SATA interface) and it lacks a "super capacitor" to let the controller flush its 16MB cache to the flash array in the event of a sudden power outage. For those people, EMC's Enterprise Flash Drives (EFDs) make sense, with capacities up to 400GB but prices roughly ten times as high as the Intel X25-E SLC drive.

For the rest of us, probably 90% of the market, the Intel X25-E is nothing short of amazing: it offers 3 to 13 times better OLTP performance at less than a tenth of the power consumption of classical SAS drives. Frankly, we no longer see any reason to buy SAS or FC drives for performance-critical OLTP databases unless the database sizes are truly huge. Once you find yourself using lots of spindles with most of your hard disk capacity sitting empty, Intel's SLC SSDs make a lot more sense.

However, be aware that these ultra-fast storage devices push the bottleneck higher up the storage hierarchy. The current storage processors seem to have trouble scaling well from four to eight drives. We have witnessed negative scaling only in some extreme cases, for example 100% random writes in RAID 5, and it is unlikely that you will see this kind of behavior in the real world. Still, the trend is clear: scaling will be poor if you attach 16 or more SLC SSDs to products like the Adaptec 51645 and especially the 52445. Those RAID controllers allow you to attach up to 24 drives, but the storage processor is the same as on our Adaptec 5805 (an IOP348 at 1.2GHz). We think it is best to attach no more than eight SLC drives per IOP348, especially if you plan to use the more processor-intensive RAID levels such as RAID 5 and 6. Intel and others had better come up with faster storage processors soon, because these fast SLC drives make the limits of the current generation of storage processors painfully clear.
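
Part of the reason RAID 5 and RAID 6 lean so heavily on the storage processor is the parity read-modify-write cycle: every small random write turns into two reads, an XOR recalculation, and two writes. The following is a minimal Python sketch of that per-write work; the stripe layout and function are purely illustrative and not the controller's actual firmware logic.

```python
def raid5_small_write(stripe, data_idx, parity_idx, new_data):
    """Read-modify-write cycle for one small write in a RAID 5 stripe.

    stripe: list of equally sized byte blocks, one block per drive.
    Returns the number of physical I/Os the controller has to issue.
    """
    old_data = stripe[data_idx]          # read 1: old data block
    old_parity = stripe[parity_idx]      # read 2: old parity block

    # new parity = old parity XOR old data XOR new data
    new_parity = bytes(p ^ o ^ n for p, o, n in
                       zip(old_parity, old_data, new_data))

    stripe[data_idx] = new_data          # write 1: new data block
    stripe[parity_idx] = new_parity      # write 2: new parity block
    return 4                             # two reads + two writes per logical write

# Example: a four-drive RAID 5 stripe with tiny 4-byte blocks
stripe = [bytes(4) for _ in range(4)]
ios = raid5_small_write(stripe, data_idx=0, parity_idx=3,
                        new_data=b"\x01\x02\x03\x04")
print(ios)  # 4 physical I/Os, plus the XOR work, for one logical write
```

With mechanical drives the seek times hide this overhead; with SLC SSDs the four I/Os complete so quickly that the XOR and bookkeeping on the IOP348 become the limiting factor much sooner.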

Our testing also shows that choosing the "cheaper but more SATA spindles" strategy only makes sense for applications that perform mostly sequential accesses. Once random access comes into play, you need two to three times more SATA drives - and there are limits to how far you can improve performance by adding spindles. Finally, to get the best performance out of your transactional applications, RAID 10 is still king, especially with the Intel X25-E.
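
A back-of-the-envelope calculation shows where that "two to three times more SATA drives" figure comes from on random workloads. The per-spindle IOPS numbers below are illustrative assumptions, not our measured results:

```python
# Illustrative per-spindle random IOPS figures; substitute your own measurements.
SAS_15K_IOPS = 180     # assumed for one 15,000RPM SAS spindle
SATA_7K2_IOPS = 75     # assumed for one 7,200RPM SATA spindle

def spindles_needed(target_iops, iops_per_drive):
    """Drives needed to hit a random I/O target, assuming perfectly
    linear scaling (optimistic once the controller becomes the bottleneck)."""
    return -(-target_iops // iops_per_drive)   # ceiling division

target = 4000  # example: random IOPS an OLTP workload demands
sas = spindles_needed(target, SAS_15K_IOPS)     # 23 SAS drives
sata = spindles_needed(target, SATA_7K2_IOPS)   # 54 SATA drives
print(sas, sata, round(sata / sas, 1))          # roughly 2.3x more SATA spindles
```

Because controller and bus limits keep the scaling from being perfectly linear, the real-world SATA count is worse still, which is exactly where the limits of adding spindles bite.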


Comments

  • Rasterman - Monday, March 23, 2009 - link

    since the controller is the bottleneck for SSDs and you have very fast CPUs, did you try testing a full software RAID array, just leaving the controllers out of it altogether?
  • Snarks - Sunday, March 22, 2009 - link

    reading the comments made my brain asplode D:!

    Damn it, it's way too late for this!
  • pablo906 - Saturday, March 21, 2009 - link

    I've loved the stuff you put out for a long, long time. This is another piece of quality work, and I definitely appreciate the effort you put into it. I was thinking about how I was going to build the storage back end for a small/medium virtualization platform, and this is definitely swaying some of my previous ideas. It really seems like an EMC enclosure may be in our future instead of something built by me on a 24-port Areca card.

    I don't know what all the hubbub was about at the beginning of the article, but I can tell you that I got what I needed. I'd like to see some follow-ups on server storage and definitely more RAID 6 info. Any chance you can do some serious RAID card testing? That enclosure you have is perfect for it (I've built some pretty serious storage solutions out of those and 24-port Areca cards), and I'd really like to see different cards, configurations, numbers of drives, array types, etc. tested.
  • rbarone69 - Friday, March 20, 2009 - link

    Great work on these benchmarks. I have found very few other sources that provided me with answers to my questions regarding exactly what you tested here (DETAILED ENOUGH FOR ME). This report will be referenced when we size some of our smaller (~40-50GB but heavily read) central databases we run within our enterprise.

    It saddens me to see people that simply will NEVER be happy, no matter what you publish to them for no cost to them. Fanatics have their place but generally cost organizations much more than open minded employees willing to work with what they have available.
  • JohanAnandtech - Saturday, March 21, 2009 - link

    Thanks for your post. A "thumbs up" post like yours is the fuel that Tijl and I need to keep going :-). Definitely appreciated!



  • classy - Friday, March 20, 2009 - link

    Nice work, and no question SSDs are truly great performers, but I don't see them being mainstream in the enterprise world for several more years. One, no one knows how reliable they are; they are not tried and tested. Two and three go hand in hand: capacity and cost. With the need for more and more storage, the cost of SSDs makes them somewhat of a one trick pony: a lot of speed, but cost prohibitive. Just at our company we are looking at a separate data domain just for storage. When you start talking about the need for several terabytes, SSDs just aren't going to be considered. They are the future, but until they drastically drop in cost and increase in capacity, their adoption will be minimal at best. I don't think speed trumps capacity in the enterprise world right now.
  • virtualgeek - Friday, March 27, 2009 - link

    They are well past being "untried" in the enterprise - and we are now shipping 400GB SLC drives.
  • gwolfman - Friday, March 20, 2009 - link

    [quote]Our Adaptec controller is clearly not taking full advantage of the SLC SSD's bandwidth: we only see a very small improvement going from four to eight disks. We assume that this is a SATA related issue, as eight SAS disks have no trouble reaching almost 1GB/s. This is the first sign of a RAID controller bottleneck.[/quote]
    I have an Adaptec 3805 (the previous generation to the one you used) that I used to test 4 of OCZ's first SSDs when they came out, and I noticed this same issue as well. I went through a lengthy support ticket cycle and got little help and no explanation. I was left thinking it was the firmware, as 2 SAS drives had higher throughput than the 4 SSDs.
  • supremelaw - Friday, March 20, 2009 - link

    For the sake of scientific inquiry primarily, but not exclusively,
    another experimental "permutation" I would also like to see is
    a comparison of:

    (1) 1 x8 hardware RAID controller in a PCI-E 2.0 x16 slot

    (2) 1 x8 hardware RAID controller in a PCI-E 1.0 x16 slot

    (3) 2 x4 hardware RAID controllers in a PCI-E 2.0 x16 slot

    (4) 2 x4 hardware RAID controllers in a PCI-E 1.0 x16 slot

    (5) 2 x4 hardware RAID controllers in a PCI-E 2.0 x4 slot

    (6) 2 x4 hardware RAID controllers in a PCI-E 1.0 x4 slot

    (7) 4 x1 hardware RAID controllers in a PCI-E 2.0 x1 slot

    (8) 4 x1 hardware RAID controllers in a PCI-E 1.0 x1 slot


    * if x1 hardware RAID controllers are not available,
    then substitute x1 software RAID controllers instead,
    to complete the experimental matrix.


    If the controllers are confirmed to be the bottlenecks
    for certain benchmarks, the presence of multiple I/O
    processors -- all other things being more or less equal --
    should tell us that IOPs generally need more horsepower,
    particularly when solid-state storage is being tested.

    Another limitation to face is that x1 PCI-E RAID controllers
    may not work in multiples installed in the same motherboard
    e.g. see Highpoint's product here:

    http://www.newegg.com/Product/Product.aspx?Item=N8...


    Now, add different motherboards to the experimental matrix
    above, because different chipsets are known to allocate
    fewer PCI-E lanes even though slots have mechanically more lanes
    e.g. only x4 lanes actually assigned to an x16 PCI-E slot.


    MRFS


  • supremelaw - Friday, March 20, 2009 - link

    More complete experimental matrix (see shorter matrix above):

    (1) 1 x8 hardware RAID controller in a PCI-E 2.0 x16 slot

    (2) 1 x8 hardware RAID controller in a PCI-E 1.0 x16 slot

    (3) 2 x4 hardware RAID controllers in a PCI-E 2.0 x16 slot

    (4) 2 x4 hardware RAID controllers in a PCI-E 1.0 x16 slot

    (5) 1 x8 hardware RAID controller in a PCI-E 2.0 x8 slot

    (6) 1 x8 hardware RAID controller in a PCI-E 1.0 x8 slot

    (7) 2 x4 hardware RAID controllers in a PCI-E 2.0 x8 slot

    (8) 2 x4 hardware RAID controllers in a PCI-E 1.0 x8 slot

    (9) 2 x4 hardware RAID controllers in a PCI-E 2.0 x4 slot

    (10) 2 x4 hardware RAID controllers in a PCI-E 1.0 x4 slot

    (11) 4 x1 hardware RAID controllers in a PCI-E 2.0 x1 slot

    (12) 4 x1 hardware RAID controllers in a PCI-E 1.0 x1 slot


    MRFS
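
For the bandwidth side of these matrices, the theoretical ceilings are easy to bound from the usual per-lane figures: roughly 250MB/s per lane per direction for PCI-E 1.0 and 500MB/s for PCI-E 2.0, before protocol overhead. A minimal Python sketch, assuming the link simply trains to the narrower of the card and slot widths:

```python
# Theoretical per-direction PCI-E bandwidth per lane (MB/s), before
# encoding/protocol overhead (2.5GT/s vs. 5GT/s, both with 8b/10b).
MB_PER_LANE = {"1.0": 250, "2.0": 500}

def slot_ceiling(gen, slot_lanes, card_lanes):
    """Peak theoretical MB/s for a controller in a given slot; the link
    trains to the narrower of the slot's electrical width and the card's."""
    return min(slot_lanes, card_lanes) * MB_PER_LANE[gen]

# A few of the configurations from the matrices above
print(slot_ceiling("2.0", 16, 8))  # x8 card in a 2.0 x16 slot -> 4000 MB/s
print(slot_ceiling("1.0", 16, 8))  # x8 card in a 1.0 x16 slot -> 2000 MB/s
print(slot_ceiling("2.0", 4, 8))   # x8 card in a 2.0 x4 slot  -> 2000 MB/s
print(slot_ceiling("1.0", 1, 1))   # x1 card in a 1.0 x1 slot  ->  250 MB/s
```

If measured throughput stays well below these ceilings while IOPS stop scaling, the I/O processor rather than the PCI-E bus is the more likely culprit.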
