Update: Random write performance of the drive we reviewed may change with future firmware updates.

Corsair’s late entry into the SSD market meant that it missed the JMicron mess of the early days, but it also meant that Corsair missed out on much of the early Indilinx wave. Not wanting to be late again, Corsair took the same risk as many other SSD makers and got in bed with a little company called SandForce.

Widely believed to be the creator of the technology behind Seagate’s first SSD, SandForce has been popping up all over the place lately. We first encountered the company late last year with a preview of OCZ’s Vertex 2 Pro. SandForce's technology seemed promising.

The problem of maintaining SSD performance is a lot like keeping a room tidy and clean. If you make sure to put things in the right place the first time and don’t let dirt accumulate, you’ll end up with an organized, pristine looking room. However if you just throw your stuff around and let stains go untouched, you’ll spend a lot more time looking for things and probably end up ruining the place.

The same holds true for SSDs. If the controller doesn’t place data properly, it’ll take longer to place new data. And if the controller doesn’t wear level properly, you’ll end up reducing the life of the drive.

I’ve explained the how behind all of this countless times before, so I’ll spare you the details here. Needless to say, it’s a juggling act. One that requires a fast enough controller, a large amount of fast storage (whether it is on-die cache or off-chip DRAM) and a good algorithm for managing all the data that gets thrown at it.

At a high level Crucial/Micron, Indilinx and Intel take a relatively similar approach to the problem. They do the best with the data they’re given. Some do better than others, but they ultimately take the data you write to the drive and try to make the best decisions as to where to put it.

SandForce takes a different approach. Instead of worrying about where to place a lot of data, it looks at ways to reduce the amount of data being written. Using a combination of techniques akin to lossless data compression and data deduplication, SandForce’s controllers attempt to write less to the NAND than their competitors. By writing less, the amount of management and juggling you have to do goes down tremendously. SandForce calls this its DuraWrite technology.

DuraWrite isn’t perfect. If you write a lot of highly compressed or purely random data, the algorithms won’t be able to do much to reduce the amount of data you write. For most desktop workloads, however, this shouldn’t be a problem.
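SandForce hasn’t disclosed how DuraWrite actually works, but the general idea of shrinking host writes before they reach NAND is easy to sketch. The toy model below is my own illustration, not SandForce’s algorithm: it deduplicates identical 4KB blocks and compresses whatever is left, and the class name and block size are arbitrary assumptions.

```python
import hashlib
import os
import zlib

class ToyWriteReducer:
    """Toy model of write reduction: dedupe identical 4KB blocks, then compress the rest."""
    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.seen = set()        # fingerprints of blocks already "written"
        self.logical_bytes = 0   # what the host asked us to write
        self.physical_bytes = 0  # what would actually hit the NAND

    def write(self, data: bytes):
        for off in range(0, len(data), self.block_size):
            block = data[off:off + self.block_size]
            self.logical_bytes += len(block)
            fingerprint = hashlib.sha256(block).digest()
            if fingerprint in self.seen:
                continue                    # duplicate block: nothing new to write
            self.seen.add(fingerprint)
            self.physical_bytes += len(zlib.compress(block))

    @property
    def reduction(self):
        return self.physical_bytes / self.logical_bytes

drive = ToyWriteReducer()
drive.write(b"A" * (1 << 20))     # repetitive, highly compressible data
print(f"physical/logical write ratio: {drive.reduction:.3f}")

drive = ToyWriteReducer()
drive.write(os.urandom(1 << 20))  # incompressible data: ratio stays near (or above) 1.0
print(f"physical/logical write ratio: {drive.reduction:.3f}")
```

Run it on repetitive data and the ratio collapses toward zero; run it on random data and it hovers around 1.0, which is exactly the weakness described above.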

Despite the obvious Achilles’ heel, SandForce’s technology was originally designed for use in the enterprise market. This lends credibility to the theory that SandForce was Seagate’s partner of choice for Pulsar. With enterprise roots, SandForce’s controllers and firmware are designed to support larger-than-normal amounts of spare area. As you may remember from our earlier articles, there’s a correlation between the amount of spare area you give a dynamic controller and overall performance. You obviously lose usable capacity, but it helps keep performance high. SandForce indicates that eventually we’ll see cheaper consumer drives with less NAND set aside as spare area, but for now a 128GB SandForce drive only gives you around 93GB of actual storage space.
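The roughly 93GB figure is a combination of that large spare area and the usual decimal-vs-binary gigabyte mismatch. Here’s my own back-of-the-envelope arithmetic (neither Corsair nor SandForce publishes it this way):

```python
GIB = 2**30   # binary gigabyte: how NAND capacity and OS-reported sizes are counted
GB = 10**9    # decimal gigabyte: how the "100GB" on the box is counted

raw_nand = 128 * GIB       # total NAND on the drive
user_capacity = 100 * GB   # advertised capacity

print(f"user capacity as the OS reports it: {user_capacity / GIB:.1f} GiB")    # ~93.1
print(f"spare area: {(raw_nand - user_capacity) / raw_nand:.1%} of raw NAND")  # ~27%
```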

Introducing the SF-1200

The long-winded recap brings us to our new friend. The Vertex 2 Pro I previewed last year used a full-fledged SF-1500 implementation, complete with a ridiculously expensive supercapacitor on board. SandForce indicated that the SF-1200 would be more reasonably priced, at the expense of some performance. In between the two sat OCZ’s Vertex Limited Edition: OCZ scored a limited supply of early controllers that didn’t have the full SF-1500 feature set, but were supposedly better than the SF-1200.

Today we have Corsair’s Force drive, its new performance flagship based on the SF-1200. Here’s what SandForce lists as the differences between the SF-1500 and SF-1200:

SandForce Controller Comparison

                                              SF-1200                                                 SF-1500
Flash Memory Support                          MLC                                                     MLC or SLC
Power Consumption                             550 mW (typical)                                        950 mW (typical)
Sequential Read/Write Performance (128KB)     260 MB/s                                                260 MB/s
Random Read/Write Performance (4KB)           30K/10K IOPS                                            30K/30K IOPS
Security                                      128-bit AES Data Encryption, Optional Disk Password     128-bit AES Data Encryption, User Selectable Encryption Key
Unrecoverable Read Errors                     Less than 1 sector per 10^16 bits read                  Less than 1 sector per 10^17 bits read
MTTF                                          2,000,000 operating hours                               10,000,000 operating hours
Reliability                                   5 year customer life cycle                              5 year enterprise life cycle

The Mean Time To Failure numbers are absurd. We’re talking about the difference between roughly 228 years and over 1,100 years; I’d say any number that outlasts the potential mean time to failure of our current society is pretty worthless. Both the SF-1200 and SF-1500 are rated for 5 year useful lifespans; the difference is that SandForce says the SF-1200 can last for 5 years under a "customer" workload vs. an enterprise workload for the SF-1500. Translation? The SF-1500 can handle workloads with more random writes for longer.
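For anyone who wants to check those year figures, they’re just the rated hours divided by the hours in a year:

```python
HOURS_PER_YEAR = 24 * 365
for controller, mttf_hours in (("SF-1200", 2_000_000), ("SF-1500", 10_000_000)):
    print(f"{controller}: {mttf_hours / HOURS_PER_YEAR:,.0f} years")  # ~228 and ~1,142 years
```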

The SF-1500 also appears to be less error prone, but that’s difficult to quantify in terms of real world reliability. The chip sizes are identical, although the SF-1500 draws considerably more power. If I had to guess, I’d say the two chips are probably the same silicon, with the differences coming down mostly to firmware, binning and perhaps some internal blocks being fused off. Maintaining multiple die masks is an expensive task, not something a relative newcomer would want to do.


Note the lack of any external DRAM: writing less means tracking less, so the controller can get by without an off-chip DRAM cache.

Regardless of the differences, the SF-1200 is what Corsair settled on for the Force. Designed to be a high end consumer drive, the Force carries a high end price. Despite its 100GB capacity there’s actually 128GB of NAND on the drive; the extra is simply used as spare area for block recycling by the controller. If we look at cost per actual GB of NAND on the drive, the Force doesn’t look half bad:

Cost per Gigabyte Comparison
Drive                      NAND Capacity    User Capacity    Drive Cost    Cost per GB of NAND    Cost per Usable GB
Corsair Force              128GB            93.1GB           $410          $3.203                 $4.403
Corsair Nova               128GB            119.2GB          $369          $2.882                 $3.096
Crucial RealSSD C300       256GB            238.4GB          $680          $2.656                 $2.852
Intel X25-M G2             160GB            149.0GB          $489          $3.056                 $3.282
OCZ Vertex LE              128GB            93.1GB           $394          $3.078                 $4.232

But looking at cost per user-addressable GB isn’t quite as pretty. The Force is a full $1.12 more per GB than Intel’s X25-M G2. It’s also a bit more expensive than OCZ’s Vertex LE, although things could change once Corsair starts shipping more of these drives.
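The cost columns in the table are straight division; here’s the same math in a few lines, using the street prices quoted above (which will obviously drift over time):

```python
# Prices and capacities from the table above; street prices at the time of writing.
drives = {
    "Corsair Force":        (410, 128, 93.1),
    "Corsair Nova":         (369, 128, 119.2),
    "Crucial RealSSD C300": (680, 256, 238.4),
    "Intel X25-M G2":       (489, 160, 149.0),
    "OCZ Vertex LE":        (394, 128, 93.1),
}

for name, (price, nand_gb, user_gb) in drives.items():
    print(f"{name:<22} ${price / nand_gb:.3f}/GB of NAND   ${price / user_gb:.3f}/usable GB")
```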

Comments

  • JohnQ118 - Thursday, April 15, 2010 - link

    Just in case you are using IE8 - open the Print view, then from the View menu select Style - No Style.
    You will get some small margins. Then adjust the window size until it's comfortable for reading.
  • remosito - Wednesday, April 14, 2010 - link

    Hi there,
    thanks for the great review. I couldn't find from the article what kind of data you are writing
    for the random 4k read/write tests. Those random write numbers look stellar.

    That might have to do with the data being written not being very random at all, allowing for big gains from the SandForce voodoo/magic sauce/compression???
  • Mr Alpha - Wednesday, April 14, 2010 - link

    I believe the build of IOMeter he uses writes randomized data.
  • shawkie - Wednesday, April 14, 2010 - link

    This is a very important question - nobody is interested in how quickly they can write zeroes to their drive. If these benchmarks are really writing completely random data (which by definition cannot be compressed at all) then where does all this performance come from? It seems to me that we have a serious problem benchmarking this drive.

    If the bandwidth of the NAND were the only limiting factor (rather than the SATA interface or the processing power of the controller) then the speed of this drive should be anything from roughly the same as a similar competitor (for completely random data) to maybe 100x faster (for zeroes). So to get any kind of useful number you have to decide exactly what type of data you are going to use (which makes it all a bit subjective).

    In fact, there's another consideration. Note that the spare NAND capacity made available by the compression is not available to the user. That means the controller is probably using it to augment the reserved NAND. This means that a drive that has been "dirtied" with lots of nice compressible data will perform as though it has a massive amount of reserved NAND whereas a drive that has been "dirtied" with lots of random data will perform much worse.
  • nafhan - Wednesday, April 14, 2010 - link

    My understanding is that completely random and uncompressible are not the same thing. An uncompressible data set would need to be small and carefully constructed to avoid repetition. A random data set by definition is random, and therefore almost certain to contain repetitions over a large enough data set.
  • jagerman42 - Wednesday, April 14, 2010 - link

    No; given a random sequence of 0/1 bits with equal probability of each, the expected number of bits needed to encode the stream equals the length of the stream (i.e. on average--you could, through an extremely unlikely outcome, get a compressible random sequence: e.g. a stream of 1 million 0's is highly compressible, but also extremely unlikely, at 2^(-1,000,000) probability of occurrence).

    So onwards to the entropy bits required calculation: H = -0.5*log2(0.5) -0.5*log2(0.5) = -0.5*(-1) -0.5*(-1) = 1.

    In other words, a random, equal-probability stream of bits can't be compressed at a rate better than 1 bit per bit.

    Of course, this only holds for an infinite, continuous stream; as you shorten the length of the data, the probability of the data being compressible increases, at least slightly--but even 1KB is 8192 bits, so compressibility is *hard*.

    Just for example's sake, I generated a few (10 bytes to 10MB) random data files, and compressed using gzip and bzip2: in every case (I repeated several times) the compressed version ended up larger than the original.

    For more info on this (it's called the Shannon theory, I believe, or also "Shannon entropy" according to the following), see: http://en.wikipedia.org/wiki/Entropy_(information_...
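A minimal sketch of the experiment described above, assuming gzip as the compressor (the exact files and tools jagerman42 used aren't specified):

```python
# Random bytes should not shrink under a general-purpose compressor; in practice
# gzip output for urandom input comes out slightly larger than the input.
import gzip
import os

for size in (10, 1024, 1024 * 1024, 10 * 1024 * 1024):
    raw = os.urandom(size)
    packed = gzip.compress(raw)
    print(f"{size:>10,} random bytes -> {len(packed):>10,} bytes after gzip")
```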
  • shawkie - Wednesday, April 14, 2010 - link

    I'm also not convinced by the way Anand has arrived at a compression factor of 2:1 based on the power consumption. The specification for the controller and Anand's own measurements show that about 0.57W of power is being used just by the controller. That only leaves 0.68W for writing data to NAND. Compare that with 2.49W for the Intel drive and you end up with a compression factor of more like 4:1. But actually this calculation is still a long way out, because 2MB sequential writes run at 250MB/s on the SandForce and only 100MB/s on the Intel. So we've written 2.5x as much (uncompressed) data using 1/4 as much NAND power consumption, which puts the compression factor at more like 10:1. I think that pretty much proves we're dealing with very highly compressible data.
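Spelling that estimate out with the figures quoted in the comment above (the 0.68W value is the drive's write power minus the ~0.57W attributed to the controller):

```python
# Figures quoted in the comment above.
sandforce_nand_w = 0.68    # power left for NAND writes on the SandForce drive
intel_drive_w = 2.49       # Intel X25-M G2 power during the same sequential write test
sandforce_mb_s = 250       # 2MB sequential write throughput, SandForce
intel_mb_s = 100           # 2MB sequential write throughput, Intel

# Energy spent per megabyte of host data written:
sf_joules_per_mb = sandforce_nand_w / sandforce_mb_s
intel_joules_per_mb = intel_drive_w / intel_mb_s

print(f"implied write-reduction factor: {intel_joules_per_mb / sf_joules_per_mb:.1f}x")  # ~9x
```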
  • HammerDB - Wednesday, April 14, 2010 - link

    That should definitely be checked, as this is the first drive where different kinds of data will perform differently. Due to the extremely high aligned random write performance, I suspect that the data written is either compressible or repeated, so the drive manages to either compress or deduplicate to a large degree.

    One other point regarding the IOMeter tests: the random reads perform almost identically to the unaligned random writes. Would it be possible to test both unaligned and aligned random reads, in order to find out if the drive is also capable of faster random reads under specific circumstances?
  • Anand Lal Shimpi - Wednesday, April 14, 2010 - link

    Correct. The June 08 RC build of Iometer uses randomized data. Older versions used 0s.

    Take care,
    Anand
  • shawkie - Wednesday, April 14, 2010 - link

    Anand, do you therefore have any explanation for why the SandForce controller is apparently about 10x more efficient than the Intel one even on random (incompressible) data? Or can you see a mistake in my analysis?
