For the past six months I've been working on research and testing for the next major AnandTech SSD article. I figured I had enough time to line up its release with the first samples of the next generation of high-end SSDs. After all, it seemed like everyone was taking longer than expected to bring out their next-generation controllers. I should've known better.

At CES this year we had functional next-generation SSDs based on Marvell and SandForce controllers. The latter was actually performing pretty close to what we'd expect from final hardware. Although I was told that drives wouldn't be shipping until mid-Q2, it was clear that preview hardware was imminent. It was the timing that I couldn't predict.

A week ago, two days before I hopped on a flight to Barcelona for MWC, a package arrived at my door. OCZ had sent me a preproduction version of their first SF-2500 based SSD: the Vertex 3 Pro. The sample was so early that it didn't even have a housing; all I got was a PCB and a note.

Two days isn't a lot of time to test an SSD. It's enough to get a good idea of overall performance, but not enough to find bugs and truly investigate behavior. Thankfully the final release of the drive is still at least 1-2 months away, so this article can serve as a preview.

The Architecture

I've covered how NAND Flash works numerous times in the past, but I'll boil it all down to a few essentials.

NAND Flash is non-volatile memory: you can write to it and it'll hold its charge even if you remove power from the device. Erase the NAND too many times, though, and it will stop being able to hold a charge. There are two types of NAND that we deal with: single-level cell (SLC) and multi-level cell (MLC). Both are physically the same; you just store more bits per cell in the latter, which drives cost down along with performance and reliability. Two-bit MLC is what's currently used in consumer SSDs; the 3-bit-per-cell NAND you've seen announced is only suitable for USB sticks, SD cards and other similar media.
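As a rough illustration of why each extra bit per cell hurts (a sketch, not tied to any particular NAND part): the number of distinct charge levels a cell must reliably hold doubles with every added bit, shrinking the margin between levels.

```python
# Illustration only: charge levels a NAND cell must distinguish.
# More levels in the same cell = less margin = lower endurance and speed.
for name, bits in (("SLC", 1), ("MLC", 2), ("3-bit MLC", 3)):
    levels = 2 ** bits  # distinct charge states per cell
    print(f"{name}: {bits} bit(s)/cell -> {levels} charge levels")
```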

Writes to NAND happen at the page level (4KB or 8KB depending on the type of NAND), but you can't erase a single page; you can only erase a group of pages in a structure called a block (usually 128 or 256 pages). Each cell in NAND can only be erased a finite number of times, so you want to avoid erasing as much as possible. The way you get around this is by keeping data in NAND as long as possible, until you absolutely have to erase it to make room for new data.

SSD controllers have to balance the need to optimize performance against the need to write evenly across all NAND pages. Conventional controllers do this by keeping very large tables that track all data being written to the drive and optimize writes for performance and reliability. The controller will group small random writes together and attempt to turn them into large sequential writes that are easier to burst across all of the NAND devices. Smart controllers will even attempt to reorganize data while writing in order to keep performance high for future writes. All of this requires the controller to keep track of lots of metadata, which in turn requires large caches and external DRAM to make accessing that data quick. And all of this work is done to ensure that the controller only writes data it absolutely needs to write.
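To make the page/block asymmetry concrete, here's a minimal flash-translation sketch in Python. It illustrates the bookkeeping described above and nothing more; the class name, geometry, and allocation policy are assumptions for the example, not any vendor's actual firmware.

```python
PAGE_SIZE = 4096         # bytes per NAND page (4KB NAND assumed)
PAGES_PER_BLOCK = 128    # pages erased together as a single block

class ToyFTL:
    """Toy flash translation layer: page-level writes, block-level erases."""
    def __init__(self, num_blocks):
        self.mapping = {}  # logical page -> (block, page); real tables are huge
        self.free_pages = [(b, p) for b in range(num_blocks)
                           for p in range(PAGES_PER_BLOCK)]
        self.erase_counts = [0] * num_blocks  # wear tracking per block

    def write(self, logical_page):
        # Pages can't be rewritten in place: each write consumes a fresh
        # physical page and leaves the old copy stale until its block is
        # erased. Handing out free pages in order is what turns small
        # random writes into large sequential bursts across the NAND.
        stale = self.mapping.get(logical_page)
        self.mapping[logical_page] = self.free_pages.pop(0)
        return stale  # reclaimed only when its whole block is erased

    def erase_block(self, block):
        # Erases are block-granular and wear out the cells, so a controller
        # defers them as long as possible and spreads them evenly.
        # (Real firmware would relocate still-valid pages first.)
        self.erase_counts[block] += 1
        self.free_pages.extend((block, p) for p in range(PAGES_PER_BLOCK))
```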

SandForce's approach has the same end goal, but takes a very different path to get there. Rather than trying to figure out what to do with the influx of data, SandForce's approach simply writes less data to the NAND. Using realtime compression and data deduplication techniques, SandForce's controllers attempt to reduce the size of what the host is writing to the drive. The host still thinks all of its data is being written to the drive, but once the writes hit the controller, the controller attempts to reduce the data as much as possible.

The compression/deduplication is done in realtime and what results is potentially awesome performance. Writing less data is certainly faster than writing everything. Similar technologies are employed by enterprise SAN solutions, but SandForce's algorithms are easily applicable to the consumer world. With the exception of large, highly compressed multimedia files (think videos, MP3s), most of what you write to your HDD/SSD is pretty easily compressible.
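To see why this pays off, here's a rough sketch using zlib purely as a stand-in; SandForce's actual compression/deduplication pipeline is proprietary and undisclosed, so only the ratio being illustrated matters, not the algorithm.

```python
import os
import zlib

# zlib stands in for SandForce's (undisclosed) pipeline; the point is the
# gap between what the host writes and what could actually hit the NAND.
typical = b"user settings, logs, documents, executables... " * 100
media = os.urandom(len(typical))  # models already-compressed video/MP3 data

for label, payload in (("typical data", typical), ("compressed media", media)):
    nand_bytes = len(zlib.compress(payload))
    print(f"{label}: {len(payload)} bytes from host -> ~{nand_bytes} to NAND")
```

Run it and the text-like payload shrinks to a small fraction of its original size, while the random payload doesn't compress at all: exactly the split between everyday files and multimedia described above.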

You don't get any extra space with SandForce's approach; the drive still has to accommodate the same number of LBAs it advertises to the OS. After all, you could write purely random data to the drive, in which case it'd behave like a normal SSD without any of its superpowers.

Since the drive isn't storing your data bit for bit but rather storing hashes, it's easier for SandForce to do things like encrypt all of the writes to the NAND (which it does by default). By writing less, SandForce also avoids having to use a large external DRAM: its designs don't have any DRAM cache. SandForce also claims its write-less approach lets it use less reliable NAND; to ensure reliability, the controller actually writes some amount of redundant data. Data is written across multiple NAND die in parallel, along with additional parity data. The parity data occupies the space of a single NAND die, and as a result SandForce drives set aside more spare area than conventional controllers.
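Conceptually the redundancy works like RAID-5 striped across NAND die (SandForce brands the scheme RAISE). Below is a hedged sketch of just the XOR-parity idea; the die count is illustrative, and the real striping layout and recovery logic are considerably more involved.

```python
import os

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Data striped across seven die, with parity occupying an eighth
# (illustrative geometry; the real layout depends on the drive).
dies = [os.urandom(16) for _ in range(7)]
parity = dies[0]
for d in dies[1:]:
    parity = xor_bytes(parity, d)

# If any one die fails, XOR of the survivors plus parity rebuilds it:
lost_index = 3
rebuilt = parity
for i, d in enumerate(dies):
    if i != lost_index:
        rebuilt = xor_bytes(rebuilt, d)
assert rebuilt == dies[lost_index]
```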

What's New

Everything I've described up to this point applies to both the previous generation (SF-1200/1500) and the new generation (SF-2200/2500) of SandForce controllers. Now let's go over what's new:

1) Toggle Mode & ONFI 2 NAND support. Higher-bandwidth NAND interfaces mean we should see much better performance without any architectural changes.

2) To accommodate the higher-bandwidth NAND, SandForce increased the size of its on-chip memories and buffers and doubled the number of NAND die that can be active at one time. Finally, there's native 6Gbps SATA support to remove any interface bottlenecks. Both 1 & 2 will manifest as much higher read/write speeds.

3) Better encryption. This is more of an enterprise feature, but the SF-2000 controllers support AES-256 encryption across the drive (and double encryption, to support different encryption keys for separate address ranges on the drive).

4) Better ECC. NAND densities and defect rates are going up while program/erase cycles are going down; as a result, the SF-2000 has an improved ECC engine.

All of the other features that were present in the SF-1200/1500 are present in the SF-2000 series.

Comments

  • Out of Box Experience - Tuesday, February 22, 2011 - link

    Thanks for answering my question

    and you are right

    with over 50% of all PCs still running XP, it would indeed be stupid for the major SSD companies to overlook this important segment of the market

    with their new SSDs ready to launch for Windows 7 machines, they should be releasing plug and play replacements for all the XP machines out there any day now..................NOT!

    Are they stupid or what??

    no conspiracy here folks
    just the facts
  • Kjella - Thursday, February 24, 2011 - link

    Fact: Most computers end their life with the same hardware they started with. Only a small DIY market actually upgrades their hard disk and migrates their OS/data. So what if 50% runs XP? 49% of those won't replace their HDD with an SSD anyway. They might get a new machine with an SSD though, and almost all new machines get Windows 7 now.
  • Cow86 - Thursday, February 17, 2011 - link

    Very interesting indeed....good article too. One has to wonder though - looking at what is currently happening with 25 nm NAND in vertex 2 drives, which have lower performance and reliability than their 34 nm brethren ánd are sold at the same price without any indication - how the normal Vertex 3 will fare...Hoping they'll be as good in that regard as the original vertex 2's, and I may well indeed jump on the SSD bandwagon this year :) Been holding off for lower price (and higher performance, if I can get it without a big price hike); I want 160 GB to be able to have all my games and OS on there.
  • lecaf - Thursday, February 17, 2011 - link

    Vertex 3 with 25nm NAND will also suffer a performance loss.

    It is not the NAND itself having the issue but the number of chips. You get the same capacity with half the chips, so the controller has less opportunity to write in parallel.

    This is the same reason why, with Crucial's C300, the larger (256GB) drive is faster than the smaller (128GB) one.

    Speed will drop for the smaller drives, but if price goes down this will be counterbalanced by larger-capacity, faster drives.

    The "if" is very questionable of course considering that OCZ replaced NAND on current Vertex2 with no price cut (not even a change in part number; you just discover you get a slower drive after you mount it)
  • InsaneScientist - Thursday, February 17, 2011 - link

    Except that there are already twice as many chips as there are channels (8 channels, 16 NAND chips - see pg 3 of the article), so halving the number of chips simply brings the channel-to-chip ratio down to 1:1, which is hardly a problem.
    It's when you have unused channels that things slow down.
  • lecaf - Thursday, February 17, 2011 - link

    1:1 can be a problem... depending on which side is the bottleneck.

    If NAND speed saturates the channel bandwidth then I agree there is no issue, but if the channel has bandwidth to spare, it could use it to feed an extra NAND die and speed things up.

    But that's theory ... check benchmarks here:
    http://www.storagereview.com/ocz_vertex_2_25nm_rev...
  • Chloiber - Thursday, February 17, 2011 - link

    It's possible to use 25nm chips with the same capacity, as OCZ is trying to do right now with the 25nm replacements of the Vertex 2.
  • Nentor - Thursday, February 17, 2011 - link

    Why are they making these flash chips smaller if it causes these lower performance and reliability problems?

    What is wrong with 34nm?

    I can understand that with CPUs there are benefits like less heat, but what's the benefit for flash chips?
  • Zshazz - Thursday, February 17, 2011 - link

    It's cheaper to produce: less material used and a higher number of chips per wafer.
  • semo - Thursday, February 17, 2011 - link

    OCZ should spend less time sending out drives with no housing and work on correctly marketing and naming their 25nm Vertex 2 drives.

    http://forums.anandtech.com/showthread.php?t=21433...

    How OCZ can get away with calling a 55GB drive "60GB" and then trying to bamboozle everyone with technicalities and SandForce marketing words and abbreviations is beyond me.

    It wasn't too long ago that they were in hot water over their JMicron-based Core drives, and now they're doing this?
