I have to admit that Intel's Z68 launch was somewhat anti-climactic for me. It was the chipset we all wanted when Sandy Bridge first arrived, but now four months after Sandy Bridge showed up there isn't all that much to be excited about - save for one feature of course: Smart Response Technology (aka SSD caching). The premise is borrowed from how SSDs are sometimes used in the enterprise space: put a small, fast SSD in front of a large array of storage and use it to cache both reads and writes. This is ultimately how the memory hierarchy works - hide the latency of larger, cheaper storage by caching frequently used data in much faster, but more expensive storage.
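
To make that premise concrete, here's a minimal sketch of a block-level read/write cache sitting in front of a slow drive. This is illustrative Python with a plain LRU policy and a dict standing in for the hard drive - not Intel's actual algorithm, which is far more selective about what it keeps.

```python
from collections import OrderedDict

class SSDCache:
    """Toy block cache: serve hot blocks from the 'SSD', fall back to the 'HDD'.
    Writes are absorbed by the cache and flushed lazily (write-back), roughly
    what SRT's Maximized mode does; Enhanced mode would write through instead."""

    def __init__(self, hdd, capacity_blocks):
        self.hdd = hdd                      # dict-like backing store (the hard drive)
        self.capacity = capacity_blocks     # how many blocks fit on the cache SSD
        self.cache = OrderedDict()          # block_id -> (data, dirty), in LRU order

    def read(self, block_id):
        if block_id in self.cache:                      # cache hit: fast SSD path
            self.cache.move_to_end(block_id)
            return self.cache[block_id][0]
        data = self.hdd[block_id]                       # cache miss: slow HDD read...
        self._insert(block_id, data, dirty=False)       # ...then keep a copy for next time
        return data

    def write(self, block_id, data):
        self._insert(block_id, data, dirty=True)        # absorb the write at SSD speed

    def _insert(self, block_id, data, dirty):
        self.cache[block_id] = (data, dirty)
        self.cache.move_to_end(block_id)
        while len(self.cache) > self.capacity:          # out of room: evict LRU blocks
            old_id, (old_data, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:
                self.hdd[old_id] = old_data             # flush dirty data before dropping it
```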

I believe there's a real future for SSD caching; however, the technology needs to go mainstream. It needs to be available on all chipsets, something we won't see until next year with Ivy Bridge. Even then, there's another hurdle: the price of the SSD cache.

Alongside Z68 Intel introduced the SSD 311, codenamed Larson Creek. The 20GB SSD uses 34nm SLC NAND, which prices the drive more like a 40GB MLC SSD at $110. Intel claims that by using SLC NAND it can deliver the write performance necessary to function as a good cache, and our benchmarks showed just that. The 20GB SSD 311 performed a lot like a 160GB Intel X25-M despite having half the NAND channels, thanks to SLC NAND's faster write speed and some firmware tweaks. In fact, the only two complaints I had about the 311 were its limited capacity and price.
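
The channel math is worth spelling out. The figures below are purely illustrative (assumed page size and program latencies, ignoring die interleaving and controller overhead - not Intel's specs), but they show how halving the channel count needn't halve write throughput if each SLC program operation completes much faster than an MLC one:

```python
# Illustrative only: assumed NAND page size and program latencies, not Intel's specs.
PAGE_KB = 4
MLC_PROGRAM_US = 900     # assumed ballpark MLC page program time
SLC_PROGRAM_US = 250     # assumed ballpark SLC page program time

def raw_write_mbps(channels, program_us):
    # Every channel programming one page at a time, in parallel
    return channels * PAGE_KB / 1024 / (program_us / 1e6)

print(raw_write_mbps(10, MLC_PROGRAM_US))  # 10-channel MLC (X25-M-like): ~43 MB/s
print(raw_write_mbps(5, SLC_PROGRAM_US))   # 5-channel SLC (SSD 311-like): ~78 MB/s
```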

The capacity proved to be a problem: I found that after almost a dozen different application launches it wasn't too hard to evict useful data from the cache. The price is also a problem, because for $100 more you can pick up a 120GB Vertex 2, manage your data manually, and get much better performance overall.
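
To put that capacity complaint in perspective, here's a trivial simulation. The per-application working-set size is a made-up figure and the policy is plain LRU rather than Intel's heuristics, but it shows how quickly a 20GB cache runs out of room as you launch more applications:

```python
# Hypothetical: ~2GB of cached data per application launch, plain LRU eviction.
CACHE_GB = 20
APPS = [f"app{i}" for i in range(1, 13)]          # a dozen application launches
WORKING_SET_GB = 2.0

cached = []                                       # oldest entries at the front
for app in APPS:
    cached.append((app, WORKING_SET_GB))
    while sum(size for _, size in cached) > CACHE_GB:
        victim, _ = cached.pop(0)                 # earliest application's data falls out
        print(f"launching {app} evicted {victim}'s data")
```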

Yesterday a friend pointed me at a now-defunct deal at Newegg. For $85 Newegg would sell you a 40GB SF-1200 based Corsair Force SSD. That particular deal has since expired and all that remains is the drive for $110, but it made me wonder - how well would a small SandForce drive do as an SSD cache? There's only one way to find out.

The Test

CPU: Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO
Motherboard: Intel Z68 Motherboard
Chipset: Intel Z68
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.5
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: Intel HD Graphics 3000
Video Drivers: Intel GMA Driver for Windows 8.15.10.2372
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Comments

  • c4v3man - Friday, May 13, 2011 - link

    ...because there are still some boards out there that haven't been recalled, and the potential bad press from someone using the cache feature and losing their data due to a failed port would be very damaging to their reputation. Z68 chipsets are unaffected due to their launch date...

    Anyone else think this may be the case?
  • Comdrpopnfresh - Friday, May 13, 2011 - link

    In its current implementation, Intel's Smart Response Technology is self-defeating and a wasted allocation of SSD potential:

    -It provides no benefit to write performance, and can even bottleneck writes to the storage system
    -It turns reads into an artificial source of NAND wear; an ssd used as a cache incurs far more writes than the same ssd used for storage
    -It squanders the ssd benefits that would be realized if the ssd were used as primary storage
    -I strongly doubt that data integrity is maintained under Maximized mode in the event of a power loss
    -A lot of the processes that would see the most benefit from ssd speeds don't get it under SRT (ex: antivirus)

    To mitigate the downsides of SRT you can:
    -use a larger ssd
    -use an SLC, rather than MLC, ssd

    But those two solutions are mutually constraining, and the resulting configuration is at odds with the entire exercise of ssd-caching and the intended purpose of Intel's SRT.

    The reason these downsides hamper SRT, yet don't show up with ssd-caching in the enterprise space, is that the consumer product's constraints are paradoxical and divergent from the intentions and goals of the technology:
    1- If you use an ssd large enough to serve as primary storage as your SRT cache, every result says it's better off as storage rather than cache. No benefit. The most SRT will use for caching is 64GB - the rest of a larger ssd becomes a user-accessible partition that's just along for the ride (dragged through the mud?)
    2- SRT is intended for small ssds, but the minimum size is 18.6GB. Even with Intel's own specially purposed SSD, all signs point to a larger ssd being necessary to get the most from SRT. But the point of SRT is to spend less and get away with a smaller ssd (if you consider underutilizing ssd benefits, coughing and hacking along the way, 'getting away with it', that's your prerogative)
    3- The chosen ssd needs sequential write speeds higher than the HDD being cached, or it will bottleneck writes to storage. You can use an SLC model, but that means a smaller drive at a higher cost - both in contention with SRT's purpose
    4- To reach the speeds needed to make SRT worthwhile with MLC, you need a higher-capacity ssd, because it has more channels to move data along. But then you're not using a small drive, and you're castrating an otherwise delightful ssd better purposed as primary storage
    5- Higher write-cycle wear is mitigated by using SLC, or MLC of a large enough size for wear-leveling to retard degradation. This presents the same problems as in 3 and 4.

    In enterprise space, the scales are much larger, the budgets higher, and the implementations of hardware are performed to attain the proper ends or because it is the best option for the parameters of usage. Most likely large, expensive, SLC drives are used as cache for arrays with performance requirements necessitating their use: it all fits together like a hand in a glove.

    As a would-be alternative to primary ssd-storage, the hardware SRT allocates is either a waste of hardware or doesn't yield a comparably worthwhile solution. It's like taking that cozy enterprise glove and trying to make a mitten out of it.
  • JNo - Sunday, May 15, 2011 - link

    +1

    SSDs are much better used as the main OS drive. Unless money is no object, any money spent on a cache SSD is better spent on a bigger main SSD. A small cache SSD couldn't speed up my 2TB media drive anyway, as most films are 9GB+ and sequential, so the cache would ignore them. Game loading is the other area, but again you're better off with a larger drive and using Steam Mover to shift the games you want onto the SSD as and when you need them, rather than accepting the lower speed, integrity risks, evictions and higher wear of a cache SSD setup. For the OS drive you're much safer using Enhanced mode (less performance), as Maximized sounds too risky, and virus scans are still slow. Overall I can barely think of a situation where SRT benefits many people much in a remotely economical fashion.
  • adamantinepiggy - Monday, May 16, 2011 - link

    I'm curious how badly this caching beats up the SSD. Like Comdrpopnfresh above, I assume there's a reason Intel chose SLC NAND - presumably its much higher write-cycle endurance over MLC and its faster write speeds when completely full. A normal consumer SSD doesn't have the majority of its cells rewritten constantly, nor is it generally completely full, while an SSD used to cache a hard drive "is" going to be constantly full and have every cell rewritten.

    Example: one installs the OS, MS Office, and any other standard apps to a normal boot SSD. In regular usage, other than pagefiles and temp files, the majority of the cells retain the same data pretty much forever over the life of the SSD. With an SSD used in an HD caching capacity, I assume it's going to fill completely very quickly and then overwrite that data continuously as it caches different/new HD data. That's a lot of write/erase cycles for an SSD acting as a cache for the HD if it gets flushed often. How is a typical MLC SSD gonna handle this wear pattern?

    Now take this with a grain of salt, as I'm just conjecturing with my limited understanding of how Intel is actually caching the HD. But coupled with the question of why Intel would press a relatively "expensive" SLC SSD into service as its drive of choice for this particular usage, it leads me to believe that this type of HD caching duty is gonna beat up normal MLC SSDs, since the wear pattern is not the same as the expected use patterns designed into consumer SSD firmware.
  • gfody - Friday, May 13, 2011 - link

    wouldn't an M4 or C300 perform better as a cache, since they have much lower latency than other SSDs?
  • Action - Saturday, May 14, 2011 - link

    I would second this comment. On the surface it would seem to me that SandForce drives would be handicapped in this particular application, as the overhead and latency of compressing the data would have a negative impact. A non-SandForce drive would be the desired one to use, and the M4 or C300 would appear to be the ones to try first over the Vertex or Agility drives in this particular application.
  • sparky0002 - Friday, May 13, 2011 - link

    Give us a 312 is sort of the message here.

    Double up on the NAND chips, use all 10 channels, and as a side effect it would be 40GB.

    The current model 311 is too limiting in write speed for most enthusiasts; the only option would be to run it in Enhanced mode so that writes go direct to the platters.

    If that is the case, then it would be nice to run a system off a good fast ssd, with a massive traditional disk as storage and an ssd 311 to cache it.

    Now the question becomes: if the platter has all your games and music and movies on it, just how good is Intel's cache eviction policy? Load up a few games so they are in cache, then go listen to 25GB of music. lol. And see if the games are still in cache.
  • Casper42 - Friday, May 13, 2011 - link

    So I cracked open my wife's new SB-based HP laptop, and while there isn't a ton of room, it really makes me wonder whether laptop vendors shouldn't be including the MBA-style SSD socket inside their laptops.

    1) You could do a traditional dual-drive design and have a 128GB-256GB boot SSD in the sleeker MBA form factor, with a traditional HDD in the normal 2.5" slot for storing data.
    2) With this new feature being retroactive on a lot of existing laptops using the right chipset, and on future laptop models as well, why not offer a combo with a 64GB SSD stick pre-configured for SRT alongside the same traditional 2.5" HDD? This could be a $100-150 upgrade, and I would assume it would produce even better results when boosting the traditionally slower laptop drives.

    Especially on 15.6" models, I just can't see that they couldn't squeeze this in there somewhere.

    So perhaps as a follow-up, you could grab an average 5400 and 7200 RPM laptop drive and run through the tests again with either of the two SSDs you've tested so far, or, if there's an adapter out there, the actual MBA 64GB SSD stick drive.

    Thx
  • IlllI - Friday, May 13, 2011 - link

    how is this different from readyboost?

    how is this different than the cache that typical hard drives have had for years now?

    other than performance.. isn't it basically the same idea?

    and if so, i wonder how it seems to be much better/faster than those other two concepts
  • cbass64 - Friday, May 13, 2011 - link

    As far as I know, ReadyBoost only caches small, random reads. Any large reads are sent directly to the HDD. Writes aren't cached at all.

    Caches on HDDs are tiny...32, maybe 64MB. Just not big enough to make a real difference. If you made the cache any larger, the price of the drives would go way up. Plus they use cheap flash.
