Intel Launches Optane Memory M.2 Cache SSDs For Consumer Market
by Billy Tallis on March 27, 2017 12:00 PM EST
Posted in: SSD Caching, 3D XPoint, Optane Memory
Last week, Intel officially launched their first Optane product, the SSD DC P4800X enterprise drive. This week, 3D XPoint memory comes to the client and consumer market in the form of the Intel Optane Memory product, a low-capacity M.2 NVMe SSD intended for use as a cache drive for systems using a mechanical hard drive for primary storage.
The Intel Optane Memory SSD uses one or two single-die packages of 3D XPoint non-volatile memory to provide capacities of 16GB or 32GB. The controller gets away with a much smaller package than most SSDs (especially PCIe SSDs) since it only supports two PCIe 3.0 lanes and does not have an external DRAM interface. Because the drive uses only two PCIe lanes, it is keyed to fit M.2 type B and M slots. This keying is usually used for M.2 SATA SSDs, while M.2 PCIe SSDs typically use only the M key position to support four PCIe lanes. The Optane Memory SSD will not function in an M.2 slot that provides only SATA connectivity. Contrary to some early leaks, the Optane Memory SSD uses the M.2 2280 card size instead of one of the shorter lengths. This makes for one of the least-crowded M.2 PCBs on the market, even with all of the components on the top side.
The very low capacity of the Optane Memory drives limits their usability as traditional SSDs. Intel intends for the drive to be used with the caching capabilities of their Rapid Storage Technology drivers. Intel first introduced SSD caching with their Smart Response Technology in 2011. The basics of Optane Memory caching are mostly the same, but under the hood Intel has tweaked the caching algorithms to better suit 3D XPoint memory's performance and flexibility advantages over flash memory. Optane Memory caching is currently only supported on Windows 10 64-bit, and only for the boot volume. Booting from a cached volume requires that the chipset's storage controller be in RAID mode rather than AHCI mode. In RAID mode the cache drive is not exposed as a standard NVMe drive; instead it is remapped so that it is accessible only to Intel's drivers through the storage controller. This NVMe remapping feature was first added to the Skylake-generation 100-series chipsets, but boot firmware support will only be found on Kaby Lake-generation 200-series motherboards, and Intel's drivers are expected to permit Optane Memory caching only with Kaby Lake processors.
| Intel Optane Memory Specifications | |
| --- | --- |
| Capacity | 16 GB or 32 GB |
| Form Factor | M.2 2280, single-sided |
| Interface | PCIe 3.0 x2, NVMe |
| Memory | 128Gb 20nm Intel 3D XPoint |
| Typical Read Latency | 6 µs |
| Typical Write Latency | 16 µs |
| Random Read (4 KB, QD4) | 300k IOPS |
| Random Write (4 KB, QD4) | 70k IOPS |
| Sequential Read (QD4) | 1200 MB/s |
| Sequential Write (QD4) | 280 MB/s |
| Power Consumption | 3.5 W (active), 0.9-1.2 W (idle) |
| Release Date | April 24 |
Intel has published some specifications for the Optane Memory drive's performance on its own. The performance specifications are the same for both capacities, suggesting that the controller has only a single-channel interface to the 3D XPoint memory. The read performance is extremely good given that the controller has only one or two memory devices to work with, but write throughput is quite limited. Read and write latency are very good thanks to the inherent performance advantage of 3D XPoint memory over flash. Endurance is rated at just 100GB of writes per day for both the 16GB and 32GB models. While this corresponds to 3-6 drive writes per day (DWPD) and is far higher than consumer-grade flash-based SSDs, 3D XPoint memory was supposed to have vastly higher write endurance than flash, and neither of the Optane products announced so far is specified for game-changing endurance. Power consumption is rated at 3.5W during active use, so heat shouldn't be a problem, but the idle power of 0.9-1.2W is a bit high for laptop use, especially given that there will also be a hard drive drawing power.
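The endurance figures can be cross-checked with a couple of lines of arithmetic. The 100GB/day rating and the capacities come from Intel's spec sheet; the DWPD conversion is simply the daily write allowance divided by the drive capacity:

```python
# Intel rates both capacities at 100 GB of host writes per day.
WRITES_PER_DAY_GB = 100

for capacity_gb in (16, 32):
    # Drive Writes Per Day = daily write allowance / drive capacity
    dwpd = WRITES_PER_DAY_GB / capacity_gb
    print(f"{capacity_gb} GB model: {dwpd:.2f} DWPD")
# -> 16 GB model: 6.25 DWPD
# -> 32 GB model: 3.12 DWPD
```

The 16GB model comes out at the top of the 3-6 DWPD range and the 32GB model at the bottom, which is why the spec sheet quotes a range rather than a single figure.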
Intel's vision is for Optane Memory-equipped systems to offer a compelling performance advantage over hard drive-only systems at a price well below an all-flash configuration of equal capacity. The 16GB Optane Memory drive will retail for $44 while the 32GB version will be $77. As flash memory has declined in price over the years, it has gotten much easier to purchase SSDs that are large enough for ordinary use: 256GB-class SSDs start at around the same price as the 32GB Optane Memory drive, and 512GB-class drives cost about the same as the combination of a 2TB hard drive and the 32GB Optane Memory. The Optane Memory products are squeezing into a relatively small niche: limited budgets that require a lot of storage and want the benefit of solid state performance without paying the full price of a boot SSD. Intel notes that Optane Memory caching can be used in front of hybrid drives and SATA SSDs, but the performance benefit will be smaller and these configurations are not expected to be common or cost effective.
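To put the pricing in perspective, here is a quick cost-per-gigabyte sketch. The Optane and SSD prices are the ones quoted above; the $70 figure for a bare 2TB hard drive is an assumed ballpark for illustration, not a number from Intel:

```python
# Cost per gigabyte of usable capacity for the configurations discussed.
# The $70 price for a 2 TB hard drive is an assumption; the rest are
# from the article. The Optane cache adds no user-visible capacity,
# so the combined configuration is costed against the HDD's 2 TB alone.
configs = {
    "16GB Optane Memory":          (44.0, 16),
    "32GB Optane Memory":          (77.0, 32),
    "256GB-class SATA SSD":        (80.0, 256),
    "2TB HDD + 32GB Optane cache": (70.0 + 77.0, 2000),
}

for name, (price_usd, capacity_gb) in configs.items():
    print(f"{name}: ${price_usd / capacity_gb:.3f}/GB")
```

The Optane drives cost over $2/GB as raw storage, far above flash, but the cached configuration lands well under $0.10/GB, which is the niche Intel is targeting: near-SSD responsiveness at near-HDD cost per gigabyte.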
The Optane Memory SSDs are now available for pre-order and are scheduled to ship on April 24. Pre-built systems equipped with Optane Memory should be available around the same time. Enthusiasts with large budgets will want to wait until later this year for Optane SSDs with sufficient capacity to use as primary storage. True DIMM-based 3D XPoint memory products are on the roadmap for next year.
Notmyusualid - Tuesday, March 28, 2017 - Dum-ass Joe public doesn't care for a $90 SSD, and will just see one product (laptop / desktop) with a 240/250GB hard disk, and another product with a 1/2TB hard disk.
Even when he gets it home, he feels he has made the better decision. This product helps to plug those gaps, giving him a SSD 'feel' to his system, and the 'better' amount of storage too. Every dollar counts to OEMs, and customers.
I have tried Intel's 20GB SLC caching drive in both Z68 systems and in my own laptop, in front of a 2TB slow-as-hell mechanical drive. I was very impressed indeed. Actually, that 20GB SLC drive is still functional and kicking around somewhere. I can't imagine an equivalent-aged 80GB MLC drive would still be alive under the same usage...
Lolimaster - Monday, March 27, 2017 - And it's 16GB for that $44 vs 250GB for $80-90 of virtually the same fast type of storage (for typical consumer scenarios)
dullard - Tuesday, March 28, 2017 - There aren't just two options. The best option is $44 cache AND $80 SSD. This isn't an either/or situation.
fanofanand - Monday, March 27, 2017 - I wouldn't imagine that a ton of users who have Kaby Lake but no SSD are savvy enough to install an M.2 module. The low-end computer with an owner wanting Optane is a unicorn.
Samus - Tuesday, March 28, 2017 - Leave it to Intel to pull a 10-year-old technology out of the box (ReadyBoost) and rebadge it as a new, slightly faster (and in the case of write caching, slower) product, and charge a shitload for it.
"Intel notes that Optane Memory caching can be used in front of hybrid drives and SATA SSDs, but the performance benefit will be smaller and these configurations are not expected to be common or cost effective."
Yeah, that's because you neutered it with a slow ass PCIe interface.
ReadyBoost was a failure; RST is inherently complex and, overall, sucks (Apple's Fusion Drive, based on the same concept, is substantially better, mostly because of its larger 128GB caching SSD); and who are they kidding, SRT wasn't much of an improvement over ReadyBoost.
Optane is fascinating tech, but where is it? What the hell good is it if they can't scale it up to usable sizes? The purpose of NV memory is to move away from mechanical storage, not supplement it. This doesn't fix all the other problems with having a hard drive, especially durability, power consumption and physical size. And it barely addresses the performance bottleneck. Even a 64GB cache on a 2TB drive is a 1:18 caching ratio. Sure, it's better than an SSHD, but it's also more expensive, more complex, and less compatible.
I can't believe Intel spent time redeveloping such a stupid fucking concept here.
CaedenV - Tuesday, March 28, 2017 - The point of Optane wasn't to cache the HDD/SSD, it was to replace RAM for instant-off/on and the ability to remove the need for traditional sleep states. But it didn't work, so they are marketing it as a HDD/SSD cache, and it will fail just as hard as the last 2 times it was attempted.
garygech - Tuesday, March 28, 2017 - If they deliver the technology on time with a matched mobile motherboard, and can lower the idle power consumption on a Cannon Lake platform with 64 GB or 128 GB soldered onto the device, they might have a matched system that is remarkably fast due to low latency. The real benefit of a matched system would be superior performance at a lower TDP, so that you can get 10 hours out of a laptop that is 1 pound at a price point of $899 for an integrated system, like the Surface Pro 5. Otherwise, what is the point? If you have plenty of power, just go with a much larger and less expensive SSD.
CaedenV - Tuesday, March 28, 2017 - The problem is that there is no benefit... at least not in this config.
The idea was that SSDs were stuck at SATA3 speeds and demand for more speed was coming hot and heavy, so Optane/3D XPoint was developed as a cache technology to bring a bit more performance to systems that were already screaming fast by replacing the need for RAM. While technically a little slower than the RAM it replaced, the Optane memory would allow for full power-off without losing stored memory. This would allow for literal instant-off, instant-on capabilities in devices, as things would not need to be spooled back into RAM. While not quite as fast as DRAM, it would be more than fast enough for most applications (as they are bottlenecked by the CPU and GPU rather than memory bandwidth), while offering up to 32GB at prices far lower than RAM. And this was all supposed to happen 2-3 years ago.
Well, it got stuck in development hell. NVMe SSDs hit the market with the M.2 interface, allowing for far higher performance SSDs. DDR4 was released, offering higher density and lower prices than DDR3 (though about the same performance). Plus I suspect (total speculation here) there were issues adapting OSs to the new RAM-less architecture, which made it a RAM augment rather than a RAM replacement.
So what are we stuck with? Essentially what we had back with the Sandy Bridge architecture a few years back. SSDs were very expensive, so Intel added the ability to use an SSD as a cache for a HDD, which could offer extreme performance gains. It too was limited to 32GB (though I believe a firmware update later allowed for a 64GB cache), required a RAID setup, and had pretty much all of the same limitations we see here... and nobody used it. By the time it was released, SSDs were tanking in price. The added speed was inconsistent, SSHDs did the same thing better, there were battery life issues, etc. etc. etc. All the same proposed benefits, just as the high-performance tech was dropping in price, all the same pitfalls, and again no manufacturer in their right mind would use this. We might *might* see this in a few low-end ultrabooks as a way to offer higher capacity while still getting the 'ultrabook' tag for having a 'SSD', but that may be about it.
That said, in the server market this is going to be a huge plus! Being able to have a 2TB cache on a large RAID array will help a lot of workloads (especially in multi-user databases and virtual environments) while being cheaper than a RAM drive and faster than a SSD. There are still uses for this tech, and it will be a big deal... just not in the consumer space, and not how it was originally promised a few years ago when it was still being developed.
repoman27 - Monday, March 27, 2017 - Erm, so the QD4 4K throughput is the same as the sequential then? That's odd.
The power consumption is ridiculous.
Anyone know what the outlook is for die stacking with 3D-XPoint?
Billy Tallis - Monday, March 27, 2017 - Having similar sequential and random throughput is unsurprising for 3D XPoint since it doesn't have large page sizes and massive erase block sizes to contend with.
Wire-bonded die stacking for 3D XPoint is no more complicated than for NAND; Intel's doing it on the enterprise SSDs where they don't have an excess of PCB space. Vertical scaling of the layer count within a single die might be easier than increasing the layer count for 3D NAND, but it's too soon to say for sure.