We are running a bit late with our Flash Memory Summit coverage as I did not get back from the US until last Friday, but I still wanted to cover the most interesting tidbits of the show. ULLtraDIMM (Ultra Low Latency DIMM) was initially launched by SMART Storage a year ago, but SanDisk acquired the company shortly thereafter, making ULLtraDIMM a part of SanDisk's product portfolio.

The ULLtraDIMM was developed in partnership with Diablo Technologies, and it is an enterprise SSD that connects to the DDR3 interface instead of the traditional SATA/SAS and PCIe interfaces. IBM was the first to partner with the two to ship the ULLtraDIMM in servers, but at this year's show SanDisk announced that Supermicro will be joining as the second partner to use ULLtraDIMM SSDs. More specifically, Supermicro will ship the ULLtraDIMM in its Green SuperServer and SuperStorage platforms, with availability scheduled for Q4 this year.

SanDisk ULLtraDIMM Specifications

Capacities: 200GB & 400GB
Controller: 2x Marvell 88SS9187
NAND: SanDisk 19nm MLC
Sequential Read: 1,000MB/s
Sequential Write: 760MB/s
4KB Random Read: 150K IOPS
4KB Random Write: 65K IOPS
Read Latency: 150 µsec
Write Latency: < 5 µsec
Endurance: 10/25 DWPD (random/sequential)
Warranty: Five years

We have not covered the ULLtraDIMM before, so I figured I would provide a quick overview of the product as well. Hardware-wise, the ULLtraDIMM consists of two Marvell 88SS9187 SATA 6Gbps controllers, which are configured in an array by a custom Diablo Technologies chip that I presume is also the secret behind the DDR3 compatibility. The ULLtraDIMM supports F.R.A.M.E. (Flexible Redundant Array of Memory Elements), which utilizes parity to protect against page-, block-, and die-level failures; it is SanDisk's answer to SandForce's RAISE and Micron's RAIN. Power-loss protection is supported as well and is provided by an array of capacitors.
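For readers unfamiliar with how these parity schemes work, here is a minimal sketch of XOR-parity recovery in the spirit of F.R.A.M.E./RAISE/RAIN. The stripe layout and page sizes are hypothetical illustrations, not SanDisk's actual implementation:

```python
# Toy XOR-parity scheme: one parity page protects a stripe of data pages,
# so the contents of any single failed die/page can be rebuilt.
# Stripe width and page size here are made up for illustration.

def make_parity(stripe: list[bytes]) -> bytes:
    """XOR all data pages in a stripe into a single parity page."""
    parity = bytearray(len(stripe[0]))
    for page in stripe:
        for i, b in enumerate(page):
            parity[i] ^= b
    return bytes(parity)

def rebuild(stripe: list[bytes], parity: bytes, lost: int) -> bytes:
    """Reconstruct the page at index `lost` from the survivors plus parity."""
    rec = bytearray(parity)
    for idx, page in enumerate(stripe):
        if idx == lost:
            continue
        for i, b in enumerate(page):
            rec[i] ^= b
    return bytes(rec)

pages = [b"page-from-die-0", b"page-from-die-1", b"page-from-die-2"]
parity = make_parity(pages)
assert rebuild(pages, parity, 1) == pages[1]  # die 1 "fails", data comes back
```

Real implementations layer this on top of per-page ECC and use wider stripes, but the recover-from-the-survivors idea is the same.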

The benefit of using the DDR3 interface instead of SATA/SAS or PCIe is lower latency, because the SSDs sit closer to the CPU. The memory interface has also been designed with parallelism in mind and can thus take greater advantage of multiple drives without sacrificing performance or latency. SanDisk claims a write latency of less than five microseconds, which is lower than what even PCIe SSDs offer (e.g. the Intel SSD DC P3700 is rated at 20µs).
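To put those latency figures in perspective, a rough, idealized bound: with a single outstanding command (queue depth 1), achievable IOPS cannot exceed one operation per latency period. Real drives pipeline many commands, so this is only a back-of-the-envelope sketch using the two numbers above:

```python
# Queue-depth-1 upper bound: at most one I/O completes per latency interval.
def qd1_iops(latency_us: float) -> float:
    return 1e6 / latency_us  # operations per second

assert round(qd1_iops(5)) == 200_000   # ULLtraDIMM's claimed <5 µs write latency
assert round(qd1_iops(20)) == 50_000   # Intel P3700's rated 20 µs
```

In other words, for latency-sensitive workloads that cannot queue deeply, the 4x latency advantage translates directly into a 4x throughput advantage.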

Unfortunately, there are no third-party benchmarks for the ULLtraDIMM (update: there actually are benchmarks), so it is hard to say how it really stacks up against PCIe SSDs, but the concept is definitely intriguing. In the end, NAND flash is memory, and putting it on the DDR3 interface is logical, even though NAND is not as fast as DRAM. NVMe is designed to make PCIe more flash-friendly, but there are still some intensive workloads that should benefit from the lower latency of the DDR3 interface. Hopefully we will be able to get a review sample soon, so we can put the ULLtraDIMM through our own tests and see how it really compares with the competition.

30 Comments

  • jamescox - Monday, August 18, 2014 - link

    I don't think you have an understanding of the actual problems that virtual memory solves. The ability to swap pages out to mass storage is only one of the minor benefits of virtual memory systems compared to "real" memory systems. What you are describing with process ids is still a virtual memory system, just a very simplistic and limited form. You would still have the overhead of calculating a real address, while losing a significant number of other features. Virtual memory is not going to go away anytime soon.
  • p1esk - Tuesday, August 19, 2014 - link

    The datapath could be designed so that there is no overhead in calculating the address if the offset is derived from the process id.

    Please do explain what "the actual problems that virtual memory solves" are, other than the two I mentioned.
  • hojnikb - Monday, August 18, 2014 - link

    Would it be possible to use something like this (if fast enough and latency low enough) as an "instant-on" system?
  • Cerb - Monday, August 18, 2014 - link

    From full off? No. From a hibernate-like state? Quite likely.
  • willis936 - Tuesday, August 19, 2014 - link

    Correct me if I'm wrong, but it doesn't look bootable for consumers yet. It's certainly cool and is a good compromise between a SATA SSD and a RAM drive for hosting Minecraft (see: unreasonable amounts of random I/O reads). Still, nothing beats a RAM drive if you just have too much memory, don't care about the high risk of data loss, and are willing to go through the effort of setting up a proper backup system.
  • ats - Tuesday, August 19, 2014 - link

    So here's the problem...

    You assume virtual memory came about to page memory to disk. That's fine, but it isn't what it's been used for for quite some time. It's extremely rare, verging on non-existent, for a modern machine to be overcommitted on memory.

    The reality is that VM is used for: program correctness, isolation, security, sharing, et al.

    Any attempt to replace a modern VM system is just going to end up looking exactly like a modern VM system. There's basically only one alternative out there, and it is actually significantly more complex and prone to significantly more issues.

    And trying to replace DRAM with NAND is pretty much a losing proposition. You would be replacing something with latency on the order of nanoseconds with something on the order of microseconds. It's never going to be a good trade-off, regardless of how much "cache" a processor has.
  • ShieTar - Tuesday, August 19, 2014 - link

    NAND flash may not become feasible, but doesn't NOR flash provide both shorter latencies and generally more RAM-like programming logic? I know that NOR is used in some embedded devices to run software directly from the flash, without copying it to RAM first.
  • p1esk - Tuesday, August 19, 2014 - link

    (I'm assuming you're responding to my comment above.)

    I acknowledged that virtual memory is being used for program isolation, and I suggested a vastly simpler method to do that. Physical separation of address spaces on disk, as opposed to a hybrid logical separation of RAM/disk, avoids most of the complexity of a modern virtual memory implementation. Essentially, address translation becomes as trivial as a few extra logic gates on the program counter: no overhead, no need for complex page tables.

    I agree that at present DRAM is much faster, however, first of all, the gap is closing. There's a lot of innovation going on currently in the field of non-volatile memory, so even if not NAND/NOR, some other technology, such as RRAM, might become viable.
    Also, you're underestimating the amount of cache that can be placed on die once you start stacking layers of SRAM vertically. In fact, it might be possible in the future to eliminate DRAM entirely by simply having enough SRAM capacity (for example, 100 layers on top of the CPU).
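    (As an aside, the fixed-offset translation p1esk describes can be sketched as follows. The window size is a hypothetical choice; the point is that translation reduces to concatenating the process id with the in-process offset.)

    ```python
    # Toy fixed-partition translation: each process owns one fixed-size
    # physical window selected by its pid, so "translation" is a shift and
    # an OR rather than a page-table walk. The 4 GiB window is an assumption.

    WINDOW_BITS = 32  # each process gets a fixed 4 GiB window

    def translate(pid: int, vaddr: int) -> int:
        assert 0 <= vaddr < (1 << WINDOW_BITS), "address outside process window"
        return (pid << WINDOW_BITS) | vaddr

    # process 3, offset 0x1000 lands at 3 * 4 GiB + 0x1000
    assert translate(3, 0x1000) == (3 << 32) + 0x1000
    ```

    (The trade-off, as the replies above note, is that fixed windows give up demand paging, copy-on-write sharing, and fine-grained permissions.)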
  • nofumble62 - Friday, August 22, 2014 - link

    I think this is a stupid idea: sacrificing a DDR channel for a storage drive. A DDR channel requires over 200 pins, while four PCIe lanes only require 20-30 pins.
    A DDR channel should be used for memory to improve computing performance. There are not a lot of applications that require constantly reading and writing data to storage, except upon booting up and shutting down. So this is a very stupid idea.
  • WizardMerlin - Friday, December 12, 2014 - link

    In the limited field in which you work, that may be the case. I can think of several use cases where this is an improvement and more memory would not help.
