Our first look at the Intel Optane SSD 900p covered only the smaller 280GB capacity. We have now added the 480GB model to our collection and have started analyzing the power consumption of the fastest SSDs on the market.

This second look at the Optane SSD 900p doesn't change the overall picture much. As we speculated in our initial review, the design of the Optane SSD and its 3D XPoint memory means that performance does not scale with capacity the way it does for most flash-based SSD designs. The Optane SSD 900p uses a controller with seven channels for communicating with the 3D XPoint memory, and the difference between the 280GB and 480GB models comes down to the number of 3D XPoint dies per channel: three versus five. Of the 28 memory package locations on the PCB, the 280GB model populates 21 with single-die packages. The 480GB model uses all 28 spots, and half of the packages on the front are dual-die packages.

A single NAND flash die isn't enough to keep one of the controller's channels busy, because flash takes many microseconds to complete a read or write command, and even longer for erase commands. By contrast, 3D XPoint memory is fast enough that there is little to no performance to be gained from overlapping commands to multiple dies on a single channel. Increasing the number of dies per channel on an Optane SSD affects capacity and power consumption but not performance.
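
To put numbers on that, here is a minimal sketch (ours, not Intel's) of the raw-capacity arithmetic implied above, assuming a 128Gb (16GB) 3D XPoint die, seven controller channels, and the three-versus-five dies-per-channel split described above; the exact spare-area accounting (and the decimal-versus-binary gigabyte distinction) is glossed over.

```python
# Rough raw-capacity arithmetic for the Optane SSD 900p, using the die counts
# described in the text. Figures are approximate: the 16 GB/die value comes
# from the 128Gb die size, and binary-vs-decimal gigabytes are ignored.
GB_PER_DIE = 128 / 8   # 128Gb 3D XPoint die ~= 16 GB
CHANNELS = 7           # controller channels

models = {
    "280GB": {"dies_per_channel": 3, "usable_gb": 280},  # 21 single-die packages
    "480GB": {"dies_per_channel": 5, "usable_gb": 480},  # 28 packages, 7 of them dual-die
}

for name, m in models.items():
    dies = m["dies_per_channel"] * CHANNELS
    raw_gb = dies * GB_PER_DIE
    spare_gb = raw_gb - m["usable_gb"]
    print(f"{name}: {dies} dies, ~{raw_gb:.0f} GB raw, "
          f"~{spare_gb:.0f} GB ({spare_gb / raw_gb:.0%}) held back as spare area")
```

Both models end up reserving a broadly similar share of their raw capacity, consistent with their identical performance and endurance ratings.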

Intel recently let slip the existence of 960GB and 1.5TB versions of the 900p, through the disclosure of a product change notification about tweaks to the product labeling. The specifications for the larger capacities have not been confirmed but likely match the smaller models in every respect except power consumption. Since Intel has not officially announced the higher capacities yet, no MSRP or release date is available.

Intel Optane SSD 900P Specifications
Capacity: 280 GB | 480 GB | 960 GB | 1.5 TB
Controller: Intel SLL3D (all capacities)
Memory: Intel 128Gb 3D XPoint (all capacities)
Interface: PCIe 3.0 x4 (all capacities)
Form Factor: HHHL add-in card or 2.5" 15mm U.2 (280 GB, 480 GB); HHHL add-in card, U.2 unknown (960 GB, 1.5 TB)
Sequential Read: 2500 MB/s (280 GB, 480 GB); TBD (960 GB, 1.5 TB)
Sequential Write: 2000 MB/s (280 GB, 480 GB); TBD (960 GB, 1.5 TB)
Random Read IOPS: 550k (280 GB, 480 GB); TBD (960 GB, 1.5 TB)
Random Write IOPS: 500k (280 GB, 480 GB); TBD (960 GB, 1.5 TB)
Power Consumption: 8 W read, 13 W write, 14 W burst, 5 W idle (280 GB, 480 GB); <20.5 W (960 GB); <24 W (1.5 TB)
Write Endurance: 10 DWPD (280 GB, 480 GB); TBD (960 GB, 1.5 TB)
Warranty: 5 years (280 GB, 480 GB); TBD (960 GB, 1.5 TB)
Recommended Price: $389 ($1.39/GB) | $599 ($1.25/GB) | TBD | TBD

Our SSD power measurement equipment burned out right before our first Optane SSD arrived. As of this week, we have newer and much better power measurement equipment on hand: a Quarch XLC Programmable Power Module. We'll explore its capabilities more in a future article. For now, we're filling in the missing power measurements from the past several reviews. Both of the Optane SSD 900p models have been re-tested with the Quarch power module on the entire test suite except for The Destroyer (so far). We haven't yet thoroughly validated the new power measurements against the results from our old meter so there may be some discrepancies, but the Optane SSDs draw so much power that any minor differences won't matter to this review. Everything that was tested with the old meter will eventually be re-tested on the Quarch power module, but we don't expect significant changes except to idle power measurements (where the Quarch power module should offer higher resolution).
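
For context on how a captured trace becomes the numbers we report, the sketch below (a simplified illustration, not the Quarch capture software; the sample data is made up) reduces a list of timestamped power samples to total energy and average power for a test run.

```python
# Minimal sketch: reduce a power trace of (timestamp in seconds, power in watts)
# samples to total energy and average power. The samples here are illustrative
# only; a real trace comes from the power module's capture software.
samples = [
    (0.000, 5.1), (0.001, 8.2), (0.002, 13.4),
    (0.003, 12.9), (0.004, 7.8), (0.005, 5.0),
]

def trace_stats(trace):
    energy_j = 0.0
    for (t0, p0), (t1, p1) in zip(trace, trace[1:]):
        # Trapezoidal integration of power over time yields energy in joules.
        energy_j += 0.5 * (p0 + p1) * (t1 - t0)
    duration_s = trace[-1][0] - trace[0][0]
    return energy_j, energy_j / duration_s

energy, avg_power = trace_stats(samples)
print(f"~{energy:.4f} J over {samples[-1][0] - samples[0][0]:.3f} s, average {avg_power:.2f} W")
```

Total energy per task tends to be the more meaningful comparison when drives complete the same workload at very different speeds.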

Our first review of the Optane SSD 900p included a few puzzling results, most notably that the ATSB Heavy and Light tests showed slightly higher performance on a filled drive than on a freshly erased one. One potential factor has since come to light: after first being powered on, Intel Optane SSDs perform a background data refresh process. This refresh isn't necessary unless the SSD has been powered off for a long time, but the drive has no way of knowing how long it was without power. The documentation for the 750GB Optane SSD DC P4800X states that this process can take up to three hours. We have not observed any clear transition in idle power during the first few hours after power-on, but there are occasional short periods where idle power drops by around 350-400 mW from a baseline of roughly 3.5 W.
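
One simple way to flag those dips in an idle trace is to scan for samples that fall a fixed margin below the baseline. The sketch below is illustrative only: the sample values and the 300mW margin are made up, and only the ~3.5W baseline and ~350-400mW dip size come from the observation above.

```python
# Minimal sketch: flag samples in an idle power trace that dip well below the
# baseline. Sample values and the 0.3 W margin are illustrative; the ~3.5 W
# baseline and ~0.35-0.4 W dip size mirror the behavior described above.
samples = [3.52, 3.50, 3.51, 3.14, 3.12, 3.11, 3.49, 3.50, 3.13, 3.51]

baseline = sorted(samples)[len(samples) // 2]  # median as a crude baseline
threshold = baseline - 0.3                     # flag drops of ~300 mW or more

dips = [(i, p) for i, p in enumerate(samples) if p < threshold]
print(f"baseline ~{baseline:.2f} W, {len(dips)} samples below {threshold:.2f} W: {dips}")
```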

Without a conclusive indication of whether background data refresh is happening and influencing benchmark results, we've re-tested the 280GB Optane SSD 900p for this review. Before running the synthetic benchmarks, the 900p was allowed to sit idle for at least three hours. The ATSB tests were also conducted after an extended idle period, but the test system was rebooted between ATSB tests. Even with this precaution, there is still significant variability between test runs on the Optane SSD 900p, and full-drive performance is often better than freshly-erased performance, so it appears some other factor is contributing to this behavior.

AnandTech 2017 SSD Testbed
CPU: Intel Xeon E3-1240 v5
Motherboard: ASRock Fatal1ty E3V5 Performance Gaming/OC
Chipset: Intel C232
Memory: 4x 8GB G.SKILL Ripjaws DDR4-2400 CL15
Graphics: AMD Radeon HD 5450, 1920x1200@60Hz
Software: Windows 10 x64, version 1703; Linux kernel version 4.12, fio version 2.21
Comments
  • Notmyusualid - Sunday, December 17, 2017 - link

    So, when you are at gunpoint, in a corner, you finally concede defeat?

    I think you need professional help.
  • tuxRoller - Friday, December 15, 2017 - link

    If you are staying with a single-thread submission model, Windows may well have a decent-sized advantage with both IOCP and RIO. Linux kernel AIO is just such a crapshoot that it's really only useful if you run big databases and you set it up properly.
  • IntelUser2000 - Friday, December 15, 2017 - link

    "Lower power consumption will require serious performance compromises.

    Don't hold your breath for a M.2 version of the 900p, or anything with performance close to the 900p. Future Optane products will require different controllers in order to offer significantly different performance characteristics"

    Not necessarily. Optane Memory devices show the random performance is on par with the 900P. It's the sequential throughput that limits top-end performance.

    While it's plausible that load power consumption might be impacted by performance, that's not always true for idle. The power consumption at idle can be cut significantly (to tens of mW) by using a new controller. It's reasonable to assume the 900P uses a controller derived from the 750's, which is also power hungry.
  • p1esk - Friday, December 15, 2017 - link

    Wait, I don't get it: the operation is much simpler than flash (no garbage collection, no caching, etc), so the controller should be simpler. Then why does it consume more power?
  • IntelUser2000 - Friday, December 15, 2017 - link

    You are still confusing load power consumption with idle power consumption. What you said makes sense for load, when the drive is active, not for idle.

    Optane Memory devices having 1/3rd the idle power demonstrates it's due to the controller. They likely wanted something with a short TTM, so they chose whatever controller they had and retrofitted it.
  • rahvin - Friday, December 15, 2017 - link

    Optane's very nature as a heat-based phase-change material is always going to result in higher power use than NAND, because it's always going to take more energy to heat a material up than it would to create a magnetic or electric field.
  • tuxRoller - Saturday, December 16, 2017 - link

    That same nature also means that it will require less energy per reset as the process node shrinks (roughly E ~ 1/F).
    In general, PCM is much more amenable to process scaling than NAND.
  • CheapSushi - Friday, December 15, 2017 - link

    Keep in mind that a big part of the sequential throughput limit is the fact that the Optane M.2s use only two PCIe lanes. This AIC is x4, and most NAND M.2 sticks are x4 as well.
  • twotwotwo - Friday, December 15, 2017 - link

    I'm curious whether it's possible to get more IOPS doing random 512B reads, since that's the sector size this advertises.

    When the description of the memory tech itself came out, bit addressability--not having to read any minimum block size--was a selling point. But it may be that the controller isn't actually capable of reading any more 512B blocks/s than 4KB ones, even if the memory and the bus could handle it.

    I don't think any additional IOPS you get from smaller reads would help most existing apps, but if you were, say, writing a database you wanted to run well on this stuff, it'd be interesting to know that small reads help.
  • tuxRoller - Friday, December 15, 2017 - link

    Those latencies seem pretty high. Was this with Linux or Windows? The table on page one indicates both were used.
    Can you run a few of these tests against a loop-mounted RAM block device? I'm curious to see what the min, average, and standard deviation of latency look like when the block layer is involved.
