The Samsung 960 Pro (2TB) SSD Review
by Billy Tallis on October 18, 2016 10:00 AM EST

Random Read Performance
The random read test requests 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test spans the entire drive, which is filled before the test starts. The primary score we report is an average of performances at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
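The composite score described above is a simple average over the low queue depths. A minimal sketch of that calculation, using entirely hypothetical per-queue-depth results for illustration:

```python
# Hypothetical results (MB/s) from an 18-minute run in which the queue
# depth doubles every three minutes, covering QD1 through QD32.
results = {1: 48.9, 2: 92.1, 4: 165.0, 8: 280.3, 16: 410.7, 32: 520.2}

# The primary score averages the low queue depths (1, 2, 4) that
# dominate client workloads, rather than the peak QD32 figure.
primary_score = sum(results[qd] for qd in (1, 2, 4)) / 3
print(round(primary_score, 1))
```

The QD8-QD32 numbers still get recorded, but they do not influence the primary score.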
The Samsung 960 Pro slightly widens what was already a commanding lead in low queue depth random read performance.
The 960 Pro's power usage rises in proportion to its increased performance. Only a handful of the smallest and lowest-power SATA SSDs are more efficient, and they deliver only about half the overall performance.
While they are unmatched at lower queue depths, both the 960 Pro and the 950 Pro fall short of expectations at QD32. This hardly matters for a consumer SSD.
Random Write Performance
The random write test writes 4kB blocks and tests queue depths ranging from 1 to 32. The queue depth is doubled every three minutes, for a total test duration of 18 minutes. The test is limited to a 16GB portion of the drive, and the drive is empty save for the 16GB test file. The primary score we report is an average of performances at queue depths 1, 2 and 4, as client usage typically consists mostly of low queue depth operations.
The 960 Pro's random write performance is a big improvement over the 950 Pro, catching up with the OCZ RD400 but still well behind the Intel 750.
In addition to greatly improving random write performance over the 950 Pro, the 960 Pro greatly improves power consumption and jumps to the top of the efficiency ranking, just ahead of the Crucial MX300.
Where thermal throttling prevented the 950 Pro from improving past QD2, the 960 Pro scales up to QD4 and plateaus at that level for the second half of the test, with somewhat steadier performance than the OCZ RD400, which draws more power and thus has more thermal throttling to contend with. The Intel 750, with its massive heatsink, entirely avoids thermal throttling.
72 Comments
JoeyJoJo123 - Tuesday, October 18, 2016 - link
Not too surprised that Samsung, once again, achieves another performance crown for another halo SSD product.

Eden-K121D - Tuesday, October 18, 2016 - link
Bring on the competition

ibudic1 - Tuesday, October 18, 2016 - link
The Intel 750 is better. The only place you can tell a difference is 4K random write at QD1-4. Also, it's really bad when you don't have consistency when you need it. There's nothing worse than a hanging application; it's about consistency, not outright speed. Which reminds me: when evaluating graphics cards, a MINIMUM frame rate is WAY more important than average or maximum.
Just like in racing the slowest speed in the corner is what separates great cars from average.
Hopefully AnandTech can recognize this in future reviews.
Flying Aardvark - Wednesday, October 19, 2016 - link
Exactly. The Intel 750 is still the king for someone who seriously needs storage performance: 4K randoms and zero throttling. I'd stick with the Evo or 600P 3D TLC stuff unless I really needed the performance, and then I'd go all the way up to the real professional stuff with the 750. I need a 1TB M.2 NVMe SSD myself and am eager to see street prices on the 960 EVO 1TB and Intel 600P 1TB.
iwod - Wednesday, October 19, 2016 - link
Exactly, when the majority (90%+) of consumer usage is going to be based on QD1, giving me QD32 numbers is like a megapixel or MHz race. I used to think we had reached the limit of random read/write performance. It turns out we haven't actually improved QD1 random read/write much, hence it is likely still the bottleneck. And yes, we need consistency in the QD1 random speed test as well.
dsumanik - Wednesday, October 19, 2016 - link
Nice to see there are still some folks out there who aren't duped by marketing; random write and full-capacity consistency are the only two things I look at. When moving large video files around, sequential speeds can help, but the difference between 500 and 1000 MB/s isn't much: you start the copy and then go do something else. In many cases random write is the bottleneck for the times you are waiting on the computer to "do something", and it dictates whether the computer feels "snappy". Likewise, performance loss when a drive is getting full also makes you notice things slowing down. Samsung, if you are reading this, go balls-out on random write performance in the next generation, tyvm.
Samus - Wednesday, October 19, 2016 - link
You can't put an Intel 750 in a laptop though, and it also caps at 1.2TB. But your point is correct, it is a performance monster.

edward1987 - Friday, October 28, 2016 - link
Intel SSD 750 SSDPEDMW400G4X1 (PCI-Express v3 x4, HHHL) vs Samsung SSD 960 PRO MZ-V6P512BW (M.2 2280, NVMe)
IOPS: 230-430K vs 330K
Read speed (max): 2200 MB/s vs 3500 MB/s
Much better in comparison http://www.span.com/compare/SSDPEDMW400G4X1-vs-MZ-...
shodanshok - Tuesday, October 18, 2016 - link
Let me issue a BIG WARNING against disabling write-buffer flushing. Any drive without special provisions for power loss (e.g. a supercapacitor) can lose a lot of data in the event of an unexpected power loss. In the worst scenario, the entire filesystem can be lost. What do the two Windows settings do? In short:
1) "Enable write cache on the device" enables the controller's private DRAM writeback cache, and it is *required* for good performance on SSDs. The reason is exactly the one cited in the article: for good performance, flash memory requires batched writes. For example, with the DRAM cache disabled I recorded a write speed of 5 MB/s on an otherwise fast Crucial M550 256 GB. With the DRAM cache enabled, the very same disk almost saturated the SATA link (> 400 MB/s).
However, a writeback cache implies some risk of data loss. For that reason the IDE/SATA standard has special commands to force a full cache flush when the OS needs to be sure about data persistence. This brings us to the second option...
2) "Turn off write-cache buffer flushing on the device": this option should absolutely NOT be enabled on consumer, non-power-protected disks. With this option enabled, Windows will *not* force a full cache flush even for critical tasks (e.g. updates of NTFS metadata). This can have catastrophic consequences if power is lost at the wrong moment. I am not speaking about "simple", limited data loss, but entire-filesystem corruption. The key reason for such catastrophic behavior is that cache-flush commands are used not only to persist critical data, but also to properly order their writeout. In other words, with cache flushing disabled, key filesystem metadata can be written out of order. If power is lost during an incomplete, badly-reordered metadata write, all sorts of problems can happen.
This option exists for one, and only one, case: when your system has power-loss-protected arrays/drives, you trust your battery/capacitor, AND your RAID card/drive behaves poorly when flushing is enabled. However, basically all modern RAID controllers automatically ignore cache flushes while the battery/capacitor is healthy, negating the need to disable cache flushes on the software side.
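The cache-flush request described above is what applications rely on for durability. On POSIX systems the analogue of Windows' FlushFileBuffers is fsync(); a minimal sketch (the function name `durable_write` is my own, for illustration):

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and force it to stable storage before returning."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        # fsync() flushes the OS buffers and asks the drive to flush its
        # own write cache -- the very request that "turn off write-cache
        # buffer flushing" tells Windows to skip.
        os.fsync(fd)
    finally:
        os.close(fd)
```

With flushing disabled at the device level, the fsync() above still returns successfully, but the data may sit only in the drive's DRAM, which is exactly why an ill-timed power loss can corrupt metadata.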
In short, if such a device (the 960 Pro) really needs cache flushing disabled to shine, that is a serious product/firmware flaw which needs to be corrected as soon as possible.
Br3ach - Tuesday, October 18, 2016 - link
Is power loss a problem for M.2 drives though? E.g. my PSU's (Corsair AX1200i) capacitors keep the MB alive for probably 1 minute following power loss - plenty of time for the drive to flush any caches, no?