The Samsung 960 Pro (2TB) SSD Review
by Billy Tallis on October 18, 2016 10:00 AM EST
AnandTech Storage Bench - The Destroyer
The Destroyer is an extremely long test replicating the access patterns of very IO-intensive desktop usage. A detailed breakdown can be found in this article. Like real-world usage and unlike our Iometer tests, the drives do get the occasional break that allows for some background garbage collection and flushing caches, but those idle times are limited to 25ms so that it doesn't take all week to run the test.
We quantify performance on this test by reporting the drive's average data throughput, a few data points about its latency, and the total energy used by the drive over the course of the test.
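As a rough illustration (this is not AnandTech's actual tooling, and the log file and field names below are hypothetical), those figures can be reduced from a per-IO results log with a short script; the reported energy figure would come from integrating a separate power log in the same way.

```python
# A minimal sketch, assuming a hypothetical per-IO log (destroyer_results.csv) with
# one row per operation: issue timestamp in seconds, bytes transferred, service time in ms.
import csv

rows = []
with open("destroyer_results.csv") as f:                   # hypothetical log file
    for r in csv.DictReader(f):
        rows.append((float(r["t_s"]), int(r["bytes"]), float(r["service_ms"])))

elapsed_s = rows[-1][0] + rows[-1][2] / 1000 - rows[0][0]  # wall-clock span of the test
svc = [s for _, _, s in rows]

print(f"average data rate : {sum(b for _, b, _ in rows) / 1e6 / elapsed_s:.1f} MB/s")
print(f"average service   : {sum(svc) / len(svc):.3f} ms")
print(f"ops over 10 ms    : {sum(s > 10 for s in svc)}")
print(f"ops over 100 ms   : {sum(s > 100 for s in svc)}")
```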
The 2TB 960 Pro sets a new record with a higher average data rate than the 950 Pro, but the improvement isn't huge, especially given the large increase in capacity over the 512GB 950 Pro.
The 2TB 960 Pro reduces the average service time by almost 30% compared to the next fastest drive. SATA SSDs can't deliver average service times this low even on the ATSB Light test.
For the first time, a drive has completed The Destroyer without any operations taking more than 100ms, though at the tighter standard of 10ms the improvement over the 950 Pro is relatively small.
Essentially no power efficiency was sacrificed for the increased performance of the 960 Pro over its predecessors, though some SATA drives are more efficient.
72 Comments
JoeyJoJo123 - Tuesday, October 18, 2016
Not too surprised that Samsung, once again, achieves another performance crown for another halo SSD product.
Eden-K121D - Tuesday, October 18, 2016
Bring on the competition.
ibudic1 - Tuesday, October 18, 2016
Intel 750 is better. The only place you can really tell the difference is 4K random write at QD1-4. Also, it's really bad when you don't have the consistency when you need it. There's nothing worse than a hanging application; it's about consistency, not outright speed. Which reminds me... when evaluating graphics cards, a MINIMUM frame rate is WAY more important than average or maximum.
Just like in racing, the slowest speed in the corner is what separates great cars from average ones.
Hopefully AnandTech can recognize this in future reviews.
Flying Aardvark - Wednesday, October 19, 2016
Exactly. The Intel 750 is still the king for someone who seriously needs storage performance: 4K randoms and zero throttling. I'd stick with the EVO or 600P, the 3D TLC stuff, unless I really needed the performance, in which case I'd go all the way up to the real professional stuff with the 750. I need a 1TB M.2 NVMe SSD myself and am eager to see street prices on the 960 EVO 1TB and Intel 600P 1TB.
iwod - Wednesday, October 19, 2016
Exactly, when the majority (90%+) of consumer usage is going to be based on QD1. Giving me QD32 numbers is like a Mpixel or MHz race. I used to think we had reached the limit of random read/write performance. Turns out we haven't actually improved random read/write QD1 performance much, hence it is likely still the bottleneck. And yes, we need consistency in the QD1 random speed tests as well.
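For readers who want to check QD1 behavior on their own hardware, a minimal latency probe looks something like the sketch below (Linux-oriented; the target path, span, and sample count are placeholders, and O_DIRECT is used where available to keep the page cache out of the measurement):

```python
# A minimal QD1 4K random-read latency probe; PATH, SPAN and SAMPLES are placeholders.
# Run against a large pre-written file, or read-only against a block device as root.
import mmap, os, random, statistics, time

PATH, BS, SPAN, SAMPLES = "/path/to/testfile", 4096, 1 << 30, 10000

flags = os.O_RDONLY | getattr(os, "O_DIRECT", 0)   # bypass the page cache where supported
fd = os.open(PATH, flags)
buf = mmap.mmap(-1, BS)                            # page-aligned buffer, as O_DIRECT requires

lat_us = []
for _ in range(SAMPLES):
    off = random.randrange(SPAN // BS) * BS        # random 4K-aligned offset
    t0 = time.perf_counter()
    os.preadv(fd, [buf], off)                      # one outstanding request = queue depth 1
    lat_us.append((time.perf_counter() - t0) * 1e6)
os.close(fd)

lat_us.sort()
print(f"QD1 4K random read: avg {statistics.mean(lat_us):.1f} us, "
      f"p99 {lat_us[int(len(lat_us) * 0.99)]:.1f} us")
```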
dsumanik - Wednesday, October 19, 2016
Nice to see there are still some folks out there who aren't duped by marketing; random write and full-capacity consistency are the only 2 things I look at. When moving large video files around, sequential speeds can help, but the difference between 500 and 1000 MB/s isn't much: you start the copy then go do something else. In many cases random write is the bottleneck for the times you are waiting on the computer to "do something", and it dictates whether the computer feels "snappy". Likewise, the performance loss when a drive is getting full also makes you 'notice' things slowing down.
Samsung, if you are reading this, go balls out on random write performance in the next generation, tyvm.
Samus - Wednesday, October 19, 2016
You can't put an Intel 750 in a laptop though, and it also caps at 1.2TB. But your point is correct, it is a performance monster.
edward1987 - Friday, October 28, 2016
Intel SSD 750 (SSDPEDMW400G4X1, PCI-Express v3 x4, HHHL) vs Samsung SSD 960 PRO (MZ-V6P512BW, M.2 2280 NVMe):
IOPS: 230-430K vs 330K
Read speed (max): 2200 vs 3500 MB/s
Much better in comparison: http://www.span.com/compare/SSDPEDMW400G4X1-vs-MZ-...
shodanshok - Tuesday, October 18, 2016
Let me make a BIG WARNING against disabling write-buffer flushing. Any drive without special provisions for power loss (eg: a supercapacitor) can lose a lot of data in the event of an unexpected power loss. In the worst scenario, entire filesystem loss can happen. What do the two Windows settings do? In short:
1) "enable write cache on the device" enables the controller's private DRAM writeback cache and it is *required* for good performance on SSD drives. The reason exactly the one cited on the article: for good performance, flash memory requires batched writes. For example, with DRAM cache disabled I recorded write speed of 5 MB/s on a otherwise fast Crucial M550 256 GB. With DRAM cache enabled, the very same disk almost saturated the SATA link (> 400 MB/s).
However, a writeback cache implies some data loss risk. For that reason the IDE/SATA standard has special commands to force a full cache flush when the OS needs to be sure about data persistence. This brings us to the second option...
2) "turn off write-cache buffer flushing on the device": this option should be absolutely NOT enabled on consumer, non-power-protected disks. With this option enabled, Windows will *not* force a full cache flush even on critical tasks (eg: update of NTFS metadata). This can have catastrophic consequence if power is loss at the wrong moment. I am not speaking about "simple", limited data loss, but entire filesystem corruption. The key reason for such a catastrophic behavior is that cache-flush command are not only used for store critical data, but for properly order their writeout also. In other words, with cache flushing disabled, key filesystem metadata can be written out of order. If power is lost during a incomplete, badly-reordered metadata writes, all sort of problems can happen.
This option exists for one, and only one, case: when your system has power-loss-protected arrays/drives, you trust your battery/capacitor, AND your RAID card/drive behaves poorly when flushing is enabled. However, basically all modern RAID controllers automatically ignore cache flushes when the battery/capacitor is healthy, negating the need to disable cache flushes software-side.
In short, if such a device (960 Pro) really needs disabled cache flushing to shine, this is a serious product/firmware flaw which needs to be corrected as soon as possible.
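To make the ordering concern concrete, here is a minimal sketch (hypothetical names, not any particular filesystem's code) of the write-then-flush-then-rename pattern that applications and filesystems depend on; the fsync() calls are what end up translated into the device cache-flush commands described above, so if the OS stops issuing them, the rename (metadata) can become durable before the data it points to:

```python
# A minimal sketch of flush-ordered updates: write new data, flush it to stable media,
# and only then publish it with an atomic rename. If cache flushes are silently dropped,
# the rename (metadata) can become durable before the data it references.
import os

def durable_replace(path: str, data: bytes) -> None:
    tmp = path + ".tmp"                  # hypothetical temporary name
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())             # the OS issues a device cache flush here
    os.replace(tmp, path)                # atomic rename, done only after the data is durable
    dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dfd)                    # persist the directory entry as well
    finally:
        os.close(dfd)
```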
Br3ach - Tuesday, October 18, 2016
Is power loss a problem for M.2 drives though? E.g. my PSU's (Corsair AX1200i) capacitors keep the MB alive for probably 1 minute following power loss - plenty of time for the drive to flush any caches, no?