Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason IO latency is never perfectly consistent on SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speed. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
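As a rough illustration, the whole procedure can be scripted around fio on a Linux host. This is a minimal sketch, not our actual harness: the device path, the libaio engine, and the ~2000-second runtime are assumptions.

```python
#!/usr/bin/env python3
"""Sketch of the IO consistency test: sequential fill, then sustained
4KB random writes at QD32 while logging IOPS once per second.
WARNING: destructive -- this overwrites the entire device."""
import subprocess

DEV = "/dev/sdX"  # placeholder: the secure-erased drive under test

# Step 1: fill every user-accessible LBA with sequential data.
subprocess.run([
    "fio", "--name=fill", f"--filename={DEV}",
    "--rw=write", "--bs=128k", "--iodepth=32",
    "--direct=1", "--ioengine=libaio",
], check=True)

# Step 2: 4KB random writes at QD32 for ~2000 s, logging IOPS per second.
subprocess.run([
    "fio", "--name=consistency", f"--filename={DEV}",
    "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--direct=1", "--ioengine=libaio",
    "--norandommap", "--randrepeat=0",   # roam freely across all LBAs
    "--refill_buffers",                  # keep the data incompressible
    "--time_based", "--runtime=2000",
    "--write_iops_log=m600", "--log_avg_msec=1000",
], check=True)
```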

We also test drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
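Emulating the added over-provisioning is then a one-parameter change to the sketch above: confine the random-write pass to a fraction of the LBA space. Note that for the untouched 25% to act as real spare area it must hold no valid data, so the drive is secure erased first and only the restricted range is filled.

```python
# Continuation of the sketch above: same placeholder device, but the
# random writes are confined to the first 75% of LBAs.
import subprocess

DEV = "/dev/sdX"  # placeholder: secure-erased drive under test

subprocess.run([
    "fio", "--name=consistency-op", f"--filename={DEV}",
    "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--direct=1", "--ioengine=libaio",
    "--size=75%",                        # limit the test to 75% of LBAs
    "--norandommap", "--randrepeat=0", "--refill_buffers",
    "--time_based", "--runtime=2000",
    "--write_iops_log=m600_op", "--log_avg_msec=1000",
], check=True)
```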

Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.
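For reference, the three views can be reproduced from a per-second IOPS log like the one above with a few lines of matplotlib. A sketch, assuming the log from the job above lands in m600_iops.1.log with fio's usual comma-separated rows of time in milliseconds followed by the value:

```python
import csv
import matplotlib.pyplot as plt

# fio iops log rows look like: time_ms, value, direction, bs, offset
with open("m600_iops.1.log") as f:
    rows = list(csv.reader(f))
t = [int(r[0]) / 1000 for r in rows]   # seconds
iops = [int(r[1]) for r in rows]

fig, axes = plt.subplots(3, 1, figsize=(8, 10))

# 1) Whole run, log scale.
axes[0].plot(t, iops)
axes[0].set_yscale("log")
axes[0].set_title("Full test (log scale)")

# 2) Steady state from t=1400s onward, log scale.
ss = [(x, y) for x, y in zip(t, iops) if x >= 1400]
axes[1].plot(*zip(*ss))
axes[1].set_yscale("log")
axes[1].set_title("Steady state, t >= 1400s (log scale)")

# 3) Same window, linear scale.
axes[2].plot(*zip(*ss))
axes[2].set_title("Steady state, t >= 1400s (linear scale)")

for ax in axes:
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("IOPS")
plt.tight_layout()
plt.show()
```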

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Graph: IO consistency over the full test duration, log scale. Selections: Micron M600 256GB / Default / 25% Over-Provisioning]

The 1TB M600 actually performs significantly worse than the 256GB model, most likely due to the tracking overhead that the increased capacity causes (more pages to track). Overall IO consistency has not really changed from the MX100, as Dynamic Write Acceleration only affects burst performance. I suspect the firmware architectures for sustained performance are similar between the MX100 and M600, although with added over-provisioning the M600 is a bit more consistent.

[Graph: steady-state zoom from t=1400s, log scale. Selections: Micron M600 256GB / Default / 25% Over-Provisioning]

[Graph: steady-state zoom from t=1400s, linear scale. Selections: Micron M600 256GB / Default / 25% Over-Provisioning]

TRIM Validation

To test TRIM, I filled the 128GB M600 with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.
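For readers on Linux, the same sequence can be approximated without the Windows tools. In the hedged sketch below, blkdiscard stands in for the TRIM-issuing quick format and a logged sequential write pass stands in for HD Tach; the device path is a placeholder.

```python
#!/usr/bin/env python3
"""Sketch of the TRIM validation flow on Linux. Destructive!"""
import subprocess

DEV = "/dev/sdX"  # placeholder for the 128GB M600

def fio(name, *extra):
    """Run a single fio job against the raw device."""
    subprocess.run(["fio", f"--name={name}", f"--filename={DEV}",
                    "--direct=1", "--ioengine=libaio", *extra], check=True)

# 1) Fill the drive with sequential 128KB data.
fio("fill", "--rw=write", "--bs=128k", "--iodepth=32")

# 2) 30 minutes of 4KB random writes at QD32 to reach steady state.
fio("torture", "--rw=randwrite", "--bs=4k", "--iodepth=32",
    "--time_based", "--runtime=1800")

# 3) TRIM every LBA (a quick format in Windows similarly TRIMs the drive).
subprocess.run(["blkdiscard", DEV], check=True)

# 4) Sequential write pass with a bandwidth log; where the throughput
#    falls off hints at how much SLC-cache capacity the TRIM recovered.
fio("trim-check", "--rw=write", "--bs=128k", "--iodepth=1",
    "--write_bw_log=trim_check", "--log_avg_msec=1000")
```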

It appears that TRIM does not fully recover the SLC cache as the acceleration capacity seems to be only ~7GB. I suspect that giving the drive some idle time would do the trick because it might take a couple of minutes (or more) for the internal garbage collection to finish after issuing a TRIM command.

Comments

  • Kristian Vättö - Tuesday, September 30, 2014 - link

    We used to do that a couple of years ago but then we reached a point where SSDs became practically indistinguishable. The truth is that for light workloads what matters is that you have an SSD, not what model the SSD actually is. That is why we are recommending the MX100 for the majority of users as it provides the best value.

    I think our Light suite already does a good job at characterizing performance under typical consumer workloads. The differences between drives are small, which reflects the minimal difference one would notice in real world with light usage. It's not overly promoting high-end drives like purely synthetic tests do.

    Then again, that applies to all components. It's not like we test CPUs and GPUs under typical usage -- it's just the heavy use cases. I mean, we could test the application launch speed in our CPU reviews, but it's common knowledge that CPUs today are all so fast that the difference is negligible. Or we could test GPUs for how smoothly they can run Windows Aero, but again it's widely known that any modern GPU can handle that just fine.

    The issue with testing heavy usage scenarios in the real world is the number of variables I mentioned earlier. There tends to be a lot of multitasking involved, so creating a reliable test is extremely hard. One huge problem is the variability of user input speed (i.e. how quickly you click things, etc.; this varies from round to round during testing). That can be fixed with excellent scripting skills, but unfortunately I have a total lack of those.

    FYI, I spent a lot of time playing around with real world tests about a year ago, but I was never able to create something that met my criteria. Either the test was so basic (like installing an app) that it showed no difference between drives, or the results wouldn't be consistent once I added more variables. I'm not trying to avoid real world tests, not at all; it's just that I haven't been able to create a suite that would be relevant and accurate at the same time.

    Also, once we get some NVMe drives in for review, I plan to revisit my real world testing since that presents a chance for greater differences between drives. Right now AHCI and SATA 6Gbps limit performance because they account for the largest share of latency, which is why you don't really see differences between drives under light workloads: the AHCI and SATA latency absorbs any latency advantage that a particular drive provides.
  • AnnonymousCoward - Tuesday, September 30, 2014 - link

    Thanks for explaining The State of SSDs.

    I suspect a lot of people don't realize there's negligible performance difference across SSDs. And I think lots of people put SSDs in RAID0! Reviews I've seen show zero real-world benefit.

    This isn't a criticism, but it's practically misleading for a review to only include graphs with a wide range of performance. What a real-world test does is get us back to reality. I think ideally a review should start with real-world, and all the other stuff almost belongs in an appendix.

    Users should prioritize SSDs with:
    1. Good enough (excellent) performance.
    2. High reliability and data protection.
    3. Low cost.

    If #1 is too easy, then #2 and #3 should get more attention. I generally recommend Intel SSDs because I suspect they have the best reliability standards, but I really don't know, and most people probably also don't. OCZ wouldn't have shipped as many as they did if people were aware of their reliability.
  • leexgx - Saturday, November 1, 2014 - link

    Nowadays you can't buy a bad SSD (unless it's Phison-based; they normally make cheap USB flash pen drives). Even JMicron-based SSDs are OK now.

    It's only compatibility problems that make an SSD bad with some setups.

    The JMicron JMF602 was a very, very, very bad SSD controller when they made their first two (did I say that too many times?): http://www.anandtech.com/show/2614/8 (1-second write delay)
  • Impulses - Monday, September 29, 2014 - link

    Probably because top tier SSDs reached a point a while ago where the differences in performing basic tasks like that are basically milliseconds, which would tell the reader even less.

    For large transfers the sequential tests are wholly representative of the task.

    I think Anand used to have a test in the early days of SSD reviews where he'd time opening five apps right after boot, but it'd basically be a dead heat with any decent drive these days.
  • Gigaplex - Monday, September 29, 2014 - link

    It would tell the reader that any of the drives being tested would fit the bill. Currently, readers might see that drive A is 20% faster than drive B and think that will give 20% better real world performance.

    Both types of tests are useful, doing strictly real-world tests would miss information too.
  • AnnonymousCoward - Tuesday, September 30, 2014 - link

    > is basically milliseconds, which would tell the reader even less.

    Wrong; that tells the reader MORE! If all modern video cards produced frame rates within 1fps of each other, would you rather see that, or solely relative performance graphs that show an apparent difference?
  • Wolfpup - Monday, September 29, 2014 - link

    Darn, that's a shame these don't have full data loss protection. I assumed they did too! Still, Micron/Crucial and Intel are my top choices for drives :)
  • Wormstyle - Tuesday, September 30, 2014 - link

    Thanks for posting the information here. I think you are a bit soft on them with the power failure protection marketing, but you did a good job explaining what they were doing and hopefully they will now accurately reflect the capability of the product in their marketing collateral. A lot of people have bought these products with the wrong expectations on power failure, although for most applications they are still very good drives. What is the source for the market data you posted in the article?
  • Kristian Vättö - Tuesday, September 30, 2014 - link

    It's straight from the M500's product page.

    http://www.micron.com/products/solid-state-storage...
  • Wormstyle - Tuesday, September 30, 2014 - link

    The size of the SSD market by OEM, channel, industrial and OEM breakdown of notebook, tablet, and desktop? I'm not seeing it at that link.
