Samsung SSD 840 EVO mSATA (120GB, 250GB, 500GB & 1TB) Review
by Kristian Vättö on January 9, 2014 1:35 PM EST

AnandTech Storage Bench 2013
When Anand built the AnandTech Heavy and Light Storage Bench suites in 2011 he did so because we did not have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.
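A toy model (purely illustrative, far simpler than a real FTL) shows why sustained writes eventually force garbage collection: once every block holds used pages, there is nothing left to write to until a block is recycled.

```python
# Toy flash-translation-layer model: blocks hold PAGES_PER_BLOCK pages,
# pages can't be overwritten in place, and a full drive must erase
# (recycle) a block before accepting new writes. Purely illustrative;
# real GC also relocates still-valid pages before erasing.
PAGES_PER_BLOCK = 4
NUM_BLOCKS = 3

class ToyFTL:
    def __init__(self):
        self.free_pages = PAGES_PER_BLOCK * NUM_BLOCKS
        self.gc_runs = 0

    def write_page(self):
        if self.free_pages == 0:
            # Every block has used pages: recycle one block to make room.
            self.gc_runs += 1
            self.free_pages += PAGES_PER_BLOCK
        self.free_pages -= 1

ftl = ToyFTL()
for _ in range(20):   # write more pages than the drive holds
    ftl.write_page()
```

With 12 pages of capacity and 20 page writes, GC fires twice; a write-intensive trace guarantees the drive spends time in this recycling path, which is exactly what the Heavy 2011 test exploited.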
There were a couple of issues with our 2011 tests that we've been wanting to rectify, however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't have existed had we built the trace on a system with SP1. This didn't really impact most SSDs, but it proved to be a problem with some hard drives. Secondly, and more recently, we've shifted focus from simply triggering GC routines to really looking at worst-case performance after prolonged random IO.
For years we'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up we didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans–not exactly a real world client usage model. The aspects of SSD architecture that those tests stress however are very important, and none of our existing tests were doing a good job of quantifying that.
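As a rough illustration (not AnandTech's actual tooling), IO consistency can be quantified by bucketing IO completion timestamps into one-second windows and comparing the worst-case window against the average; a drive with strong averages can still crater badly in individual seconds.

```python
# Hypothetical sketch: quantify IO consistency from a list of IO
# completion timestamps (in seconds). Not AnandTech's actual tooling.
from collections import Counter

def iops_consistency(completion_times):
    """Bucket completions into 1-second windows and return
    (average IOPS, worst-case IOPS, worst/average ratio)."""
    buckets = Counter(int(t) for t in completion_times)
    per_second = list(buckets.values())
    avg = sum(per_second) / len(per_second)
    worst = min(per_second)
    return avg, worst, worst / avg

# A drive that does 1000 IOs in one second and 100 in the next is
# fast on average but inconsistent:
avg, worst, ratio = iops_consistency(
    [i / 1000 for i in range(1000)] +     # second 0: 1000 completions
    [1 + i / 100 for i in range(100)]     # second 1: 100 completions
)
```

The worst/average ratio here is what the S3700-style consistency plots make visible at a glance: the closer to 1.0, the steadier the drive.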
We needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. We think we have that test. The new benchmark doesn't even have a name; we've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).
Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test–we record all IO requests made to a test system, then play them back on the drive we're measuring and run statistical analysis on the drive's responses.
As with most modern benchmarks, Anand crafted the Destroyer out of a series of scenarios. For this benchmark we focused heavily on photo editing, gaming, virtualization, general productivity, video playback, and application development. Rough descriptions of the various scenarios are in the table below:
AnandTech Storage Bench 2013 Preview - The Destroyer

| Workload | Description | Applications Used |
| --- | --- | --- |
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |
While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what we've been calling this test internally:
AnandTech Storage Bench 2013 Preview - The Destroyer, Specs

| | The Destroyer (2013) | Heavy 2011 |
| --- | --- | --- |
| Reads | 38.83 million | 2.17 million |
| Writes | 10.98 million | 1.78 million |
| Total IO Operations | 49.8 million | 3.99 million |
| Total GB Read | 1583.02 GB | 48.63 GB |
| Total GB Written | 875.62 GB | 106.32 GB |
| Average Queue Depth | ~5.5 | ~4.6 |
| Focus | Worst-case multitasking, IO consistency | Peak IO, basic GC routines |
SSD performance has grown considerably over the years, so we wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When we first introduced the Heavy 2011 test, some drives would take multiple hours to complete it; today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest we've seen it go is 10 hours. Most high performance SSDs we've tested seem to need around 12–13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 we just needed something that had a ton of writes so we could start separating the good from the bad. Now that the drives have matured, we felt a test that was a bit more balanced would be a better idea.
Despite the balance recalibration, there is just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. As the days of begging for better random IO performance and basic GC intelligence are over, we wanted a test that would give us a bit more of what we're interested in these days. As Anand mentioned in the S3700 review, having good worst-case IO performance and consistency matters just as much to client users as it does to enterprise users.
We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
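As a loose illustration of how those two numbers fall out of a replayed trace (the record structure here is hypothetical, not AnandTech's actual trace format), assume each replayed IO records the bytes it transferred and its service time in microseconds:

```python
# Hypothetical sketch of the two Destroyer metrics. Each IO record is
# (bytes_transferred, service_time_us); the format is illustrative,
# not AnandTech's actual tooling.
def destroyer_metrics(ios, wall_seconds):
    """Return (average data rate in MB/s, average service time in us)."""
    total_bytes = sum(b for b, _ in ios)
    # Throughput: total data moved over the wall-clock time of the run.
    data_rate_mbs = (total_bytes / 1e6) / wall_seconds
    # Service time weighs every IO equally, so long-latency queued IOs
    # drag the average up even when throughput looks healthy.
    avg_service_us = sum(t for _, t in ios) / len(ios)
    return data_rate_mbs, avg_service_us

# Two IOs: a 4KB request served in 100us and a 128KB request served
# in 400us, replayed over half a second of wall time.
rate, svc = destroyer_metrics([(4096, 100), (131072, 400)], wall_seconds=0.5)
```

The split is the point: a drive can post a healthy data rate while burying occasional IOs under long queues, and only the service-time metric catches that.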
Update: It appears that something was off in the first run, as the 1TB scored 261.52 MB/s when I retested it.
I'm not sure if I'm comfortable with the score above. There are no other benchmarks that would indicate the EVO mSATA to be over 20% faster than the 2.5" EVO, so I'm thinking there has been some kind of an error in the test. Unfortunately I didn't have time to rerun the test because The Destroyer takes roughly 12 hours to run and another eight or so hours to be analyzed. However, I managed to run it on the 500GB EVO mSATA and as the graph above shows, its performance is on-par with the 2.5" EVO. I'll rerun the test on the 1TB sample once I get back and will update this based on its output.
65 Comments
Brenderick - Saturday, February 15, 2014 - link
mSATA was what the commenter wanted.

MoFoQ - Friday, January 10, 2014 - link
Same hardware? The mSATA version has 4 NAND packages.
The 2.5" version has 8.
Sure, the total number of dies might be the same.
It's like comparing two houses of the same square-footage....except one of them is a two-story house and the other, a single-story one.
With that said, it is an interesting development for mSATA SSDs.
I can't wait until other manufacturers come to market to help drive the price down.
emvonline - Friday, January 10, 2014 - link
16-die packages are possible at all NAND suppliers and are relatively straightforward to implement. The issue is usually that there is minimal demand for anything requiring them. What percentage of the market is 1TB? I think we are talking less than 3% above 512GB for consumers. Let me know if I am wrong.

TLC is great for Samsung's product margins. So far it hasn't led to a cost decrease for consumers.

Samsung's execution is the amazing part. They have good (or great) products in every market at every density. And they have the most aggressive marketing campaign. No one else has been able to achieve this.
Marrixster - Saturday, January 11, 2014 - link
Thanks very much indeed for this review. I already have two Samsung PM851 512GB drives (MZMTE512HMHP-00000) and, just prior to reading this review, ordered the 1TB model (MZ-MTE1T0BW). The price is AUD 822.99 (ramcity.com.au), very expensive. However, the convenience of this form factor justifies the cost from my point of view.
Now, it's simply a matter of deciding which lappy it gets installed in.
Unit Igor - Saturday, January 11, 2014 - link
Kristian, please tell me: would the EVO 120GB mSATA have any advantage over the EVO 250GB mSATA in longer battery life when you compare power consumption vs. disk busy times and MB/s? I use my ultrabook only for mail, sometimes watching movies and surfing. I don't need more than 120GB of SSD, but I am willing to buy 250GB if it would give me more battery life. What I wanted to see in your benchmark is MobileMark 2012, because mSATA is for laptops and that is where battery life plays a big role.

philipma1957 - Sunday, January 12, 2014 - link
The new Gigabyte BRIX with the i7-4770R CPU, 16GB of RAM and a 1TB mSATA SSD would be really nice gear.

nogoms - Monday, January 13, 2014 - link
You start off with the claim that "Samsung is in a unique position in the SSD market. It’s the only company in the consumer SSD business with a fully vertically integrated business model and zero reliance on other companies." This is, however, not actually true. SK Hynix is in a similar position, as they've had their own controllers, NAND, and DRAM since their purchase of LAMD a year and a half ago. Admittedly, they hadn't actually released a completely in-house SSD using a LAMD controller until late last year with the release of the SH920 series. In fact, the Surface Pro 2 uses an SK Hynix mSATA SSD (with a LAMD controller), as noted in Anand's review and iFixIt's teardown. SK Hynix also doesn't appear to be making the SH920 series available in retail outside of Asia, though Super Talent's Supernova3 series is available in retail in North America (from the likes of SuperBiiz and various third-party sellers on Amazon and Newegg) and appears to be rebranded 2.5" SK Hynix SH920 series drives.
Kristian Vättö - Monday, January 13, 2014 - link
I did mention SK Hynix and LAMD later in the article: "Sure, Samsung isn't the only NAND manufacturer but it is the only one with a consumer orientated controller IP (although SK Hynix owns LAMD now but that deal has yet to materialize in a product)"
The thing is, while Hynix does have a LAMD-based SSD, it's the same controller that others are using. It's possible that Hynix has contributed to the firmware, but it's not a fully in-house designed platform (the controller was designed well before the acquisition took place).
nogoms - Monday, January 13, 2014 - link
That statement is also false, as SK Hynix's LAMD controller *has* materialized in products, as I pointed out in my previous post--they're available in complete systems like the Surface Pro 2 worldwide as well as standalone at retail (in East Asia). Also, it's disingenuous to say it's not fully in-house when the controller and firmware were designed and written by teams currently in the employ of SK Hynix. And even if one were to accept your contention that such an arrangement is not "fully in-house designed," the opening statement of the article does not suddenly become true, as Samsung is still not the only manufacturer with "a fully integrated business model" or "zero reliance on other companies."

Hrel - Monday, January 13, 2014 - link
any idea when we'll be able to get 512GB drives for under $200?