Whole-Drive Fill

This test starts with a freshly-erased drive and fills it with 128kB sequential writes at queue depth 32, recording the write speed for each 1GB segment. This test is not representative of any ordinary client/consumer usage pattern, but it does let us observe transitions in the drive's behavior as it fills up. From those transitions we can estimate the size of any SLC write cache, and get a sense of how much performance remains on the rare occasions when real-world usage keeps writing data after the cache has filled.
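
As a rough illustration of the methodology, the sketch below fills a target with 128kB sequential writes and logs the average throughput of each 1GB segment. This is a minimal sketch rather than our actual test harness: the device path is hypothetical, and plain Python issues synchronous writes, whereas the real test uses direct I/O at queue depth 32 (e.g. via a tool like fio), so the absolute numbers are illustrative only.

    import os
    import time

    DEVICE = "/dev/sdX"    # hypothetical path; everything on it is overwritten
    BLOCK = 128 * 1024     # 128kB writes, matching the test
    SEGMENT = 1 << 30      # log average throughput per 1GB segment

    def fill_and_log(path):
        """Sequentially fill path, returning MB/s for each full 1GB segment."""
        buf = os.urandom(BLOCK)        # incompressible payload, reused
        speeds = []
        with open(path, "wb", buffering=0) as dev:
            while True:
                start = time.perf_counter()
                written = 0
                try:
                    while written < SEGMENT:
                        written += dev.write(buf)
                    os.fsync(dev.fileno())   # force the data out of the page cache
                except OSError:              # ENOSPC: the drive is full
                    break
                elapsed = time.perf_counter() - start
                speeds.append(written / elapsed / 1e6)
        return speeds

    if __name__ == "__main__":
        for i, mbps in enumerate(fill_and_log(DEVICE)):
            print(f"segment {i:4d}: {mbps:7.1f} MB/s")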

The ATSB tests already showed that the TeamGroup L5 LITE 3D doesn't lose much performance when it is full, but actually plotting its performance through the process of filling it up is still surprising. The sequential write throughput does drop slightly after about 5GB, but only by 10-15MB/s, and there are no further performance drops for the rest of the fill process. This is far more consistent than most drives, and it is further evidence that running out of SLC cache is not a problem for this SSD.
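
Given a per-segment log like the one above, estimating the SLC cache size amounts to finding the first sustained dip below the speed of the opening segments. The helper below is a minimal sketch of that idea; the baseline length and drop threshold are assumptions for illustration, not the methodology behind our published numbers.

    def estimate_slc_cache_gb(speeds, baseline_segments=3, drop_fraction=0.10):
        """Return the index (in GB) of the first slowdown, or None if steady."""
        if len(speeds) <= baseline_segments:
            return None
        baseline = sum(speeds[:baseline_segments]) / baseline_segments
        threshold = baseline * (1.0 - drop_fraction)
        for i in range(baseline_segments, len(speeds)):
            if speeds[i] < threshold:   # first segment clearly below baseline
                return i
        return None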

[Charts: Sustained 128kB Sequential Write - average throughput for the last 16 GB, overall average throughput, and power efficiency]
Working Set Size

When DRAMless SSDs are under consideration, it can be instructive to look at how performance is affected by working set size: how large a portion of the drive is being touched by the test. Drives with full-sized DRAM caches are typically able to maintain about the same random read performance whether reading from a narrow slice of the drive or from the whole thing. DRAMless SSDs often show a clear drop-off when the working set grows too large for the mapping information to be kept in the controller's small on-chip buffers.
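
To picture how such a test works, the sketch below issues random 4kB reads confined to the first few GB of a device and reports the achieved read rate as that slice grows. It is illustrative only: the device path is hypothetical, and without direct I/O the operating system's page cache will inflate the results, so a real run would use a tool like fio with direct=1.

    import os
    import random
    import time

    DEVICE = "/dev/sdX"   # hypothetical path; this test only reads
    READ = 4096           # 4kB random reads
    DURATION = 10.0       # seconds spent on each working set size

    def random_read_rate(path, working_set_bytes):
        """Random 4kB reads within the first working_set_bytes; returns reads/s."""
        blocks = working_set_bytes // READ
        ops = 0
        fd = os.open(path, os.O_RDONLY)
        try:
            deadline = time.perf_counter() + DURATION
            while time.perf_counter() < deadline:
                offset = random.randrange(blocks) * READ
                os.pread(fd, READ, offset)   # read from a random aligned offset
                ops += 1
        finally:
            os.close(fd)
        return ops / DURATION

    if __name__ == "__main__":
        for gib in (1, 2, 4, 8, 16, 32, 64):
            rate = random_read_rate(DEVICE, gib << 30)
            print(f"{gib:3d} GiB working set: {rate:8.0f} reads/s")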

As expected, the L5 LITE 3D maintains fairly steady random read performance regardless of working set size. The DRAMless Mushkin Source starts off with significantly lower random read IOPS and declines even more as working set sizes grow to more than a few GB of active data. The three drives here with Phison controllers (one SATA, two NVMe) all show at least some decline in performance with large working set sizes, even though those drives all have the usual 1GB DRAM to 1TB NAND ratio.

Comments

  • flyingpants265 - Friday, September 20, 2019 - link

    Why promote this drive without mentioning anything about the failure rates? Some Team Group SSDs have 27% 1-star reviews on Newegg. That's MUCH higher than other manufacturers. That's not worth saving $5 at all... Is Anandtech really that tone-deaf now?

    -I would not recommend this drive to others -- 5 months, dead.
    -Not safe for keep your data. Highly recommend not to store any important data on it
    -DO NOT BUY THIS SSD! Total lack of support for defective products! Took days to reply after TWO requests for support, and then I am expected to pay to ship their defective product back when it never worked!?
    -Failed and lost all data after just 6 months.
    ...
  • Ryan Smith - Friday, September 20, 2019 - link

    "Is Anandtech really that tone-deaf now?"

    Definitely not. However there's not much we can say on the subject with any degree of authority. Obviously our test drive hasn't failed, and the drive has survived The Destroyer (which tends to kill obviously faulty drives very early). But that's the limit to what we have data for.

    Otherwise, customer reviews are a bit tricky. They're a biased sample, as very happy and very unhappy people tend to self-report the most. Which doesn't mean what you state is untrue, but it's not something we can corroborate.

    * We've killed a number of SSDs over the years. I don't immediately recall any of them being Team Group.
  • eastcoast_pete - Friday, September 20, 2019 - link

    Ryan, I appreciate your response. Question: which SSDs have given up the ghost when challenged by The Destroyer? Any chance you can name names? Might be interesting for some of us, even in a historical context. Thanks!
  • keyserr - Friday, September 20, 2019 - link

    Yes, anecdotes are interesting. In an ideal world we would have 1000 drives of each model put through their paces. We don't.

    It's a lesser-known brand, but it wouldn't make much sense for them to keep making bad drives in the long term.
  • Billy Tallis - Friday, September 20, 2019 - link

    I don't usually keep track of which test a drive was running when it failed. The Destroyer is by far the longest test in our suite so it catches the blame for a lot of the failures, but sometimes a drive checks out when it's secure erased or even when it's hot-swapped.

    Which brands have experienced an SSD failure during testing is determined more by how many of their drives I test than by their failure rate. All the major brands have contributed to my SSD graveyard at some point: Crucial, Samsung, Intel, Toshiba, SanDisk.
  • eastcoast_pete - Friday, September 20, 2019 - link

    Billy, I appreciate the reply, but would really like to encourage you and your fellow reviewers to "name names". An SSD going kaplonk when stressed is exactly the kind of information that I really want to know. I know that such an occurrence might not be typical for that model, but if the review unit provided by a manufacturer gives out during testing, it doesn't bode well for regular buyers like me.
  • Death666Angel - Friday, September 20, 2019 - link

    You can read every article; I remember a lot of them discussing the death of a sample (Samsung comes to mind). But it really isn't indicative of anything: the sample size is tiny, and review units are often early production hardware running early firmware. Most SSDs come with 3 years of warranty. Just buy from a reputable retailer, pick a brand that actually honors its warranty, and make sure to back up your data. Then you're fine. If you don't follow those rules, even the very limited data Billy could give you won't help you in any way.
  • eastcoast_pete - Friday, September 20, 2019 - link

    To add: I don't just mean the manufacturers' names, but especially the exact model name, revision and capacity tested. Clearly, a major manufacturer like Samsung or Crucial has a higher likelihood of the occasional bad apple, just due to the sheer number of drives they make. But, even the best big player produces the occasional stinker, and I'd like to know which one it is, so I can avoid it.
  • Kristian Vättö - Saturday, September 21, 2019 - link

    One test sample isn't sufficient to conclude that a certain model is doomed.
  • bananaforscale - Saturday, September 21, 2019 - link

    This. One data point isn't a trend. Hell, several data points aren't a trend if they aren't representative of the whole *and you don't know if they are*.
