Conclusion

When selecting a memory kit for a new system, the market is littered with choices varying in speed, capacity, heatsink design, and RGB lighting (or the lack thereof). In terms of DDR5 memory, the only platform that can utilize it at the moment is Intel's 12th Gen Core series, with its premier offerings coming in conjunction with the Z690 chipset. This is likely to change later this year if AMD's Zen 4 architecture launches, but right now, the DDR5 market and the Alder Lake market are one and the same.

For today's article, we focused on the performance differences (or lack thereof) in DDR5 across different rank and DIMM-per-channel configurations. While these elements are smaller factors in DDR5 performance than frequency and timings, as we have found, they do have a meaningful impact on memory subsystem performance – and thus an impact on overall system performance.

Samsung DDR5-4800B: 1Rx8 (1DPC/2DPC) versus 2Rx8 (1DPC)

In testing Samsung's 2 x 32 GB (2Rx8) kit directly against a 4 x 16 GB (1Rx8) kit, we got some interesting results running at JEDEC speeds with Intel's Core i9-12900K processor. 

WinRAR 5.90 Test, 3477 files, 1.96 GB

Looking at situations where the differences were evident in benchmark results, in our WinRAR 5.90 test, which is very sensitive to memory performance and throughput, the Samsung DDR5-4800 4 x 16 GB kit performed around 9% worse than its higher density 2 x 32 GB counterpart, which is quite a drop. This indicates that in memory-sensitive applications, a 1DPC configuration yields better performance than 2DPC. And even among the 1DPC configurations, the 2 x 16 GB kit, with its single rank of memory per DIMM, does operate at a deficit versus the dual rank kits. Meanwhile, the Samsung DDR5-4800B 2 x 32 GB configuration performed within the margin of error of the SK Hynix and Micron kits.

Grand Theft Auto V - 4K Low - Average FPS

It was much the same in some of our game testing, with the Samsung 4 x 16 GB kit being outperformed by the 2 x 32 GB kits, and even by the Samsung 2 x 16 GB kit, which uses the same single-rank UDIMMs as the 4 x 16 GB combination. While the performance hit was only around 2-3% in Grand Theft Auto V at 4K low settings, from a performance point of view, Intel's Alder Lake seems to do better with two sticks of memory than with four.

Throughout most of our testing, it was clear that in most situations, having two higher density 2Rx8 sticks in a 1DPC configuration is better for overall performance than the same capacity spread across four sticks (1Rx8 at 2DPC). And even looking at just 1DPC configurations, going dual-rank is still better, though to a smaller degree. The short sketch below lays out what each of these configurations actually presents to the memory controller.
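As a point of reference, here is a minimal Python sketch, assuming a typical Alder Lake desktop board with two memory channels and two DIMM slots per channel, tallying DIMMs per channel and ranks per channel for each kit. Note that the 2 x 32 GB (2Rx8) and 4 x 16 GB (1Rx8) kits both expose two ranks per channel – the former simply does it with half the DIMMs:

```python
# Sketch: what each tested kit exposes to the memory controller.
# Assumes a typical Alder Lake desktop board: 2 channels, 2 slots each.

def channel_layout(dimms: int, ranks_per_dimm: int, channels: int = 2):
    """Return (DIMMs per channel, ranks per channel) for a kit."""
    dpc = dimms // channels
    return dpc, dpc * ranks_per_dimm

kits = [
    ("2 x 32 GB (2Rx8)", 2, 2),
    ("4 x 16 GB (1Rx8)", 4, 1),
    ("2 x 16 GB (1Rx8)", 2, 1),
]

for label, dimms, ranks in kits:
    dpc, rpc = channel_layout(dimms, ranks)
    print(f"{label}: {dpc} DPC, {rpc} rank(s) per channel")
```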

Going under the hood for an explanation of these results, the main reason that 2Rx8 is better than 1Rx8 comes down to how the integrated memory controller can only access one rank at a time. With a dual rank DIMM, rank interleaving can be employed, which allows the second rank of memory chips to be readied for immediate access while the first is busy. While the differences are minimal even on a theoretical basis, as we have seen they are not zero: rank interleaving helps hide refresh and precharge delays, which can mean more performance in latency-sensitive applications, or when an application is able to push DDR5 to its overall bandwidth limits.
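As a rough illustration of the mechanism, below is a toy Python model of how consecutive cache-line addresses could map to ranks. Real memory controllers use vendor-specific (and often hashed) address mappings, so the simple bit placement here is purely an assumption for demonstration purposes:

```python
# Toy model of rank interleaving on a dual-rank DIMM. Real memory
# controllers use vendor-specific (often hashed) address mappings;
# this simplified scheme just alternates ranks every cache line.

CACHE_LINE = 64  # bytes

def rank_of(addr: int, ranks: int) -> int:
    """Map a physical address to a rank (always 0 for single-rank)."""
    if ranks == 1:
        return 0
    return (addr // CACHE_LINE) % ranks

# Sequential cache-line reads: with two ranks, accesses alternate,
# so one rank can refresh/precharge while the other is being read.
for line in range(4):
    addr = line * CACHE_LINE
    print(f"line {line}: 1R -> rank {rank_of(addr, 1)}, "
          f"2R -> rank {rank_of(addr, 2)}")
```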

Samsung vs SK Hynix vs Micron 32GB DDR5-4800B

Looking at the performance of the 2 x 32 GB kits running at DDR5-4800B from Samsung, SK Hynix, and Micron, the difference was for all practical purposes non-existent. We did not find any meaningful performance difference in our testing, which means that performance isn't a differentiating factor between the three memory manufacturers – at least at JEDEC settings with Alder Lake. Given the identical timings and capacities, this is not unexpected. This is essentially the null hypothesis of our testing, showcasing that, at least from a performance standpoint at fully qualified clockspeeds, there's no innate performance difference from one DRAM manufacturer to another.
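For readers who want to apply the same sanity check to their own numbers, here is a minimal sketch of the "tied within noise" test; the scores and the ~2% run-to-run variance below are hypothetical placeholders rather than our measured data:

```python
# Illustrative only: a quick way to ask whether kit-to-kit deltas
# exceed run-to-run noise. All numbers below are hypothetical.

def within_noise(score_a: float, score_b: float, noise_pct: float) -> bool:
    """True if two benchmark scores differ by less than the noise margin."""
    delta_pct = abs(score_a - score_b) / max(score_a, score_b) * 100
    return delta_pct <= noise_pct

# Hypothetical scores for two 2 x 32 GB kits, assuming ~2%
# run-to-run variance as the margin of error.
print(within_noise(100.0, 99.1, noise_pct=2.0))  # True -> a tie
```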

Consequently, it's pretty easy here to recommend that if users are planning to build an Intel 12th Gen Core series setup with JEDEC-rated DDR5 memory, they should opt for the cheapest option among proven DIMM vendors. For desktop purposes, the DIMMs are functionally equal, and right now DDR5 memory itself is still rather expensive. Although there's much more stock available than there was last year, it's still a relatively new platform, which adds to the cost.

Final Thoughts: 64 GB of 2Rx8 with 1DPC is Better Than 64 GB of 1Rx8 with 2DPC

One of the biggest things to note from this article is that there isn't really any difference in performance between Samsung, SK Hynix, or Micron-based 2 x 32 GB DDR5-4800B memory kits; despite using different memory ICs from each of the vendors, all of these kits performed identically within the margin of error. What does make a difference is rank: our testing shows that 2Rx8 DDR5 memory performs better than 1Rx8 DDR5.

The only aspect we didn't test was overclocking headroom with the JEDEC-rated kits, which wasn't really an angle we wanted to base an article around. Given the lottery-like results of overclocking any given DIMM, we'd be testing our own luck more than we'd be testing the hardware. In these cases a large sample size is required to get useful data, and that's where the dedicated memory vendors come in with their binning processes.
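To illustrate why sample size matters, consider the quick sketch below. The normal distribution and its parameters are entirely hypothetical – real silicon-lottery behavior is unknown – but they show how little a two-DIMM review sample says compared to the hundreds of modules a vendor can bin:

```python
# Illustrative only: why single-sample overclocking results are noisy.
# We assume (hypothetically) that max stable frequency per DIMM is
# normally distributed; real silicon-lottery distributions vary.
import random

random.seed(42)

def sample_max_freq(n: int, mean: float = 5200.0, stdev: float = 200.0):
    """Draw n hypothetical per-DIMM max stable frequencies (MT/s)."""
    return [random.gauss(mean, stdev) for _ in range(n)]

few = sample_max_freq(2)     # what a review with one kit would see
many = sample_max_freq(500)  # what a vendor's binning line sees
print(f"2 DIMMs:   mean {sum(few) / len(few):.0f} MT/s")
print(f"500 DIMMs: mean {sum(many) / len(many):.0f} MT/s")
```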

Taking a more meta overview of the state of the DDR5 market, we already know from vendors such as ADATA, G.Skill, and TeamGroup that Samsung's and SK Hynix's current generation parts show greater frequency and latency headroom when running above DDR5's nominal voltage of 1.1 V, which is why DDR5-6000 (and beyond) kits aren't using Micron chips. This may change down the line, as all three companies are continuing to advance their manufacturing processes, including the adoption of EUV lithography.


2 x 32 GB kits of DDR5-4800B memory outperform 4 x 16 GB kits at the same frequency/latencies

As for the matter of today's tests, our results are very clear: dual-rank memory is the way to go, as is sticking to a single DIMM per channel when possible.

The most significant performance differences in our testing were found comparing two of Samsung's 1Rx8 DDR5-4800B memory sticks in a 1DPC configuration against four of the same sticks in a 2DPC configuration. There we find that the 1DPC configuration is consistently equal or better in every scenario. Using four sticks means data has to travel further along the memory traces, which, combined with the overhead of communicating with two DIMMs per channel, results in both a drop in memory bandwidth and a slight increase in latency.

And while the differences between 1Rx8 and 2Rx8 are not as large, we find that there is still a difference, and it's in favor of the dual rank memory. Thanks to rank interleaving, single rank memory finds itself at a slight disadvantage versus dual rank memory, at least on today's Alder Lake systems.

Based on this data, we recommend that users looking for 64 GB of DDR5 memory opt for 2 x 32 GB, rather than a 4 x 16 GB configuration. Besides providing the best performance, the 2 x 32 GB route also leaves room for users to add capacity as needed down the line. Plus, for users who want to push their memory further, overclocking four sticks is notoriously stressful for the processor's IMC – and DDR5 only makes this worse.

Otherwise, choosing between 2 x 32 GB DDR5-4800B kits from Micron, SK Hynix, and Samsung primarily comes down to availability and price. DRAM is a true commodity product, in every sense of the word, so for these JEDEC-standard kits, there's not much to compete on except pricing.
