Gaming Performance Benchmarks: DDR5-4800

To show how DDR5 memory performs in different configurations, we've opted for a shorter, more selective set of benchmarks from our test suite: Civilization VI, Grand Theft Auto V, and Strange Brigade (DirectX 12).

All of the tests were run with the memory at its default (JEDEC) settings, which means DDR5-4800 CL40 regardless of the configuration, be it 2x16 GB, 2x32 GB, or 4x16 GB.
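
For anyone looking to replicate the setup, it's worth double-checking that the modules really are running at their JEDEC data rate before benchmarking. The snippet below is a minimal sketch of one way to do that on a Windows test bench; it is our illustration rather than the procedure used for this article, and it only reports the data rate per module (primary timings such as CL40 need a tool like CPU-Z or HWiNFO to verify).

    # Minimal sketch (assumption: Windows with PowerShell available).
    # Each DDR5-4800 module running at JEDEC defaults should report 4800
    # in the ConfiguredClockSpeed column.
    import subprocess

    query = (
        "Get-CimInstance Win32_PhysicalMemory | "
        "Select-Object DeviceLocator, Manufacturer, Capacity, "
        "Speed, ConfiguredClockSpeed | Format-Table -AutoSize"
    )
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", query],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)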

Civilization 6

Originally penned by Sid Meier and his team, the Civilization series of turn-based strategy games is a cult classic, and has been many an excuse for an all-nighter spent trying to get Gandhi to declare war on you due to an integer underflow. Truth be told, I never actually played the first version, but I have played every edition from the second to the sixth, including the fourth as voiced by the late Leonard Nimoy. It is a game that is easy to pick up, but hard to master.

Benchmarking Civilization has always been somewhat of a contradiction in terms: for a turn-based strategy game, the frame rate is not necessarily the most important thing, and in the right mood something as low as 5 frames per second can be enough. With Civilization 6, however, Firaxis went hardcore on visual fidelity in an effort to pull you into the game. As a result, Civilization can be taxing on both the GPU and the CPU as we crank up the details, especially in DirectX 12.

[Charts: Civilization VI, 1080p Max and 4K Min, Average FPS and 95th Percentile]

Despite games traditionally being GPU-bound rather than CPU/memory-bound, in our Civ VI testing we do find some small but statistically meaningful differences in our results. The 2 x 32 GB kits were the best of the bunch, with the Samsung 2 x 16 GB kit running slightly slower. The Samsung 4 x 16 GB kit, however, performed a couple of frames per second slower than the rest, coming in a bit over 3% behind the 2 x 32 GB Samsung kit.
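
For clarity, the percentage gaps quoted here and on the following charts are simple relative deltas between average frame rates; the sketch below uses hypothetical placeholder numbers rather than our measured results.

    # Hypothetical illustration of how a "bit over 3% slower" style figure is derived.
    # The FPS values below are placeholders, not the article's measured results.
    def percent_slower(kit_fps: float, reference_fps: float) -> float:
        """How much slower kit_fps is than reference_fps, in percent."""
        return (reference_fps - kit_fps) / reference_fps * 100.0

    print(f"{percent_slower(96.0, 99.2):.1f}% slower")  # placeholder values -> 3.2% slower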

Grand Theft Auto V

The highly anticipated Grand Theft Auto V hit the shelves on April 14th, 2015, with both AMD and NVIDIA on hand to help optimize the title. At this point GTA V is super old, but it is still super useful as a benchmark: it is a complicated test with many features that modern titles still struggle with. With rumors of GTA 6 on the horizon, I hope Rockstar makes that benchmark as easy to use as this one is.

GTA doesn't provide graphical presets, instead opening the options up to users and pushing even the hardiest systems to the limit with Rockstar's Advanced Game Engine under DirectX 11. Whether the user is flying high in the mountains with long draw distances or dealing with assorted trash in the city, cranking the settings up to maximum creates stunning visuals but hard work for both the CPU and the GPU.

The in-game benchmark consists of five scenarios: four short panning shots with varying lighting and weather effects, and a fifth action sequence that lasts around 90 seconds. We use only the final part of the benchmark, which combines a flight scene in a jet, an inner-city drive-by through several intersections, and finally ramming a tanker that explodes, setting off other cars as well. This is a mix of distance rendering followed by a detailed near-rendering action sequence, and the title thankfully spits out frame time data. The benchmark can also be called from the command line, making it very easy to use.
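
Since the title spits out frame time data, the two numbers we chart (average FPS and the 95th percentile) fall out of that log with very little work. The sketch below is a rough illustration, assuming a simple one-column list of frame times in milliseconds and interpreting the percentile figure as the frame rate at the 95th-percentile frame time; the file name and format are assumptions for illustration, not GTA V's actual output.

    # Rough illustration: reduce a frame-time log (milliseconds per frame)
    # to an average FPS and a 95th-percentile FPS figure.
    # "frametimes.csv" is a hypothetical file name, one frame time per line.
    import statistics

    with open("frametimes.csv") as f:
        frame_times_ms = [float(line) for line in f if line.strip()]

    # Average FPS = total frames / total seconds.
    avg_fps = 1000.0 / statistics.mean(frame_times_ms)

    # 95th-percentile frame time: only 5% of frames are slower than this;
    # convert it to an FPS figure.
    sorted_times = sorted(frame_times_ms)
    idx = min(len(sorted_times) - 1, int(0.95 * len(sorted_times)))
    p95_fps = 1000.0 / sorted_times[idx]

    print(f"Average FPS: {avg_fps:.1f}")
    print(f"95th percentile FPS: {p95_fps:.1f}")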

[Charts: Grand Theft Auto V, 1080p Max and 4K Low, Average FPS and 95th Percentile]

Using Grand Theft Auto V's built-in benchmark at 1080p, all of the JEDEC DDR5-4800B kits performed competitively with each other, albeit with a higher degree of variability than usual due to the nature of the game. Still, in our 4K testing we see that the Samsung 4 x 16 GB kit once again brings up the rear, this time falling behind the 2 x 32 GB kit by 7%.

Strange Brigade (DX12)

Strange Brigade is set in 1930s Egypt and follows a story very similar to that of the Mummy film franchise. This third-person shooter is developed by Rebellion Developments, a studio more widely known for the Sniper Elite and Alien vs Predator series. The game follows the hunt for Seteki the Witch Queen, who has arisen once again, and the only 'troop' that can ultimately stop her. Gameplay is cooperative-centric, with a wide variety of levels and many puzzles to be solved by the British colonial Secret Service agents sent to put an end to her reign of barbarism and brutality.

The game supports both the DirectX 12 and Vulkan APIs and houses its own built-in benchmark, which offers up various options for customization, including textures, anti-aliasing, reflections, and draw distance, and even allows users to enable or disable motion blur, ambient occlusion, and tessellation, among others. AMD has previously boasted about Strange Brigade's Vulkan implementation, which offers scalability for AMD multi-graphics card configurations. For our testing, we use the DirectX 12 benchmark.

[Charts: Strange Brigade DX12, 1080p Ultra and 4K Low, Average FPS and 95th Percentile]

There wasn't much difference between the 2 x 32 GB kits in our Strange Brigade DirectX 12 testing. At 4K, the Samsung 4 x 16 GB kit once again performed slightly slower than the rest, although Samsung's 2 x 16 GB configuration performed in line with the 2 x 32 GB kits.

Comments (66)

  • DanNeely - Friday, April 8, 2022 - link

    2DPC is much harder on signal integrity, and it only gets worse the higher the clock rate goes. To the extent that several years ago there was some public speculation that DDR5 might not be able to support 2DPC at all.
  • DanNeely - Thursday, April 7, 2022 - link

    Secondary/tertiary timings generally need to be looser in 2DPC mode. The 4 DIMM kit almost certainly has them set looser from the factory in XMP. Without using that (or if you combine a pair of 2 DIMM kits) I'm not sure if official JEDEC timings or the default BIOS behavior relaxes them automatically, or if you end up overclocking your memory trying to hold the tighter values. OTOH in the OCed timing case I'd expect it to either be about as fast as 1DPC or have stability problems (unless you're using high end modules and not taking advantage of the faster XMP settings for some reason).
  • repoman27 - Friday, April 8, 2022 - link

    AFAICT, we don't even know the primary timings here.

    It looks like the Crucial 2x 32GB kit was the only one in the test that had any XMP profiles. But we have no idea if the firmware defaults resulted in those being ignored, used, or possibly even used in conjunction with Intel Dynamic Memory Boost Technology. I believe Alder Lake also has System Agent Geyserville (SAGV), which is another mechanism that could potentially dynamically alter memory frequencies.
  • Ryan Smith - Thursday, April 7, 2022 - link

    Gavin's out for the rest of the day. But once he's back in, we'll check and see if we have logs of those figures so that we can publish exactly what they were.
  • Slash3 - Thursday, April 7, 2022 - link

    That would be perfect. I suspect that the four DIMM kit was being set to looser tertiaries automatically by the BIOS, and it would be interesting to see a full ZenTimings / ASRock Timing Configurator style readout for each kit arrangement. The gap in tested bandwidth seems far too high to be a result of rank variance, even for DDR5.

    The Z690 compatible version of Timing Configurator can be had from the HWBot community page.

    https://community.hwbot.org/topic/209738-z690-bios...
  • alphasquadron - Thursday, April 7, 2022 - link

    I may be wrong, but the Grand Theft Auto benchmark titles for 1080p and 4K may need to be reversed, as it shows 4K low mode having 80 fps more than 1080p max mode.
  • tomli747 - Thursday, April 7, 2022 - link

    "The R in the 1Rx8 stands for rank, so 1Rx8 means it has one rank with eight memory chips per rank. "
    I thought x8 means each IC correspond to 8 bit of the bus, so 1Rx16 only need 4 ICs to form a 64bit rank.
  • stickdisk - Thursday, April 7, 2022 - link

    Please do overclocking. I understand why you don't think it proper to do so, but I really want a trusted source to settle the debate between Samsung and Hynix for DDR5 OC. I have a feeling they are actually the same, but the OC community has fallen for the idea that Hynix is better.

    Also, please look into clearing up whether higher XMP kits are just a price segmentation tactic for memory vendors to make more money or are actually better binned. I'm sure they are better binned at times, but I am also fairly confident they charge more for what is just a kit with an OC and not a better bin. This information could help people save some money.

    Getting the information that memory IC differences are the biggest indicator of potential memory performance into the mainstream would be invaluable. This is important because I see way too many people building $2000+ PCs worrying about CPU OC for up to 5% more performance and not worrying about RAM OC for up to 10% more performance. Mainstream people don't have their priorities straight because mainstream tech influencers don't know better.
  • Oxford Guy - Thursday, April 7, 2022 - link

    The benefit of XMP is that ordinary people don't have to attempt manual RAM overclocking, which is too complicated to be worthwhile for most people, especially on less-expensive boards that simply refuse to post and require guesswork and manual CMOS clearing with a screwdriver. If one has the luxury of a fancy board with highly specific post codes on a display, a CMOS reset button on the back panel (and, optimally, the ability to simply switch to a different profile), and a reliable automatic fallback from unstable RAM settings (preventing no-post situations), it might be worth doing for some enthusiasts. I suppose some of the new software configuration programs help to reduce the pain of RAM tinkering. I would rather switch on XMP and be done with it, particularly when the RAM vendor promises it will work with the board I'm using at that speed.
  • Oxford Guy - Thursday, April 7, 2022 - link

    With DDR4 there is a difference between a daisy chain layout and a T topology layout, in terms of which layout is optimal with 2 sticks of RAM (daisy) and which is optimal with 4 (T). Does that model continue with DDR5? Which layout does this MSI board use and have you tried the other layout to verify that the board layout is not a factor?
