CPU Tests: Microbenchmarks

Core-to-Core Latency

As the core count of modern CPUs grows, we are reaching a point where the time to access one core from another is no longer a constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could have different latencies between the nearest core and the furthest core. This is especially true in multi-socket server environments.

But modern CPUs, even desktop and consumer CPUs, can have variable access latency to get to another core. For example, in the first generation Threadripper CPUs, we had four chips on the package, each with 8 threads, and each with a different core-to-core latency depending on whether the access was on-die or off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our core-to-core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and while we know there are competing tests out there, we feel ours most accurately reflects how quickly an access between two cores can happen.
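Andrei’s actual implementation isn’t public, but the general technique is straightforward: pin two threads to two specific cores, bounce a single cache line between them through an atomic flag, and halve the measured round-trip time to get a one-way figure. The minimal sketch below is only an illustration of that idea under our own assumptions (Linux affinity calls, arbitrary core IDs and iteration count), not the in-house test itself:

```cpp
// Minimal core-to-core latency sketch (Linux): two threads pinned to
// different cores bounce one cache line back and forth via an atomic flag.
// Core IDs and iteration count are illustrative only.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <pthread.h>
#include <sched.h>

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    constexpr int kIters = 1'000'000;
    alignas(64) std::atomic<int> flag{0};   // keep the flag on its own cache line

    std::thread responder([&] {
        pin_to_core(1);                     // "remote" core (illustrative)
        for (int i = 0; i < kIters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) { /* spin */ }
            flag.store(0, std::memory_order_release);
        }
    });

    pin_to_core(0);                         // "local" core (illustrative)
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        flag.store(1, std::memory_order_release);
        while (flag.load(std::memory_order_acquire) != 0) { /* spin */ }
    }
    const auto t1 = std::chrono::steady_clock::now();
    responder.join();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    // Each iteration is a full round trip, so halve it for the one-way latency.
    std::printf("core 0 <-> core 1: ~%.1f ns one-way\n", ns / kIters / 2.0);
    return 0;
}
```

Sweeping the two core IDs over every pair of cores in the system produces the kind of full latency matrix shown in the charts below.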

In terms of the core-to-core tests on the Tiger Lake-H 11980HK, it’s best to compare the results 1:1 alongside a 4-core Tiger Lake design such as the i7-1185G7:

What’s very interesting in these results is that although the new 8-core design features double the cores, representing a larger ring bus with more ring stops and cache slices, the core-to-core latencies are actually lower, both in terms of best-case and worst-case results, compared to the 4-core Tiger Lake chip.

This is a bit perplexing; generally, the things that would account for such a difference are either faster CPU frequencies, or a faster clock or lower cycle latency of the L3 and the ring bus. Given that TGL-H comes 8 months after TGL-U, it is plausible that the newer chip has a more mature implementation and that Intel has been able to optimise access latencies.

Due to AMD’s recent shift to an 8-core core complex, Intel no longer has an advantage in core-to-core latencies this generation, and AMD’s more hierarchical cache structure and interconnect fabric are able to showcase better performance.

Cache & DRAM Latency

This is another in-house test built by Andrei, which showcases the access latency at all points in the cache hierarchy for a single core. We start at 2 KiB and probe the latency all the way up to 256 MB, which for most CPUs sits inside DRAM (before you start saying the 64-core TR has 256 MB of L3, any single core can only access 16 MB of it, so at 20 MB you are in DRAM).

Part of this test helps us understand the range of latencies for accessing a given level of cache, but also the transition between the cache levels gives insight into how different parts of the cache microarchitecture work, such as TLBs. As CPU microarchitects look at interesting and novel ways to design caches upon caches inside caches, this basic test proves to be very valuable.
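Again, the exact tool is Andrei’s own; the usual way to measure a curve like this is a dependent pointer chase, where each load returns the address of the next load, so nothing can overlap and the average time per hop approximates the access latency at that buffer depth. The sketch below is a rough illustration under that assumption – the depths, the random access pattern, and the iteration count are our own illustrative choices:

```cpp
// Rough pointer-chase latency sketch: walk a randomly-ordered cyclic linked
// list laid out in a buffer of the test size. Every load depends on the
// previous one, so the time per hop approximates access latency at that depth.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

static double chase_ns_per_access(std::size_t bytes) {
    const std::size_t n = bytes / sizeof(void*);
    std::vector<void*> buf(n);

    // Build a random cyclic permutation: element order[i] points to order[i+1].
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), 0);
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (std::size_t i = 0; i + 1 < n; ++i) buf[order[i]] = &buf[order[i + 1]];
    buf[order[n - 1]] = &buf[order[0]];

    constexpr std::size_t kIters = 10'000'000;
    void* p = &buf[order[0]];
    const auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < kIters; ++i) p = *static_cast<void**>(p);
    const auto t1 = std::chrono::steady_clock::now();

    // Consume the final pointer so the compiler cannot optimise the chase away.
    if (p == nullptr) std::puts("unreachable");

    return std::chrono::duration<double, std::nano>(t1 - t0).count() / kIters;
}

int main() {
    // The real test sweeps from 2 KiB to 256 MB in fine steps; these depths
    // are just illustrative points in L1, L2, L3 and DRAM territory.
    for (std::size_t kib : {16, 256, 4096, 65536, 262144}) {
        std::printf("%8zu KiB: %6.2f ns/access\n", kib, chase_ns_per_access(kib * 1024));
    }
    return 0;
}
```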

What’s of particular note for TGL-H is the fact that the new higher-end chip does not support LPDDR4, instead relying exclusively on DDR4-3200, as on this reference laptop configuration. This does favour the chip in terms of memory latency, which now falls in at a measured 101 ns versus 108 ns on the reference TGL-U platform we tested last year, but it does come at a cost in memory bandwidth, which now only reaches a theoretical peak of 51.2 GB/s instead of 68.2 GB/s – even with double the core count.
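For reference, both theoretical peaks follow directly from transfer rate multiplied by bus width across the 128-bit memory interface: dual-channel DDR4-3200 works out to 3200 MT/s × 16 bytes per transfer = 51.2 GB/s, while the LPDDR4X-4266 configuration used on TGL-U works out to 4266 MT/s × 16 bytes ≈ 68.2 GB/s.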

What’s in favour of the TGL-H system is the increased L3 cache, from 12 MB to 24 MB – this is still 3 MB per core slice as on TGL-U, so it does come with the newer L3 design introduced in TGL-U. Nevertheless, we do see some differences in L3 behaviour: the TGL-H system has slightly higher access latencies at the same test depth than the TGL-U system, even accounting for the fact that the TGL-H CPUs are clocked slightly higher and have better L1 and L2 latencies. This is an interesting contradiction in the context of the improved core-to-core latency results we just saw, which means that for the latter Intel did make some changes to the fabric. Furthermore, we see flatter access latencies across the L3 depth, which isn’t quite how the TGL-U system behaved, meaning Intel has definitely made some changes to how the L3 is accessed.

229 Comments

  • ozzuneoj86 - Monday, May 17, 2021 - link

    While it is nice that it supports gen 4, realistically you're just getting SSDs that put out more heat, with more power draw, while gaining performance benefits that are only measurable in benchmarks or very specific situations.

    I'm sure file copy performance is much higher, but how fast do you need that to be? Assuming you're copying to the drive itself or maybe to a Thunderbolt 4 external drive, it is the difference between copying 1TB of data in 2 minutes versus 6 minutes. You can (theoretically) completely fill a $400 2TB SSD in 4 minutes with gen4 vs maybe 12 minutes with Gen 3. If someone needs to do that all the time, then sure there's a difference... but that has to be pretty uncommon.

    For smaller amounts of data, any decent nvme drive is fast enough to make the difference between models almost unnoticeable. For the vast majority of users, even a SATA drive is plenty fast enough to provide a smooth and nearly wait-free experience.
  • mode_13h - Monday, May 17, 2021 - link

    > realistically you're just getting SSDs that put out more heat, with more power draw,
    > while gaining performance benefits that are only measurable in benchmarks
    > or very specific situations.

    Exactly. Thank you.
  • mode_13h - Monday, May 17, 2021 - link

    > Assuming you're copying to the drive itself or maybe to a Thunderbolt 4 external drive

    Oops! TB 4 is limited to PCIe 3.0 x4 speeds! So, it'd be little-to-no help there!
  • Calin - Tuesday, May 18, 2021 - link

    Well, you could copy full blast to an external drive and have plenty of remaining performance to do other storage intensive things - that's assuming your external drive is fast enough to saturate PCIe 3.0 x4, and your internal drive is faster still.
  • mode_13h - Thursday, May 20, 2021 - link

    > Well, you could copy full blast to an external drive and have plenty of remaining performance

    I'm not one to turn down "free" performance, but PCIe 4 uses significantly more power. In a laptop, that's not a minor point.
  • inighthawki - Monday, May 17, 2021 - link

    Sequential read and write speeds are basically just flexing. Very few people actually ever make significant use of such speeds in a way that saves more than a second or two here or there. Most laptop users are not sitting there copying a terabyte of sequential data over and over again.
  • The_Assimilator - Monday, May 17, 2021 - link

    There is no laptop chassis on the market that can adequately handle the 8W+ of heat that a PCIe 4.0 NVMe SSD can dissipate.
  • Cooe - Monday, May 17, 2021 - link

    You're not getting those kind of speeds sustained in a laptop without RIDICULOUS thermal throttling. PCIe 4.0 in mobile atm is just a marketing checkmark & nothing more.
  • Calin - Tuesday, May 18, 2021 - link

    It allows faster "races to sleep" for the processor. And, since the Core2 architecture, the winning move was "fast and power hungry processor that does what it must and then goes to a very low power state". This gives you very good burst speed and low average power - as soon as you finish, you can throttle everything down (CPU, caches, SSDs, ...)
  • mode_13h - Thursday, May 20, 2021 - link

    > It allows faster "races to sleep" for the processor.

    Are we still talking about PCIe 4? I don't think it works like that.

    > since the Core2 architecture, the winning move was "fast and power hungry processor that does what it must and then goes to a very low power state".

    No, it's more energy-efficient to run at a slower clock speed. There's a huge difference between the amount of energy used in turbo and non-turbo modes. As it's far bigger than the performance difference, there's no way that going to idle a little sooner is going to make up for it.
