1st Generation Neoverse-N1 80-Core Server SoC

Readers who are familiar with the Neoverse-N1 and our coverage of the Amazon Graviton2 won’t be too surprised by the general system architecture of the new first-generation Altra “Quicksilver” design.

The Quicksilver design features up to 80 Neoverse-N1 cores, integrated within an Arm CMN-600 mesh interconnect that features 32MB of distributed system level cache.

Ampere has equipped the CPU cores themselves with the maximum 64KB L1 caches as well as 1MB of private L2 per core. Although the L3 (the SLC) seems quite reasonable at 32MB, it is the same size as on the 64-core Graviton2, meaning the new Altra system actually gets less cache per core at a system level (0.4MB versus 0.5MB of SLC per core) – a bit concerning, as we also saw the Graviton2 be quite cache-starved in some workloads. Arm had envisioned Neoverse-N1 systems with 64MB or even 128MB of L3 cache, so 32MB is a practical compromise in this first-generation product, and we’ll investigate the performance impact throughout the review.

Beyond the higher core count, what also stands out for the Altra system in comparison to the Graviton2 are the significantly higher clock frequencies of up to 3.3GHz for the top SKU, compared to the 2.5GHz of the Amazon chip – a 32% difference that should translate into a corresponding per-core performance advantage for the Ampere system.

On the system side, the Altra Quicksilver chip features 8 DDR4-3200 memory controllers for a theoretical peak bandwidth of 204GB/s per socket.
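
The peak figure is simple channel arithmetic: 8 channels × 3200MT/s × 8 bytes per 64-bit transfer works out to 204.8GB/s of raw theoretical bandwidth per socket, before any real-world efficiency losses.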

Ampere achieves dual-socket connectivity through two dedicated PCIe Gen4 x16 links running at 25GT/s and featuring CCIX protocol compatibility. The bandwidth here is half that of a comparable AMD Rome system, which features up to 4 x16 Gen4 links between sockets, and it’s also the first time we’ll be seeing CCIX’s cache coherency capabilities used in this way – definitely a unique design choice on the part of the Altra system.

Altra QuickSilver SKU List

Ampere had revealed their QuickSilver SKU list earlier this summer, but hadn’t yet published prices for the different configurations – something we can finally reveal today:

Ampere 1st Gen Altra 'QuickSilver' Product List

AnandTech          Cores  Frequency  TDP    PCIe     DDR4      Price
Q80-33 (Tested)    80     3.3 GHz    250 W  128x G4  8 x 3200  $4050
Q80-30             80     3.0 GHz    210 W  128x G4  8 x 3200  $3950
Q80-26             80     2.6 GHz    175 W  128x G4  8 x 3200  $3810
Q80-23             80     2.3 GHz    150 W  128x G4  8 x 3200  $3700
Q72-30             72     3.0 GHz    195 W  128x G4  8 x 3200  $3590
Q64-33             64     3.3 GHz    220 W  128x G4  8 x 3200  $3810
Q64-30             64     3.0 GHz    180 W  128x G4  8 x 3200  $3480
Q64-26             64     2.6 GHz    125 W  128x G4  8 x 3200  $3260
Q64-24             64     2.4 GHz    95 W   128x G4  8 x 3200  $3090
Q48-22             48     2.2 GHz    85 W   128x G4  8 x 3200  $2200
Q32-17             32     1.7 GHz    45 W   128x G4  8 x 3200  $800

Ampere here covers a very wide spectrum of SKUs, ranging from today’s tested top-model in the form of the 80-core, 3.3GHz 250W Q80-33, down to “small” low-power 32-core 1.7GHz 45W models such as the Q32-17.

Ampere should be praised for their naming scheme here as it’s the most straightforward of any vendor in the industry, directly showcasing the core number and frequency in the model name, with TDP being really the only characteristic which you’d have to look up.

Across the board, all SKUs feature the full 128 lanes of PCIe I/O connectivity as well as the full 8-channel DDR4-3200 memory capability, able to host up to 4TB of DRAM, without any artificial feature limitations.

With today’s reveal of the SKU pricing, we can finally make rough comparisons to Intel’s and AMD’s line-ups, and Ampere here is incredibly aggressive in terms of its value proposition, vastly undercutting the competition’s models.

An AMD EPYC 7742 with 64 cores and a 225W TDP comes in at $6950, while an Intel Xeon Platinum 8280 with 28 cores and a 205W TDP carries a price tag of $10009 (it’s to be noted that the Xeon Gold 6258R features the same specifications as the 8280, minus the ability to scale beyond 2 sockets, for only $3950). Against those figures, Ampere’s Q80-33 with 80 cores at a 250W TDP and a price tag of “only” $4050 seems like a steal.

Of course, we’re comparing MSRP to MSRP across vendors – and it’s pretty certain that large customers making large orders won’t be paying MSRP prices – however, if this relative MSRP positioning between vendors is indicative of pricing for larger orders, then Ampere is certain to make a large impact on the enterprise market.

I’m writing this already knowing the performance of the new Altra system, which we’ll expose in the coming pages – so we’ll revisit the whole value proposition argument later on in the article.

About TDPs and Frequencies

One important topic I wanted to touch upon is the way Ampere describes TDP and frequencies, as it differs considerably from what the public has gotten used to with AMD and Intel products.

Regarding the described top frequency of the Altra system: although Ampere calls this a “sustained turbo”, using the turbo nomenclature at all is probably the wrong way to go about it. The peak frequency of the design is simply that – a peak frequency at which the chip normally operates in 99% of scenarios.

This is in contrast to current x86 designs, which have “opportunistic turbo” mechanisms that boost their frequencies beyond a guaranteed “base frequency”. In Ampere’s case, the Altra’s described frequency is essentially the de-facto base frequency, even though the chip still operates a normal DVFS scheme and can clock down below that figure during idle or low-utilisation periods.

Because frequency is essentially fixed under most workloads, what actually fluctuates between different types of workloads is the power consumption of the processor. The figure Ampere describes as TDP is the maximum power consumption of the processor averaged over short time periods.

This comes in contrast to the TDP figures of other systems such as AMD’s EPYC and Intel’s Xeon CPUs, where the TDP can actually be interpreted as a fairly accurate average power consumption figure for the vast majority of workloads. If such a processor is running a workload that doesn’t result in particularly high power consumption, the design will simply increase performance and power by raising its clock frequency.

For example, a low-IPC, high-memory workload on an EPYC 7742 will result in low power consumption on the part of the cores, so the chip will clock them up to 3200MHz on all 64 cores to fill out the 225W TDP. A high-IPC workload that stresses the cores and results in higher power draw might end up with an average runtime frequency of 2600MHz across all cores – but in both cases the average power consumption will settle around the 225W TDP figure.

So although Ampere’s Altra Q80-33, for example, carries a 250W TDP figure, its power consumption for the majority of workloads almost always averages below that figure, ranging as low as 180W in some low-IPC workloads. I haven’t actually measured a single average figure across all of our workloads, but a rough estimate across the board for the Q80-33 would be 200W.

In our testing with the Mount Jade server, which still had preliminary firmware and initially had an uncapped TDP, I’ve only ever hit one workload (507.cactuBSSN_r) that consistently pushed power consumption in excess of 250W, reaching up to 280W. Re-enabling the 250W TDP cap of course limited it to that figure on small-timescale averages – the Altra’s power management works at a 200µs granularity.
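
To make the small-timescale averaging behaviour a bit more concrete, here’s a minimal sketch of how a sliding-window power limiter evaluated at a 200µs granularity could be structured; the window length, the names, and the simple throttling policy are our own assumptions for illustration, not a description of Ampere’s actual firmware:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdbool.h>

    #define WINDOW_SLOTS    16       /* assumption: average over 16 x 200us = 3.2ms */
    #define TDP_MILLIWATTS  250000   /* Q80-33: 250 W cap */

    typedef struct {
        uint32_t samples_mw[WINDOW_SLOTS];  /* most recent power samples, in mW */
        uint32_t index;                     /* next slot to overwrite */
    } power_window_t;

    /* Record the latest 200us power sample and report whether the windowed
     * average now exceeds the cap, in which case a real controller would step
     * the cores down to a lower DVFS operating point. */
    static bool over_tdp(power_window_t *w, uint32_t latest_sample_mw)
    {
        w->samples_mw[w->index] = latest_sample_mw;
        w->index = (w->index + 1) % WINDOW_SLOTS;

        uint64_t sum = 0;
        for (int i = 0; i < WINDOW_SLOTS; i++)
            sum += w->samples_mw[i];

        return (sum / WINDOW_SLOTS) > TDP_MILLIWATTS;
    }

    int main(void)
    {
        power_window_t w = {0};

        /* Feed a burst of 280 W samples (similar to what 507.cactuBSSN_r drew
         * with the cap disabled) and watch the limiter trip as the window fills. */
        for (int i = 0; i < WINDOW_SLOTS; i++)
            printf("sample %2d: limiter %s\n", i,
                   over_tdp(&w, 280000) ? "throttling" : "idle");

        return 0;
    }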

Fundamentally, the Altra’s handling of frequency and power in such a manner is simply a by-product of the Neoverse-N1 cores not being able to clock higher than 3.3GHz, and of the cores being efficient enough that they have power leeway in many workloads. The x86 players’ implementations simply clock higher when given the opportunity, because they can – and clock lower in power-hungry situations, because they have to.

Comments

  • Brane2 - Saturday, December 19, 2020 - link

    Meh. Nothing special. It has been benchmarked on Phoronix and it performed more or less on par with Rome. 80 newest ARM cores against 64 mature x86 cores within a constrained power envelope.
    Naples is just about to come out and I suspect some time after that AMD will have something like really wide new RISC-V cores.
  • Wilco1 - Saturday, December 19, 2020 - link

    It won most benchmarks on Phoronix while using significantly less power. Yes Milan is about to be released, and it will have to compete with the 128-core Altra Max. Which do you believe is going to win - 64 SMT cores or 128 real cores?
  • mode_13h - Sunday, December 20, 2020 - link

    It actually won less than half of the benchmarks on Phoronix, since a number of those graphs just re-state the results in score/W. There are also questions over some of the compiler options used on those benchmarks, since many of the tests are compiled with options that won't enable AVX on benchmarks where it should be beneficial (yet, not having SVE, the N1 cores are at no such disadvantage).
  • Wilco1 - Monday, December 21, 2020 - link

    "should be beneficial" -> "might help in a few limited cases". AVX/AVX512 isn't that useful for general C/C++ code. You typically only see large gains when people optimize using intrinsics.
  • mode_13h - Monday, December 21, 2020 - link

    Intrinsics don't compile if they're for a CPU arch beyond what the compiler is being instructed to target. So, even packages where people take the time to optimize with intrinsics need to guard them with compile-time checks to ensure the CPU target is capable of executing those instructions (see the sketch at the end of this comment).

    Compilers do generate vectorized code. I don't know how well GCC is doing on that front, lately, but the TNN tests should be a good way to see that. Too bad those tests don't use -march=native.

    What's interesting about TNN is I'm looking at the exact source revision Phoronix is using, and it seems they've completely dropped their backend for x86. The source/tnn/device/x86/ is simply missing. So, I wonder if they decided the compiler was good enough that they didn't need to bother with their own hand-optimized code for it, or if they just decided they don't care how fast their stuff runs on it.

    See:
    * https://openbenchmarking.org/innhold/83a730ed41d4e...
    * https://github.com/Tencent/TNN/tree/v0.2.3
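
    To make that concrete, a minimal sketch of the compile-time guard pattern described above might look like the following; the function and its scalar fallback are our own illustrative example, not code from TNN or from any of the Phoronix tests:

        #include <stdio.h>
        #include <stddef.h>

        #if defined(__AVX__)
        #include <immintrin.h>

        /* AVX path: only compiled when the target ISA (e.g. -mavx, or
         * -march=native on a capable x86 CPU) actually advertises AVX. */
        static void add_arrays(float *dst, const float *a, const float *b, size_t n)
        {
            size_t i = 0;
            for (; i + 8 <= n; i += 8) {
                __m256 va = _mm256_loadu_ps(a + i);
                __m256 vb = _mm256_loadu_ps(b + i);
                _mm256_storeu_ps(dst + i, _mm256_add_ps(va, vb));
            }
            for (; i < n; i++)            /* scalar tail */
                dst[i] = a[i] + b[i];
        }
        #else
        /* Portable fallback: used when the compiler isn't targeting AVX,
         * e.g. a generic x86-64 baseline build or an Arm build for the Altra. */
        static void add_arrays(float *dst, const float *a, const float *b, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                dst[i] = a[i] + b[i];
        }
        #endif

        int main(void)
        {
            float a[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
            float b[9] = {9, 8, 7, 6, 5, 4, 3, 2, 1};
            float dst[9];

            add_arrays(dst, a, b, 9);
            for (int i = 0; i < 9; i++)
                printf("%.0f ", dst[i]);  /* prints "10" nine times */
            printf("\n");
            return 0;
        }
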
  • Wilco1 - Monday, December 21, 2020 - link

    TNN does not benefit from -march=native. Phoronix uses the generic C++ version which doesn't benefit from vectorization. Try it yourself.

    Optimized versions using intrinsics typically use runtime checks so you automatically get the fastest version that works on your CPU. The makefile selects the right ISA variant for any files using intrinsics. But none of this is used in the TNN test.
  • mode_13h - Monday, December 21, 2020 - link

    > TNN does not benefit from -march=native. Phoronix uses the generic C++ version which doesn't benefit from vectorization. Try it yourself.

    At this point, I probably will.

    > Optimized versions using intrinsics typically use runtime checks so you automatically get the fastest version that works on your CPU.

    That's a whole additional level of effort for the developers. For them to bother compiling and conditionally calling different versions only makes sense if they think their main userbase aren't going to bother recompiling specifically for their hardware. In the case of specialized packages, it's reasonable to expect your users to take a little trouble for the best performance. It's really things like very low-level libs or multimedia code where you tend to see the sort of elaborate runtime detection and dynamic codepath selection that you're describing.
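
    As an aside for readers curious what that runtime detection typically looks like in practice, below is a minimal sketch using GCC/Clang's __builtin_cpu_supports(); the kernel names and the dispatch structure are our own illustrative assumptions, not anything taken from the TNN codebase:

        #include <stdio.h>

        /* Two implementations of the same kernel. In a real project each
         * variant would live in its own translation unit, compiled with the
         * matching -m flags so that its intrinsics are accepted. */
        static void kernel_avx2(void)    { puts("dispatched to AVX2 path"); }
        static void kernel_generic(void) { puts("dispatched to generic path"); }

        typedef void (*kernel_fn)(void);

        /* Probe the CPU once at startup and return the fastest usable variant. */
        static kernel_fn select_kernel(void)
        {
        #if defined(__x86_64__) || defined(__i386__)
            __builtin_cpu_init();
            if (__builtin_cpu_supports("avx2"))
                return kernel_avx2;
        #endif
            return kernel_generic;
        }

        int main(void)
        {
            kernel_fn kernel = select_kernel();
            kernel();
            return 0;
        }
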
  • mode_13h - Monday, December 21, 2020 - link

    I think Basis Universal and High Performance Conjugate Gradient are some other cases where the wider SIMD of Zen2 and Skylake-SP should confer significant benefit.
  • Wilco1 - Monday, December 21, 2020 - link

    "should give significant benefit" -> "might give some benefit". I suggest you try out. Autovectorization is not nearly as good as you seem to believe, and the overall speedup is often disappointing even if some loops are 10-20x faster.
  • vinayshivakumar - Saturday, December 19, 2020 - link

    I am a bit puzzled why none of these processors support SMT... Can someone shed light on why this is the case?
