Cavium Thunder-X

A few months ago, we talked briefly with the people of Cavium. Cavium specializes in designing MIPS SoCs for intelligent networking, communications, storage, video, and security applications. The picture below sums it all up: present and future.

Cavium's "Thunder Project" started from Cavium's existing Octeon III network SoC, the CN78xx. Cavium's bread and butter has been integrating high speed network capabilities in SoCs, so you will be able to choose between SoCs that have 100 Gbit Ethernet and 10GBit Ethernet. PCI-Express roots and multiple SATA ports are all integrated. There is no doubt that Cavium can design a highly integrated feature-rich SoC, but what about the processing core?

The MIPS cores inside the Octeon are much simpler – dual-issue in-order – but also much smaller and need very little power compared to a typical server core. Four (28nm) MIPS cores can fit in the space of one (32nm) Sandy Bridge core.

Replace the MIPS decoders with ARMv8 decoders and you are almost there. However, while the Cavium Thunder-X is definitely not made to run SAP, server workloads are a bit more demanding than network processing, so Cavium needed to beef up the Octeon cores. The new Thunder-X cores are still dual-issue, but they're now out-of-order instead of in-order, and the pipeline length has been increased from eight to nine stages to allow for higher clocks. Each core has a 78KB L1 instruction cache and a 32KB L1 data cache.

The 39-way, 78KB L1 instruction cache is certainly odd, but it might be more than just "network processor heritage". Our own testing and a few academic studies have shown that scale-out workloads such as memcached have a higher than normal I-cache miss rate (normal meaning the typical SPECint_rate2006 characterization). The reason is that these applications run a lot of kernel code, more specifically the code of the network stack. As a result, the instruction footprint is much larger than expected.
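To make that claim concrete, the L1 I-cache miss rate of a workload, kernel code included, can be measured with Linux hardware performance counters. The sketch below is a minimal, hypothetical example built on the generic perf_event_open interface (it is not Thunder-X specific); dummy_workload() is a stand-in for whatever scale-out service you actually want to profile.

```c
/* Minimal sketch: count L1 I-cache read misses (user + kernel) around a
 * workload via the Linux perf_event_open interface.
 * Build: gcc -O2 icache_miss.c -o icache_miss */
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>
#include <string.h>
#include <stdint.h>
#include <stdio.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
    return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

static void dummy_workload(void)
{
    /* Placeholder: in a real test this would be memcached, nginx, etc. */
    volatile uint64_t sum = 0;
    for (uint64_t i = 0; i < 100000000ULL; i++)
        sum += i;
}

int main(void)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HW_CACHE;
    attr.size = sizeof(attr);
    /* L1 instruction cache, read accesses, misses. Kernel code is
     * deliberately included, since that is where the network stack lives. */
    attr.config = PERF_COUNT_HW_CACHE_L1I |
                  (PERF_COUNT_HW_CACHE_OP_READ << 8) |
                  (PERF_COUNT_HW_CACHE_RESULT_MISS << 16);
    attr.disabled = 1;

    int fd = perf_event_open(&attr, 0 /* this process */, -1, -1, 0);
    if (fd == -1) {
        perror("perf_event_open");
        return 1;
    }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    dummy_workload();
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    uint64_t misses = 0;
    if (read(fd, &misses, sizeof(misses)) == sizeof(misses))
        printf("L1 I-cache misses: %llu\n", (unsigned long long)misses);

    close(fd);
    return 0;
}
```

Running the same counter over a SPECint-style compute kernel and over a network-heavy service makes the difference in instruction footprint obvious, and explains why an unusually large I-cache can pay off here.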

Another reason why we believe Cavium has done its homework is the fact that more die area is spent on cores (up to 48) than on large caches; an L3 cache is nowhere to be found. The Thunder-X has only one centralized, relatively low-latency 16MB L2 cache running at full core speed. A lot of academic studies have confirmed that a large L3 cache is a waste of transistors for scale-out workloads. Besides the most used instructions that reside in the I-cache, there is a huge amount of less frequently used kernel code that does not fit in an L3 cache. In other words, an L3 cache just adds latency to requests that missed the L1 cache and will end up in DRAM anyway. That is also the reason why Cavium made sure that a beefy memory subsystem is available: the Thunder-X comes with four 72-bit DDR3/4 memory controllers and currently supports the fastest DRAM available for servers, DDR4-2133.
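For a rough sense of what those four controllers buy you, here is a back-of-the-envelope peak bandwidth calculation. It assumes each 72-bit channel carries 64 data bits (the remaining 8 bits being ECC) and that DDR4-2133 performs 2133 million transfers per second per channel; real sustained bandwidth will of course be lower.

```c
#include <stdio.h>

int main(void)
{
    /* Back-of-the-envelope peak DRAM bandwidth for four DDR4-2133 channels.
     * Assumption: 64 data bits (8 bytes) per 72-bit channel, ECC excluded. */
    const double channels       = 4.0;
    const double bytes_per_xfer = 8.0;     /* 64 data bits per transfer */
    const double transfers_s    = 2133e6;  /* DDR4-2133: 2133 MT/s      */

    double peak_bytes_s = channels * bytes_per_xfer * transfers_s;
    printf("Peak theoretical bandwidth: ~%.0f GB/s\n", peak_bytes_s / 1e9);
    return 0;
}
```

That works out to roughly 68GB/s of theoretical peak bandwidth feeding 48 cores, which underlines why Cavium favored memory controllers over a large L3.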

On the flip side, having 48 cores with a relatively small 32KB D-cache that all access one centralized 16MB L2 cache also means that the Thunder-X is less suited for some "traditional" server workloads such as SQL databases. So a Thunder-X core is simpler and probably quite a bit weaker than an ARM Cortex-A57 in some ways, let alone an X-Gene core. The fact that the Thunder-X spends far fewer transistors on cache than on cores clearly indicates that it is targeting other workloads. Single-threaded performance is likely to be lower than that of the AMD Seattle and X-Gene, but it could be close enough: the Thunder-X will run at 2.5GHz, courtesy of GlobalFoundries' 28nm process technology. Cavium claims that even the top SKU will keep the TDP below 100W.

There is more. The Thunder-X uses Cavium's proprietary Coherent Processor Interconnect (CCPI) and can thus work in a dual-socket NUMA configuration. As a result, a Thunder-X based server can have up to 96 cores and supports up to 1TB of memory, 512GB per socket. Multiple 10/40GbE ports, PCIe root complexes, and SATA controllers are integrated into the SoC. Depending on the SKU, TCP/IPSec offload and SSL accelerators are also integrated.
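On a dual-socket, CCPI-linked machine like this, software that cares about memory latency generally wants to keep its data on the local socket rather than paying for a hop across the interconnect. Below is a minimal, hypothetical sketch of that idea using the standard Linux libnuma API; it is not Thunder-X specific, just the usual way a NUMA-aware service would pin its working set.

```c
/* Minimal sketch: run on NUMA node 0 and allocate the working set from
 * node 0's local memory, so accesses stay off the socket interconnect.
 * Build: gcc -O2 numa_local.c -o numa_local -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() == -1) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }
    printf("NUMA nodes: %d\n", numa_max_node() + 1); /* 2 on a dual socket */

    /* Keep this process's threads on node 0 ... */
    if (numa_run_on_node(0) != 0) {
        perror("numa_run_on_node");
        return 1;
    }

    /* ... and allocate a 1GB working set from node 0's local DRAM. */
    size_t size = 1UL << 30;
    void *buf = numa_alloc_onnode(size, 0);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }

    memset(buf, 0, size);   /* touch the pages so they are actually backed */
    /* ... real work on the node-local buffer would go here ... */

    numa_free(buf, size);
    return 0;
}
```

A common alternative for scale-out software such as memcached is simply to run one instance per socket and let the OS keep each instance's memory local.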

The recent launch of Cavium's Thunder-X SKUs makes it clear that Cavium is trying to compete with the venerable Xeon E5 in some niche but large markets:

  1. ThunderX_CP: For cloud compute workloads such as public and private clouds, web caching, web serving, search, and social media data analytics.
  2. ThunderX_ST: For cloud storage, big data, and distributed databases.
  3. ThunderX_NT: For telecom/NFV server and embedded networking applications.
  4. ThunderX_SC: For secure computing applications.

Considering Cavium's background and expertise, it is pretty obvious that the ThunderX_NT and ThunderX_SC should be very capable challengers to the Xeon E5 (and Xeon D), but only a thorough review will tell how well the ThunderX_CP will do. One of the strongest points of Calxeda was its highly integrated fabric, which lowered the total power consumption and network latency of a server cluster. Just like AMD/SeaMicro, Cavium is well positioned to make sure that Thunder-X based server clusters offer the same high level of network/compute integration.

Comments

  • patrickjchase - Thursday, December 18, 2014 - link

    It's been a while since I worked on this stuff, but I don't think that the statement that "CCN is very comparable to the ring bus found inside all Xeon processors beginning with Sandy Bridge" is quite right.

    CCN
  • patrickjchase - Thursday, December 18, 2014 - link

    Finishing my comment:

    CCN
  • stefstef - Wednesday, December 17, 2014 - link

    The idea of having an energy-efficient design certainly will pay off. Nvidia and Samsung showed that having e.g. 4 cores plus a fifth core dedicated to energy management can be a good low-cost solution. I don't often read the articles at AnandTech because they are usually boring, although I am happy to place a comment here. ARM rules in certain fields, but in a couple of years only because Intel will allow them to do so. Every company needs room to live in. Another American breakfast for the Chinese, who will get their share in the processor market as well.
  • milli - Thursday, December 18, 2014 - link

    I don't understand how ARM is suddenly going to succeed where MIPS and PowerPC have already tried and failed. I feel that ARM is more of a market trend than anything else (in the server market).
    Even the current ARM server SoC manufacturers have already tried to penetrate the server market. Cavium and Broadcom already had custom-designed low-power MIPS SoCs. IBM, Applied Micro, and Freescale have had a bunch of low-power PowerPC options.
    By the time any of these products is released, Intel is going to have a better alternative thanks to their process advantage. No IT manager is going to manage to convince any of the corporate fat cats that a huge overhaul is needed. Same story all over again.
  • yuhong - Friday, December 19, 2014 - link

    "Unfortunately their 16GB DIMMs will only work with the Atom C2000, leading to the weird situation that the Atom C2000 supports more memory than the more powerful Xeon E3."
    I think the reason is software related. More precisely, the Memory Reference Code (MRC).
  • adrian1987 - Monday, January 5, 2015 - link

    Hi. The Haswell core can actually have a max IPC of 6 instructions per cycle using macro-fusion, not 5 as listed here (assuming the code is ideal). It has 2 execution units that can handle fused ALU and branch instructions. Source: http://www.anandtech.com/show/6355/intels-haswell-...
  • aaronjoue - Tuesday, April 7, 2015 - link

    Here is the real micro server. http://www.ambedded.com.tw/pt_list.php?CM_ID=20140...
    http://wiki.ambedded.com.tw/index.php?title=MicroS...
    7 & 21 nodes in a chassis
    It supports Ubuntu and open source Ceph.
