Xeon D vs ThunderX: Supermicro vs Gigabyte

While "SoC" literally stands for system on a chip, in practice it's still just one component of a whole server. A new SoC cannot make it to market alone; it needs the backing of server vendors to provide the rest of the hardware around it and make it a complete system.

To that end, Gigabyte has adopted Cavium's ThunderX in quite a few different servers. Meanwhile on the Intel side, Supermicro is the company with the widest range of Xeon D products.

There are other server vendors, such as Penguin Computing and Wistron, that will make use of the ThunderX, and you'll find Xeon D systems from over a dozen vendors. But it is clear that Gigabyte and Supermicro are the vendors that make the ThunderX and the Xeon D, respectively, available to the widest range of companies.

For today's review we got access to the Gigabyte R120-T30.

Although density is important, we cannot say we are fans of 1U servers: the small fans in those systems tend to waste a lot of energy.

Eight DIMMs allow the ThunderX SoC to offer up to 512 GB, but realistically 256 GB (8 x 32 GB) is the maximum practical capacity in 2016. Still, that is twice as much as the Xeon D, which can be an advantage in caching or big data servers. Of course, Cavium is "the intelligent network company," and that is where this server really distinguishes itself. One Quad Small Form-factor Pluggable Plus (QSFP+) link can deliver 40 Gb/s, and combined with four 10 Gb/s Small Form-factor Pluggable Plus (SFP+) links, a complete ThunderX system is good for a total of 80 Gbit per second of network bandwidth.
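
To make those headline numbers concrete, here is a quick back-of-the-envelope sketch in Python. The link counts and speeds come straight from the paragraph above; the rest is simple arithmetic.

```python
# Aggregate network bandwidth and practical memory capacity
# for a fully configured ThunderX system as described above.
qsfp_links, qsfp_gbps = 1, 40   # one QSFP+ port at 40 Gb/s
sfp_links, sfp_gbps = 4, 10     # four SFP+ ports at 10 Gb/s each
dimm_slots, dimm_gb = 8, 32     # 32 GB DIMMs: the practical 2016 ceiling

total_gbps = qsfp_links * qsfp_gbps + sfp_links * sfp_gbps
total_mem_gb = dimm_slots * dimm_gb

print(f"aggregate network bandwidth: {total_gbps} Gb/s")  # 80 Gb/s
print(f"practical memory capacity: {total_mem_gb} GB")    # 256 GB
```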

Along with building in an extensive amount of dedicated network I/O, Cavium has also outfitted the ThunderX with a large number of SATA host ports, 16 in total. This allows you to use the three PCIe 3.0 x8 links for purposes other than storage or network I/O.

That said, the 1U chassis used by the R120-T30 is somewhat at odds with the capabilities of the ThunderX here: there are 16 SATA ports, but only 4 hot-swap bays are available. Big data platforms make use of HDFS, and with a typical replication factor of 3 (each block is stored 3 times) and performance that scales with the number of disks (rather than with latency), many people are searching for a system with lots of disk bays.
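
To illustrate why bay count matters more than port count for HDFS, here is a minimal sketch; the 4 TB drive size and the 12-bay comparison chassis are hypothetical, and only the replication factor of 3 comes from the text above.

```python
# Usable HDFS capacity is raw capacity divided by the replication
# factor, since each block is stored that many times.
replication = 3      # typical HDFS replication (from the text above)
disk_tb = 4.0        # hypothetical 4 TB drives

for bays in (4, 12): # 4 bays (R120-T30) vs. a hypothetical 12-bay chassis
    raw_tb = bays * disk_tb
    usable_tb = raw_tb / replication
    print(f"{bays:2d} bays: {raw_tb:5.1f} TB raw -> {usable_tb:4.1f} TB usable")
```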

Finally, we're happy to report that there is no lack of monitoring and remote management capabilities. A serial port is available for low-level debugging, and an AST2400 BMC with an out-of-band gigabit Ethernet port allows you to manage the server from a distance.
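
Since the AST2400 speaks standard IPMI over that dedicated port, a stock tool such as ipmitool can monitor and power-cycle the machine remotely. The sketch below, driving ipmitool from Python, uses a placeholder hostname and credentials; adapt them to your BMC.

```python
import subprocess

# Talk to the BMC over its out-of-band Ethernet port via IPMI-over-LAN.
# "bmc.example.com", "admin" and "secret" are placeholders, not defaults.
BMC = ["ipmitool", "-I", "lanplus",
       "-H", "bmc.example.com", "-U", "admin", "-P", "secret"]

subprocess.run(BMC + ["chassis", "status"], check=True)  # power/fault state
subprocess.run(BMC + ["sensor", "list"], check=True)     # temps, fans, voltages
# subprocess.run(BMC + ["power", "cycle"], check=True)   # uncomment to reboot
```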

Comments

  • vivs26 - Wednesday, June 15, 2016 - link

    Not necessarily (read up on Amdahl's law and diminishing returns). The performance actually depends on the workload. Having a million cores guarantees nothing in terms of performance unless the workload is parallelizable, which in the real world is less often the case than we think. I'm curious to see how a Xeon merged with Altera programmable fabric performs compared to ARM on a server. (See the sketch after this thread for Amdahl's law in numbers.)
  • maxxbot - Wednesday, June 22, 2016 - link

    Technically true, but every generation that millstone gets a little smaller; the die area and power needed to translate x86 into uops isn't huge, and it shrinks every generation.
  • jardows2 - Wednesday, June 15, 2016 - link

    Interesting. Faster in a few workloads where heavy multi-threading matters, but significantly slower in more single-threaded workloads. For server use, you don't always want parallelized tasks. The results are pretty consistent across all the processors tested: if the ThunderX was slower, it was slower than all the Intel chips; if it was faster, it was faster than all but the highest-end Intel chips. With a price only slightly lower than the cheapest Intel chip on sale, I don't think this is going to be a Xeon competitor at all, but it will take a few niche applications where it can do better.

    With no significant energy savings, we should look forward to the ThunderX2 to see whether it turns this into a better alternative.
  • ddriver - Wednesday, June 15, 2016 - link

    There is hardly a server workload where you don't get better throughput by throwing more cores and servers at it. Servers are NOT about parallelized tasks, but about concurrent tasks. That's why, while desktops are still stuck at 8 cores, server chips come with 20 and more... Server workloads are usually very simple; it's just that there are a lot of them. They are so simple and take so little time that it literally makes no sense to parallelize them.
  • jardows2 - Wednesday, June 15, 2016 - link

    In the scenario you described, single-threaded performance takes on even more importance, thus highlighting the advantage the Xeons currently have in most server configurations.
  • niva - Wednesday, June 15, 2016 - link

    Not if the Xeon doesn't have enough cores to actually process 40+ single-threaded tasks concurrently.
  • hechacker1 - Wednesday, June 15, 2016 - link

    But kernels and VMware know how to schedule multiple threads on one core if it's not being fully utilized. Single-threaded IPC can make up for not having as many cores. See the iPhone SoCs for another example.
  • ddriver - Wednesday, June 15, 2016 - link

    Not if you have thousands of concurrent workloads and only something like 8 cores. As fast as each core might be, the overhead from context switching between workloads will eat it up. (The sketch after this thread puts rough numbers on that overhead.)
  • willis936 - Thursday, June 16, 2016 - link

    Yeah, if each task is not significantly longer than a context switch. Context switches are very fast, especially on processors with many sets of SMT registers per core.
  • ddriver - Thursday, June 16, 2016 - link

    If what you suggest were correct, then Intel would not be investing chip TDP in more cores, but in higher clocks and better single-threaded performance. Clearly that is not the case, as they are pushing 20 cores at a fairly modest 2.4 GHz.
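
To put rough numbers on the two arguments above, here is a minimal sketch of Amdahl's law and of context-switch overhead. The parallel fractions, task lengths, and switch cost are illustrative assumptions, not measurements from this review.

```python
# Amdahl's law: speedup on n cores when a fraction p of the work
# can run in parallel and the rest stays serial.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):  # assumed parallel fractions
    print(f"p={p:.2f}: 8 cores -> {amdahl_speedup(p, 8):5.2f}x, "
          f"48 cores -> {amdahl_speedup(p, 48):5.2f}x")

# Context-switch overhead: share of CPU time lost when short tasks
# share a core. Both costs below are assumptions for illustration.
switch_us = 2.0                 # assumed cost of one context switch
for task_us in (10.0, 1000.0):  # short tasks vs. long tasks
    overhead = switch_us / (switch_us + task_us)
    print(f"{task_us:6.0f} us tasks: {overhead:.1%} of time lost to switching")
```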
