Intel’s Prebuilt Test System: A $7000 Build

How we receive test units for review has varied greatly over the years. The company providing the review sample has a range of choices, from shipping a bare chip to supplying a fully built and tested system.

For a regular, run-of-the-mill launch, such as Kaby Lake/Coffee Lake/Coffee Lake gen 2, which are follow-on launches on the same mature platform as the previous generation, we get just the CPU and a set of ‘expected test result notes’ to help guide our testing. The reviewers are expected to know how to use everything, and the vendor has confidence in the reviewer’s analysis. This method allows for the widest range of sampling and the least work at the vendor level, although it relies on the journalist having the relevant contacts with motherboard and memory companies, as well as the ability to apply firmware updates as needed.

For important new launches, such as Ryzen and AM4, Threadripper and TR4, or Skylake-X and X299, the vendor supplies the CPU(s), a motherboard, a memory kit, and a suitable CPU cooler. Sometimes there’s a bit of paper from the FAE tester confirming the set worked together through some basic stress tests. This puts less work in the hands of the reviewer, who knows that none of the kit should be dead on arrival and that it should at least get to the OS without issue.

For unique launches, where only a few samples are being distributed, or where there is limited mix-and-match support ready for day one, the option is the full system sample. This means the case, motherboard, CPU, CPU cooler, memory, power supply, graphics card, and storage are all shipped as one, sometimes directly from a system integrator partner, with the idea that the system has been pre-built, pre-tested, and is ready to go. This should give the reviewer the least amount of work to do (in practice it’s usually the opposite), but it puts a lot of emphasis on the vendor to plan ahead, and it limits the scope of sampling. It is also the most expensive option for the vendor to implement, but usually the tradeoff is perceived as worth it.

Usually we deal with options one or two for every modern platform to date. Option three is only ever taken if the CPU vendor aims to sell the processor to OEMs and system integrators (SIs) only. This is what Intel has done with the Xeon W-3175X, although the company built the systems internally rather than outsourcing. After dispatch from the US to the UK, via the Netherlands, an 80 lb (36 kg) box arrived on my doorstep.

This box was huge. I mean, I know the motherboard is huge, I’ve seen it in the flesh several times, but Intel also went and super-sized the system too. The box was 33 inches (84 cm) tall, and inside it was a set of polystyrene spacers around the actual box for the case, which in turn had its own polystyrene spacers. Double spacey.

Apologies for taking these photos in my kitchen – it is literally the only room in my flat in which I had enough space to unbox this thing. Summer wanted to help, and got quite vocal.

The case being used is the Anidees AI Crystal XL AR, listed on the company’s website as offering ‘all the space you need for your large and heavy loaded components’, including support for HPTX, XL-ATX, E-ATX, and EEB sized motherboards, along with a 480mm radiator in the roof and a 360mm radiator in the front. It comes with five 120mm RGB fans as standard, and it’s a beast, surrounded with 5mm tempered glass on every side that needs it.

The case IO has a fan control switch (which didn’t work), two audio jacks, an LED power button, a smaller LED reset button, two USB 3.0 Type-A ports, and two USB 2.0 Type-A ports. These sit flush against the front panel, making for a very straight-edged design.

This picture might show you how tall it is. Someone at Intel didn’t install the rear IO plate, leaving an air gap, but as it turns out the system airflow was designed with the rear of the chassis as the intake and the front of the chassis as the exhaust. There are 10 PCIe slot gaps here, along with two vertical ones for users who want to mount a card that way. There is sufficient ‘case bezel’ on all sides, unlike some smaller cases that minimize this.

Users may note the power supply has an odd connector. This is a C19 connector, usually used for high-wattage power supplies, and Intel supplied a suitable power cable strapped to the box.

This bad boy is thick. It is a US cable, and the earth pin is so huge that it would only fit in one of my adaptors; even nudging the cable caused the machine to restart, so I ended up buying a UK cable, which worked great. The unit is clearly designed for the lower-voltage US market: it has to be able to deliver up to 13A of current on a 120V line, or potentially more, and is built as such. It is obviously recommended that no socket extenders are used and that this goes directly into the wall.
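As a rough sketch of where that 13A figure comes from: taking the power supply’s 1600W rating and an assumed Titanium-class efficiency of around 94% (an assumption for illustration, not a measured figure), the wall-side draw on a 120V circuit comfortably exceeds 13A, while a 230V UK line pulls roughly half the current.

# Back-of-the-envelope wall current for a fully loaded 1600W PSU.
# The 94% efficiency is an assumed figure for an 80PLUS Titanium unit.
psu_output_w = 1600
assumed_efficiency = 0.94
wall_draw_w = psu_output_w / assumed_efficiency   # ~1702W pulled from the socket

for region, volts in (("US 120V", 120), ("UK 230V", 230)):
    amps = wall_draw_w / volts
    print(f"{region}: {amps:.1f} A at the wall")  # ~14.2 A vs ~7.4 A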


About to take the side panels off. This little one wants to play.

Both of the tempered glass side panels are held on by nine thumb screws each, which sit on rubber stands on the inside of the case. Unscrewing these was easy enough to do; however, it’s one of the slowest ways to open a case I’ve ever come across.

Now inside the system at hand. The LGA3647 socket holds the Xeon W-3175X processor, which is capped with an Asetek 690LX-PN liquid cooler specifically designed for the workstation market. This goes to a 360mm liquid cooling radiator, paired with three high power (I’m pretty sure they’re Delta) fans that sound like a jet engine above 55ºC.

Intel half-populated the memory slots with 8GB Samsung DDR4-2666 RDIMMs, making for a total of 48 GB of memory, which is likely the lowest configuration one of these CPUs will ever be paired with. The graphics card is a GIGABYTE GTX 1080, specifically the GV-N1080TTOC-8GD, which requires one 8-pin power connector.

As for the motherboard, the ASUS Dominus Extreme, we’ve detailed it in previous coverage; however, it’s worth noting that the big block at the top of the board is actually the heatsink for the 32-phase VRM. It’s a beast. Here is an ASUS build using this motherboard with a liquid cooler on the CPU and VRM:


The build at ASUS’ suite at CES 2019

There’s a little OLED display to the left, which is a full-color display useful for showing BIOS codes and CPU temperatures when in Windows. When the system is off, it goes through a short 15-second cycle with the logo:

I’m pretty sure users can put their own GIFs (perhaps within some limits) on the display during normal runtime using ASUS software.

The rear of the case is quite neat, showing part of the back of the motherboard and the fan controller. At the bottom we have an EVGA 1600W T2 80PLUS Titanium power supply, which is appropriate for this build. Unfortunately Intel only supplied the cables that they actually used with the system, making it difficult to expand to multiple GPUs, which is what a system like this would ultimately end up with.

For storage, Intel provided an Optane 905P 480GB U.2 drive, which unfortunately had so many issues with the default OS installation (and then with my own OS installation) that I had to remove it and debug it another day. Instead I put in my own Crucial MX200 1TB SATA SSD, which we normally use for CPU testing, and installed the OS directly on that. ASUS has a feature in the BIOS that automatically pushes a software install to initiate driver updates without the need for a driver DVD; this ended up being very helpful.

Overall, the system cost is probably on the order of $7000:

Intel Reference System
Component       Item                               List Price
CPU             Intel Xeon W-3175X                 $2999
CPU Cooler      Asetek 690LX-PN                    $260
Motherboard     ASUS Dominus Extreme               $1500 (est.)
Memory          6 x 8GB Samsung DDR4-2666 RDIMM    $420
Storage         Intel Optane 905P 480 GB U.2       $552
Video Card      GIGABYTE GTX 1080 OC 8GB           $550
Chassis         Anidees AI Crystal XL AR           $300
Power Supply    EVGA 1600W T2 Titanium             $357
Total                                              $6938
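As a quick sanity check, here is a minimal sketch that adds up the list prices in the table (taking the $1500 motherboard figure as the estimate it is):

# Sum of the list prices quoted in the table above (USD).
# The ASUS Dominus Extreme price is an estimate rather than a confirmed MSRP.
parts = {
    "CPU: Intel Xeon W-3175X": 2999,
    "CPU Cooler: Asetek 690LX-PN": 260,
    "Motherboard: ASUS Dominus Extreme (est.)": 1500,
    "Memory: 6 x 8GB Samsung DDR4-2666 RDIMM": 420,
    "Storage: Intel Optane 905P 480 GB U.2": 552,
    "Video Card: GIGABYTE GTX 1080 OC 8GB": 550,
    "Chassis: Anidees AI Crystal XL AR": 300,
    "Power Supply: EVGA 1600W T2 Titanium": 357,
}
print(f"Total: ${sum(parts.values())}")  # Total: $6938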

However, this is with a minimum amount of memory, only one GTX 1080, and a mid-sized U.2 drive. If we add in liquid cooling, a pair of RTX 2080 Ti graphics cards, 12 x 16 GB of DDR4, and some proper storage, the price could easily creep over $10k-$12k, before adding on the system builder’s extras. The version of this system we saw at the Digital Storm booth at CES, the Corsa, was around $20k.

Comments

  • SaturnusDK - Wednesday, January 30, 2019 - link

    The price is the only big surprise here. At $3000 for the CPU alone and three times that in system price, it's actually pretty decently priced. The performance is as expected, but it will soon be eclipsed. The only question is what price AMD will charge for its coming Zen 2 based processors in the same performance bracket; we won't know until then if the W-3175X is a worthwhile investment.
  • HStewart - Wednesday, January 30, 2019 - link

    I thought the rumors were that this chip was going to be $8000. I am curious how the Cove version of this chip will perform and when it comes out.

    But let's be honest: unless you are extremely rich or crazy, buying any processor with a large number of cores is crazy. To me it seems like the high-end gaming market is being taken for a ride with all this core war: buy a high-end core count now just to say you have the highest performance, and then next year purchase a new one. Of course there is all the ridiculous process stuff too. It is just interesting to see a 28-core beat an AMD 32-core, with Skylake and 14nm on Intel.

    As for the server side, I would think it more cost effective to blade multiple lower-core units than fewer higher-core units.
  • jakmak - Wednesday, January 30, 2019 - link

    It's not really surprising to see a 28-core Intel beating a 32-core AMD. After all, it is not a hidden mystery that the Intel chips not only have a small IPC advantage, but are also able to run at a higher clock rate (notwithstanding the power draw). In this case, the Xeon-W excels where these two advantages combined are working across 28 cores, so the two extra cores on the AMD side won't cut it.
    It is also obvious that the massive advantage works mostly in those cases where clock rate is the most important part.
  • MattZN - Wednesday, January 30, 2019 - link

    Well, it depends on whether you care about power consumption or not, jakmak. Traditionally the consumer space hasn't cared so much, but it's a bit of a different story when whole-system power consumption starts reaching for the sky. And it's definitely reaching for the sky with this part.

    The stock intel part burns 312W on the Blender benchmark while the stock threadripper 2990WX burns 190W. The OC'd Intel part burns 672W (that's right, 672W without a GPU) while the OCd 2990WX burns 432W.

    Now I don't know about you guys, but that kind of power dissipation in such a small area is not something I'm willing to put inside my house unless I'm physically there watching over it the whole time. Hell, I don't even trust my TR system's 330W consumption (at the wall) for continuous operation when some of the batches take several days to run. I run it capped at 250W.

    And... I pay for the electricity I use. It's not cheap to run machines far away from their maximally efficient point on the curve. Commercial machines have lower clocks for good reason.

    -Matt
  • joelypolly - Wednesday, January 30, 2019 - link

    Do you not have a hair dryer or vacuum or oil heater? They can all push up to 1800W or more
  • evolucion8 - Wednesday, January 30, 2019 - link

    That is a terrible example if you ask me.
  • ddelrio - Wednesday, January 30, 2019 - link

    lol How long do you keep your hair dryer going for?
  • philehidiot - Thursday, January 31, 2019 - link

    Anything up to one hour. I need to look pretty for my processor.
  • MattZN - Wednesday, January 30, 2019 - link

    Heh. That's a pretty bad example. People don't leave their hair dryers turned on 24x7, nor floor heaters (I suppose, unless it's winter). Big, big difference.

    Regardless, a home user is not likely to see a large bill unless they are doing something really stupid like crypto-mining. There is a fairly large distinction between the typical home-use of a computer vs a beefy server like the one being reviewed here, let alone a big difference between a home user, a small business environment (such as popular youtube tech channels), and a commercial setting.

    Let's just use an average electricity cost of around $0.20/kWh (the actual cost depends on where you live and the time of day, and can range from $0.08/kWh to $0.40/kWh or so).

    For a gamer who is spending 4 hours a day burning 300W, the cost of operation winds up being around $7/month. Not too bad. Your average gamer isn't going to break the bank, so to speak. Mom and Dad probably won't even notice the additional cost. If you live in a cold environment, your floor heater will indeed cost more money to operate.

    If you are a solo content creator you might be spending 8 to 12 hours a day in front of the computer, for the sake of argument running Blender or encoding jobs in the background. 12 hours of computer use a day @ 300W costs around $22/month.

    If you are GN or Linus or some other popular YouTube site and you are running half a dozen servers 24x7, plus workstations for employees, plus numerous batch encoding jobs on top of that, the cost will begin to become very noticeable. Now you are burning, say, 2000W 24x7 (pie in the sky rough average), costing around $290/month ($3480/year). That content needs to be making you money.

    A small business or commercial setting can wind up spending a lot of money on energy if no care at all is taken with regards to power consumption. There are numerous knock-on costs, such as A/C in the summer which has to take away all the equipment heat on top of everything else. If A/C is needed (in addition to human A/C needs), the cost is doubled. If you are renting colocation space then energy is the #1 cost and network bandwidth is the #2 cost. If you are using the cloud then everything has bloated costs (cpu, network, storage, and power).

    In any case, this runs the gamut. You start to notice these things when you are the one paying the bills. So, yes, Intel is kind of playing with fire here trying to promote this monster. Gaming rigs that aren't used 24x7 can get away with high burns, but once you are no longer a kid in a room playing a game these costs can start to matter. As machine requirements grow, running the machines closer to their maximum point of efficiency (which is at far lower frequencies) begins to trump other considerations.

    If that weren't enough, there is also the lifespan of the equipment to consider. A $7000 machine that remains relevant for only one year and has a $3000/year electricity bill is a big cost compared to a $3000 machine that is almost as fast and only has a $1500/year electricity bill. Or a $2000 machine. Or a $1000 machine. One has to weigh convenience of use against the total cost of ownership.

    When a person is cognizant of the costs then there is much less of an incentive to O.C. the machines, or even run them at stock. One starts to run them like real servers... at lower frequencies to hit the maximum efficiency sweet spot. Once a person begins to think in these terms, buying something like this Xeon is an obvious and egregious waste of money.

    -Matt
  • 808Hilo - Thursday, January 31, 2019 - link

    Most servers run at idle speed. That is a sad fact. The sadder fact is that they have no discernible effect on business processes, because they are in fact projected and run by people in a corp that have a negative cost-to-benefit ratio. Most important apps still run on legacy mainframes or minicomputers, you know, the ones that keep the electricity flowing, planes up, ticketing running, aisles restocked, power plants from exploding, and ICBMs tracked. Only social constructivists need an overclocked server. Porn, youtubers, traders, and data collectors come to mind. Not making much sense.
