The Server CPU Temperatures

Given Intel's dominance in the server market, we will focus on the Intel Xeons. The "normal", non-low-power Xeons have a specified Tcase of 75°C (167°F) at 95 W to 88°C (190°F) at 130 W. Tcase is the temperature measured by a thermocouple embedded in the center of the heat spreader, so there is a lot of temperature headroom. The low power Xeons (70 W TDP or less) have much less headroom, as their Tcase is a rather low 65°C (149°F). But since those Xeons produce a lot less heat, it should be easier to keep them at lower temperatures. In all cases, there is quite a bit of headroom.
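To make that headroom concrete, here is a minimal sketch that compares measured case temperatures against the Tcase specifications quoted above. The measured values are purely hypothetical illustrations, not figures from our testing.

```python
# Minimal sketch: thermal headroom relative to the Tcase specs quoted above.
# The "measured" temperatures are hypothetical illustrative values.

TCASE_SPEC_C = {
    "Xeon 95W TDP":   75.0,  # "normal" Xeon, 95 W TDP
    "Xeon 130W TDP":  88.0,  # "normal" Xeon, 130 W TDP
    "Xeon low power": 65.0,  # 70 W TDP or less
}

measured_c = {  # hypothetical case temperatures under load
    "Xeon 95W TDP":   62.0,
    "Xeon 130W TDP":  71.0,
    "Xeon low power": 55.0,
}

for cpu, spec in TCASE_SPEC_C.items():
    headroom = spec - measured_c[cpu]
    print(f"{cpu}: Tcase spec {spec:.0f}°C, "
          f"measured {measured_c[cpu]:.0f}°C, headroom {headroom:.0f}°C")
```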

But there is more to a server than the CPU of course; the complete server must be able to run at higher temperatures. That is where the ASHRAE specifications come in. The American Society of Heating, Refrigerating and Air-Conditioning Engineers publishes guidelines for the temperature and humidity operating ranges of IT equipment. If vendors comply with these guidelines, administrators can be sure that they will not void warranties when running servers at higher temperatures. Most vendors - including HP and Dell - now allow the inlet temperature of a server to be as high as 35°C, the so-called A2 class.

ASHRAE specifications per class

The specified temperature is the so-called "dry bulb" temperature: the temperature measured by an ordinary, dry thermometer. Humidity should be roughly between 20 and 80%. Specially equipped servers (Class A4) can go as high as 45°C, with humidity between 10 and 90%.
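As a quick reference, the sketch below encodes the allowable dry-bulb inlet ranges per class. The A2 and A4 limits come from the text above; the A1 and A3 figures are the commonly cited values from the 2011 ASHRAE thermal guidelines and should be verified against the official document.

```python
# Allowable dry-bulb inlet ranges per ASHRAE class (°C). A2 and A4 match the
# figures in the text; A1 and A3 are the commonly cited 2011 guideline values
# and should be checked against the official ASHRAE publication.
ASHRAE_ALLOWABLE_C = {
    "A1": (15, 32),
    "A2": (10, 35),
    "A3": (5, 40),
    "A4": (5, 45),
}

def compliant(ashrae_class: str, inlet_c: float) -> bool:
    """Return True if the inlet (dry-bulb) temperature falls inside the
    allowable range for the given class."""
    low, high = ASHRAE_ALLOWABLE_C[ashrae_class]
    return low <= inlet_c <= high

print(compliant("A2", 35.0))  # True: the limit most vendors now allow
print(compliant("A2", 40.0))  # False: only A3/A4-rated gear may go this high
```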

It is hard to overestimate the impact of servers being capable of breathing hotter air. In modern data centers, this ability can be the difference between relying on free cooling alone and having to keep investing in very expensive chilling installations. Being able to use free cooling brings both OPEX and CAPEX savings. In traditional data centers, it allows administrators to raise the room temperature and reduce the amount of energy the cooling requires.

And last but not least, it increases the time before a complete shutdown is necessary when the cooling installation fails. The more headroom you have, the easier it is to fix the cooling problems before critical temperatures are reached and the reputation of the hosting provider is tarnished. In a modern data center, higher inlet temperatures are almost the only way to run on free cooling for most of the year.

Raising the inlet temperature is not easy when you are providing hosting for many customers (i.e. a "multi-tenant data center"). Most customers resist warmer data centers, with good reason in some cases. We watched a 1U server spend 80 W on its fans out of a total power draw of less than 200 W! In that case, the savings of the data center facility are paid for by the energy losses of the IT equipment. It's great for the data center's PUE, but not very compelling for customers.
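A rough calculation, as shown below, makes the point: PUE divides total facility power by IT power, and server fan power counts as IT power. The 80 W fan figure is from our observation above; the other numbers are assumptions chosen purely for illustration.

```python
# Rough illustration (assumed numbers, apart from the 80 W fan figure from the
# text): higher inlet temperatures shift work from the facility's cooling
# plant to the servers' own fans. PUE only counts facility overhead, so it can
# improve even when total energy consumption does not.

def pue(it_watt: float, facility_overhead_watt: float) -> float:
    """PUE = total facility power / IT power."""
    return (it_watt + facility_overhead_watt) / it_watt

# Cool inlet: modest fan speeds, but the chillers work hard (assumed values).
it_cool = 120 + 20        # 120 W of "useful" IT load + 20 W of server fans
facility_cool = 70        # chillers, CRAC fans, etc. (assumption)

# Hot inlet: same 120 W of IT load, but the server fans spin up to 80 W
# (the figure from the text) while the facility does almost nothing.
it_hot = 120 + 80
facility_hot = 10         # mostly free cooling (assumption)

print(f"cool inlet: total {it_cool + facility_cool} W, "
      f"PUE {pue(it_cool, facility_cool):.2f}")   # total 210 W, PUE 1.50
print(f"hot inlet:  total {it_hot + facility_hot} W, "
      f"PUE {pue(it_hot, facility_hot):.2f}")     # total 210 W, PUE 1.05
```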

But how about the latest servers that support much higher inlet temperatures? Supermicro claims its servers can work with inlet temperatures of up to 47°C. It's time to do what AnandTech does best and give you facts and figures, so you can decide whether higher temperatures are viable.

Comments

  • ShieTar - Tuesday, February 11, 2014

    I think you oversimplify if you judge the efficiency of the cooling method just by the heat capacity of the medium. The medium is not a heat battery that only absorbs the heat; it is also moved in order to transport energy. And moving air is much easier and much more efficient than moving water.

    So I think in the case of Finland, the driving fact is that they will get air temperatures of up to 30°C in some summers, but the water temperature in the bottom regions of the Gulf of Finland stays below 4°C throughout the year. If you were to consider a data center near the river Nile, whose water is usually just 5°C below the air temperature and frequently warmer than the air at night, then your efficiency equation would look entirely different.

    Naturally, building the center in Finland instead of Egypt in the first place is a pretty good decision considering cooling efficiency.
  • icrf - Tuesday, February 11, 2014

    Isn't moving water significantly more efficient than moving air, because a significant amount of the energy spent moving air goes into compressing it rather than moving it, whereas water is largely incompressible?
  • ShieTar - Thursday, February 13, 2014

    For the initial acceleration this might be an effect, though energy used for compression isn't necessarily lost, as the pressure difference will decay into motion of the air again (though maybe not in the preferred direction). But if you look at the entire equation for a cooling system, the hard part is not getting the medium accelerated, but keeping it moving against the resistance of the coolers, tubes and radiators. And water has much stronger interactions with any reasonably used material (metal, mostly) than air. And you usually run water through smaller and longer tubes than air, which can quickly be moved from the electronics case to a large air vent. Also, the viscosity of water itself is significantly higher than that of air, specifically if we are talking about cool water not too far above the freezing point, i.e. 5°C to 10°C.
  • easp - Saturday, February 15, 2014

    Below Mach 0.3, air flows can be treated as incompressible. I doubt bulk movement of air in data centers hits 200+ mph.
  • juhatus - Tuesday, February 11, 2014

    Sir, I can assure you the Nordic Sea hits ~20°C in the summer. But still, that temperature is good enough for cooling.

    In Helsinki they are now collecting the excess heat from a data center to warm up the houses in the city area, so that should be considered too. I think many countries could use some "free" heating.
  • Penti - Tuesday, February 11, 2014

    Surface temperature does, but below the surface it's cooler, even in small lakes and rivers; otherwise our drinking water would be unusable and come out of the tap at 25°C. You would get legionella and the like. In Sweden, the water is not allowed to be (or not considered usable) over 20 degrees at the inlet, or out of the tap for that matter. Lakes, rivers and oceans can stay at 2-15°C at the inlet year-round here in Scandinavia if the inlet is appropriately placed. Certainly good enough if you allow temps over the old 20-22°C.
  • Guspaz - Tuesday, February 11, 2014

    OVH's datacentre here in Montreal cools using a centralized water-cooling system and relies on convection to remove the heat from the server stacks, IIRC. They claim a PUE of 1.09.
  • iwod - Tuesday, February 11, 2014

    Exactly what I was about to post. Why haven't Facebook, Microsoft, and even Google managed to outpace them? A PUE of 1.09 is still, as far as I know, an industry record. Correct me if I am wrong.

    I wonder if they could get it down to 1.05.
  • Flunk - Tuesday, February 11, 2014

    This entire idea seems so obvious it's surprising they haven't been doing this the whole time. Oh well, it's hard to beat an idea that cheap and efficient.
  • drexnx - Tuesday, February 11, 2014

    There's a lot of work being done on the UPS side of the power consumption coin too. FB uses both Delta DC UPSes that power their equipment directly with DC from the batteries, instead of the wasteful inverting to 480 VAC three-phase and then rectifying again at the server PSU level, and Eaton equipment with ESS that bypasses the UPS until there's an actual power loss (for about a 10% efficiency pickup when running on mains power).
