The Server CPU Temperatures

Given Intel's dominance in the server market, we will focus on the Intel Xeons. The "normal", non-low-power Xeons have a specified Tcase of 75°C (167°F) at 95 W to 88°C (190°F) at 130 W. Tcase is the temperature measured by a thermocouple embedded in the center of the heat spreader, so there is a lot of temperature headroom. The low-power Xeons (70 W TDP or less) have much less headroom, as their Tcase is a rather low 65°C (149°F). But since those Xeons produce far less heat, it should be easier to keep them at lower temperatures. In all cases, there is quite a bit of headroom.
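To make that headroom concrete, here is a minimal sketch that computes the margin between a measured case temperature and the specified Tcase limits quoted above. The measured values in the example are hypothetical, purely for illustration.

```python
# Rough illustration: thermal headroom = specified Tcase limit minus measured case temperature.
# The limits below are the ones quoted in the text; the measured values are made up.
TCASE_LIMITS_C = {
    "Xeon 95W": 75.0,                 # "normal" Xeon, 95 W TDP
    "Xeon 130W": 88.0,                # "normal" Xeon, 130 W TDP
    "Xeon low-power (<=70W)": 65.0,   # low-power parts have a lower limit
}

def headroom(model: str, measured_case_temp_c: float) -> float:
    """Return the remaining margin (in degrees C) before the specified Tcase is reached."""
    return TCASE_LIMITS_C[model] - measured_case_temp_c

if __name__ == "__main__":
    for model, measured in [("Xeon 95W", 55.0), ("Xeon low-power (<=70W)", 55.0)]:
        print(f"{model}: {headroom(model, measured):.1f} C of headroom at {measured} C")
```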

But there is more than the CPU, of course; the complete server must be able to cope with higher temperatures. That is where the ASHRAE specifications come in. The American Society of Heating, Refrigerating and Air-Conditioning Engineers publishes guidelines for the temperature and humidity operating ranges of IT equipment. If vendors comply with these guidelines, administrators can be sure that they will not void warranties when running servers at higher temperatures. Most vendors, including HP and Dell, now allow the inlet temperature of a server to be as high as 35°C, the so-called A2 class.

ASHRAE specifications per class

The specified temperature is the so-called "dry bulb" temperature: the ordinary temperature measured by a dry thermometer. Humidity should be roughly between 20 and 80%. Specially equipped servers (Class A4) can go as high as 45°C, with humidity between 10 and 90%.
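As a small illustration, the sketch below checks an inlet reading against the A2 and A4 envelopes described here. Only the upper dry-bulb limits and the approximate humidity ranges quoted in the text are used; the lower dry-bulb bounds are assumptions for illustration, not the official ASHRAE tables.

```python
# Minimal sketch: check an inlet reading against the (approximate) ASHRAE envelopes
# described in the text. A2: up to 35 C dry bulb, ~20-80% RH; A4: up to 45 C, 10-90% RH.
# The lower dry-bulb bounds below are illustrative assumptions, not the official tables.
ENVELOPES = {
    "A2": {"temp_c": (10.0, 35.0), "rh_pct": (20.0, 80.0)},
    "A4": {"temp_c": (5.0, 45.0), "rh_pct": (10.0, 90.0)},
}

def within_class(ashrae_class: str, dry_bulb_c: float, rh_pct: float) -> bool:
    """Return True if the dry-bulb temperature and relative humidity fall inside the envelope."""
    env = ENVELOPES[ashrae_class]
    t_lo, t_hi = env["temp_c"]
    h_lo, h_hi = env["rh_pct"]
    return t_lo <= dry_bulb_c <= t_hi and h_lo <= rh_pct <= h_hi

if __name__ == "__main__":
    print(within_class("A2", 38.0, 50.0))  # False: 38 C exceeds the 35 C A2 limit
    print(within_class("A4", 38.0, 50.0))  # True: still inside the A4 envelope
```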

It is hard to overestimate the impact of servers being capable of breathing hotter air. In modern data centers, this ability could be the difference between relying on free cooling alone and having to keep investing in very expensive chiller installations. Being able to use free cooling brings both OPEX and CAPEX savings. In traditional data centers, it allows administrators to raise the room temperature and reduce the amount of energy the cooling requires.

And last but not least, it increases the time before a complete shutdown is necessary when the cooling installation fails. The more headroom you have, the easier it is to fix the cooling problem before critical temperatures are reached and the reputation of the hosting provider is tarnished. In a modern data center, it is almost the only way to run on free cooling for most of the year.

Raising the inlet temperature is not easy when you are providing hosting for many customers (i.e. a "multi-tenant data center"). Most customers resist warmer data centers, with good reason in some cases. We watched a 1U server use 80 W to power its fans out of a total of less than 200 W! In that case, the savings of the data center facility are paid for by the energy losses of the IT equipment. It is great for the data center's PUE, but not very compelling for customers.
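A quick back-of-the-envelope calculation shows why that trade-off flatters PUE: PUE is total facility power divided by IT power, and power burned by fans inside the server counts as "useful" IT load even though it does no computing. In the sketch below, only the 80 W fans / under-200 W server figures come from the example above; the other numbers are hypothetical.

```python
# Back-of-the-envelope: PUE = total facility power / IT power.
# Fan power drawn *inside* the server counts as IT load, so moving cooling work
# from the CRACs to the server fans lowers PUE even if total energy goes up.
# Only the 80 W fans / <200 W server figures come from the article; the rest is hypothetical.

def pue(it_power_w: float, facility_overhead_w: float) -> float:
    """Total facility power divided by IT power."""
    return (it_power_w + facility_overhead_w) / it_power_w

# Hypothetical cool data center: server draws 130 W (modest fan power),
# and the CRACs add 60 W of cooling overhead per server.
print(f"Cool room: PUE = {pue(130.0, 60.0):.2f}")  # ~1.46, total 190 W

# Hypothetical warm data center: the same server now burns 80 W in fans (200 W total),
# while the facility spends only 20 W per server on cooling.
print(f"Warm room: PUE = {pue(200.0, 20.0):.2f}")  # ~1.10, yet total power is higher (220 W)
```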

But what about the latest servers that support much higher inlet temperatures? Supermicro claims its servers can work with inlet temperatures of up to 47°C. It's time to do what AnandTech does best and give you facts and figures, so you can decide whether higher temperatures are viable.

Comments

  • bobbozzo - Tuesday, February 11, 2014 - link

    "The main energy gobblers are the CRACs"

    Actually, the IT equipment (servers & networking) uses more power than the cooling equipment.
    ref: http://www.electronics-cooling.com/2010/12/energy-...
    "The IT equipment usually consumes about 45-55% of the total electricity, and total cooling energy consumption is roughly 30-40% of the total energy use"

    Thanks for the article though.
  • JohanAnandtech - Wednesday, February 12, 2014 - link

    That is the whole point, isn't it? IT equipment uses power to be productive; everything else just supports the IT equipment and is thus overhead that you have to minimize. Within the facility power, the CRACs are the biggest power gobblers.
  • bobbozzo - Tuesday, February 11, 2014 - link

    So, who is volunteering to work in a datacenter with 35-40C cool aisles and 40-45C hot aisles?
  • Thud2 - Wednesday, February 12, 2014 - link

    80,000, that sounds like a lot.
  • CharonPDX - Monday, February 17, 2014 - link

    See also Intel's long-term research into it, at their New Mexico data center: http://www.intel.com/content/www/us/en/data-center...
  • puffpio - Tuesday, February 18, 2014 - link

    On the first page you mention "The "single-tenant" data centers of Facebook, Google, Microsoft and Yahoo that use "free cooling" to its full potential are able to achieve an astonishing PUE of 1.15-1."

    This article says that Facebook has achieved a PUE of 1.07 (https://www.facebook.com/note.php?note_id=10150148...)
  • lwatcdr - Thursday, February 20, 2014 - link

    So I wonder when Google will build a data center in, say, North Dakota. Combine the ample wind power with the cold and it looks like a perfect place for a green data center.
  • Kranthi Ranadheer - Monday, April 17, 2017 - link

    Hi Guys,

    Does anyone by chance have recorded data of temperature and processor speed in a server room? Or can someone give me the high-end and low-end values measured in any server room, considering the relationship between temperature and processor speed?
