Power: Low Power CPUs Compared

First we'll start with the raw power consumption:

Sizing Servers vApus Mark I - Power consumption

By dividing the performance scores on the previous page by the measured power consumption at full load, we get the performance per watt. We multiplied by 100 to make the results easier to read, so a score of 100 means that for every vApus Mark I performance point the system consumes one watt.
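To make the metric concrete, here is a minimal sketch of the calculation in Python; the score and wattage below are hypothetical placeholders, not our measured results:

```python
# Minimal sketch of the performance/watt metric described above.
# The inputs are hypothetical placeholders, not measured results.

def perf_per_watt(vapus_score: float, full_load_watts: float) -> float:
    """vApus Mark I score divided by full-load power consumption, scaled by 100."""
    return vapus_score / full_load_watts * 100

# Example: a hypothetical system scoring 250 vApus points at 280W full load
print(round(perf_per_watt(250, 280), 1))  # ~89.3, i.e. slightly less than one point per watt
```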

Performance per Watt at full vApus load

That the X5570 2.93GHz wins the performance/watt race came as a surprise… at first. However, remember that the "leakier" Nehalems go to the desktop as 130W TDP parts, while the server CPUs are the best parts (95W TDP). The top server Xeons are already binned for power consumption. In a way, the 2.93GHz Xeon is already a "lower power" CPU.

The "75W ACP Shanghai" based Opterons disappoint: the difference between them and the 95W TDP Xeon "Nehalem" is small. This is partly a result of the fact that the NVIDIA chipset based motherboards waste quite a bit more power than the Intel platform. Notice that a single Opteron 2435 consumes more than a Xeon X5570; once you add three DIMMs and a second CPU, the tables have turned. So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W). While the Opteron 2435 consumes less power than the X5570, an AMD based server still consumes slightly more than the Intel based server with one socket filled up. This clearly indicates that our AMD server platform is not as efficient as the Intel Server. This is something that AMD's upcoming high-efficiency Fiorano "Kroner" platform should address.

The Xeon X5570 2.93GHz only consumes a few watts more than the much slower quad-core Opterons. We have shown that in native situations, the Xeon X5570 is about 37% to 85% faster than a 2.7GHz quad-core Opteron, or about 30% to 78% faster than a 2.9GHz Opteron. In our virtualization scenario, the X5570 is about 30% faster. The business case for the high-clocked quad-core Opteron looks very bleak. The six-core Opteron at 2.6GHz is stronger: it delivers 15% higher performance than the 2.9GHz quad-core while consuming just as much power. The Opteron EE performs well, although it fails to defeat the L5520 when it comes to "pure" performance/watt.

Power at full load is only part of the story of course. We are working on a throttled vApus workload, but a good look at the numbers from SPECpower_ssj2008 tells us that power consumption at 60-90% load is not radically different from 100%. That is not really surprising: 100% CPU load does not mean the CPU is using twice the decoding and execution resources it uses at 50%. Instead, 100% load means that the (ESX) scheduler never has to run an idle process; at 50% load that happens 50% of the time. Basically, the same resources are stressed either 50% or 100% of the time. You may expect the power curve from the idle power value to the 100% load power value to be more or less linear… if the clock speed and voltage are constant, of course. By default, ESX does not change the clock speed (using SpeedStep or PowerNow!) as long as your load does not go under 60%. Whether we test at 60%, 80%, or 100%, the power consumption landscape should stay the same.
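To illustrate that assumption, here is a minimal sketch of such a linear power model; the idle and full-load wattages are hypothetical, not measurements from this article:

```python
# Sketch of the "roughly linear" power curve assumption: with clock speed and
# voltage held constant, power scales with how often the cores are kept busy.
# The idle and full-load wattages below are hypothetical.

def estimated_power(load: float, idle_watts: float, full_load_watts: float) -> float:
    """Linearly interpolate between idle and full-load power for a load between 0.0 and 1.0."""
    return idle_watts + load * (full_load_watts - idle_watts)

for load in (0.6, 0.8, 1.0):
    print(f"{load:.0%} load: {estimated_power(load, idle_watts=150, full_load_watts=250):.0f}W")
```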

Some applications will always run at a relatively high CPU load, for example a web server that is accessed from Asia, Europe, and America. However, many applications are only used between 9AM and 6PM local time, so for a large part of the time the machine might be running close to idle. Before we declare anybody the performance/watt king, we need to take the idle power numbers into account.


12 Comments


  • Doby - Thursday, July 23, 2009 - link

    I don't understand why virtualization benchmarking is done with 16 or fewer VMs. With the CPU power of the newer CPUs you can consolidate far more on there. Why aren't the benchmarks done with VMs with varying workloads, around 5% or less utilization, to see how many VMs a particular server can handle? It would be far more real world.

    I have customers running over 150 VMs on a 4 CPU box; the performance comparison of which CPU can handle 16 VMs better is completely bogus. It's all about how many VMs I can get without overloading the server (60-80% utilization).
  • JohanAnandtech - Thursday, July 23, 2009 - link

    As explained in the article, we were limited by the amount of DDR-3 we have available. We had a total of 48GB of DDR-3 and had to test up to four servers. It should not be too hard to figure out what the power consumption could have been with twice or even four times more memory: just add 5 Watt per DIMM.

    BTW, 150 VMs on one box is not extremely rare in the real world. Are those VDI VMs?

    "the performance comparison of which CPU can handle 16 VMs better is completely bogus"

    On a dual socket machine it is not. Why would it be "bogus"? I agree that in a perfect world we would have loaded the machines up to 48GB per server (that is a fortune of 192GB of RAM) and run something like 20-30 VMs per server. A little bit of understanding for the limitations we have to face would make my day....

  • uf - Thursday, July 23, 2009 - link

    What is the power consumption for a lightly loaded server (not idle!), say at 10% and 30% average CPU utilization per core?
  • MODEL3 - Wednesday, July 22, 2009 - link

    in your comment:
    If AMD would apply the methodology of Intel to determine TDP they would end up somewhere between ACP and the current "AMD TDP"

    Are you referring exclusively to the server CPUs?
    Because if not, the above statement is false and unprofessional.

    I don't have access to server CPUs, but my experience with mainstream consumer CPUs tells me the exact opposite:

    65nm dual core (same performance level) 65W max TDP:
    both the 6420 (2.13GHz) & 4600 (2.4GHz) have a lower* actual TDP than the 5600 (2.9GHz)

    45nm dual core (same performance level) 65W max TDP:
    both the 7200 (2.53GHz) & 6300 (2.8GHz) have a lower* actual TDP than the Athlon 250 (3.0GHz)

    45nm quad core (same performance level) 65W max TDP:
    the Q8200S (2.33GHz) has a lower* actual TDP than the Phenom II 905e (2.5GHz)

    I don't even need to give details on the system configurations; everyone knows these facts.

    * Not by much, but nevertheless lower (so from that point to the claim that "AMD's actual TDP is somewhere between AMD's ACP and Intel's TDP" there is a huge gap).
  • JohanAnandtech - Wednesday, July 22, 2009 - link

    Correct. I only checked for server CPUs (see the pdf I linked).
  • JarredWalton - Wednesday, July 22, 2009 - link

    There are several issues at work, particularly with desktop processors. For one, AMD and Intel both have a range of voltages on desktop parts, so (just throwing out numbers) one CPU might run at 1.2V and another with the same part might run at 1.225V - it's a small difference but it can show up.

    Next, Intel and AMD both seem to put out numbers that are a theoretical worst case, and clock speed and voltage of a given chip help determine where the CPUs actually fall. The stated TDP on a part might be 65W, and with some 65W chips you can get very close to that while with others you might never get above 50W, but they'll both still state 65W.

    The main point is that AMD's ACP ends up lower than what is realistic and their TDP ends up as essentially the worst-case scenario. (AMD parts are marketed with the ACP number, not TDP.) Meanwhile, Intel's TDP is higher than AMD's ACP but isn't quite the worst-case scenario of AMD's TDP.

    I believe that's the way it all works out: Intel reports TDP that is lower than the absolute maximum but is typically higher than most users will see. AMD reports ACP that is more like an "average power" instead of a realistic maximum, but their TDP is pretty accurate. Even with this being the general case, processors are still released in families and individual chips can have much lower power requirements than the stated ACP/TDP - basically they should always come in equal to or lower than the ACP/TDP, but one might be 2W lower and another might be 15W lower and there's no easy way to say which it is without testing.
  • MODEL3 - Wednesday, July 22, 2009 - link

    I mostly agree with what you're saying except for 2 things:

    1. "AMD's TDP ends up as essentially the worst-case scenario" is not true in all cases, e.g. the Phenom X4 9350e (it has an actual TDP higher than 65W).

    2. In all the examples I gave, Intel & AMD had the same "official" TDP (also more or less the same performance & the same manufacturing process), so by your logic AMD should have a lower actual TDP than Intel, which is not true.

    I live in Greece; here we pay €0.13 (inc. VAT) per kWh, so...

    On another topic, did you see the new prices for the AMD Athlon II X2 245 ($66) & 240 ($60)? (while the Intel 5300 costs $64 & the 5400 $74)

    They should have priced them at $69 & $78.

    No wonder AMD is losing so much money; they have to immediately fire the idiots who did it (it reminds me of the days before the K8, when AMD used these methods).
  • JPForums - Wednesday, July 22, 2009 - link

    I'm having a hard time correlating your chart and your assessment.

    "Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
    Found that one. However, on the previous page you make this statement:
    "So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
    This implies to me that just adding the 3 DIMMs should have raised the power by 15W.

    "Add an Opteron EE to our AMD server and you add 22W."
    Check. Did you add the 3 DIMMs here as well?

    "The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
    Can't find this one. There is a 3W difference between the Xeon L5520 and the Opteron 2377 EE. There is a 16W difference for the dual CPU counterparts (closer). All the other comparisons leave the Intel platform consuming more power than the AMD counterpart. Is this supposed to be a comparison of the platform without the CPU? It is unclear to me given the words chosen. I was under the impression that the CPU is generally considered part of the platform.

    "Intel's power gating is the decisive advantage here: it can turn the inactive cores completely off. Another indication is that the dual Opteron 2435 consumes about 156W when we turn off dynamic power management, which is higher than the Xeon X5570 (150W)."
    An explanation of dynamic power management would be helpful. It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. The only way your statements make sense is if the dynamic power management you are talking about isn't a CPU level feature like clock gating. In any case, power management techniques are worthless if you can't use them.

    As a side question, when the power management support issue with the Xeon X5570 is addressed and AMD has a new lower power platform, where do you predict the power numbers will end up? I'd still expect the "Nehalem" Xeons to win in performance/power, though.
  • JohanAnandtech - Wednesday, July 22, 2009 - link

    Part 2 :-)

    "The result is that the best AMD platform consumes about 20W more than the best Intel platform when running idle."
    Can't find this one."

    135W - 119W = 16W. I made a small error there (spreadsheet error).

    "It sounds like you're saying that Intel's power management techniques are clearly better because when you turn both their power management and AMD's power management off, the Intel platform works better. "

    More or less. There are two ways the CPU can save power: 1) lower the voltage and clock speed, or 2) shut down the cores that you don't need. In the case of the Intel part, it is better at shutting down the cores it doesn't need: they are simply shut off completely and consume close to 0W. In the case of AMD, each core still consumes a few watts.

    So if you turn SpeedStep and PowerNow! off, you can see the effect of the second way to save power. It confirms our suspicion of why the Opteron EE is not able to beat the L5520 when running idle.

  • JohanAnandtech - Wednesday, July 22, 2009 - link

    I'll chop my answers up to keep these comments readable.

    quote:
    "Notice how adding a second L5520 CPU and three DIMMs of DDR3-1066 to our Chenbro server only adds 9W."
    Found that one. However, on the previous page you make this statement:
    "So adding a Xeon X5570 adds about 58W (248W - 175W - three DIMMs of 5W), while adding an Opteron 2435 2.6GHz adds about 47W (243 - 181 - three DIMMs of 5W)."
    This implies to me that just adding the 3 DIMMs should have raised the power by 15W."

    No, because the 9W is measured at idle. It is too small to measure accurately, but DIMMs do not consume 5W each at idle; probably more like 1W or so.

