24 Cores in Action

So how do you test 24 cores? That is not at all a trivial question: many applications cannot use more than eight threads, and quite a few top out at 16. Just look at what happens when you try to render with Cinema 4D on this 24-headed monster:


Cinebench fails to use all 24 cores

Yes, only two-thirds of the available processing power is actually being used. Look closely and you'll see that just 16 cores are working at 100%, while the other eight sit idle.
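The pattern is easy to reproduce in miniature. The sketch below is purely hypothetical code, not anything Cinebench actually does; the render_bucket workload and the HARD_CAP constant are invented for illustration. It shows how a worker pool with a hard-coded 16-worker ceiling leaves a third of a 24-core machine idle, while sizing the pool to os.cpu_count() uses the whole box:

    import os
    from concurrent.futures import ProcessPoolExecutor

    def render_bucket(bucket_id):
        # Stand-in for rendering one tile of the image: pure CPU work.
        return sum(i * i for i in range(2_000_000))

    HARD_CAP = 16                                    # hypothetical legacy thread limit
    workers_capped = min(os.cpu_count(), HARD_CAP)   # 16 on our 24-core box
    workers_full = os.cpu_count()                    # all 24 cores

    if __name__ == "__main__":
        # Swap workers_capped in here to watch eight cores sit idle in Task Manager.
        with ProcessPoolExecutor(max_workers=workers_full) as pool:
            list(pool.map(render_bucket, range(workers_full * 4)))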

Cinebench 10 64-bit

Cinebench is more than happy with the 3MB L2 caches, so the 16MB L3 has no effect whatsoever. That leaves the improved Penryn core as the only thing working in the X7460's favor: sixteen 45nm Penryn cores at 2.66GHz manage to keep up with sixteen 65nm Merom cores at 2.93GHz, which is of course not good enough to warrant an upgrade. 3ds Max 2008 was no different, and in fact it was even worse:


3ds Max 2008 seems to be limited to 16 cores too

As we had done a lot of benchmarking with 3ds Max 2008, we wanted to see what the new Xeon X7460 could do. The scanline renderer is the fastest for our ray-traced images, but it was not able to use more than 16 cores: like Cinebench, it completely "forgot" the eight extra cores that our Xeon X7460 server offers. The results were also very low, around 62 frames per hour, while a quad Xeon X7350 setup manages 88. As we have no explanation for this weird behavior, we didn't graph the results; we will have to take some time to investigate this further.
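As a first step in that investigation, something along these lines can at least confirm how many threads the renderer spawns and how evenly the cores are loaded. This is only a rough sketch: it assumes the psutil package is installed, and the process name "3dsmax.exe" is a guess that would have to match the actual binary.

    import psutil

    RENDERER = "3dsmax.exe"   # assumed process name; adjust to the real binary

    # How many threads did the renderer actually spawn?
    for proc in psutil.process_iter(["name", "num_threads"]):
        name = proc.info["name"]
        if name and name.lower() == RENDERER:
            print(f"{name}: {proc.info['num_threads']} threads")

    # Sample per-core utilization for one second; a 16-thread renderer on a
    # 24-core machine should leave roughly eight cores near 0%.
    for core, load in enumerate(psutil.cpu_percent(interval=1.0, percpu=True)):
        print(f"core {core:2d}: {load:5.1f}%")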

Even if we could get the rendering engines to work with 24 cores or more, it is clear that there are better ways to get good rendering performance. In most cases it is much more efficient to buy several less expensive servers and use Backburner to render different images on each of them simultaneously.
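To illustrate that idea, rather than any real Backburner interface, splitting a job across cheaper boxes is little more than handing each node its own slice of the frame range. The node names and the submit_job helper below are invented for the example; a real farm would go through the render manager's own job submission tools.

    # Hypothetical sketch of farming frames out to several cheaper render
    # nodes instead of one big 24-core server.
    NODES = ["render01", "render02", "render03", "render04"]
    TOTAL_FRAMES = 200

    def submit_job(node, first, last):
        # A real implementation would call the render manager here;
        # this sketch just prints the plan.
        print(f"{node}: render frames {first}-{last}")

    frames_per_node = TOTAL_FRAMES // len(NODES)
    for i, node in enumerate(NODES):
        first = i * frames_per_node
        last = TOTAL_FRAMES - 1 if i == len(NODES) - 1 else first + frames_per_node - 1
        submit_job(node, first, last)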

Comments

  • JarredWalton - Tuesday, September 23, 2008 - link

    Heh... that's why I love the current IBM commercials.

    "How much will this save us?"
    "It will reduce our power bills by up to 40%."
    "How much did we spend on power?"
    "Millions."
    [Cue happy music....]

    What they neglect to tell you is that in order to achieve the millions of dollars in energy savings, you'll need to spend billions on hardware upgrades first. They also don't tell you whether the new servers are even faster (it's presumed, but that may not be true). Even if your AC costs double the power bill for a server, you're still only looking at something like $800 per year per server, while the server upgrades cost about 20 times as much every three to five years.

    Now, if reduced power requirements on new servers mean you can fit more into your current datacenter, thus avoiding costly expansion or remodeling, that can be a real benefit. There are certainly companies that look at density as the primary consideration. There's a lot more to it than just performance, power, and price. (Support and service come to mind....)
  • Loknar - Wednesday, September 24, 2008 - link

    Not sure what you mean: "reduced power requirements means you can fit more into your DC". You can fill your slots regardless of power, unless I'm missing something.

    Anyway, I agree that power requirements are the last thing we consider when populating our servers. It's good to save the environment, that's all. I don't know about other companies, but for critical servers we buy the highest-performing systems with complete disregard for price and power consumption, because the cost of DC rental and operations (say, a technician earns more than 2000$ per year, right?) and the benefits of performance outweigh everything. So we're happy AMD and Intel have such fruitful competition. (And any respectable IT company is not fooled by IBM's commercial! We only buy OEM (Dell in my case) for their fast 24-hour replacement part service and the worry-free feeling.)
  • JarredWalton - Wednesday, September 24, 2008 - link

    I mean that if your DC has a total power and cooling capacity of, say, 100,000W, you can "only" fit 200 500W servers in there, or you could fit 400 250W servers. If you're renting rack space, this isn't a concern - it's only a concern for the owners of the data center itself.

    I worked at a DC for a while for a huge corporation, and I often laughed (or cried) at some of their decisions. At one point the head IT people put in 20 new servers. Why? Because they wanted to! Two of those went into production after a couple months, and the remainder sat around waiting to be used - plugged in, using power, but doing no actual processing of any data. (They had to use up the budget, naturally. Never mind that the techs working at the DC only got a 3% raise and were earning less than $18 per hour; let's go spend $500K on new servers that we don't need!)
