Explaining the Jump to Using HCC Silicon

When Intel makes its enterprise processors, it has historically produced three silicon designs:

  • LCC: Low Core Count
  • HCC: High Core Count (sometimes called MCC)
  • XCC: Extreme Core Count (sometimes called HCC, to confuse)

The idea is that moving from LCC to XCC, the silicon contains more cores (and sometimes more features), and it becomes cost effective to have three different designs rather than one big design with parts disabled to cover the range. The LCC silicon is significantly smaller than the XCC silicon, allowing Intel to achieve a better production cost per silicon die.

Skylake-SP Die Sizes (from chip-architect.com)

  Silicon  Arrangement      Dimensions (mm)  Die Area (mm²)
  LCC      3x4 (10-core)    14.3 x 22.4      322
  HCC      4x5 (18-core)    21.6 x 22.4      484
  XCC      5x6 (28-core)    21.6 x 32.3      698
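The production-cost argument can be made concrete with a rough dies-per-wafer estimate. The sketch below uses the die dimensions from the table and a standard 300 mm wafer with a common edge-loss approximation; the numbers are illustrative only and ignore yield, scribe lines, and Intel's actual costs.

```python
import math

WAFER_DIAMETER_MM = 300.0  # standard wafer size

def dies_per_wafer(die_w_mm: float, die_h_mm: float) -> int:
    """Approximate gross dies per wafer: an area term minus a
    circumference term for partial dies lost at the wafer edge."""
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area))

# Die dimensions from the table above (Skylake-SP)
for name, w, h in [("LCC", 14.3, 22.4),
                   ("HCC", 21.6, 22.4),
                   ("XCC", 21.6, 32.3)]:
    print(f"{name}: ~{dies_per_wafer(w, h)} dies per 300 mm wafer")
```

By this rough measure an LCC wafer yields well over twice as many candidate dies as an XCC wafer, which is the economic pull toward multiple die designs.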

In the enterprise space, Intel has each of the three designs throughout its Xeon processor stack, ranging from four-core parts (usually cut down versions of the LCC silicon) all the way up to 28 core parts (using XCC) for this generation. The enterprise platform has more memory channels, support for error correcting and high-density memory, the ability to communicate with multiple processors, and several other RAS (reliability, availability, serviceability) features that are prominent in these markets. These are typically disabled for the prosumer platform.

In the past, Intel has only translated the LCC silicon into the prosumer platform. This was driven by a number of factors:

  • Cost: if users needed XCC, they had to pay the extra and Intel would not lose high-end sales.
  • Software: Enterprise software is highly optimized for the core count, and systems are built especially for the customer. Prosumer software has to work on all platforms, and is typically not so multi-threaded.
  • Performance: Large, multi-core silicon often runs at a low frequency to compensate. This can be suitable for an enterprise environment, but a prosumer environment requires responsiveness and users expect a good interactive experience.
  • Platform Integration: Some large silicon might have additional design rules above and beyond the smaller silicon support, typically with power or features. In order to support this, a prosumer platform would require additional engineering/cost or lose flexibility.

So what changed at Intel in order to bring HCC silicon to the HEDT prosumer platform?

The short answer that many point to is AMD. This year AMD launched its own high-end desktop platform, based on its Ryzen Threadripper processors. Armed with a new high-performance core, AMD putting up to 16 of them in a processor for $999 was somewhat unexpected, especially with the processor beating Intel’s top prosumer processors in some (not all) of the key industry benchmarks. The cynical might suggest that Intel had to move to the HCC strategy in order to stay on top, even if its best processor costs twice as much as AMD’s.

Of course, transitioning a processor from the enterprise stack to the prosumer platform is not an overnight process, and many analysts have noted that Intel has likely considered this option for several generations: testing it internally at least, and watching the market to decide when (or if) the time was right. The same analysts point to Intel’s initial lack of specifications aside from core count when these processors were first announced several months ago: specifications that would historically have been narrowed down at that point if they had been in the original plans. It is likely that the feasibility of introducing the HCC silicon was already being examined, but actually moving that silicon to retail was a late addition to counter a threat to Intel’s top spot. That being said, to say Intel had never considered it would perhaps be a jump too far.

The question now becomes if the four areas listed above would all be suitable for prosumers and HEDT users:

  • Cost: Moving the 18-core part to $1999 is unprecedented for a consumer processor, so it will be interesting to see what the uptake will be. This does cut into Intel’s professional product line, where the equivalent processor is nearer $3500, but there are enough ‘cuts’ on the prosumer part for Intel to justify the difference: memory channels (4 vs 6), multi-processor support (1 vs 4), and ECC/RDIMM support (no vs yes). What the consumer platform gets in return is overclocking support, which the enterprise platform does not.
  • Software: Intel introduced its concept of ‘mega-tasking’ with the last generation HEDT platform, designed to encompass users and prosumers that use multiple software packages at once: encoding, streaming, content creation, emulation etc. Its argument now is that even if software cannot fully scale beyond a few cores, a user can either run multiple instances or several different software packages simultaneously without any slow-down. So the solution here is more a redefinition of the problem than anything else, one that could have applied to previous generations as well.
  • Performance: Unlike enterprise processors, Intel is pushing the frequency on the new HCC parts for consumers. This translates into a slightly lower base frequency but a much higher turbo frequency, along with support for Turbo Max. In essence, software that requires responsiveness can still take advantage of the high frequency turbo modes, as long as the software is running solo. The disadvantage is going to be in power consumption, which is a topic later in the review.
  • Platform Integration: Intel ‘solved’ this by creating one consumer platform suitable for nine processors with three different designs (Kaby Lake-X, Skylake-X LCC and Skylake-X HCC). The Kaby Lake-X and Skylake-X parts have different power delivery methods, support different numbers of memory channels, and have different numbers of PCIe lanes / IO. When this was first announced, there was substantial commentary that this made the platform overly complex and would lead to confusion (it led to at least one broken processor in our testing).
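The frequency trade-off in the Performance point can be sketched numerically: per-core frequency steps down as more cores load up, so lightly threaded, responsive software still sees high clocks. The turbo bins below are hypothetical placeholders, not Intel's published tables.

```python
def effective_frequency_ghz(active_cores: int) -> float:
    """Toy turbo model: frequency bins step down as more cores
    become active. Bin values are illustrative, not a real chip's."""
    turbo_bins = {1: 4.4, 2: 4.4, 4: 4.0, 8: 3.6, 12: 3.4, 18: 3.2}
    # Use the bin for the smallest core-count threshold that covers the load
    for threshold in sorted(turbo_bins):
        if active_cores <= threshold:
            return turbo_bins[threshold]
    return min(turbo_bins.values())

# A single responsive task still runs near peak turbo...
print(effective_frequency_ghz(1))   # 4.4
# ...while a fully loaded chip drops toward base frequency
print(effective_frequency_ghz(18))  # 3.2
```

This is why "software running solo" is the operative caveat: the high turbo modes only apply while few cores are busy.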

Each of these areas has either been marked as solved or redefined out of being an issue (whether or not a user agrees with the redefinition).
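Intel's 'mega-tasking' framing amounts to running independent workloads side by side rather than scaling one program across all cores. A minimal sketch with Python's standard library, where the workload function is a made-up stand-in for any CPU-bound job:

```python
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    """Stand-in for one independent CPU-bound job (e.g. one encode)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    # Each job runs in its own process, so even software that cannot
    # scale internally keeps many cores busy when run in parallel.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(busy_work, [2_000_000] * 4))
    print(f"{len(results)} independent jobs completed")
```

The point of the redefinition is visible here: none of the four jobs is multi-threaded, yet a many-core chip finishes the batch faster than a few-core one would.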

Comments
  • mapesdhs - Tuesday, September 26, 2017 - link

    In that case, using Intel's MO, TR would have 68. What Intel is doing here is very misleading.
  • iwod - Monday, September 25, 2017 - link

    If we factor in the price of the whole system rather than just the CPU (AMD's motherboards tend to be cheaper), then AMD is doing pretty well here. I am looking forward to next year's 12nm Zen+.
  • peevee - Monday, September 25, 2017 - link

    From the whole line, only 7820X makes sense from price/performance standpoint.
  • boogerlad - Monday, September 25, 2017 - link

    Can an IPC comparison be done between this and Skylake-S? Skylake-X LCC lost in some cases to Skylake, but is it due to the smaller L3 cache or because the L3 cache is slower?
  • IGTrading - Monday, September 25, 2017 - link

    There will never be an IPC comparison of Intel's new processors, because all it would do is showcase how Intel's IPC actually went down from Broadwell and further down from KabyLake.

    Intel's IPC is a downtrend affair and this is not really good for click and internet traffic.

    Even worse : it would probably upset Intel's PR and that website will surely not be receiving any early review samples.
  • rocky12345 - Monday, September 25, 2017 - link

    Great review, thank you. This is how a proper review is done. Those benchmarks we saw of the 18-core i9 last week were a complete joke, since the guy had the chip overclocked to 4.2GHz on all cores, which really inflated the scores vs a stock Threadripper 16/32 CPU. That was very unrealistic from a cooling standpoint for end users.

    This review was stock for stock, and we got to see how both CPU camps performed in their out-of-the-box states. I was a bit surprised the mighty 18-core CPU did not win more of the benches, and when it did, it was not by very much most of the time. So a $1K CPU vs a $2K CPU, and the mighty 18-core did not perform like it was worth $1K more than the AMD 1950X, or the 1920X for that matter. Yes, the mighty i9 was a bit faster, but not $1000 faster, that is for sure.
  • Notmyusualid - Thursday, September 28, 2017 - link

    I too am interested to see 'out of the box' performance.

    But if you think ANYONE would buy this and not overclock - you'd have to be out of your mind.

    There are people out there running 4.5GHz on all cores, if you look for it.

    And what is with all this 'unrealistic cooling' I keep hearing about? You fit the cooling that suits your CPU. My 14C/28T CPU runs at 162W 24/7 running BOINC, attached to a 480mm 4-fan all-copper radiator, and hand on my heart, I don't think it has ever exceeded 42C; it sits at 38C mostly.

    If I had this 7980XE, all I'd have to do is increase pump speed I expect.
  • wiyosaya - Monday, September 25, 2017 - link

    Personally, I think the comments about people that spend $10K on licenses having the money to go for the $2K part are not necessarily correct. Companies will spend that much on a license because they really do not have any other options. The high-end Intel part gets 30 to maybe 50 percent more performance on a select few benchmarks. I am not going to debate that that kind of performance improvement is significant, even though it is limited to a few benchmarks; however, to me that kind of increased performance comes at an extreme price premium, and companies that do their research on the capabilities of each platform vs price are not, IMO, likely to throw away money on a part just for bragging rights. IMO, a better place to spend that extra money would be on RAM.
  • HStewart - Monday, September 25, 2017 - link

    In my last job, they spent over $100k for software version system.

    In workstation/server world they are looking for reliability, this typically means Xeon.

    Gaming computers are different: usually kids want them and have less money, and they always need the latest and greatest without caring about reliability - when a new graphics card comes out, they replace it. AMD is focusing on that market - which includes the Xbox One and PS4.

    For me, I am looking for something I can depend on and know will be around for a while. Not something that slaps multiple dies together to claim bragging rights for more cores.

    Competition is good, because it keeps Intel on its feet. I think if AMD had not purchased ATI there would be no competition for Intel at all in the x86 market. But it is not all smart either - would anybody be serious about placing an AMD graphics card on an Intel CPU?
  • wolfemane - Tuesday, September 26, 2017 - link

    Hate to burst your bubble, but companies are cheap in terms of staying within budgets, especially up-and-coming corporations. I'll use the company I work for as an example: a fairly large print shop with 5 locations along the US West coast that's been in existence since the early 70's, with about 400 employees in total. Servers, PCs, and general hardware only see an upgrade cycle once every 8 years (not all at once, it's spread out). Computer hardware is a big deal in this industry, and the head of IT for my company has done pretty well with this kind of hardware life cycle. First off, Macs rule here for preprocessing; we will never see a Windows-based PC for anything more than accessing the Internet. But when it comes to our servers, they're running some very old Xeons.

    As soon as the new fiscal year starts, we are moving to an epyc based server farm. They've already set up and established their offsite client side servers with epyc servers and IT absolutely loves them.

    But why did I bring up macs? The company has a set budget for IT and this and the next fiscal year had budget for company wide upgrades. By saving money on the back end we were able to purchase top end graphic stations for all 5 locations (something like 30 new machines). Something they wouldn't have been able to do to get the same layout with Intel. We are very much looking forward to our new servers next year.

    I'd say AMD is doing more than keeping Intel on their feet, Intel got a swift kick in the a$$ this year and are scrambling.
