Intel trying desperately to hide the elephant in the room that Comet/Whiskey are more performant than Ice and easier to make, too.
also surprised they let that slide get out with "Sky Lake" vs. Skylake... they've only been refreshing the thing for 4 years now, you'd think they'd have settled on a name.
Yes, and Comet Lake also has two more cores than the Ryzen 3000 APU series.
But as the above article says "To be honest, I agreed with Intel here – it wasn’t a graph designed to show like for like, but just how much performance is still on the table", so then both are perfectly valid comparisons.
Now, whether the actual power consumption under those bursty loads was anywhere near 15W for Comet Lake is an entirely different question.
6-core with slow UHD 620 iGPU vs 4-core with fast G7 iGPU. The ICL execution block is only 30 sq mm, and the die is still significantly smaller, allowing a much faster iGPU in TGL.
Exactly. When comparing like-for-like, Comet Lake comes out as being faster, more efficient and cheaper. It is basically the last "Skylake" part, where the 14nm node is pushed to extremes, and the architecture is optimised to perfection.
Ice Lake is worse. The node is not as good, too immature. The architecture is not as good, not as optimised. So it's easy decision for consumers to skip this. But Intel is trying to save-face with their investors, and product partners, hence why they are deliberately muddying the waters. The only thing going for Ice Lake is the better iGPU, but even that was clearly an after-thought.
I think consumers should skip the 9.5+0.1-gen Comet Lake as well. It's still Skylake, and not much of an improvement over its predecessor. However, the Ryzen-4000-U chips do seem to be worth upgrading to. And it's going to take Intel 2 years to catch up to (let alone surpass) this level of efficiency and performance... by which point AMD might still be in the lead.
As a general rule of thumb, you can make inferences to mobile from their desktop parts:
Ryzen 1500X = Core i7-2600K to Core i7-4790K
Ryzen 1600X = Core i7-3960X to Core i7-5930K
Ryzen 1800X = Core i7-5960X
Ryzen 2500X = Core i7-6700K to Core i7-7700K
Ryzen 2600X = Core i7-6800K to Core i7-7800X
Ryzen 2700X = Core i7-6900K to Core i7-7820X
The 4000U series from AMD are the Zen2 based chips, so a 13-15% IPC boost plus the improved clock speeds. They are really the laptop version of the Ryzen 3000 desktop chips and we saw how well received THOSE have been.
You are right when it comes down to the problems that Intel is having. Very mature 14nm+++++ with high clock speeds vs. 10nm Intel chips that can't clock as high with lower core counts. Even if IPC increased by 18%, a 25% reduction in clock speeds would result in a net loss in performance. Go forward another year, 14nm will be even more mature, and 10nm still isn't ready for 8 core chips that can hit the clock speeds that AMD saw with the move to 7nm.
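That trade-off is simple arithmetic, since single-thread throughput scales, to a first approximation, as IPC x clock speed. A quick sketch using the hypothetical figures from the comment above (18% IPC gain, 25% clock reduction - illustrative numbers, not measurements of any real chip):

```python
# Relative single-thread performance ~ IPC x clock speed.
# The figures below are the hypothetical ones from the comment above,
# not measured values for any real processor.
ipc_gain = 1.18      # +18% IPC on the new architecture
clock_ratio = 0.75   # -25% clock speed on the immature node

relative_perf = ipc_gain * clock_ratio
print(f"net relative performance: {relative_perf:.3f}")
```

Anything below 1.0 is a net regression, which is the point being made: an 18% IPC uplift cannot cover a 25% clock deficit.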
In this class of notebooks, battery life is paramount. I don't think any business man will buy the Ryzen 4000 series because it has 8 cores in a 15W TDP, because it is built on a 7nm process, or because it gets a better Cinebench or Geekbench score. You are being a little bit childish. In this category everyone cares about battery life, and companies/people usually buy established lines of notebooks like the HP EliteBook, Dell Precision, Lenovo ThinkPad, and also very thin 2-in-1 designs, which AMD doesn't yet have. So if battery life is worse than what Intel has with Comet Lake or Ice Lake, this will not be a home run for AMD just because they have more cores. 45W laptops might be a different story indeed.
"Business" covers a remarkable number of activities. It's a bit far-fetched to say "any business man"; if anything, such an enormous generalization is a bit childish! I like a bit of portability for presentations, coffee shops, etc, but I'm rarely away from convenient power for more than 3-4 hours. Ryzen 4000 (might upgrade to it this year, why not!) looks like some nice icing on the cake, especially as far as the GPU is concerned...
Don't be pedantic. You're responding to a comment about the enterprise market with a sprawling definition about what business is. Also, what you like isn't what billion-plus dollar companies purchase, which is why Intel has a market cap double digit times what AMD does. I think Ryzen is awesome, but the real money is in business computing, which the person you responded to got right.
The majority of business is kick the can. They are going to the next 10 increment of the model number currently in use. SSDs are a given, thank the lord; you get the 6-core if you're lucky, which still gets murdered by at minimum 6 layers of AV/security shims/patching/firewall & DNS/traffic decryption/drive encryption, and then hell, you're livin' life if the "normal" model on offer has a full HD screen. That's a "business" PC/laptop.
Gunbuster's description is closer to my experience. Unfortunately for AMD, many businesses are prone to habitual buying (same company and product line). It usually takes a fairly serious incident for such a company to change their habits. If AMD wants to break into this market, the best thing they can do is convince the major players to quietly add their options into the major product lines (rather than create an AMD specific line). They'd get even more market penetration if they can grab a default configuration or the cheapest configuration.
In the past, the major players (PC makers) have been cautious with AMD mostly due to lack of capacity and inconsistent execution. While they still don't have anywhere near the capacity to displace Intel in any major sense, they have been consistently executing since the first Ryzen hit the market. Also working in their favor is the fact that Intel's capacity is down and Intel has not been consistently executing (relatively speaking). That all said, it is hard to increase your market penetration when you are already selling every chip you can make.
In other words Intel have a problem of low supply and high demand, while AMD have a problem of low market penetration and thus no incentive to increase supply, despite having a *very* high, supply depleting demand in the last few years. Intel have only themselves to blame, but in the case of AMD shouldn't they at least match their high demand with adequate supply? If they wanted to could TSMC meet that high demand with a higher supply?
I don't believe it shows that it worst - that depending on test more cores may have benefits for Comet lake. But per code Ice Lake is better than both Comet like and Zen. 2020 we should higher core and higher watt version of 10nm - maybe with some architexture improvements Tiger Lake
I just don't believe that having more cores is the solution..
" than both Comet like and Zen. " of course you think it's better than zen.. you are one of the very few that think that hstewart.. we will know for sure.. when sites are able to run tests...
" I just don't believe that having more cores is the solution " and why not ?? you still believe intels bs about the mainstream not needing more than 4 cores ??
"But per code.." That's apparently "per core", but no, there is no evidence that Ice Lake is faster than Zen 2 "per core". If Intel's 10nm node clocked as high as their latest 14nm++++ node variants it would have been. It doesn't though, and it doesn't look like it ever will.
And that leads to your next bit: in the entirety of 2020 the highest-watt 10nm part expected from Intel is their pre-announced 28W Ice Lake-U. Nothing more. It will not have more cores, though. Both it and *all* Tiger Lake parts are going to be capped at 4 cores, with TDPs up to 28W (i.e. upclocked -U parts).
The desktop Ice Lake was canned and the desktop Tiger Lake was canned as well. The reason is that Intel has horrible yields at 10nm, 10nm+ and *every* 10nm node variant with dies above 4 cores and/or dies with very high clocks. When both somewhat large die sizes and high clocks are required, like in desktop parts, the yields are so low that it's impossible to fab anything without a big loss. However, since the Ice Lake Xeon parts have much lower clocks and only have large dies, Intel might still release them in Q4 of 2020 or, more likely, Q1 of 2021. Their yields are still low, but not intolerably so, and the very high price of Xeon parts can well afford lower yields.
Do not despair though, the Rocket Lake backport will come to the rescue of the hardcore Intel fanboys who would not "betray" Intel even if Ryzen was 50% faster and 50% more efficient in every single benchmark! Rocket Lake will sport brand new Willow Cove cores, a Gen12(Xe) iGPU and much higher clocks. The trade-off? (Since Intel cannot beat the laws of physics) it will have a low power efficiency, thus will be burdened with a much higher TDP than the equivalent Ryzen parts.
Comet Lake-S/H should be the last 14nm release based on Skylake. Rocket Lake-S/H will replace it, perhaps in late 2020. Since Intel royally screwed up their 10nm process node I strongly doubt *any* S/H part will ever be released on that node. The first post-14nm desktop part from Intel will be released in late 2021 at the earliest, at 7nm. In other words in 2022 at best.
p.s. About that pearl: "I just don't believe that having more cores is the solution.." Having more cores has basically been the *only* solution since 2005/2006. That's when Dennard scaling collapsed, which froze CPU clock rates. The collapse of Dennard scaling is the sole reason CPUs started having more than one core, since just improving CPU designs for higher IPC was, and is, insufficient.
Tweaking and optimizing a car engine in order to increase performance can only take you so far. When further increasing the engine's RPM was no longer an option the engine needed to become bigger and wider, with more displacement and cylinders. The analogy might be a bit crude, but it's not invalid.
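The Dennard scaling point can be made more concrete. A first-order sketch (standard textbook relations, not figures from the article):

```latex
% First-order dynamic power of a CMOS chip:
%   alpha = activity factor, C = switched capacitance,
%   V = supply voltage, f = clock frequency
P_{\text{dyn}} \approx \alpha\, C\, V^{2} f

% Ideal Dennard scaling by a factor k (feature size -> 1/k):
%   C -> C/k,  V -> V/k,  f -> k f,  area -> area/k^2
% => power density (P/area) stays roughly constant while clocks rise.
% Once V stopped scaling (mid-2000s, leakage/threshold-voltage limits),
% raising f raised power density directly -- so clocks froze and the
% extra transistors from each shrink went into more cores instead.
```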
It seems to me that it's more an issue of trading blows. Both architecture and process have been optimized continuously and went through several iterations. I think they have reached the limit, and they know it, hence they bumped the GPU (and it does make a difference). It is an impressive feat to have reached this kind of optimization, BUT I think they finally fell behind Ryzen 4***. They lost years of advantage, and if they don't move to 7nm soon they will keep slipping.
How do you know they are behind Ryzen 4000 Mobile? Have you a review?? Do you know the clock speeds?? They only say "up to"... so at 15 or at 25 W?? With silent cooling or not?? Basically AMD said nothing, so better to wait for tiny and luxury laptops to judge who is better. I want to remind you that Ice Lake is on a 7nm-class node (Intel 10nm) right now.
and yet.. gondalf.... you say the same things when talking about intel vs ryzen mobile 4000.. you keep saying how RM4000 still cant compete with intel.. you have no benchmarks.. or proof.. but you keep spitting out the same bs.. how is this any different ??
No, I don't have a review: I have the data showcased by AMD *and* the comments made by those who were at CES, including Anand's team. The GPU of Ryzen 4000 is clearly ahead. On the CPU side, we already have Zen2 beating Intel on desktop. Laptops are different beasts, but I would not expect totally different results. Hence why I said "I think". Time will tell, but for sure, a few years ago Intel was light years ahead of AMD and today that is definitely not the case.
The GPU was already ahead in the 3000 series. Now, based on preliminary benches, it seems the 4000 series, because of its lower CU count, is only ~10% better than Ice Lake with 64 EUs.
No, that's not true -- Ice Lake is out now and AMD 4000 is not. Intel's graphics at the moment are indeed ahead.
The trick with that is that the primary difference is memory bandwidth, which is finally rectified by placing a much-improved memory controller on-die.
You want your benchmark? Look at where the money is going. OEMs are not idiots; they make market choices based on hard data (which they get with ample margin before launches). Given the sheer amount of new design wins for AMD, I fully expect the Ryzen 4xxx series designs to be at least on par with what Intel is going to offer in the near future. Given Intel's contra-revenue practices, maybe even better.
I would be interested in comparing the 'cell' libraries and structures at 7nm -- regardless of the fin/gate structures (and defects). Chipzillah backed up with Kaby, and Comet at 14nm ('optimized') generally performs, to date, at the level of ICL's 10nm (is that fair?).
Presumably, 10+++ (!) forward optimizations will beget 7nm improvements to compete with TSMC NP5 (this year?). It's a dog-eat-dog world, and Intel is wearing Milkbone shorts right now.
It would, indeed. It is often said that Intel is very aggressive in terms of density. This was an advantage in the past, but it seems it is one of the reasons why 10nm is taking so long. Another thing is gate length, and 7nm is 7nm, meaning it does bear some electrical advantages over 14nm (simplifying: lower power, everything else being equal). Thing is, AMD is already designing on TSMC's 5nm, while Intel does not seem to have anything on 7nm and is struggling to make anything on 10nm. It seems the gap is only going to widen moving forward.
The reason Intel is having so much of an issue with 10nm is the same reason AMD CPUs struggle to hit higher clock speeds. At 7nm/10nm, heat keeps you from clocking super high due to the density of the chip. Intel has historically relied on high clockspeeds for dominance, now they have a dilemma: either try and develop a chip that can clock high and stay cool, or drastically increase IPC to match. AMD started on a low power node, so they already have experience working in a limited thermal envelope. Intel has a fantastic 14nm process, but now they must deal with the reality of building a new CPU design optimized for 10nm. The process itself is not an issue. I expect Intel has 10nm down to a science. It is the architecture of the CPU.
I would love to provide more evidence of this, however I am on mobile right now. I will point out, though, that overclocking results from the past few years tell the entire story, even absent other evidence. Die shrinks for free frequency increases ended somewhere around 32nm.
I'm not the guy to do a "Deep Dive" on functional/structural cell issues, and generally agree with what you are saying, but my understanding is AMD made a concerted effort with the 'Cat Cores' and high density cell libraries at 28nm.
So much so that AMD's 28nm cell density was generally equivalent to Chipzillah's 14nm, especially given that Kaby went backward on fin/gate structures. The issues, as I see them, are a proper mix of short/tall cells and logic, and execution. AMD and TSMC (and GlobalFoundries) have got their groove on -- Intel these days, not so much.
Intel will get their groove back, but at what cost? Adding 'bolt-on' EUs is not the answer, otherwise they're spinning their wheels until Xe matures and the uncore, logic, etc., find the right mix.
AMD 28nm density needed to be good because of how much space a Bulldozer core block took up. That's part of why people complained so much about Bulldozer -- it seemed like there was little reason a die-shrunk Thuban couldn't compete with Intel.
Psssst You did not respond to a thing I said, did you?
AMD evolved their cell libraries with the Carrizo APUs at 28nm, significantly increasing density AND reducing power (some on 'bulk', but I recall an 'LP' process, too).
And Dude ... your 'die-shrunk' talking point is old and busted.
That's pretty ahistorical. Even assuming a meaningful clock boost could have been achieved from the die shrink (and this is a big if - Thuban didn't clock so well), it was still far behind Intel's competing architecture when it came to IPC.
It's not clear that this is true - they still haven't built anything larger than a 4 core chip on 10nm, and the most obvious reason for that would be to keep die size low so that yields remain high.
The existence of "10th generation" 14nm product lines bolsters this theory: they simply can't produce enough processors on 10nm to fulfil demand.
Was it confirmed they were both using the same chassis w/ the same cooling design & same PL2 & Tau? Because on the outset, it seems the 4C/8T 10nm+ Ice Lake (i7-1065G7) would be more power efficient than a 6C/12T 14nm+++ Comet Lake (i7-10710U).
The slides say "See backup for configuration detail", which doesn't share much.
The easy rebuttal from Intel: the Ice Lake system ran cooler and drew less power, which they'll call a "user experience win" (as Intel's TDPs can sandbag, e.g., "65W" 4C/4T CPUs often drew far less).
If these tests use Intel's SDS, I'd point to Anandtech's previous reporting:
"Like other reference designs (such as Qualcomm’s), these [Intel SDS] units are designed to work, and are for the best part thermally unconstrained. The fan is on all the time, there are massive bezels, and the device itself is a bit chunky, to provide all the ports that the chip can provide."
The 6-core Comet Lake does not necessarily have a higher TDP. If the chassis and cooling solution max out at ~15W for continuous operation, that's its limit (at base clock), with short or very short turbo bursts. Bear in mind that Intel calculates TDP only from the base clock. It takes no account of turbo clocks, which are either very short or a bit longer depending on how long the cooling solution can thermally handle the TDP-up limit of 25W.
Comet Lake has 6 cores but a *much* lower turbo frequency (3.9 GHz instead of 4.7 GHz), so that its TDP-up while at the turbo frequency is also 25W, just like Ice Lake. It might sound counter-intuitive, but increasing cores while slashing base and turbo clocks is more power efficient than retaining the same number of cores while increasing both clocks. The trade-off of adding more cores is lower single core performance and larger dies (and thus lower yields and higher costs), not lower power efficiency - provided, of course, that clocks drop accordingly.
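For anyone curious how the power-limit budgeting plays out over time, here is a rough sketch of the PL1/PL2/Tau mechanism being debated. Real firmware tracks an exponentially weighted moving average of package power; the constants below (15 W PL1, 44 W PL2, 28 s Tau) are illustrative assumptions, not taken from any specific laptop:

```python
# Hedged sketch of Intel-style turbo power budgeting (PL1/PL2/Tau).
# All constants are illustrative, not measured from any real machine.
PL1 = 15.0   # sustained power limit, watts (the nominal "TDP")
PL2 = 44.0   # short-term turbo power limit, watts
TAU = 28.0   # time constant of the moving average, seconds
DT = 0.1     # simulation step, seconds

def simulate(seconds):
    """Sustained all-core load: return (time spent at PL2, time spent at PL1)."""
    ewma, t_pl2, t_pl1 = 0.0, 0.0, 0.0
    alpha = DT / TAU
    for _ in range(int(seconds / DT)):
        # While the averaged power is still under PL1, the chip may draw PL2.
        draw = PL2 if ewma < PL1 else PL1
        ewma += alpha * (draw - ewma)  # exponentially weighted average
        if draw == PL2:
            t_pl2 += DT
        else:
            t_pl1 += DT
    return t_pl2, t_pl1

boost, sustained = simulate(60.0)
print(f"turbo at PL2 for ~{boost:.1f}s, then held at PL1 for {sustained:.1f}s")
```

With these numbers the chip boosts at 44 W for roughly ten seconds before settling to 15 W. In this simplified sketch the load never lets up, so the budget never recovers; on real hardware the average decays during idle, which is exactly why bursty office workloads can live almost entirely in the turbo window.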
This aside doesn't really answer the question. It's possible the TDP is accurate across both systems, but it's possible the TDP is sandbagged. You'll note I said *can* be sandbagged, not that it *is*...because this is such a minor point to the overall flawed analysis.
You're confused on a few points:
-- Comet Lake has more cores *and* a higher Turbo, but a lower base clock and thus "equal" TDP.
-- Turbo Boost has nothing to do with TDP-up. Turbo is PL2, often 30 W to 50 W on most 15 W-designed motherboards. TDP-up limits the base clocks, which we do understand. PL2 is a completely separate power level, completely unconstrained by whatever tiny TDP was given.
-- Comet Lake & Ice Lake boost **FAR** beyond 25 W. Dell's XPS 13 2-in-1 i7-1065G7 turbo boosts well past 40 W.
In the end, all the major questions posited by Anandtech's "analysis" are left unanswered. It's like trying to reverse engineer a Windows DLL on a TI-84 calculator. It's possible, but not with the information & tools given.
1. Were all power & heat variables kept the same between these two tests? Heatsink size, fan size & speed most of all, but all the "little" stuff too (TIM, ambient, heatpipe length). When boosting to 50 W in such tiny systems, a small thermal dissipation difference can significantly alter boost speeds.
2. Most importantly and most critically, was this done in a typical laptop, that can't sustain a 50 W boost for very long? Or in the ridiculous, non-typical behemoth Intel SDS reference design that could ideally live in PL2's heat output perpetually?
If it's in the SDS, I imagine the hex-core can scrape out wins not uncommonly. In thermally constrained environments = every 15 W laptop design, I'd be curious how aggressively the Tau & PL2 are set, versus Intel's "ideal" system.
In conclusion, the i7-10710U is probably actually faster in a few cases, but by leaving it an unconstrained thermal system, you exaggerate the differences significantly.
I'm not trying to "attack" you, as your point (while small in this analysis) is accurate: you can lop off frequency (decrease power) and add more cores (increase power). That's... been true for decades. And it's true in Comet Lake, too.
It's high time we stop allowing reviewers to test thin-and-light laptop CPUs at insane power output. Who honestly wants 50 W just cascading through a thin metal chassis onto, presumably, your thighs? Overly aggressive PL2 states and too long Taus put out "great" benchmarks, but these things are nowhere near comfortable for hand-use at that point.
Laptop CPUs have different (and more interesting) constraints.
But, I do understand: Anandtech, every other media outlet, and consumers have been denied 10nm+ silicon for nearly half-a-decade, so I sympathize wanting to run serious analysis on whatever is available.
But, we should do these with systems in hand, instead of Intel's rightfully skewed benchmarks.
The 1065G7 loses in every CPU test and only wins in GPU tests vs a Core i7-8559U (4.5 GHz boost, 4C/8T).
It also has the 15/25W scores. And the 15W scores lose to the 8000 series even worse.
So I don't understand the issue? The Ice Lake CPU is released. And Comet Lake is basically a refresh of the 8000 series with slightly better turbos and efficiency. So if an 8000 series beats Ice Lake at equal cores/threads, then a 10000 series with 2 more cores is going to destroy it.
Yeah, this was pretty expected and shows that 10 nm is still pretty bad. But I don't think people will feel performance difference as much as they will notice lower power (= better battery life) offered by LPDDR4 and other stuff present in Ice.
I don't have the time, nor do I want to bother, to verify Intel's data. Could anyone point out anything dodgy in those Intel figures against the Ryzen 3700U? Were they at different wattage profiles?
They were against the Zen+ part, not the new 4000 series (Zen2). So obviously they won. Zen+ had lower IPC than 9th gen / 10th gen, way lower than Ice Lake.
Well, the top end system costing over $3k has two SSDs in RAID, an RTX 2080 that boosts very high (looks like desktop speeds), 32 GB RAM... basically top of the line everything.
It is pretty clear that they are in a difficult position, in which they have this new product but it can't outperform the old one. The main issue here is that Intel simply cannot sell Ice Lake in $500 laptops because it is based on a very expensive and very long delayed process. So yeah, no real surprise here. They do what they can in these circumstances.
Seems like most laptops these days are thermally constrained. So without having each CPU in the same chassis it's really impossible to determine which is faster in the "real world."
If you look at actual reviews, most will give you battery run time at idle and light load (i.e. web browsing and watching a video) while also showing benchmark numbers at full load.
If many of them are bursty, i.e. do not require the CPU to run at full speed over longer periods, Comet Lake should still come out ahead, as it should manage to go full turbo for these brief periods.
So you get the best of both worlds in those reviews - high performance and great battery life in idle and surfing the web.
Very few sites actually do show laptop power consumption when gaming or running under high load.
1) You're buying a laptop. 2) You're going to use the laptop the way it was made. Pulling the CPU out of the laptop and scoring higher with more thermal headroom doesn't make a better product, because that's not how it's being used! That's like claiming a 9900KS at 6.5GHz on liquid nitrogen is the reason it's better than a 3950X.
Reviews are testing the configurations for sale of the product. For people who buy the product.
Not what the abstract performance "could" be outside of thermally constrained environments (that they come in). Otherwise, why not just test the desktop variant?
While Charlie often engages in what I'd call "aggressive editorialising", I'm not aware that he's ever explicitly lied. On the contrary, he's been proven right about most of the big news he's broken such as Nvidia's Bumpgate and Intel's 10nm woes.
I say this in the full knowledge that he is an egotist who deliberately uses the least balanced language possible in his articles.
But this clock isn't stopped. It's a moving target and he's still getting the GPS coordinates correct.
Even if you don't like the way he pictures it or his conclusions, the basis of all of his theories have been right lately.
--And don't use the "He said 10nm was dead" line - because it's obviously not alive now, is it? If they made $500m in revenue off Ice Lake as some claim, that's 1/34th of their entire quarterly business. That's nothing when laptops are around 30%.
He's been right more often than wrong, though, so the "stopped clock" analogy breaks down pretty fast. Love him or hate him, he has good sources and does some serviceable analysis.
Would accelerated workloads leveraging AVX-512 on Ice Lake even change the picture? The AVX-512 base and turbo speeds are supposed to be a thing of the past on Ice Lake. However, has anyone actually measured turbo performance on Ice Lake with and without AVX-512 being used in the same system? If Ice Lake doesn't turbo as high, the wider AVX-512 unit and new instructions may not be enough to actually produce a performance win.
On AVX, Ice Lake wins. But it comes down to: what is AVX really being used in? Like Intel says in the article, only 7 applications in the past... 4 years? have finally adopted AVX.
Well, that's nothing new for Intel. Remember the slides showing how Cascade Lake-X was going to offer up to 2x the performance per dollar over Skylake-X, their previous HEDT platform?
Those same slides showed that Threadripper 2 already offered up to 1.61x the performance over Skylake-X.
"Intel has worked with the software vendor to enable GPU acceleration." - You say this as if it were a bad thing! Apple does similar software optimizations in Final Cut Pro for the Mac and the results are fantastic. Common tasks can finish in half the time compared to a similar PC running Adobe's Premiere Pro software. Also, remember Adobe's pricing model is by subscription, while Apple charges only a one-time $299 fee for UNLIMITED use. You can see why Adobe has been forced to optimize Premiere Pro for the Mac, just to remain competitive.
I'm seeing this trend a lot in recent reviews, where people focus on CPU aspects to magnify Intel's problems to generate that sensational headline. I get it .. you are vying for an audience. But In the real world.. things are more nuanced. I'm very interested in GPU performance.
Here is another example... My RTX 2080 can render the Blender BMW demo scene in just 51 seconds. This compares well against AMD's mighty Threadripper 3970X - a $2000 part. Here too, Blender has been optimized to use the GPU (CUDA and OptiX) for rendering and the results are fantastic. And what is most surprising... the GPU fans aren't screaming at me the whole time. My system remains nearly silent, unlike when I render the same scene on the CPU.
No, they don't say it as if it's a bad thing - they say it as if it's a thing that needs to be weighed against Intel's "real world benchmarking" marketing message, which they go on to discuss in more detail at the end of the article.
Basically, if you're on the one hand whining about canned benchmarks that your chips are losing and then, on the other hand, relying on canned benchmarks plus one niche application you've sponsored to *only use your GPU*, then the outcome is hypocrisy.
You either didn't read that, didn't understand, or didn't care and just came here for some Intel Damage Control.
I don't know what is wrong with the computers you buy or build, but I don't find myself waiting on tasks to complete in Word, Excel, or Powerpoint... Those PCMARK tests that they focused on aren't exactly killing my daily productivity. Unlike the two workflows that I mentioned... Blender Rendering and Video Editing.
This... doesn't make sense as a response to me. I also think the PCMARK tests are useless, even though they're sort-of "real world" according to Intel's hazy definition.
My point was that you totally misread what the article was saying. They're not criticising GPU acceleration, they're criticising the fact that Intel are putting a GPU accelerated test that they sponsored into their CPU speed comparison.
I did NOT misread the article.. these are mobile parts that have an integrated GPU. Vendors like Apple have applications that take advantage of that iGPU. YOU ARE NOT MAKING ANY SENSE.
Stop with the Intel apologising. They screwed up and are losing on most of their own metrics. 99% of their users don't utilize AVX-512, and that's where AMD has them beat.
I'm not Intel Apologizing. I just don't understand why Anandtech would eliminate GPU benches from their tally of winning benchmarks on mobile parts, where the iGPU is an important factor for performance... Well, actually I do know why... It was to generate that click-bait headline.
The GPU's advantages are only constrained to Ice Lake because Intel didn't port the GPU to 14nm.
The article is primarily denoting that the clockspeeds from 14nm paired with the memory bandwidth from LPDDR4X mean that a last-generation node is delivering a more powerful CPU than the architecture improvements.
A similar-size GPU might still beat the newer generation of graphics, also due to clockspeed benefits, but we don't know that because the number of EUs in each part is dramatically different.
The tests you mentioned would be pretty "real world". They would also perform like a dog on Intel's iGPU, hence why they didn't use them and used a pocket benchmark instead.
It was just a couple of years ago that NVENC was added to Handbrake and Staxrip, before that it was added to a few commercial/freeware apps with mixed results.
It's only been a quality option for a short while now. Initially the only software that supported NVENC had little to no quality options aside from bitrate.
NVENC is useless in almost all use cases other than converting video for your phone and realtime recoding while casting. Casting wasn't even a thing back then, and streaming had already taken over.
“ I'm seeing this trend a lot in recent reviews, where people focus on CPU aspects to magnify Intel's problems to generate that sensational headline. I get it .. you are vying for an audience.”
Good point. I am seeing similar practices on many other sites as well. Catchy headlines and colorful opinions instead of presenting information or test results as is. I guess it pleases the fanboys and generates traffic.
You see similar things on other sites? Great. It's not happening here. The headline is "Intel's confusing messaging - is Comet Lake better than Ice Lake". The article provides analysis of Intel's own publicly available figures in an attempt to answer that.
You know the trend I see? Obtuse comments that don't relate to the article in question but instead pick on some vague theoretical "trend" visible only to the commenter, which generate equally bland "won't somebody think of the children" style responses that don't actually move any discussion in any particular direction.
You're agreeing with a guy who's complaining that an article about CPU performance talks about the performance of CPUs. It doesn't get much more inane than this.
All you've "proven" is that these threads are being stacked with the kind of comments that piss me (personally) off, and that I've inexplicably taken it upon myself to respond to most of them. None of that demonstrates anything about Anandtech or the reasoning behind their editorial decisions, which was your claim.
Your inane replies continue to add to the body of evidence in favour of my point, namely that the threads are being astroturfed by gormless armchair critics and shills with nothing to add.
The "real-world benchmark" push would be great if they were being honest about it, but as it stands it's a painfully obvious case of "do as I say, not as I do".
That gaming slide in particular is the worst kind of misleading junk. They kick the graph off with 2 synthetic tests that are famous for producing results that naively (and unrealistically) scale with threads and clock speed, then follow with a "combined" test that's just one of those results balanced with the GPU results. The worst bit is that the remaining results would still give them a win based on a fair comparison, yet they still felt the need to shit up the graphs by highlighting that apples-to-steroid-enhanced-oranges trick they pulled with the RTX 2080. Their justification would only really make sense if they had separate graphs for "highest performance offerings". Grim stuff.
Three cheers for Ian putting the effort in to compare the CPU performance graphs and give us the Ice / Comet comparison. Not only is it genuinely useful, it makes their futile efforts to obfuscate the data all the more amusing.
Just because Intel said that real-world applications are what matter most does not mean they cannot show some advantage in some professional niche application. Their own benchmarks have a mix of both. Nothing wrong with that.
Ian's (valid) criticism about Intel's choice of benchmarks was mild and constitutes at most 10% of this article, if you're being particularly sensitive. One wonders why so many random low-quality comments chose to focus on that rather than the meat of the article.
As it is, your comment equates to this: "Just because they said one thing and did another, doesn't mean you're allowed to point out that they said one thing and did another."
Truly, we are living in the age of disinformation that somebody could hold such a worthless opinion and feel it worth posting.
Is it really that confusing? They are both listed as 10th generation, so their performance should be roughly on par with each other. What would be confusing is if they had different generations yet performed about the same.
Pro tip to the operator of the m53 sockpuppet: the whole "I agree with this guy" thing is less obvious when you don't do it multiple times for the same article.
The article proves its central claim using Intel's own evidence, which is that their 10nm processors don't outperform the 14nm ones. Simple stuff, and not very dramatic in presentation or impact.
Cue the reply guys griping about drama that's not there - to be precise, "trying your best to find an issue with the article when there is no issue".
Like you yourself are doing? From what I have read, you do more of the trolling, m53, than Spunjji does. At least his posts have something to them, where yours seem to be mostly BS and FUD. But let me guess: your only rebuttal will be that this is just Spunjji on a different account, because you have nothing useful to say...
Yeah, OK. You can't prove your points against Spunjji, so now you're pretty much resorting to garbage. Your posts are more troll-like than his. You seem to have skipped over eva02langley's direct reply to you, and instead have been attacking Spunjji and now me, because you can't prove anything you type. I'd guess anatun is your alt...
Nope. You can't prove your points, so you attack Spunjji here, and are attacking me in another thread. So who's the troll? When are YOU going to post on one of YOUR other 4 alts that you have on here? Or will you just keep attacking people when you can no longer prove anything? Either way, have a good day. No point in replying to you, as all you do now is accuse people of things you can't prove.
Spunjji aka Qasar aka Korguz aka Pro-troll: Not sure why you are getting angry. I am actually grateful to you for showing me some pro-tips. Sad that you won't reply to me anymore. Will miss your pro-tips. :(
That's what I've been thinking. Intel has basically split their mobile lineup into: Compute Core (better computing performance) and Graphics Core (better graphics performance).
What is the meaning of the article? It seems like a hit at the marketing guys at Intel. But all that Ian is saying is true; I just don't understand why he is now actively criticizing Intel's marketing department. I mean, in the past Intel's marketing also had some low points, but we didn't see the same judgemental stance. I'm surprised he didn't mention that the comparison with the new Comet Lake stepping supporting LPDDR4X is potentially going to scale the results even more favourably for Comet Lake parts. Anyway, I don't want to sound too judgemental - overall, good article!
The article's point is to continue to air Ian's frustrations with Intel. It's like his tarnished Xeon 9200 article, where Ian's only real complaints were that the price isn't public and that he didn't get a free one. It is just a "woe is me" post; move along. This site was better when opinions were kept out of the articles and it was just the facts.
The point is that Intel is hyping its 10nm as having "18%" better IPC, and everyone says it's so much better than 14nm++.
And the article is asking: is it, though, really? Ice Lake: lower turbo, higher IPC. 10th-gen Comet Lake: higher turbo, lower IPC.
1065G7: $426 - 4 cores
10710U: $443 - 6 cores
Not to mention Ice Lake's power draw is insane (though clock gating gives it good battery life in benchmarks, along with LPDDR4X and 1-watt screens). Ice Lake is being marketed as a premium product, but it's only $15 cheaper than the 6-core part, which is better all around except in graphics. So who would really want the 4-core over the 6-core?
Obviously manufacturers see this too, because there aren't that many Ice Lake laptops out.
Then add in the competition: Zen 2, 8c/16t at 15W TDP, for a $399 tray price.
---- TL;DR: When you're telling your investors and the public "Everything's fine, Ice Lake's so much better than 14nm", but then your own cherry-picked slides show the 10th-gen 14nm part beating Ice Lake in everything except graphics, with 2 more cores.
And "overclocking" doesn't count here, because it's a laptop. So regardless of the IPC boost, if it ships with a 1GHz lower boost clock, it's going to stay 1GHz lower forever.
First-hand experience here: Refurbished 1065G7/16GB/512GB/1080P XPS13 2-in-1 is much harder to buy, and ~$150 more expensive than refurbished 10710U/16GB/1TB/4K regular XPS13. The only advantages of the 2-in-1, aside from the CPU and LPDDR4, are the digitizer and a different hinge.
I see your problem now: you don't want to actually discuss facts. Here is what you posted with your so-called quote: (A) "When you're telling your investors and the public 'Everything's fine. Ice Lake's so much better than 14nm'." And now you are posting: (B) "The 'quoted' part is what an Intel fanboy would say about Ice Lake being their savior".
Which is it? Is it (A) Intel telling their shareholders or (B) what an Intel fanboy would say? You can't have it both ways.
He really can; on the other hand you're accusing someone being openly critical of Intel of being an Intel shill, and you *really* cannot have that both ways.
All the Intel fanboys claim "Ice Lake's great, it's going to kick ass." Intel, in all of its slides, is promoting it as its best product. Even the products it's being put into are more expensive.
No, the article's point is that they shared figures that tell an interesting story but refused to tell that story themselves, and he was wondering why.
The "why" is that the story it tells doesn't look good for their product.
That was all in the article and I feel weird at having to explain it when it was written there, in English, the same language we are all commenting in here.
This site was never free from opinion. There used to be flame wars over how partial Anand himself was to Apple products. Take your "I miss the good old days" nonsense and can it.
One of the good things about Ice Lake is the Thunderbolt 3 / USB4 integrated into the CPU, offering up to 4 ports which aren't forced to share a measly PCIe x4 connection to the PCH like Titan Ridge.
Except, that being one of the major benefits, you'd think manufacturers would be ready to offer laptops with those 4 supported ports, to finally compete with the MacBook Pro. And you'd be wrong. Not. A. One. :-(
Who cares about Thunderbolt? Not everyone needs or wants it. For some it's a feature checkbox; for others, who cares. It also seems the reason why AMD doesn't have it is because Intel needs to certify it, and for some reason Intel won't certify it for AMD's platforms. Someone managed to get it to work on AMD systems, but had to use an add-in card with a hacked BIOS/driver for it to work.
Actually plenty of people care about thunderbolt. I have seen comments like, “who cares about a laptop”, “who cares about desktop”, “who cares about a PC”, and so on. Just because you don’t use it doesn’t mean there aren’t plenty of people who want to use it.
And the same can be said the other way as well. No one I know cares if a computer has it or not, but you may know people that want it. I just find there is too much weight put on whether a computer has it or not - as if, when it doesn't have it, it's garbage and not worth buying.
No, I didn't buy it just for TB3, but I definitely wouldn't have bought it without TB3, and if there were a similar model with TB3 off the CPU, that would be the one I'd get.
Are you complaining about the AVX-512 instructions too? Cuz not everyone uses those. Intel is struggling to find *anyone* to use those.
Lots of people have laptops, and lots of people are interested in eGPU's for gaming or graphics work on their laptop. That's who cares about Thunderbolt.
And where did I even imply *everyone* had to care about it? I'm merely reinforcing the article's point: if you take away perf, does Ice Lake still bring anything to the table? So if you were going to answer "Thunderbolt", because you were excited for those 4 high-speed ports it enables, you can forget that, because OEMs are lazy and haven't bothered to update their Ice Lake designs to even take advantage.
The OEM still has to figure out port positioning, power and signal routing... etc. It's a better situation than it was before, but implementing Thunderbolt is still tricky and still adds cost, whereas most PC OEMs would rather save a few cents on BoM. :/
It's kind of weird that the AMD system isn't listed in the comment here, because if it's the same in both slides, it means Intel effectively treated both Intel systems as comparisons against the same AMD system.
So if
the AMD system ≈ Ice, and the AMD system ≈ Comet,
but you're saying Ice is not ≈ Comet,
then the comparisons with AMD are a bit FUD. You can't have it both ways.
But, Ian, Intel itself has said Ice ~ Comet. That is why they are both 10th generation.
You are over thinking it. It is as if you want Intel's 10 nm to be better than 14 nm+++, and then are sad that it isn't better. This is one of the few times that Intel's marketing is quite clear. They are both 10th generation and will trade punches depending on the application.
Intel doesn't make a direct comparison between Ice and Comet in these slides, yet when you put them side by side, which this article does, Comet comes out ahead in more meaningful real-world tasks than Ice. That's precisely the point of the article.
At one point, Anandtech had a rule-of-thumb that differences less than 10% were not noticeable. In general, I agree with that point. The only real tests where that is not valid tend to be calculations that take days/weeks/months and aren't even included in reviews anyways.
Given that rule-of-thumb, only two tests were significantly different: PowerPoint PDF Export and Word Mail Merge Error Check. Neither of which are that important of a use case and Comet won one while Ice won the other.
So, I put them side-by-side and still see them as basically equivalent.
That's just you rationalising the fact that one is performing better than the other on average. Exporting PowerPoint presentations as PDF is an extremely common use case in any corporate or academic environment.
If a 10 percent difference doesn't matter except in cases where calculations take a long time, why is the 9900K, for example, still the recommended CPU for high-refresh gaming?
So no, a 10 percent difference is not equivalent under any circumstance.
When it comes to tech, isn't "overthinking it" what Anandtech is for?
Your use of emotive language to describe Ian's findings tells us more about how you perceive this than it does about the article. Ian's not "sad" Ice Lake isn't better, he literally just used Intel's own numbers to demonstrate that fact.
Put yourself in the shoes of the masses who don't follow the intricacies of processors. Both cost almost the same, both are 10th generation, both perform about the same. Why should Intel try to differentiate them? Trying to compare and contrast the two would just confuse the general public when they should be considered roughly equivalent CPUs (with some differences depending on exact use case).
The only reason to keep both lines is so that Intel can keep up with the high demand for processors in this time of production limitations.
Six wins for Comet, three for Ice (one of them irrelevant, as the article correctly argues) and two draws. That's not roughly equivalent by any metric.
The problem with counting wins instead of looking at win magnitude is that I could run 10,000 essentially similar benchmarks with CPU #1 holding a 0.1% lead, and 10 benchmarks with CPU #2 holding a 100% lead. So, does that make CPU #1 a thousand times better? No - if there were any winner, it would be CPU #2.
What really matters is breadth and depth. How many completely different applications is a CPU ahead in, and is the lead significant enough to be noticeable by a human? Then you can say a CPU is in the lead. Here, only 2 benchmarks would be noticed by a human and they are split with each CPU winning one.
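The win-counting pitfall described above can be sketched with a quick hypothetical calculation (the scores are invented to mirror the thought experiment, scaled down to 12 benchmarks), comparing raw win counts against the geometric mean of per-benchmark speedups:

```python
import math

def geomean_ratio(scores_a, scores_b):
    """Geometric mean of per-benchmark speedups of CPU A over CPU B."""
    ratios = [a / b for a, b in zip(scores_a, scores_b)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical scores (higher is better): CPU #1 leads 10 benchmarks
# by 0.1%, CPU #2 leads 2 benchmarks by 100%.
cpu1 = [100.1] * 10 + [100.0] * 2
cpu2 = [100.0] * 10 + [200.0] * 2

wins = sum(a > b for a, b in zip(cpu1, cpu2))
overall = geomean_ratio(cpu1, cpu2)
print(wins)              # CPU #1 "wins" 10 of 12 benchmarks...
print(round(overall, 3)) # ...yet is ~11% slower overall (ratio ≈ 0.892)
```

Counting wins declares CPU #1 the winner 10-to-2, while the aggregate speedup says it is about 11% slower - which is the point about magnitude mattering more than counts.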
But at the same time, Intel picked these benchmarks to be meaningful - not reviewers. So this should basically show the full picture, if these are what was chosen as representative.
(And it was against a Zen+ APU, so obviously it was going to win either way. There was no need to skew the applications used)
Console emulation testing... Would you guys consider adding PS3? RPCS3 isn't as far along as Dolphin, but it takes better advantage of >4 threads and is more demanding.
While exporting PowerPoint to PDF is a realistic workload, it's also pretty much irrelevant from a performance perspective. Today I was exporting a 300+ page presentation to PDF, and on a laptop with an i5-8350U it took less than 1.5 minutes. That's (i) less time than I need to write the cover e-mail, and (ii) irrelevant compared to how much time went into creating the presentation. With respect to multithreading, only about one and a half threads were active.
That does not mean PowerPoint is a "solved problem" from a performance perspective. There are still things that bug me, though we are talking mostly about "snappiness". None of these things are snappy enough: loading files, quick view in slide-sorter view, quick transitions between modes, copying/moving larger numbers of slides, synchronization over SharePoint, and saving files.
One task I have to do frequently on my work laptop is the reverse: converting a (big) PDF into a Word or PowerPoint file so I can edit it. That can get my current i7 (U) hot enough to run the fan at full tilt, and takes a while. I wonder if that would be of interest to others, too?
🙄 Well yeah, according to your first review of Ice Lake. Intel's fault here is the lack of processing cores in Ice Lake despite the smaller die size. It is too small vs Comet Lake - it should have been at least 6 Ice Lake cores vs 6 Comet Lake cores.
So, what is the single core/single thread performance across several CPU-limited tasks of Ice Lake and Comet Lake laptops specified for the same TDP? Is there such a tabulation? That kind of information would answer at least just how comparable they are across various laptop manufacturers. Based on Ian's numbers, the "mature" 14 nm Comet Lake looks like it probably holds its own against Ice Lake, and beats it for several tasks due to higher single core turbo frequency. But, again, a table with data from several systems for each would be helpful.
It's not 10th gen, but even against 8th gen it is shocking. And the 1065G7 is in an SDS which is going to have better thermal headroom than probably all production parts.
And yet... Cinebench single-core:
Core i7-8559U (4c/8t, 4.5GHz boost): 188 points
Core i7-1065G7 (4c/8t, 3.9GHz boost): 183/176 (25W/15W)
Cinebench multi-core:
8559U: 671 min, 798 max (probably thermal limiting?)
1065G7: 751/521 (25W/15W)
There's a lot more inside the link if you want to see how it compares to the old gen. The new gen has a 400MHz higher boost and 2 extra cores... :-(
Oh boy, I'm starting to avoid articles and reviews about Intel. It makes me cringe so hard, seeing how low Intel has fallen. When are we supposed to get 7nm again? '22? Or was it '25? Oh boy... First time in 20 years that I am considering an AMD processor again.
Are these being compared with like-for-like TDPs? Both the AMD and Intel chips are theoretically 15W parts, but the R7-3700U has a cTDP range of 12-35W and the i7-1065G7 has a cTDP range of 12-25W. Since it doesn't specify on the comparison slides (and the URL they contain is out of date), for all we know we could be looking at a 12W R7 versus a 25W i7.
I also find the GPU acceleration figures a bit odd. I would expect the Vega 10 to significantly outperform the Iris Plus... Or is the Topaz Labs test supposed to be AVX512?
The Topaz Labs test appears to be written specifically to benefit from Intel's GPU. Vega 10 trades blows with the GPU in the 1065G7 - it tends to win in real-world benchmarks, but not by a lot.
"For a technology like AVX-512 to only have eight enhanced consumer applications several years after its first launch isn’t actually that great": AVX-512 is not widely available in consumer CPUs. Previous-gen Ryzen did not natively support even 256-bit AVX and did fine in the consumer segment.
AVX-512 is used automatically for vector operations in any compute-heavy software, so you could say that it covers 99% of use cases where it's actually important.
I think you are misinterpreting and misrepresenting Intel's rhetoric about real-world benchmarking. They by no means told the press to exclude niche applications from their test suites. Intel's message to the press was instead twofold. First, the press needs to stop treating Cinebench as the ultimate benchmark, as if it were representative of most real-world workloads. Cinebench is instead only representative of performance in Cinema 4D and arguably a few other tile-based rendering programs. Now, Cinema 4D is of course a real-world application, but according to Intel it ranks below 1000 in popularity, so it makes little sense for it to be treated as representative of the performance of over 1000 other, more popular applications. Second, the press needs to have a richer and more diverse test suite and include more real-world workloads to help people in their purchasing decisions, by including software and workloads they will likely use. That test suite may still very well include Cinebench, but it needs to be 1 benchmark in, say, 30-40 (and that with the appropriate weighting), not 1 in just 3-4 (with almost all of the productivity weighting). If you are just going to include 3-4 benchmarks, you'd better pick software/workloads that are far more popular than tile-based rendering.
"The press needs to stop treating Cinebench as the ultimate benchmark" - that's funny. Didn't Intel use this same benchmark to tout how much faster its CPUs were than AMD's before Zen came out? But now that it shows AMD in the lead, they no longer consider it relevant? Come on - Intel is back to cherry-picking benchmarks to try to show its products are better than AMD's, and who knows what else it will do now. PCWarrior, do you still believe Intel's lies and BS?
Intel markets two similarly performing products as 10th gen. They literally never claimed Ice Lake to be better than Comet Lake; that is apparent from their 10th-gen lineup. Both Ice Lake and Comet Lake are 10th gen. One is better in raw CPU performance while the other is better in graphics, connectivity, and platform features. Why a tech site would publish a hit piece arguing over it is beyond me.
drexnx - Thursday, January 16, 2020 - link
Intel trying desperately to hide the elephant in the room that Comet/Whiskey are more performant than Ice and easier to make, too.
Also surprised they let that slide get out with "Sky Lake" vs. Skylake... they've only been refreshing the thing for 4 years now; you'd think they'd have settled on a name.
yeeeeman - Thursday, January 16, 2020 - link
Comet Lake has 2 more cores and that is the end of that...
Irata - Thursday, January 16, 2020 - link
Yes, and Comet Lake also has two more cores than the Ryzen 3000 APU series. But as the above article says, "To be honest, I agreed with Intel here – it wasn’t a graph designed to show like for like, but just how much performance is still on the table", so both are perfectly valid comparisons.
Now, whether the actual power consumption under those bursty loads was anywhere near 15W for Comet Lake is an entirely different question.
regsEx - Sunday, January 19, 2020 - link
6-core with a slow UHD 620 iGPU vs 4-core with a fast G7 iGPU. The ICL execution block is only 30 sq mm. And the die is still significantly slower, allowing a much faster iGPU in TGL.
regsEx - Sunday, January 19, 2020 - link
smaller*
Kangal - Friday, January 17, 2020 - link
Exactly. When comparing like-for-like, Comet Lake comes out as being faster, more efficient and cheaper. It is basically the last "Skylake" part, where the 14nm node is pushed to extremes, and the architecture is optimised to perfection.
Ice Lake is worse. The node is not as good - too immature. The architecture is not as good - not as optimised. So it's an easy decision for consumers to skip this. But Intel is trying to save face with their investors and product partners, hence why they are deliberately muddying the waters. The only thing going for Ice Lake is the better iGPU, but even that was clearly an afterthought.
I think consumers should skip the 9.5+0.1-gen Comet Lake as well. It's still Skylake, and not much of an improvement over its predecessor. However, the Ryzen 4000-U chips do seem to be worth upgrading to. And it's going to take Intel 2 years to catch up to (let alone surpass) this level of efficiency and performance... by which point AMD might still be in the lead.
As a general rule of thumb, you can make inferences to mobile from their desktop parts:
Ryzen 1500X = Core i7-2600k to Core i7-4790k
Ryzen 1600X = Core i7-3960x to Core i7-5930k
Ryzen 1800X = Core i7-5960x
Ryzen 2500X = Core i7-6700k to Core i7-7700k
Ryzen 2600X = Core i7-6800k to Core i7-7800x
Ryzen 2700X = Core i7-6900k to Core i7-7820x
Ryzen 3400G = Core i3-8350k to Core i3-9350kf
Ryzen 3600X = Core i5-9600k to Core i7-8700k
Ryzen 3800X = Core i9-9800x to Core i9-9900k
Ryzen 3900X >>> Core i9-10920x
Ryzen 3950X >>> Core i9-10940x
Targon - Friday, January 17, 2020 - link
The 4000U series from AMD are the Zen 2 based chips, so a 13-15% IPC boost plus improved clock speeds. They are really the laptop version of the Ryzen 3000 desktop chips, and we saw how well received THOSE have been.
You are right when it comes down to the problems that Intel is having. Very mature 14nm+++++ with high clock speeds vs. 10nm Intel chips that can't clock as high, with lower core counts. Even if IPC increased by 18%, a 25% reduction in clock speed would result in a net loss in performance. Go forward another year: 14nm will be even more mature, and 10nm still isn't ready for 8-core chips that can hit the clock speeds that AMD saw with the move to 7nm.
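The IPC-versus-clock trade-off above checks out on a quick back-of-the-envelope calculation, assuming the crude first-order model that performance scales as IPC × frequency (an assumption for illustration, not Intel's own formula):

```python
# First-order model (an assumption): performance ≈ IPC × clock frequency.
ipc_gain = 1.18    # Ice Lake's claimed +18% IPC over Skylake
clock_loss = 0.75  # a hypothetical 25% clock-speed reduction
net = ipc_gain * clock_loss
print(round(net, 3))  # 0.885 → roughly an 11.5% net performance loss
```

In other words, an 18% IPC gain is more than wiped out by a 25% frequency deficit, which is the core of the Comet-vs-Ice comparison.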
yeeeeman - Saturday, January 18, 2020 - link
In this class of notebooks, battery life is paramount. I don't think any businessman will buy the Ryzen 4000 series because it has 8 cores at a 15W TDP, because it is built on a 7nm process, or because it gets a better Cinebench or Geekbench score. You are being a little bit childish. In this category everyone cares about battery life, and companies/people usually buy established lines of notebooks like the EliteBook, Dell Precision, Lenovo ThinkPad, and also very thin 2-in-1 designs, which AMD doesn't yet have. So if battery life is worse than what Intel has with Comet Lake or Ice Lake, this will not be a home run for AMD just because they have more cores. 45W laptops might be a different story indeed.
Carmen00 - Saturday, January 18, 2020 - link
"Business" covers a remarkable number of activities. It's a bit far-fetched to say "any business man"; if anything, such an enormous generalization is a bit childish! I like a bit of portability for presentations, coffee shops, etc., but I'm rarely away from convenient power for more than 3-4 hours. Ryzen 4000 (might upgrade to it this year, why not!) looks like some nice icing on the cake, especially as far as the GPU is concerned...
lipscomb88 - Saturday, January 18, 2020 - link
Don't be pedantic. You're responding to a comment about the enterprise market with a sprawling definition about what business is. Also, what you like isn't what billion-plus-dollar companies purchase, which is why Intel has a market cap double-digit times what AMD does. I think Ryzen is awesome, but the real money is in business computing, which the person you responded to got right.
Spunjji - Monday, January 20, 2020 - link
Love too compare companies' current product ranges by comparing their market cap. This is definitely a good way too compare a products.
Being pedantic is a reasonable response to a poster who makes unqualified statements.
Gunbuster - Monday, January 20, 2020 - link
The majority of business is kick-the-can: they are going to the next 10-increment of the model number currently in use. SSDs are a given, thank the lord. You get the 6-core if you're lucky, and that still gets murdered by, at minimum, 6 layers of AV / security shims / patching / firewall & DNS / traffic decryption / drive encryption. And hell, you're livin' life if the "normal" model on offer has a full HD screen. That's the "business" PC/laptop.
Gunbuster - Monday, January 20, 2020 - link
And to be clear, the normal business user's input on battery life for a corporate fleet model decision is effectively zero.
BurntMyBacon - Tuesday, January 21, 2020 - link
This. Also, the ones making the decision oftentimes get a non-fleet model for their personal use.
BurntMyBacon - Tuesday, January 21, 2020 - link
Gunbuster's description is closer to my experience. Unfortunately for AMD, many businesses are prone to habitual buying (same company and product line). It usually takes a fairly serious incident for such a company to change their habits. If AMD wants to break into this market, the best thing they can do is convince the major players to quietly add their options into the major product lines (rather than create an AMD-specific line). They'd get even more market penetration if they can grab a default configuration or the cheapest configuration.
In the past, the major players (PC makers) have been cautious with AMD, mostly due to lack of capacity and inconsistent execution. While they still don't have anywhere near the capacity to displace Intel in any major sense, they have been consistently executing since the first Ryzen hit the market. Also working in their favor is the fact that Intel's capacity is down and Intel has not been consistently executing (relatively speaking). That all said, it is hard to increase your market penetration when you are already selling every chip you can make.
Santoval - Tuesday, January 21, 2020 - link
In other words, Intel have a problem of low supply and high demand, while AMD have a problem of low market penetration and thus no incentive to increase supply, despite having had *very* high, supply-depleting demand in the last few years. Intel have only themselves to blame, but in the case of AMD, shouldn't they at least match their high demand with adequate supply? If they wanted to, could TSMC meet that high demand with a higher supply?
Spunjji - Thursday, January 23, 2020 - link
BurntMyBacon and GunBuster both bringing the reality here.
HStewart - Saturday, January 18, 2020 - link
I don't believe it shows that it's worse - depending on the test, more cores may have benefits for Comet Lake. But per code Ice Lake is better than both Comet Lake and Zen. In 2020 we should see higher-core and higher-watt versions of 10nm - maybe with some architecture improvements in Tiger Lake.
I just don't believe that having more cores is the solution...
Korguz - Saturday, January 18, 2020 - link
"than both Comet Lake and Zen" - of course you think it's better than Zen; you are one of the very few that do, HStewart. We will know for sure when sites are able to run tests...
"I just don't believe that having more cores is the solution" - and why not? You still believe Intel's BS about the mainstream not needing more than 4 cores?
Spunjji - Monday, January 20, 2020 - link
HStewart won't believe having more cores is the solution until Intel has more cores.
See also: cache, clock speed, IPC... delete/reverse opinion as appropriate.
Korguz - Monday, January 20, 2020 - link
I doubt he will... he still believes all of what Intel says...
Santoval - Tuesday, January 21, 2020 - link
"But per code.." That's apparently "per core", but no, there is no evidence that Ice Lake is faster than Zen 2 "per core". If Intel's 10nm node clocked as high as their latest 14nm++++ variants, it would have been. It doesn't, though, and it doesn't look like it ever will.
And that leads to your next bit: in the whole of 2020 the highest-watt 10nm part expected from Intel is their pre-announced 28W Ice Lake-U. Nothing more. It will not have more cores, though. Both it and *all* Tiger Lake parts are going to be capped at 4 cores, with TDPs up to 28W (i.e. upclocked -U parts).
The desktop Ice Lake was canned and the desktop Tiger Lake was canned as well. The reason is that Intel has horrible yields at 10nm, 10nm+ and *every* 10nm node variant with dies above 4 cores and/or dies with very high clocks. When both somewhat large die sizes and high clocks are required, like in desktop parts, the yields are so low that it's impossible to fab anything without a big loss.
However, since the Ice Lake Xeon parts have much lower clocks and only have large dies, Intel might still release them in Q4 of 2020 or, more likely, Q1 of 2021. Their yields are still low, but not intolerably so, and the very high price of Xeon parts can well afford lower yields.
Do not despair though, the Rocket Lake backport will come to the rescue of the hardcore Intel fanboys who would not "betray" Intel even if Ryzen was 50% faster and 50% more efficient in every single benchmark! Rocket Lake will sport brand new Willow Cove cores, a Gen12(Xe) iGPU and much higher clocks. The trade-off? (Since Intel cannot beat the laws of physics) it will have a low power efficiency, thus will be burdened with a much higher TDP than the equivalent Ryzen parts.
Comet Lake-S/H should be the last 14nm release based on Skylake. Rocket Lake-S/H will replace it, perhaps in late 2020. Since Intel royally screwed up their 10nm process node I strongly doubt *any* S/H part will ever be released on that node. The first post-14nm desktop part from Intel will be released in late 2021 at the earliest, at 7nm. In other words in 2022 at best.
Santoval - Tuesday, January 21, 2020 - link
p.s. About that pearl: "I just don't believe that having more cores is the solution.."
Having more cores has been basically the *only* solution since 2005/2006. That's when Dennard scaling collapsed, which froze CPU clock rates. The collapse of Dennard scaling is the sole reason CPUs started having more than one core, since just improving CPU designs for higher IPC was, and is, insufficient.
Tweaking and optimizing a car engine in order to increase performance can only take you so far. When further increasing the engine's RPM was no longer an option the engine needed to become bigger and wider, with more displacement and cylinders. The analogy might be a bit crude, but it's not invalid.
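The Dennard-scaling arithmetic behind that point can be sketched numerically. This is a minimal illustration using idealized textbook scaling factors (not measured silicon data): per-transistor dynamic power is C·V²·f, and in the Dennard era a node shrink scaled dimensions, voltage, and capacitance by 1/k while frequency rose by k.

```python
# Idealized Dennard scaling, illustrative numbers only.
# Dynamic power per transistor: P = C * V^2 * f

def power(C, V, f):
    return C * V * V * f

k = 1.4  # ~one full node shrink (area scales by 1/k^2, i.e. roughly halves)

# Classic Dennard era (pre-~2006): voltage scales down with the transistor.
P_dennard = power(C=1.0 / k, V=1.0 / k, f=k)   # = 1/k^2
density_dennard = P_dennard / (1.0 / k**2)      # power / area ~ constant

# Post-Dennard: voltage stops scaling (leakage), so if frequency keeps
# rising by k each node, power density climbs by k^2 per shrink.
P_post = power(C=1.0 / k, V=1.0, f=k)           # = 1.0
density_post = P_post / (1.0 / k**2)            # = k^2

print(density_dennard)  # stays ~1.0: clocks could keep rising
print(density_post)     # grows ~k^2 per node: clocks had to freeze
```

Once power density stops being constant per shrink, the extra transistors have to go into more cores at roughly fixed clocks rather than faster cores, which is exactly the post-2006 shift described above.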
Korguz - Tuesday, January 21, 2020 - link
or adding a power adder to that car.. aka a turbo or supercharger...
Spunjji - Thursday, January 23, 2020 - link
Solid tip on Dennard scaling - you learn something new every day! Multi-core is more like having more cars than a faster car / wider engine, though :)
yankeeDDL - Thursday, January 16, 2020 - link
It seems to me that it's more an issue of trading blows. Both architecture and process have been optimized continuously and went through several iterations. I think they have reached the limit, and they know it, hence they bumped the GPU (and it does make a difference). It is an impressive feat to have reached this kind of optimization, BUT I think they finally fell behind Ryzen 4***. They lots years of advantage, and if they don't move to 7nm soon they will keep slipping.
yankeeDDL - Thursday, January 16, 2020 - link
*They lost...
Gondalf - Thursday, January 16, 2020 - link
How do you know they are behind Ryzen 4000 Mobile? Do you have a review? Do you know the clock speeds? They only say "up to"... so at 15 W or at 25 W? With silent cooling or not? Basically AMD said nothing, so better to wait for thin, premium laptops to judge who is better. I'd also remind you that Ice Lake is on 7nm-class silicon (Intel 10nm) right now.
TheNorthRemembers - Thursday, January 16, 2020 - link
Gondalf, don't worry, your intel stock won't plummet too much. You'll be OK.
Korguz - Thursday, January 16, 2020 - link
and yet.. gondalf.... you say the same things when talking about intel vs ryzen mobile 4000.. you keep saying how RM4000 still can't compete with intel.. you have no benchmarks.. or proof.. but you keep spitting out the same bs.. how is this any different ??
yankeeDDL - Friday, January 17, 2020 - link
No, I don't have a review: I have the data showcased by AMD *and* the comments made by those who were at CES, including Anand's team. The GPU of Ryzen 4000 is clearly ahead. On the CPU side, we already have Zen 2 beating Intel on desktop. Laptops are different beasts, but I would not expect totally different results. Hence why I said "I think". Time will tell, but for sure, a few years ago Intel was light years ahead of AMD and today that is definitely not the case.
yeeeeman - Saturday, January 18, 2020 - link
The GPU was already ahead in the 3000 series. Now, based on preliminary benches, it seems the 4000 series, because of its lower CU count, is only 10% better than Ice Lake with 64 EUs.
yankeeDDL - Saturday, January 18, 2020 - link
I read exactly the opposite (https://wccftech.com/amd-ryzen-4000-renoir-vega-gp...
lmcd - Saturday, January 18, 2020 - link
No, that's not true -- Ice Lake is out now and AMD 4000 is not. Intel's graphics at the moment are indeed ahead. The trick with that is that the primary difference is memory bandwidth, which is finally rectified by placing a much-improved memory controller on-die.
Korguz - Saturday, January 18, 2020 - link
and you have proof of this ??
yankeeDDL - Tuesday, January 21, 2020 - link
I think the only data available, as of today, is this: https://www.reddit.com/r/Amd/comments/emp35f/new_b...
El_Rizzo - Friday, January 17, 2020 - link
You want your benchmark? Look at where the money is going. OEMs are not idiots; they make market choices based on hard data (which they get to see with ample margin before launches). Given the sheer number of new design wins for AMD, I fully expect the Ryzen 4xxx series to be at least on par with what Intel is going to offer in the near future. Given Intel's contra-revenue practices, maybe even better.
Smell This - Thursday, January 16, 2020 - link
I would be interested in comparing the 'cell' libraries and structures at 7nm -- regardless of the fin/gate structures (and defects). Chipzillah backed-up with Kaby, and Comet at 14nm ('optimized') generally, to date, performs at IL 10nm (Is that fair?).
Presumably, 10+++ (!) forward optimizations will beget 7nm improvements to compete with TSMC N5 (this year?). It's a dog-eat-dog world, and Intel is wearing Milkbone shorts right now.
Don't look back, Gondaft. Something might be gaining on you.
http://satchelpaige.com/quote2.html
yankeeDDL - Friday, January 17, 2020 - link
It would, indeed. It is often said that Intel is very aggressive in terms of density. This was an advantage in the past, but it seems it is one of the reasons why 10nm is taking so long. The other thing is gate length: 7nm is 7nm, meaning it does bear some electrical advantages over 14nm (simplifying: lower power, everything else being equal).
Thing is, AMD is already designing on TSMC's 5nm, while Intel does not seem to have anything on 7nm, and is struggling to make anything on 10nm. It seems that the gap is only going to widen moving forward.
eek2121 - Friday, January 17, 2020 - link
The reason Intel is having so much of an issue with 10nm is the same reason AMD CPUs struggle to hit higher clock speeds. At 7nm/10nm, heat keeps you from clocking super high due to the density of the chip. Intel has historically relied on high clock speeds for dominance; now they have a dilemma: either try to develop a chip that can clock high and stay cool, or drastically increase IPC to match. AMD started on a low-power node, so they already have experience working in a limited thermal envelope. Intel has a fantastic 14nm process, but now they must deal with the reality of building a new CPU design optimized for 10nm. The process itself is not an issue. I expect Intel has 10nm down to a science. It is the architecture of the CPU. I would love to provide more evidence of this, however I am on mobile right now. I will point out, though, that overclocking results from the past few years tell the entire story, even absent other evidence. Die shrinks for free frequency increases ended somewhere around 32nm.
Smell This - Saturday, January 18, 2020 - link
I'm not the guy to do a "Deep Dive" on functional/structural cell issues, and generally agree with what you are saying, but my understanding is AMD made a concerted effort with the 'Cat Cores' and high density cell libraries at 28nm.
So much so that AMD's 28nm cell density was generally equivalent to Chipzillah's 14nm, especially given that Kaby went backward on fin/gate structures. The issues, as I see them, are a proper mix of short/tall cells and logic, and execution. AMD and TSMC (and GlobalFoundries) have got their groove on -- Intel these days, not so much.
Intel will get their groove back, but at what cost? Adding 'bolt-on' EUs is not the answer, otherwise they're spinning their wheels until Xe matures and the uncore, logic, etc., find the right mix.
lmcd - Saturday, January 18, 2020 - link
AMD 28nm density needed to be good because of how much space a Bulldozer core block took up. That's part of why people complained so much about Bulldozer -- it seemed like there was little reason a die-shrunk Thuban couldn't compete with Intel.
Smell This - Saturday, January 18, 2020 - link
Psssst... You did not respond to a thing I said, did you?
AMD evolved their cell libraries, significantly increasing density AND reducing power, with their Carrizo APUs at 28nm (some on 'bulk', but I recall an 'LP' process, too).
And Dude ... your 'die-shrunk' talking point is old and busted.
Spunjji - Monday, January 20, 2020 - link
That's pretty ahistorical. Even assuming a meaningful clock boost could have been achieved from the die shrink (and this is a big if - Thuban didn't clock so well), it was still far behind Intel's competing architecture when it came to IPC.
Spunjji - Monday, January 20, 2020 - link
"The process itself is not an issue." It's not clear that this is true - they still haven't built anything larger than a 4-core chip on 10nm, and the most obvious reason for that would be to keep die size low so that yields remain high.
The existence of "10th generation" 14nm product lines bolsters this theory: they simply can't produce enough processors on 10nm to fulfil demand.
ikjadoon - Thursday, January 16, 2020 - link
Was it confirmed they were both using the same chassis w/ the same cooling design & same PL2 & Tau? Because at the outset, it seems the 4C/8T 10nm+ Ice Lake (i7-1065G7) would be more power efficient than the 6C/12T 14nm+++ Comet Lake (i7-10710U). The slides say "See backup for configuration detail", which doesn't share much.
The easy rebuttal from Intel: the Ice Lake system ran cooler and drew less power, which they'll call a "user experience win" (as Intel's TDPs can sandbag, e.g., "65W" 4C/4T CPUs often drew far less).
ikjadoon - Thursday, January 16, 2020 - link
If these tests use Intel's SDS, I'd point to Anandtech's previous reporting: "Like other reference designs (such as Qualcomm’s), these [Intel SDS] units are designed to work, and are for the best part thermally unconstrained. The fan is on all the time, there are massive bezels, and the device itself is a bit chunky, to provide all the ports that the chip can provide."
Source: https://www.anandtech.com/show/14664/testing-intel...
In a thermally unconstrained system, the hex-core with massive boost is going to beat the quad-core at average boost—and I'd not be surprised.
But how many laptops today are anything close to "thermally unconstrained"? In an actual laptop design meant for 15W, how would they compare?
Santoval - Thursday, January 16, 2020 - link
The 6-core Comet Lake does not necessarily have a higher TDP. If the chassis and cooling solution max out at ~15W for continuous operation, that's its limit (at base clock), with short or very short turbo bursts. Bear in mind that Intel calculates TDP only from the base clock. It takes no account of turbo clocks, which are either very short or a bit longer depending on how long the cooling solution can thermally handle the TDP-up limit of 25W. Comet Lake has 6 cores but a *much* lower turbo frequency (3.9 GHz instead of 4.7 GHz), so that its TDP-up while at the turbo frequency is also 25W, just like Ice Lake. It might sound counter-intuitive, but increasing cores while slashing base and turbo clocks is more power efficient than retaining the same number of cores while increasing both clocks. The trade-off of adding more cores is lower single-core performance and larger dies (and thus lower yields and higher costs), not lower power efficiency - provided, of course, that clocks drop accordingly.
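That cores-versus-clocks trade-off can be sketched with a toy model. Purely for illustration, assume voltage scales roughly with frequency near the top of the V/f curve, so per-core dynamic power goes roughly as f³; the core counts and clocks below are just example numbers, not any specific SKU's measured behavior.

```python
# Toy model: per-core dynamic power ~ V^2 * f, with V roughly
# proportional to f near the top of the curve => power ~ f^3.
# Throughput on an embarrassingly parallel load ~ cores * f.
# Arbitrary units; illustrative only.

def package_power(cores, ghz):
    return cores * ghz ** 3

def throughput(cores, ghz):
    return cores * ghz

quad = (4, 4.7)   # fewer cores at a high boost clock
hexa = (6, 3.9)   # more cores at a lower boost clock

print(package_power(*quad), throughput(*quad))
print(package_power(*hexa), throughput(*hexa))
```

In this model the 6-core does more total work for less modeled power than the 4-core, at the cost of single-core speed: the trade-off described above.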
Fataliity - Thursday, January 16, 2020 - link
Whatt???
Ice Lake - 4c/8t, 3.9GHz boost
10710u - 6c/12t, 4.7GHz boost.
Ice Lake hits up to 50W in the Surface review. Not 25W.
The 10710U probably also hits 50W in boost, until thermals constrain it to ~30 watts.
Fataliity - Thursday, January 16, 2020 - link
Also, the tray price is almost identical. Around $420 per CPU.
ikjadoon - Friday, January 17, 2020 - link
This aside doesn't really answer the question. It's possible the TDP is accurate across both systems, but it's possible the TDP is sandbagged. You'll note I said *can* be sandbagged, not that it *is*... because this is such a minor point in the overall flawed analysis. You're confused on a few points:
-- Comet Lake has more cores *and* a higher Turbo, but a lower base clock and thus "equal" TDP.
-- Turbo Boost has nothing to do with TDP-up. Turbo is PL2, often 30 W to 50 W in most 15 W-designed motherboards. TDP-up limits the base clocks, which we do understand. PL2 is a completely separate power level, unconstrained by whatever tiny TDP was given.
-- Comet Lake & Ice Lake boost **FAR** beyond 25 W. Dell's XPS 13 2-in-1 i7-1065G7 turbo boosts well past 40 W.
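Intel's publicly documented turbo budgeting works roughly like this: the package may draw up to PL2 while an exponentially weighted moving average of power (with time constant Tau) stays below PL1, after which it settles to PL1. A toy simulation makes the shape of the behavior concrete; every number here (15 W / 44 W / 28 s) is hypothetical, not measured from any laptop in this discussion.

```python
# Toy simulation of the PL1/PL2/Tau turbo budget: boost at PL2 while the
# EWMA of package power is below PL1; once the budget is spent, fall
# back to PL1. All numbers hypothetical.

PL1, PL2, TAU = 15.0, 44.0, 28.0   # watts, watts, seconds
DT = 1.0                            # simulation step, seconds

def simulate(seconds):
    ewma, trace = 0.0, []
    alpha = DT / TAU                # EWMA weight per step
    for _ in range(int(seconds / DT)):
        power = PL2 if ewma < PL1 else PL1   # boost until budget is spent
        ewma = ewma * (1 - alpha) + power * alpha
        trace.append(power)
    return trace

trace = simulate(120)
boost_seconds = sum(DT for p in trace if p == PL2)
print(f"sustained {boost_seconds:.0f}s at PL2 before settling to PL1")
```

The point of the sketch: how long a chip sits at its 40-50 W PL2 before dropping to its 15 W PL1 depends entirely on the Tau and PL2 values the OEM configures, which is exactly why comparisons are meaningless without knowing those settings.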
In the end, all the major questions posited by Anandtech's "analysis" are left unanswered. It's trying to reverse engineer a Windows dll on a TI-84 Calculator. It's possible, but not with the information & tools given.
1. Were all power & heat variables kept the same between these two tests? Heatsink size, fan size & speed most of all, but all the "little" stuff too (TIM, ambient, heatpipe length). When boosting to 50 W in such tiny systems, a small thermal dissipation difference can significantly alter boost speeds.
2. Most importantly and most critically, was this done in a typical laptop, that can't sustain a 50 W boost for very long? Or in the ridiculous, non-typical behemoth Intel SDS reference design that could ideally live in PL2's heat output perpetually?
If it's in SDS, I imagine the hex-core can scrape out wins not uncommonly. In thermally constrained environments = every 15 W laptop design, I'd be curious how aggressively the Tau & PL2 are set, versus Intel's "ideal" system.
In conclusion, the i7-10710U is probably actually faster in a few cases, but by testing it in an unconstrained thermal system, you exaggerate the differences significantly.
ikjadoon - Friday, January 17, 2020 - link
I'm not trying to "attack" you, as your point (while small in this analysis) is accurate: you can lop off frequency (decrease power) and add more cores (increase power). That's... been true for decades. And it's true in Comet Lake, too: https://www.techspot.com/review/1926-intel-core-i7...
// as an aside
It's high time we stop allowing reviewers to test thin-and-light laptop CPUs at insane power output. Who honestly wants 50 W just cascading through a thin metal chassis onto, presumably, your thighs? Overly aggressive PL2 states and too-long Taus put out "great" benchmarks, but these things are nowhere near comfortable for hand use at that point.
Laptop CPUs have different (and more interesting) constraints.
But, I do understand: Anandtech, every other media outlet, and consumers have been denied 10nm+ silicon for nearly half a decade, so I sympathize with wanting to run serious analysis on whatever is available.
But, we should do these with systems in hand, instead of Intel's rightfully skewed benchmarks.
ikjadoon - Friday, January 17, 2020 - link
*rightly noted
Fataliity - Sunday, January 19, 2020 - link
Here's the thing. WE ALREADY GOT SYSTEMS IN HAND!??? https://www.notebookcheck.net/Our-first-Ice-Lake-C...
The 1065G7 loses in every CPU test and only wins in GPU tests vs a Core i7-8559U (4.5 GHz boost, 4-core/8-thread).
It also has the 15/25W scores. And the 15W scores lose to the 8000 series even worse.
So I don't understand the issue? The Ice Lake CPU is released. And Comet Lake is basically a refresh of the 8000 series with slightly better turbos and efficiency. So if an 8000-series part beats Ice Lake at equal cores/threads, then a 10000-series part with 2 more cores is going to destroy it.
AT THE SAME PRICE.
eva02langley - Thursday, January 16, 2020 - link
Honestly, this just shows how disastrous 10nm really is.
Zizy - Thursday, January 16, 2020 - link
Yeah, this was pretty expected and shows that 10 nm is still pretty bad. But I don't think people will feel the performance difference as much as they will notice the lower power (= better battery life) offered by LPDDR4 and other stuff present in Ice.
ksec - Thursday, January 16, 2020 - link
I don't even have the time, nor want to bother, to verify Intel's data. Could anyone point out anything doggy from those Intel data against the Ryzen 3700U? Were they in different wattage profiles?
Fataliity - Thursday, January 16, 2020 - link
They were against the Zen+ part, not the new 4000 series (Zen 2). So obviously they won. Zen+ had lower IPC than 9th gen / 10th gen, way lower than Ice Lake.
Irata - Friday, January 17, 2020 - link
Well, the top-end system costing over $3k has two SSDs in RAID, an RTX 2080 that boosts very high (looks like desktop speeds), 32 GB RAM... basically top-of-the-line everything. Apart from that there wasn't anything dodgy.
The_Assimilator - Friday, January 17, 2020 - link
But was there anything doggy?
yeeeeman - Thursday, January 16, 2020 - link
It is pretty clear that they are in a difficult position, in which they have this new product but it can't outperform the old one. The main issue here is that Intel simply cannot sell Ice Lake in $500 laptops because it is based on a very expensive and very long-delayed process. So yeah, no real surprise here. They do what they can in these circumstances.
Hulk - Thursday, January 16, 2020 - link
Seems like most laptops these days are thermally constrained. So without having each CPU in the same chassis it's really impossible to determine which is faster in the "real world."
Irata - Thursday, January 16, 2020 - link
If you look at actual reviews, most will give you battery run time at idle and light load (i.e. web browsing and watching a video) while also showing benchmark numbers at full load. If many of those loads are bursty, i.e. do not require the CPU to run at full speed over longer periods, Comet Lake should still come out ahead, as it should manage to go full turbo for these brief periods.
So you get the best of both worlds in those reviews - high performance and great battery life in idle and surfing the web.
Very few sites actually do show laptop power consumption when gaming or running under high load.
TheNorthRemembers - Thursday, January 16, 2020 - link
Not true really, you can take the CPU out of the chassis.
Fataliity - Sunday, January 19, 2020 - link
I don't understand this argument.
1) You're buying a laptop.
2) You're going to use the laptop the way it was made. Pulling the CPU out of the laptop and scoring higher with more thermal headroom doesn't make a better product. Because that's not how it's being used! That's like claiming a 9900KS at 6.5GHz on liquid nitrogen is the reason it's better than a 3950X.
Reviews are testing the configurations for sale of the product. For people who buy the product.
Not what the abstract performance "could" be outside of thermally constrained environments (that they come in).
Otherwise, why not just test the desktop variant?
.....The stuff people say.
PaulHoule - Thursday, January 16, 2020 - link
Intel has been listing for years and doesn't recognize this or have any insight as to why. They should hire Charlie Demerjian for CEO, or at least head of public relations.
The_Assimilator - Thursday, January 16, 2020 - link
What, so they can lie even more?
Irata - Friday, January 17, 2020 - link
That's a rather poor attempt at character assassination, really.
The_Assimilator - Friday, January 17, 2020 - link
Charlie's more than adept at assassinating his own character with yellow journalism.
Spunjji - Friday, January 17, 2020 - link
While Charlie often engages in what I'd call "aggressive editorialising", I'm not aware that he's ever explicitly lied. On the contrary, he's been proven right about most of the big news he's broken, such as Nvidia's Bumpgate and Intel's 10nm woes. I say this in the full knowledge that he is an egotist who deliberately uses the least balanced language possible in his articles.
The_Assimilator - Friday, January 17, 2020 - link
A stopped clock is right twice a day.
Fataliity - Sunday, January 19, 2020 - link
But this clock isn't stopped. It's a moving target and he's still getting the GPS coordinates correct. Even if you don't like the way he pictures it or his conclusions, the basis of all of his theories has been right lately.
--And don't use the "He said 10nm was dead" - Because it's obviously not alive now, is it?
If they made $500m in revenue off Ice Lake like some claim, that's 1/34th of their entire quarterly business. That's nothing when laptops are around 30%.
Spunjji - Monday, January 20, 2020 - link
He's been right more often than wrong, though, so the "stopped clock" analogy breaks down pretty fast. Love him or hate him, he has good sources and does some serviceable analysis.
Kevin G - Thursday, January 16, 2020 - link
Would accelerated workloads leveraging AVX-512 on Ice Lake even change the picture? The separate AVX-512 base and turbo speeds are supposed to be a thing of the past on Ice Lake. However, has anyone actually measured turbo performance on Ice Lake with and without AVX-512 being used on the same system? If Ice Lake doesn't turbo as high, the wider AVX-512 unit and new instructions may not be enough to actually produce a performance win.
Fataliity - Thursday, January 16, 2020 - link
With AVX, Ice Lake wins. But it comes down to: what is AVX really being used in? Like Intel says in the article, only 7 applications in the past... 4 years? have finally adopted AVX. So it's still pretty useless.
Fataliity - Thursday, January 16, 2020 - link
Ian says*
Irata - Thursday, January 16, 2020 - link
Well, that's nothing new for Intel. Remember the slides showing how Cascade Lake-X was going to offer up to 2x the performance per dollar over Skylake-X, their previous HEDT platform? Those same slides showed that Threadripper 2 already offered up to 1.61x the performance over Skylake-X.
TEAMSWITCHER - Thursday, January 16, 2020 - link
"Intel has worked with the software vendor to enable GPU acceleration." - You say this as if it were a bad thing! Apple does similar software optimizations in Final Cut Pro for the Mac and the results are fantastic. Common tasks can finish in half the time compared to a similar PC running Adobe's Premiere Pro software. Also, remember Adobe's pricing model is by subscription, while Apple charges only a one-time $299 fee for UNLIMITED use. You can see why Adobe has been forced to optimize Premiere Pro for the Mac, just to remain competitive. I'm seeing this trend a lot in recent reviews, where people focus on CPU aspects to magnify Intel's problems to generate that sensational headline. I get it.. you are vying for an audience. But in the real world, things are more nuanced. I'm very interested in GPU performance.
Here is another example... My RTX 2080 can render the Blender BMW demo scene in just 51 seconds. This compares well against AMD's mighty Threadripper 3970X - a $2000 part. Here too, Blender has been optimized to use the GPU (CUDA and OptiX) for rendering and the results are fantastic. And what is most surprising... the GPU fans aren't screaming at me the whole time. My system remains nearly silent, unlike when I render the same scene on the CPU.
Spunjji - Thursday, January 16, 2020 - link
No, they don't say it as if it's a bad thing - they say it as if it's a thing that needs to be weighed against Intel's "real world benchmarking" marketing message, which they go on to discuss in more detail at the end of the article. Basically, if you're on the one hand whining about canned benchmarks that your chips are losing and then, on the other hand, relying on canned benchmarks plus one niche application you've sponsored to *only use your GPU*, then the outcome is hypocrisy.
You either didn't read that, didn't understand, or didn't care and just came here for some Intel Damage Control.
TEAMSWITCHER - Thursday, January 16, 2020 - link
I don't know what is wrong with the computers you buy or build, but I don't find myself waiting on tasks to complete in Word, Excel, or PowerPoint... Those PCMARK tests that they focused on aren't exactly killing my daily productivity. Unlike the two workflows that I mentioned... Blender rendering and video editing.
Spunjji - Thursday, January 16, 2020 - link
This... doesn't make sense as a response to me. I also think the PCMARK tests are useless, even though they're sort-of "real world" according to Intel's hazy definition. My point was that you totally misread what the article was saying. They're not criticising GPU acceleration, they're criticising the fact that Intel put a GPU-accelerated test that they sponsored into their CPU speed comparison.
TEAMSWITCHER - Thursday, January 16, 2020 - link
I did NOT misread the article.. these are mobile parts that have an integrated GPU. Vendors like Apple have applications that take advantage of that iGPU. YOU ARE NOT MAKING ANY SENSE.
milkywayer - Friday, January 17, 2020 - link
Stop with the Intel apologising. They screwed up and are losing on most of their own metrics. 99% of their users don't utilize AVX-512, and that's where AMD has them beat.
TEAMSWITCHER - Friday, January 17, 2020 - link
I'm not Intel apologizing. I just don't understand why Anandtech would eliminate GPU benches from their tally of winning benchmarks on mobile parts, where the iGPU is an important factor for performance... Well, actually I do know why... It was to generate that click-bait headline.
levizx - Friday, January 17, 2020 - link
YOU ARE NOT MAKING ANY SENSE. Nobody said GPU acceleration is bad. It's just NOT COMMON in any REAL WORLD applications on Windows PCs.
TEAMSWITCHER - Friday, January 17, 2020 - link
The Integrated GPU on these mobile devices is going to be the ONLY GPU on devices that use these parts.
Fataliity - Sunday, January 19, 2020 - link
Here's the part you're missing, I think, TEAMSWITCHER. The reason it was removed is because it's comparing the Ice Lake architecture versus Comet Lake, not Gen 9 versus Gen 11 versus Xe.
It's comparing their advancements on the CPU front, and obviously it's bad. The only part that's better is the GPU.
This isn't a laptop review. It's an architecture review.
Spunjji - Friday, January 17, 2020 - link
*shrug* You do you, buddy. Projection is weird.
lmcd - Saturday, January 18, 2020 - link
The GPU's advantages are only constrained to Ice Lake because Intel didn't port the GPU to 14nm. The article is primarily noting that the clock speeds from 14nm, paired with the memory bandwidth from LPDDR4X, mean that a last-generation node is delivering a more powerful CPU than the architecture improvements.
A similar-size GPU might still beat the newer generation of graphics, also due to clockspeed benefits, but we don't know that because the number of EUs in each part is dramatically different.
Spunjji - Thursday, January 16, 2020 - link
The tests you mentioned would be pretty "real world". They would also perform like a dog on Intel's iGPU, hence why they didn't use them and used a pocket benchmark instead.
0ldman79 - Thursday, January 16, 2020 - link
The optimizations are definitely a bonus, but if they only exist in 6 apps then what good are they? I mean, how long was NVENC out there before hardware acceleration was a thing?
0ldman79 - Thursday, January 16, 2020 - link
I didn't quite phrase that like I wanted. NVENC was released with Kepler.
It was just a couple of years ago that NVENC was added to Handbrake and Staxrip, before that it was added to a few commercial/freeware apps with mixed results.
It's only been a quality option for a short while now. Initially the only software that supported NVENC had little to no quality options aside from bitrate.
levizx - Friday, January 17, 2020 - link
NVENC is useless in almost all use cases other than converting video for your phone and realtime recoding while casting. Casting wasn't even a thing back then and streaming already took over.
m53 - Thursday, January 16, 2020 - link
“I'm seeing this trend a lot in recent reviews, where people focus on CPU aspects to magnify Intel's problems to generate that sensational headline. I get it.. you are vying for an audience.” Good point. I am seeing a similar practice on many other sites as well. Catchy headlines and colorful opinions instead of presenting information or test results as-is. I guess it pleases the fanboys and generates traffic.
Spunjji - Friday, January 17, 2020 - link
You see similar things on other sites? Great. It's not happening here. The headline is "Intel's confusing messaging - is Comet Lake better than Ice Lake". The article provides analysis of Intel's own publicly available figures in an attempt to answer that. What, precisely, would you prefer?
Spunjji - Friday, January 17, 2020 - link
You know the trend I see? Obtuse comments that don't relate to the article in question but instead pick on some vague theoretical "trend" visible only to the commenter, which generate equally bland "won't somebody think of the children" style responses that don't actually move any discussion in any particular direction. You're agreeing with a guy who's complaining that an article about CPU performance talks about the performance of CPUs. It doesn't get much more inane than this.
m53 - Saturday, January 18, 2020 - link
“I guess it pleases the fanboys and generate traffic.” Your 20+ comments on a “marketing messaging” related opinion piece have proven my point.
Spunjji - Monday, January 20, 2020 - link
All you've "proven" is that these threads are being stacked with the kind of comments that piss me (personally) off, and that I've inexplicably taken it upon myself to respond to most of them. None of that demonstrates anything about Anandtech or the reasoning behind their editorial decisions, which was your claim. Your inane replies continue to add to the body of evidence in favour of my point, namely that the threads are being astroturfed by gormless armchair critics and shills with nothing to add.
ywyak - Thursday, January 16, 2020 - link
> Ryan and I spent some time discussing these results.Ryan Shrout or the lovely Editor of Anandtech, Ryan Smith?
In any case, could you update the article with your answer for future reference?
Ian Cutress - Thursday, January 16, 2020 - link
Heh, yes, perhaps I should clarify. It was Ryan Smith.
eva02langley - Thursday, January 16, 2020 - link
ROFL... Ryan Shrout... the lowest common denominator of humanity.
Irata - Friday, January 17, 2020 - link
Thank heavens.
Spunjji - Thursday, January 16, 2020 - link
What an absolutely dismal showing for Intel PR. The "real-world benchmark" push would be great if they were being honest about it, but as it stands it's a painfully obvious case of "do as I say, not as I do".
That gaming slide in particular is the worst kind of misleading junk. They kick the graph off with 2 synthetic tests that are famous for producing results that naively (and unrealistically) scale with threads and clock speed, then follow with a "combined" test that's just one of those results balanced with the GPU results. The worst bit is that the remaining results would still give them a win based on a fair comparison, yet they still felt the need to shit up the graphs by highlighting that apples-to-steroid-enhanced-oranges trick they pulled with the RTX 2080. Their justification would only really make sense if they had separate graphs for "highest performance offerings". Grim stuff.
Three cheers for Ian putting the effort in to compare the CPU performance graphs and give us the Ice / Comet comparison. Not only is it genuinely useful, it makes their futile efforts to obfuscate the data all the more amusing.
maroon1 - Thursday, January 16, 2020 - link
A stupid article. Just because Intel said that real-world applications are what matter most does not mean they cannot show some advantage in some professional niche application. Their own benchmarks have a mix of both. Nothing wrong with that.
m53 - Thursday, January 16, 2020 - link
I have started reading many 3rd-party opinion pieces with a big grain of salt. Never thought Anandtech would need to be added to that list. Oh well...
eva02langley - Thursday, January 16, 2020 - link
I thought you were being sarcastic until I realized that you are only a shill.
eva02langley - Thursday, January 16, 2020 - link
You mean, a stupid process node...Spunjji - Friday, January 17, 2020 - link
Ian's (valid) criticism of Intel's choice of benchmarks was mild and constitutes at most 10% of this article, if you're being particularly sensitive. One wonders why so many random low-quality comments chose to focus on that rather than the meat of the article. As it is, your comment equates to this:
"Just because they said one thing and did another, doesn't mean you're allowed to point out that they said one thing and did another."
Truly, we are living in the age of disinformation that somebody could hold such a worthless opinion and feel it worth posting.
dullard - Thursday, January 16, 2020 - link
Is it really that confusing? They are both listed as 10th generation, so their performance should be roughly on par with each other. What would be confusing is if they had different generations yet performed about the same.
m53 - Thursday, January 16, 2020 - link
That’s what I thought. The article is trying its best to find an issue with something when there is no issue.eva02langley - Thursday, January 16, 2020 - link
ROFL.... what a shill... so in that case Zen + 3000 mobile and Zen 2 3000 should perform the same from your logic...Spunjji - Friday, January 17, 2020 - link
Pro tip to the operator of the m53 sockpuppet: the whole "I agree with this guy" thing is less obvious when you don't do it multiple times for the same article.m53 - Sunday, January 19, 2020 - link
@Spunjji: Wow you are a self proclaimed “pro” in trolling. Nice. Thanks for letting me know. I will call you “pro troll” then.Spunjji - Monday, January 20, 2020 - link
I simply pointed out that you're bad at trolling - you assumed that I think I'm good at it.In conclusion, you're poor at both reading and trolling.
m53 - Monday, January 20, 2020 - link
@Pro-troll: Well, you are giving away "pro" tips in trolling. Looks like you are very proud to be a pro-troll.
Korguz - Monday, January 20, 2020 - link
and you must be too there m53...
m53 - Tuesday, January 21, 2020 - link
...and here comes the minion of pro-troll... lol
Spunjji - Thursday, January 23, 2020 - link
lol, anyone who agrees with me must be my "minion". You're really good at projection!
m53 - Thursday, January 23, 2020 - link
Nope. Some of those will be your alt accounts. After all, you are a self-proclaimed pro-troll... :D
Korguz - Thursday, January 23, 2020 - link
ahh you're funny m53...
Spunjji - Monday, January 20, 2020 - link
The article proves its central claim using Intel's own evidence, which is that their 10nm processors don't outperform the 14nm ones. Simple stuff, and not very dramatic in presentation or impact.
Cue the reply guys griping about drama that's not there - to be precise, "trying your best to find an issue with the article when there is no issue".
m53 - Monday, January 27, 2020 - link
Nice efforts, pro-troll. Happy trolling... :D
Qasar - Tuesday, January 28, 2020 - link
like you yourself are doing?? From what I have read.. you do more of the trolling, m53.. than spunjji does.. at least his posts have something.. where yours seem to be mostly BS and FUD... but.. let me guess.. your only rebuttal will be that this is just spunjji on a different account.. because you have nothing useful to say...
Anatun - Friday, January 31, 2020 - link
Anandtech is looking more like a troll site these days. That includes you too, Qasar.
Qasar - Friday, January 31, 2020 - link
ooo that hurts... compared to a few others on here.. I'm minor....
Korguz - Saturday, February 1, 2020 - link
yea no kidding..
m53 - Saturday, February 1, 2020 - link
Welcome, minor troll :)
Korguz - Saturday, February 1, 2020 - link
he's just trying to learn how to troll from a pro like you, m53...
Anatun - Monday, February 3, 2020 - link
Nice tag-team alts :D
Korguz - Monday, February 3, 2020 - link
you too :-)
m53 - Tuesday, February 4, 2020 - link
Well, I was expecting something more creative from the pro-troll and the alts instead of copy/paste accusations. :D
Qasar - Wednesday, February 5, 2020 - link
yea ok.. can't prove your points against spunjji.. so now.. you are pretty much using garbage... your posts are more troll-like than his.. you seem to have skipped over eva02langley's direct reply to you.. and instead have been attacking spunjji and now me.. because you can't prove anything you type.. I'd guess anatun is your alt...
m53 - Thursday, February 6, 2020 - link
@alt#2: Looks like you forgot to log in from the @alt#1 account who I was talking to. Mistakes happen. lol.
Qasar - Thursday, February 6, 2020 - link
nope.. you can't prove your points, so you attack spunjji here, and are attacking me in another thread.. so who's the troll?? when are YOU going to post on one of YOUR other 4 alts that you have on here??? or will you just keep attacking people when you can no longer prove anything?? either way.. have a good day.. no point in replying to you as all you do now.. is accuse people of crap you can't prove..
m53 - Sunday, February 9, 2020 - link
Spunjji aka Qasar aka Korguz aka Pro-troll: Not sure why you are getting angry. I am actually grateful to you for showing me some pro-tips. Sad that you won't reply to me anymore. Will miss your pro-tips. :(
Orange_Swan - Friday, January 17, 2020 - link
that's what I've been thinking, Intel has basically split their mobile lineup into:
Compute Core - better computing performance
Graphics Core - better graphics performance
ModEl4 - Thursday, January 16, 2020 - link
What is the meaning of the article? It seems like a hit at the marketing guys at Intel. But all that Ian is saying is true, I just don't understand why now he is actively criticizing Intel's marketing department, I mean in the past also, Intel's marketing had some low points but we didn't see the same judgemental stance. I'm surprised he didn't mention that the comparison with the new Comet Lake stepping supporting LPDDR4X is potentially going to scale the results even more favourably for Comet Lake parts. Anyway I don't want to sound too judgemental, overall good article!
dullard - Thursday, January 16, 2020 - link
The article's point is to continue to spread Ian's frustrations with Intel. It's like his tarnished Xeon 9200 article, where Ian's only real complaints are that the price isn't public and that he didn't get a free one. It is just a "woe is me" post, move along. This site was better when opinions were out of the articles and it was just the facts.
https://www.anandtech.com/show/15149/how-to-tarnis...
Fataliity - Thursday, January 16, 2020 - link
The point is that Intel is hyping its 10nm as "18%" better IPC, and everyone says it's so much better than 14nm++. And the article is asking: is it really, though?
Ice Lake: lower turbo, higher IPC
Comet Lake: higher turbo, lower IPC
1065G7 $426 - 4 cores
10710U $443 - 6 cores
Not to mention Ice Lake's power draw is insane (but clock gating, along with LPDDR4X and 1-watt screens, gives it good battery life in benchmarks)
Ice Lake is being marketed as a premium product, but it's only ~$17 cheaper than the 6-core part, which is better all around except in graphics. So who would really want the 4-core over the 6-core?
Obviously manufacturers see this too, because there aren't that many Ice Lake laptops out.
Then add in the competition Zen2.
8c/16t 15W TDP at $399 tray price.
---- TL;DR
When you're telling your investors and the public "Everything's fine. Ice Lake's soo much better than 14nm", but your own cherry-picked slides show 10th-gen 14nm beating Ice Lake in everything except graphics, with 2 more cores on top...
Doesn't look good.
Fataliity - Thursday, January 16, 2020 - link
And "overclocking" doessn't count here. Because its a laptop. So regardless of the IPC boost, if it ships with a 1GHz lower boost clock, it's going to stay 1GHz lower forever.Fataliity - Thursday, January 16, 2020 - link
And that's not including, if your buying a laptop with a $400 cpu, most likely it will get discrete graphics (Mx250 or MAX-Q Nvidia).So then Ice Lake loses ALL of its lead.
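Fataliity's clock-deficit point can be put in rough numbers. A minimal back-of-envelope sketch, assuming the commonly quoted single-core turbo clocks for the two parts (4.7GHz for the i7-10710U, 3.9GHz for the i7-1065G7) and the simple model perf ≈ IPC × frequency:

```python
# Sketch: how much IPC uplift a lower-clocked chip needs just to break even
# on single-thread work, assuming performance ~ IPC * frequency.
comet_boost_ghz = 4.7  # i7-10710U single-core turbo (assumed)
ice_boost_ghz = 3.9    # i7-1065G7 single-core turbo (assumed)

breakeven_ipc_gain = comet_boost_ghz / ice_boost_ghz - 1
print(f"Ice Lake needs ~{breakeven_ipc_gain:.1%} more IPC just to tie")
```

That comes out a touch over 20%, slightly more than the ~18% generational IPC uplift quoted in this thread, which is consistent with the higher-IPC part still losing at peak single-thread.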
s.yu - Thursday, January 16, 2020 - link
First-hand experience here:
Refurbished 1065G7/16GB/512GB/1080P XPS13 2-in-1 is much harder to buy, and ~$150 more expensive than
refurbished 10710U/16GB/1TB/4K regular XPS13.
The only advantages of the 2-in-1, aside from the CPU and LPDDR4, are the digitizer and a different hinge.
eva02langley - Thursday, January 16, 2020 - link
You are replying to your own posts? Are you paid by Intel to spread FUD?
Fataliity - Thursday, January 16, 2020 - link
Did you even read the comment? Obviously he's saying that the 1065G7 is $150 more than the 10710U, which has 6 cores and 12 threads. And if you look at Ian's benches, it really isn't better in anything at all except graphics. It's not an upgrade.
And so am I. If you read what I wrote.
So who is paid by Intel? Both comments are anti-Intel marketing/pricing if you actually read them.
*sigh*
dullard - Thursday, January 16, 2020 - link
Intel's own graphics show a non-noticeable <5% bump over two generations: https://images.anandtech.com/doci/15385/Blueprint%...
So, you turn that into "Ice Lake's soo much better than 14nm"? Seems like you are inventing what you want to read and not what Intel says.
Fataliity - Thursday, January 16, 2020 - link
Did you read past the first line? What level is your reading comprehension? The "quoted" part is what an Intel fanboy would say about Ice Lake being their savior.
Every single line below picks apart that statement.
Seriously. This is Anandtech. Since when don't you guys read what you're replying to before reaching for the "fanboy" argument?
dullard - Friday, January 17, 2020 - link
I see your problem now, you don't want to actually discuss facts. Here is what you posted with your so-called quote:
(A) "When you're telling your investors and the public 'Everything's fine. Ice Lake's soo much better than 14nm'."
And now you are posting:
(B) "The 'quoted' part is what an Intel fanboy would say about Ice Lake being their savior"
Which is it? Is it (A) Intel telling their shareholders or (B) what an Intel fanboy would say? You can't have it both ways.
Spunjji - Friday, January 17, 2020 - link
He really can; on the other hand, you're accusing someone who is openly critical of Intel of being an Intel shill, and you *really* cannot have that both ways.
Fataliity - Sunday, January 19, 2020 - link
Thank you. You explained what I'm trying to explain to this guy better than I could. :)
Fataliity - Sunday, January 19, 2020 - link
The quotes " " " " denote what it is. It is pretty obvious lmao. I'm surprised that you can't understand it. Is English not your first language?
Fataliity - Sunday, January 19, 2020 - link
Oh, and it's both. All the Intel fanboys claim "Ice Lake's great, it's going to kick ass"
Intel in all of its slides is promoting it as its best product.
Even the products it's being put in are more expensive.
But yet. It's not.
Fritzkier - Thursday, January 23, 2020 - link
Oh God, I laughed so much. What the hell is wrong with these people. Even I, as a non-native speaker, can understand what Fataliity said... You can't have both shills and antis at the same time lol.
eva02langley - Thursday, January 16, 2020 - link
Hmm, someone is drinking the Intel propaganda Kool-Aid, it seems. Well, there is no way anyone could defend Intel at this moment... NO WAY... unless you are shilling.
Spunjji - Friday, January 17, 2020 - link
No, the article's point is that they shared figures that tell an interesting story but refused to tell that story themselves, and he was wondering why.
The "why" is that the story it tells doesn't look good for their product.
That was all in the article and I feel weird at having to explain it when it was written there, in English, the same language we are all commenting in here.
This site was never free from opinion. There used to be flame wars over how partial Anand himself was to Apple products. Take your "I miss the good old days" nonsense and can it.
hubick - Thursday, January 16, 2020 - link
One of the good things about Ice Lake is the Thunderbolt 3 / USB4 integrated into the CPU, offering up to 4 ports which aren't forced to share a measly PCIe x4 connection to the PCH like Titan Ridge.
Except, being one of the major benefits, you'd think manufacturers would be ready to offer laptops with those 4 supported ports, to finally compete with the MacBook Pro, and you'd be wrong. Not. A. One. :-(
Korguz - Thursday, January 16, 2020 - link
who cares about thunderbolt.. not everyone needs or wants it.. for some it's a feature checkbox.. for others.. who cares... it also seems the reason why AMD doesn't have it.. is because Intel needs to certify it... and for some reason, Intel won't certify it for AMD's platforms.. someone managed to get it to work on AMD's systems but had to use an add-in card with a hacked BIOS/driver for it to work...
m53 - Thursday, January 16, 2020 - link
Actually plenty of people care about thunderbolt. I have seen comments like, "who cares about a laptop", "who cares about desktop", "who cares about a PC", and so on. Just because you don't use it doesn't mean there aren't plenty of people who want to use it.
Korguz - Thursday, January 16, 2020 - link
and the same can be said the other way as well.. no one I know cares if a comp has it or not.. but you may know people that want it... I just find there is too much weight on whether a comp has it or not.. like if it doesn't have it.. then it's garbage and not worth buying..
eva02langley - Thursday, January 16, 2020 - link
NOBODY.... I mean NOBODY... is buying a $1000+ laptop for a single port... nobody...
hubick - Thursday, January 16, 2020 - link
I just bought an $8K ThinkPad P53, and my purchase decision was vastly affected by integrated Thunderbolt.
tamalero - Friday, January 17, 2020 - link
So, you bought that laptop specifically for the thunderbolt? Would you buy a crap-tier laptop just because it had thunderbolt?
hubick - Friday, January 17, 2020 - link
No, I didn't buy it just for TB3, but I definitely wouldn't have bought it without TB3, and if there were a similar model with TB3 off the CPU, that would be the one I'd get.
hubick - Thursday, January 16, 2020 - link
Are you complaining about the AVX-512 instructions too? Cuz not everyone uses those. Intel is struggling to find *anyone* to use those.Lots of people have laptops, and lots of people are interested in eGPU's for gaming or graphics work on their laptop. That's who cares about Thunderbolt.
And where did I even imply *everyone* had to care about it? I'm merely reinforcing the article, in that, if you take away perf, does Ice Lake still bring anything to the table? So, if you were going to answer Thunderbolt, cuz you were excited for those 4 high-speed ports it enables, you can forget that, cuz OEMs are lazy and haven't bothered to update their Ice Lake designs to even take advantage.
Spunjji - Friday, January 17, 2020 - link
The OEM still has to figure out port positioning, power and signal routing... etc. It's a better situation than it was before, but implementing Thunderbolt is still tricky and still adds cost, whereas most PC OEMs would rather save a few cents on BoM. :/
HeyYou,It'sMe - Thursday, January 16, 2020 - link
It's kind of weird that the test systems aren't listed in the article here, as the cooling on one seems to be significantly better than the other. The test systems are:
Core i7-10710U - Dell XPS 13 7390
Core i7-1065G7 - Dell XPS 13 2-in-1 7390
Both are production systems.
Ian Cutress - Thursday, January 16, 2020 - link
It's kind of weird that the AMD system isn't listed in the comment here, as given it's the same, it means that Intel approximated both Intel systems as examples of the AMD system. So if
AMD system was ~ Ice
and
AMD system was ~ Comet
But you're saying Ice is not ~Comet?
Then the comparisons with AMD are a bit FUD.
You can't have it both ways.
dullard - Friday, January 17, 2020 - link
But, Ian, Intel itself has said Ice ~ Comet. That is why they are both 10th generation.
You are overthinking it. It is as if you wanted Intel's 10nm to be better than 14nm+++, and then are sad that it isn't. This is one of the few times that Intel's marketing is quite clear. They are both 10th generation and will trade punches depending on the application.
Tamz_msc - Friday, January 17, 2020 - link
Intel doesn't make a direct comparison between Ice and Comet in these slides, yet when you put them side by side, which this article does, Comet comes out ahead in more meaningful real-world tasks than Ice. That's precisely the point of the article.
dullard - Friday, January 17, 2020 - link
At one point, Anandtech had a rule-of-thumb that differences less than 10% were not noticeable. In general, I agree with that point. The only real tests where that is not valid tend to be calculations that take days/weeks/months and aren't even included in reviews anyway.
Given that rule-of-thumb, only two tests were significantly different: PowerPoint PDF Export and Word Mail Merge Error Check. Neither is that important a use case, and Comet won one while Ice won the other.
So, I put them side-by-side and still see them as basically equivalent.
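dullard's rule of thumb is easy to state precisely. A small sketch (the 10% threshold is his figure, not an official one, and `noticeable` is a hypothetical helper name):

```python
def noticeable(score_a, score_b, threshold=0.10):
    """True if the relative gap between two benchmark scores exceeds the threshold."""
    return abs(score_a - score_b) / min(score_a, score_b) > threshold

print(noticeable(100, 104))  # a 4% gap: False under the 10% rule
print(noticeable(100, 115))  # a 15% gap: True
```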
Tamz_msc - Saturday, January 18, 2020 - link
That's just you rationalising the fact that one is performing better than the other on average. Exporting PowerPoint presentations as PDF is an extremely common use case in any corporate or academic environment.
If 10 percent doesn't matter except in calculations that take a long time, why is the 9900K, for example, still the recommended CPU for high-refresh gaming?
So no, 10 percent difference is not equivalent under any circumstance.
Korguz - Saturday, January 18, 2020 - link
maybe because of the perception that clockspeed = performance, when it's only half of the performance equation?
Spunjji - Monday, January 20, 2020 - link
When it comes to tech, isn't "overthinking it" what Anandtech is for?
Your use of emotive language to describe Ian's findings tells us more about how you perceive this than it does about the article. Ian's not "sad" Ice Lake isn't better, he literally just used Intel's own numbers to demonstrate that fact.
dullard - Friday, January 17, 2020 - link
Put yourself in the shoes of the masses who don't follow the intricacies of processors. Both cost almost the same, both are 10th generation, both perform about the same. Why should Intel try to differentiate them? Trying to compare and contrast the two would just confuse the general public when they should be considered roughly equivalent CPUs (with some differences depending on exact use case).
The only reason to keep both lines is so that Intel can keep up with the high demand for processors in this time of production limitations.
Tamz_msc - Friday, January 17, 2020 - link
Six wins for Comet, three for Ice (one of them being irrelevant, as the article correctly argues) and two draws. That's not roughly equivalent by any metric.
m53 - Sunday, January 19, 2020 - link
@Tamz: Ice Lake comes with better Wi-Fi, better battery life, and TB integrated in the CPU. Now that's a 6 vs 6 win. I would call that roughly equivalent.
Tamz_msc - Sunday, January 19, 2020 - link
Who says Ice has better battery life? As for the other two, they're features, not wins in terms of performance.
dullard - Sunday, January 19, 2020 - link
The problem with counting wins instead of looking at win magnitude is that I could run 10,000 essentially similar benchmarks with CPU #1 in a 0.1% lead and 10 benchmarks with CPU #2 in a 100% lead. So, does that make CPU #1 a thousand times better? No, if there was any winner it would be CPU #2.
Tamz_msc - Sunday, January 19, 2020 - link
Except in this case Comet has the higher win magnitude as well.
dullard - Sunday, January 19, 2020 - link
What really matters is breadth and depth. How many completely different applications is a CPU ahead in, and is the lead significant enough to be noticeable by a human? Then you can say a CPU is in the lead. Here, only 2 benchmarks would be noticed by a human, and they are split with each CPU winning one.
Fataliity - Sunday, January 19, 2020 - link
I agree with what you're saying here. But at the same time, Intel picked these benchmarks to be meaningful. Not reviewers.
So this should basically show the full picture, if these are what was chosen as representative.
(And it was against a Zen+ APU, so obviously it was going to win either way. There was no need to skew the applications used.)
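dullard's 10,000-benchmark thought experiment above is easy to check numerically. A sketch with made-up ratios (not real benchmark data), comparing raw win counts against the geometric mean that review sites typically use for overall scores:

```python
from math import prod

# Each ratio is cpu1_score / cpu2_score for one benchmark (>1 means CPU #1 wins).
# dullard's scenario: 10,000 tiny 0.1% wins for CPU #1, ten 2x wins for CPU #2.
ratios = [1.001] * 10_000 + [0.5] * 10

wins_cpu1 = sum(r > 1 for r in ratios)
wins_cpu2 = sum(r < 1 for r in ratios)
geomean = prod(ratios) ** (1 / len(ratios))  # overall CPU #1 vs CPU #2 speedup

print(wins_cpu1, wins_cpu2)  # 10000 vs 10: a landslide by win count
print(round(geomean, 3))     # ~1.0: a dead heat by overall magnitude
```

The two summaries disagree wildly on the same data, which is the point of the exchange: win counts alone can't settle it.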
djmcave - Friday, January 17, 2020 - link
Maybe the problem is all those lakes... at 3 lakes a year, who can keep up..
Korguz - Friday, January 17, 2020 - link
it's really time Intel changed its code names and dropped the "lakes"....
evilpaul666 - Friday, January 17, 2020 - link
Console emulation testing... Would you guys consider adding PS3? RPCS3 isn't as far along as Dolphin, but it takes better advantage of >4 threads and is more demanding.
ppi - Friday, January 17, 2020 - link
While exporting PowerPoint to PDF is a realistic workload, it's also pretty much irrelevant from a performance perspective. Today I was exporting a 300+ page presentation to PDF, and on a laptop with an i5-8350U it took less than 1.5 minutes. That's (i) less time than I need to write the cover e-mail, (ii) irrelevant from the perspective of how much time went into creating the presentation. With respect to multithreading, only one and a half threads were active.
That does not mean PowerPoint is a "solved problem" from a performance perspective. There are still things that bug me, though we are talking mostly about "snappiness". None of these things are snappy enough: loading a file, quick view in slide sorter view, quick transition between modes, copying/moving larger numbers of slides, synchronization over SharePoint, and saving files.
s.yu - Saturday, January 18, 2020 - link
Ah ha, good point.
eastcoast_pete - Saturday, January 18, 2020 - link
One task I have to do frequently on my work laptop which can take a while is the reverse: convert a (big) PDF into a Word or PowerPoint file so I can edit it. That can get my current i7 (U) hot enough to get the fan running at full tilt, and take a while. I wonder if that would be of interest to others, too?
zodiacfml - Saturday, January 18, 2020 - link
🙄 well yeah, according to your first review of Ice Lake. Intel's fault here is the lack of processing cores in Ice Lake despite the smaller die size. It is too small vs Comet Lake; it should have been at least 6 Ice Lake cores vs 6 Comet Lake cores.
29a - Saturday, January 18, 2020 - link
I've converted PPT to video a few times.
eastcoast_pete - Saturday, January 18, 2020 - link
So, what is the single-core/single-thread performance across several CPU-limited tasks of Ice Lake and Comet Lake laptops specified for the same TDP? Is there such a tabulation? That kind of information would answer at least just how comparable they are across various laptop manufacturers. Based on Ian's numbers, the "mature" 14nm Comet Lake looks like it probably holds its own against Ice Lake, and beats it for several tasks due to higher single-core turbo frequency. But, again, a table with data from several systems for each would be helpful.
Fataliity - Sunday, January 19, 2020 - link
eastcoast_pete: Here is what you're looking for.
https://www.notebookcheck.net/Our-first-Ice-Lake-C...
It's not 10th gen, but even against 8th gen it is shocking. And the 1065G7 is in an SDS which is going to have better thermal headroom than probably all production parts.
And yet...
Cinebench Single
Core i7-8559U (4c/8t, 4.5GHz boost) - 188 points
Core i7-1065G7 (4c/8t, 3.9GHz boost) - 183/176 (25/15W)
Cinebench Multi
8559U - 671 min, 798 max (probably thermal limiting?)
1065G7 - 751/521 (25/15W)
There's a lot more inside the link if you want to see how it compares to the old gen. The new gen has 400MHz higher boost, and 2 extra cores.. :-(
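One way to read those single-thread Cinebench numbers is to normalise score by boost clock. A back-of-envelope sketch using the figures quoted above; it assumes the score scales linearly with the rated single-core turbo, which thermally limited laptops don't guarantee:

```python
# Cinebench single-thread points per GHz of rated boost clock,
# using the 25W scores quoted above.
chips = {
    "i7-8559U (14nm)":  (188, 4.5),  # (points, boost GHz)
    "i7-1065G7 (10nm)": (183, 3.9),
}
per_clock = {name: pts / ghz for name, (pts, ghz) in chips.items()}
for name, ppg in per_clock.items():
    print(f"{name}: {ppg:.1f} pts/GHz")

ipc_gain = per_clock["i7-1065G7 (10nm)"] / per_clock["i7-8559U (14nm)"] - 1
print(f"Ice Lake per-clock advantage: ~{ipc_gain:.0%}")
```

It comes out around 12% per clock: a real per-clock gain that the lower boost clock then largely gives back, which is the thread's point.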
Fataliity - Sunday, January 19, 2020 - link
Here's a direct comparison. Now remember, the Ice Lake PCs are the "premium and more expensive"
And the 10710u is the "budget".
(Benchmarks say otherwise)
https://technical.city/en/cpu/Core-i7-1065G7-vs-Co...
Beaver M. - Sunday, January 19, 2020 - link
Oh boy, I'm starting to avoid articles and reviews about Intel. It makes me cringe so hard, seeing how low Intel has fallen. When are we supposed to get 7nm again? '22? Or was it '25?
Oh boy... First time in 20 years that I am considering an AMD processor again.
Guspaz - Monday, January 20, 2020 - link
Are these being compared with like-for-like TDPs? Both the AMD and Intel chips are theoretically 15W parts, but the R7-3700U has a cTDP range of 12-35W and the i7-1065G7 has a cTDP range of 12-25W. Since it doesn't specify on the comparison slides (and the URL they contain is out of date), for all we know we could be looking at a 12W R7 versus a 25W i7.
I also find the GPU acceleration figures a bit odd. I would expect the Vega 10 to significantly outperform the Iris Plus... Or is the Topaz Labs test supposed to be AVX-512?
Spunjji - Monday, January 20, 2020 - link
The Topaz Labs test appears to be written specifically to benefit from Intel's GPU. Vega 10 trades blows with the GPU in the 1065G7 - it tends to win in real-world benchmarks, but not by a lot.
not_anton - Tuesday, January 21, 2020 - link
"For a technology like AVX-512 to only have eight enhanced consumer applications several years after its first launch isn't actually that great": AVX-512 is not widely available in consumer CPUs. Previous-gen Ryzen did not natively execute even 256-bit AVX and did fine in the consumer segment.
AVX-512 is used automatically for vector operations in any compute-heavy software, so you could say that it covers 99% of use cases where it's actually important.
PCWarrior - Monday, January 27, 2020 - link
I think you are misinterpreting and misrepresenting Intel's rhetoric about real-world benchmarking. They by no means told the press to exclude niche applications from their test suites. Intel's message to the press was instead twofold. First, the press needs to stop treating Cinebench as the ultimate benchmark, as if it were representative of most real-world workloads. Cinebench is instead only representative of performance in Cinema 4D and arguably a few other tile-based rendering programs. Now Cinema 4D is, of course, a real-world application, but according to Intel it ranks below 1000 in popularity, so it makes little sense for it to be treated as representative of the performance of over 1000 other more popular applications. Second, the press needs to have a richer and more diverse test suite and include more real-world workloads to help people in their purchasing decisions, by including software and workloads they will likely use. That test suite may still very well include Cinebench, but it needs to be 1 benchmark in, say, 30-40 (and that with the appropriate weighting), not 1 in just 3-4 (with almost all of the productivity weighting). If you are just going to include 3-4 benchmarks, you had better pick software/workloads that are far more popular than tile-based rendering.
alufan - Monday, January 27, 2020 - link
And Intel will of course follow the same rules? They have not so far.
Korguz - Tuesday, January 28, 2020 - link
" the press needs to stop treating Cinebench as the ultimate benchmark " thats funny.. didnt intel use this same benchmark to tout how much faster its cpus were over AMDs before zen came out, but now that is shows amd in the lead, they no longer are considering relevant ? come on, intel is back to cherry picking benchmarks to try to show its products are better then amds. and who knows what else it will also do now.. PCWarrior you still believe intels lies and BS??Anatun - Friday, January 31, 2020 - link
Intel market two similarly performing products as 10th gen. They literally never claimed ice lake to be better than comet lake. It is apparent from their 10th get lineup. Both Ice Lake and Comet lake are 10th gen. One is better in raw CPU performance while the other is better in graphics, connectivity, and platform features. Why a tech site publish a hit piece arguing over it is beyond me.