Comments Locked

253 Comments


  • tipoo - Thursday, September 17, 2020 - link

    “Baskin for the exotic”
    I see what you did there...
  • ingwe - Thursday, September 17, 2020 - link

    I didn't get it until I read your comment.
  • Luminar - Thursday, September 17, 2020 - link

    RIP AMD
  • AMDSuperFan - Thursday, September 17, 2020 - link

    "Against the x86 competition, Tiger Lake leaves AMD’s Zen2-based Renoir in the dust when it comes to single-threaded performance." - But I am hoping Big Navi can compete well against this Intel chip.
  • tipoo - Thursday, September 17, 2020 - link

    What does Big Navi have to do with a laptop CPU?
  • AMDSuperFan - Thursday, September 17, 2020 - link

    You care about games don't you? This Intel Tiger won't have an answer for Big Navi. We can look forward to that showing who is the boss.
  • blppt - Thursday, September 17, 2020 - link

    Based on preliminary data, they'll both be about 2 years behind Nvidia, what with Big Navi only matching a 2080ti, and not available for another month at the earliest.
  • hecksagon - Friday, September 18, 2020 - link

    Crazy how you can make that prediction, the only preliminary data that is out is a photograph of the card. Are you a wizard?
  • blppt - Friday, September 18, 2020 - link

    Incorrect.

    https://wccftech.com/amd-radeon-navi-gpu-specs-per...
  • HarryVoyager - Friday, September 18, 2020 - link

    I'm not really seeing where you're getting that from. We know that RDNA2 can hit 2.23GHz from the PS5 implementation, and we have solid rumors that the top-end part will be an 80CU chip rather than a 40CU chip. That implies on the order of 230% of the 5700XT's performance, if there are no other performance improvements. That alone puts it in the 30-40% improvement range over the 2080 Ti. Given we've already seen at least a few AMD benchmarks of unidentified cards showing a 30-40% improvement over 2080 Ti performance, that sort of lift does seem likely.

    If I had to guess, that RDNA2 that recently showed up with a near 2080 TI performance is probably a 6700 competitor to the 3070, not the top end card. Those do have to be developed and tested too, after all.
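    The back-of-envelope math in the comment above can be sketched as follows. This is a hypothetical perfect-scaling estimate (throughput proportional to CUs × clock) using the rumored 80CU count and the PS5's 2.23GHz figure; real GPUs never scale this cleanly, so treat it as an upper bound, not a prediction:

```python
# Hypothetical perfect-scaling estimate: rumored 80CU Big Navi at PS5-class
# clocks vs. the 40CU 5700 XT. Real GPUs scale sub-linearly with CU count,
# so this is an upper bound, not a prediction.
xt_cus, xt_clock_ghz = 40, 1.905    # 5700 XT, rated game clock
big_cus, big_clock_ghz = 80, 2.23   # rumored CU count, PS5 RDNA2 clock

scale = (big_cus * big_clock_ghz) / (xt_cus * xt_clock_ghz)
print(f"theoretical throughput: {scale:.2f}x the 5700 XT")  # ~2.34x
```

    With the 2080 Ti commonly benched around 1.4-1.5x a 5700 XT, even heavily discounted scaling would land such a part above 2080 Ti territory, which is the comment's point.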
  • blppt - Friday, September 18, 2020 - link

    Yeah, we can extrapolate such things if power consumption and heat dissipation are of no relevance to AMD. You're leaving out other factors that go into building a top line GPU.
  • AnarchoPrimitiv - Saturday, September 26, 2020 - link

    Power? It will certainly be better than Ampere, which is awful at efficiency... Are you forgetting that RDNA2 will be on an improved 7nm node, meaning a better 7nm node than RDNA1 used?
  • Spunjji - Friday, September 18, 2020 - link

    Big Navi probably won't clock that high for TDP reasons, but the people who are buying that it's only going to have 2080Ti performance are in for a rude surprise. It should compete solidly with the 3080, and I'm betting at a lower TDP. We'll see.
  • blppt - Saturday, September 19, 2020 - link

    It's been AMD's modus operandi for a long time now. Introduce a new card, and either because of inferior tech (occasionally) or drivers (mostly), it usually ends up matching Nvidia's last-gen flagship, although also at a lower price.

    Considering the leaked benches we've already seen, Big Navi appears to be more of the same. Around 2080Ti performance, probably at a much lower price, though.
  • Spunjji - Saturday, September 19, 2020 - link

    @blppt - not sure if you're shilling or credulous, but there's no indication that those leaked benchmarks are "Big Navi". Based on the probable specs vs. the known performance of the 3080, it's extremely unlikely that it will significantly underperform the 3080. It's entirely possible that it will perform similarly at lower power levels. They're also specifically holding back the launch to work on software.

    In other words: assuming AMD will keep doing the same thing over and over when they already stopped doing that (see: RDNA, Zen 2, Renoir) is not a solid bet.

    But none of this is relevant here. It's amazing how far shills will go to poison the well in off-topic posts.
  • blppt - Sunday, September 20, 2020 - link

    Considering that the 2080ti itself doesn't "significantly underperform the 3080", Big Navi being in line with the 2080ti doesn't qualify it as getting pummeled by the 3080.
  • blppt - Sunday, September 20, 2020 - link

    Oh, and BTW, I am not a shill for Nvidia. I've owned many AMD cards and cpus over the years, and they have been this way for a while. I keep wishing they'll release a true high end card, but they always end up matching Nvidia's previous gen flagship.

    Witness the disappointing 5700XT in my machine at the moment. Due to AMD's lesser driver team, it is often less consistent in games than my now-ancient 1080ti. Even in its ideal situation, with well-optimized drivers in a game that favors AMD cards, it just barely outperforms that old 1080ti. Most of the time it's around 1080 performance.

    Actually, YOU are the shill for AMD if you keep denying this is the way they have been for a while.

    "In other words: assuming AMD will keep doing the same thing over and over when they already stopped doing that (see: RDNA, Zen 2, Renoir) is not a solid bet."

    Except---they STILL don't hit the top of the charts in games on their CPUs. Zen/Zen 2 is a massive improvement, and dominates Intel in anything highly multi-core optimized, but that almost never applies to games.

    So, going to a Zen comparison for what you think Big Navi will do is not a particularly good analogy.
  • Spunjji - Sunday, September 20, 2020 - link

    @blppt - "I'm not the shill, you're the shill, I totally own this product, let me whine about how disappointing it is though, even though performance characteristics were clear from the leaks and it still outperformed them. I bought it to replace a far more expensive card that it doesn't outperform". Okay buddy, sure. Whatever you say. 🙄

    I didn't say it would take the performance lead. Going for a Zen comparison is exactly what I meant and I stand by it. We will see, until benchmarks come out it's all just talk anyway - just some of it's more obvious nonsense than the rest...
  • blppt - Sunday, September 20, 2020 - link

    @Spunjji

    That was the dumbest counter argument I've ever heard.

    First off, I didn't buy it to 'replace' anything. The 1080ti is in one of my other boxes. Where did you get 'replace' from? The 5700XT was to complete an all-AMD rig consisting of a 3900X and an AMD video card.

    Secondly, the 1080ti is now almost 4 freaking years old. You bet your rear end I'd expect it to outperform a top end card from almost 4 years ago, when it is currently STILL the best gpu AMD offers.

    And finally, I have over 20 years experience with both AMD cpus and gpus in various builds of mine, so don't give me that "bought one AMD product and decided they stink" B.S.

    I've been on both sides of the aisle. Don't try and tell me i'm a shill for Nvidia. I've spent way too much time and money around AMD systems for that to be true.
  • AnarchoPrimitiv - Saturday, September 26, 2020 - link

    You're a liar, I'm so sick of Nvidia fans lying about owning AMD cards
  • blppt - Saturday, September 26, 2020 - link

    Sure, the box sitting right next to my desk doesn't exist. Nor the 10 or so AMD cards I've bought over the past 20 years.

    1 5970
    2 7970s (for CFX)
    1 Sapphire 290x (BF4 edition, ridiculously loud under load)
    2 XFX 290s (much better cooler than the BF4 290x; mistakenly bought when I thought they would accept a flash to 290x, but got the wrong builds; for CFX)
    2 290x 8gb sapphire custom edition (for CFX, much, much quieter than the 290x)
    1 Vega 64 watercooled (actually turned out to be useful for a Hackintosh build)
    1 5700xt stock edition

    Yeah, I just made this stuff up off the top of my head. I guarantee I've had more experience with AMD videocards than the average gamer. Remember the separate CFX CAP profiles? I sure do.

    So please, tell me again how I'm only a Nvidia owner.
  • Santoval - Sunday, September 20, 2020 - link

    If the top-end Big Navi is going to be 30-40% faster than the 2080 Ti then the 3080 (and later on the 3080 Ti, which will fit between the 3080 and the 3090) will be *way* beyond it in performance, in a continuation of the status quo of the last several graphics card generations. In fact it will be even worse this generation, since Big Navi needs to be 52% faster than the 2080 Ti to even match the 3070 in FP32 performance.

    Sure, it might have double the memory of the 3070, but how much will that matter if it's going to be 15 - 20% slower than a supposed "lower grade" Nvidia card? In other words "30-40% faster than the 2080 Ti" is not enough to compete with Ampere.

    By the way, we have no idea how well Big Navi and the rest of the RDNA2 cards will perform in ray-tracing, but I am not sure how that matters to most people. *If* the top-end Big Navi has 16 GB of RAM, it costs just as much as the 3070 and is slightly (up to 5-10%) slower than it in FP32 performance but handily outperforms it in ray-tracing performance then it might be an attractive buy. But I doubt any margins will be left for AMD if they sell a 16 GB card for $500.

    If it is 15-20% slower and costs $100 more, no one but those who absolutely want 16 GB of graphics RAM will buy it; and if the top-end card only has 12 GB of RAM, there goes the large memory incentive as well.
  • Spunjji - Sunday, September 20, 2020 - link

    @Santoval, why are you speaking as if the 3080's performance characteristics are not already known? We have the benchmarks in now.

    More importantly, why are you making the assumption that AMD need to beat Nvidia's theoretical FP32 performance when it was always obvious (and now extremely clear) that it has very little bearing on the product's actual performance in games?

    The rest of your speculation is knocked out of whack by that. The likelihood of an 80CU RDNA 2 card underperforming the 3070 is nil. The likelihood of it underperforming the 3080 (which performs like twice a 5700, non-XT) is also low.
  • Byte - Monday, September 21, 2020 - link

    Nvidia probably has a good idea how it performs, with access to PS5/Xbox; they knew they had to be aggressive this round with clock speeds and pricing. As we can see, the 3080 is almost maxed out, with o/c headroom like that of AMD chips, and the price is reasonably decent, in line with 1080 launch prices before the minepocalypse.
  • TimSyd - Saturday, September 19, 2020 - link

    Ahh don't ya just love the fresh smell of TROLL
  • evernessince - Sunday, September 20, 2020 - link

    The 5700XT is RDNA1 and it's 1/3rd the size of the 2080 Ti. 1/3rd the size and only 30% less performance. Now imagine a GPU twice the size of the 5700XT, thus having twice the performance. Now add in the node shrink and new architecture.

    I wouldn't be surprised if the 6700XT beat the 2080 Ti, let alone AMD's bigger Navi 2 GPUs.
  • Cooe - Friday, December 25, 2020 - link

    Hahahaha. "Only matching a 2080 Ti". How's it feel to be an idiot?
  • tipoo - Friday, September 18, 2020 - link

    I'd again ask you why a laptop SoC would have an answer for a big GPU. That's not what this product is.
  • dotjaz - Friday, September 18, 2020 - link

    "This Intel Tiger" doesn't need an answer for Big Navi, no laptop chip needs one at all. Big Navi is 300W+, no way it's going in a laptop.

    RDNA2+ will trickle down to mobile APU eventually, but we don't know if Van Gogh can beat TGL yet, I'm betting not because it's likely a 7-15W part with weaker Quadcore Zen2.

    Proper RDNA2+ APU won't be out until 2022/Zen4. By then Intel will have the next gen Xe.
  • Santoval - Sunday, September 20, 2020 - link

    Intel's next gen Xe (in Alder Lake) is going to be a minor upgrade to the original Xe. Not a redesign, just an optimization to target higher clocks. The optimization will largely (or only) happen at the node level, since it will be fabbed with second gen SuperFin (formerly 10nm+++), which is supposed to be (assuming no further 7nm delays) Intel's last 10nm node variant.
    How well will that work, and thus how well 2nd gen Xe will perform, will depend on how high Intel's 2nd gen SuperFin will clock. At best 150 - 200 MHz higher clocks can probably be expected.
  • AMDSuperFan - Monday, September 21, 2020 - link

    What really concerns me is this Intel Tiger is built on a 10nm process that looks like it is as good or better than the TSMC 7nm process. https://en.wikipedia.org/wiki/7_nm_process. They do not talk about the +++ SuperFin variants for node density. We need Super Navi to fight against this. We know for laptops or gaming, nobody can touch Big Navi. Nvidia cannot. It will be much faster I think than the 3090 or even 4090 by everything I see. Nvidia might be out of their league finally and put into their place. I would not be surprised to see Big Navi designs at 150W with 10x the speed.
  • Rtx dude - Monday, September 28, 2020 - link

    The 3090 is twice the size of the 3080. So it will end big Navi in performance and benchmarks.
  • Showtime - Friday, September 18, 2020 - link

    Sorry, but with your name alone, we can't take anything you say seriously.
  • MrVibrato - Saturday, September 19, 2020 - link

    Oh, but there are quite a few who couldn't resist the bait. Which, and this is one of my guilty pleasures, makes for entertaining reading...
  • hecksagon - Friday, September 18, 2020 - link

    He's referencing the performance improvement we will see in AMD's integrated graphics once the architectural improvements from Big Navi trickle down.
  • dotjaz - Friday, September 18, 2020 - link

    In the meantime, Intel wins for the next 8 months or so. We don't even know if Van Gogh can beat TGL yet.
  • Spunjji - Friday, September 18, 2020 - link

    That's a charitable interpretation, but actually this account is just a sockpuppet for an Intel shill who got tired of waiting for AMD fanboys to show up and so made one of their own. Everything they post is nonsensical junk.
  • AMDSuperFan - Monday, September 21, 2020 - link

    Even as a superfan it seems obvious that our amd cannot fight against this new technology. How is a usurper faster than us on video for gaming? As the owner of ATI, we are a video gaming company. Nvidia with their Voodoo 2 technology has nothing on ATI. My 7900 card is still fast for many games even today - but the same cannot be said for Voodoo 2 cards. But we have big things on the future and we should get back our championship in 2022.
  • PixyMisa - Thursday, September 17, 2020 - link

    This was almost funny once.
  • SarahKerrigan - Thursday, September 17, 2020 - link

    Big Navi is a multi-hundred-watt dGPU. This is a laptop processor. They don't compete with each other at all.
  • Rtx dude - Monday, September 28, 2020 - link

    Thank you
  • StingkyMakarel - Friday, September 18, 2020 - link

    Has anyone tried running multiple single-threaded apps on Intel and AMD?
  • dsplover - Friday, September 18, 2020 - link

    Yes. A couple actually which each get a Core assignment. I’m an audio geek that came from using a PC for streaming from HDD’s (Seagate 10k SCSI Cheetahs) to a software based synthesizer enthusiast where single core performance is crucial.

    Started with AMD MPs/Tyan Tiger and Coppermine 1GHz CPU’s.

    I’ve concluded that CPU Cache or 4.4GHz on an Intel is optimal.
    Latency from extra cores causes me to adjust audio buffer sizes to compensate, which I noticed on the Intel quads. Conroe dual cores were faster for my core-locked synths. A larger CPU cache overcame the inefficiencies when the i7 Bloomfields hit the market.

    The Matisse 3800X was a great chip, but beyond its 8 cores it had the same latency issues.

    Looking forward to the Cezanne and maybe a Vermeer as I don’t need killer graphics, 2D is fine. Actually AST ASpeed 2500 Server chips on Supermicro and ASRock workstation/server boards is fine.

    What I seek is the single core performance crown. Intel i7 4790k’s are still in my racks. To make me jump to new builds is a larger cache from Intel. Tiger Lake at 50watts looks great for my needs. 4.4GHz is as good as it gets. But even my ancient i7 5775C using a discrete GFX card, using the 128MB L4 cache for audio (running at 3.3GHz) was on par with the 4GHz 4790k.

    So for me the CPU/IPC gains are appreciated, but cache and CPU running at 4+ GHz are really beneficial.

    Tiger Lake or Cezanne will finally show me the results I need to upgrade.
    Intel and AMD can add all of the Cores they want. Single core performance or larger cache to overcome the latency of additional Cores will mean I can run more high end Filters to shape my sounds with.
  • dromoxen - Saturday, September 19, 2020 - link

    For me these are still too-weak GFX, despite the 2x... When they can match or better my GTX 960 they might have a customer. Otherwise I'll stick to a downclocked Ryzen 4000 + GTX 960. I want low, low heat output, and a single APU would be ideal... ASRock DeskMini style.
  • Gondalf - Friday, September 18, 2020 - link

    Not only that, but the claimed 10-15% IPC boost of Zen 3 will barely be enough to get near Intel clock for clock. Intel's process still clearly clocks better, so the upcoming 8-core Tiger Lake will be an easy winner over an 8-core Zen 3.
    Notably, in productivity benches Intel's core badly beats Zen. Likely the cache structure is designed to perform great on standard laptop SW.
    As usual, a core has to be optimized for real SW, definitely not for synthetic benches. That's more or less the reason Xeon is right now a big winner over Epyc in the 32-cores-per-CPU market (the largest).
  • Spunjji - Friday, September 18, 2020 - link

    You say this every time a new AMD processor is due, and every time you're wrong, and the next time you say the same damned things again. 😑
  • close - Friday, September 18, 2020 - link

    Subjective opinion time. Ian & Andrei, leaving aside individual scores (great ST performance for Intel, great MT performance for AMD), which one would you buy for day to day "regular" work?

    I've read opinions like "I made it painfully clear that the top-of-the-line Intel CPU at its highest cTDP was only going up against a mid-grade Ryzen" and there's still room for personal opinion here. Should you have to buy one now "money no issue" and ignoring specialized fields (like AI stuff where AVX-512 makes sense) which would you put your money on?
  • Spunjji - Friday, September 18, 2020 - link

    I look forwards to the day when this Intel shill troll gets banned.
  • melgross - Sunday, September 20, 2020 - link

    Oh, come on. We have both AMD and Intel trolls here. They cancel out.
  • Spunjji - Sunday, September 20, 2020 - link

    There's being a fanboy, then there's creating an entire alter ego as some desperate attempt at satire that is inherently self-satirizing of the person running the account. I find this one deeply tiresome.
  • deil - Thursday, September 17, 2020 - link

    Not RIP, as Intel has already responded a few times, taking their 5% lead back against the Ryzen stack.
    Remember this chip will fight against Zen 3, which should bring ~20% gains on AMD's side.
    This would have been a great chip a year ago; it would have obliterated 3000-series mobile on all fronts. Against the 4800U it seems like a strong contender, but it does not dethrone the 4800U as the best mobile chip, as you are comparing 15W with 28W here. This wins in thick bois, while AMD is still unrivaled for thin and light laptops.
  • FreckledTrout - Thursday, September 17, 2020 - link

    Let's not go overboard there, buddy. You have TGL in laptops beating AMD architectures that are almost 2 years old, since AMD runs a little over a year behind by using prior-generation architecture (in the case of the GPU, over 2 years old). When AMD moves to current architectures in APUs, I think things will be pretty darn close on the CPU side, and AMD should win hands down with RDNA2.
  • senttoschool - Thursday, September 17, 2020 - link

    Zen3 on mobile is probably at least 9 months away. So TGL is competing against Renoir.
  • AMDSuperFan - Thursday, September 17, 2020 - link

    Fortunately we have Big Navi to help us out. I am looking forward to putting Intel back in their shoes with Big Navi.
  • Showtime - Friday, September 18, 2020 - link

    Who is "us", lol. Please go back to the AyMD reddit. We don't condone fanboyism here.
  • San Pedro - Friday, September 18, 2020 - link

    I'm wondering if AMD is trying to push this forward.
    For now it seems like consumers can choose TGL or Renoir based on their use scenario.
  • AMDSuperFan - Monday, September 21, 2020 - link

    Would it not be glorious for Zen 3 to come in at 5 watts with 50% performance as we can expect? 20% isn't so much but 50% would really change things.
  • TheinsanegamerN - Thursday, September 17, 2020 - link

    I wouldn't go that far. GPU-wise, Intel needs way more power to compete with the 15W Renoir. Not to mention any laptop with sufficient thermal headroom can use third-party software to raise the TDP for Ryzen 4000 mobile, gaining 15-20% performance in games.

    Other benchmarks go back and forth. On the surface intel might have a decent chip, but OEM implementation may not have the same performance.
  • Spunjji - Friday, September 18, 2020 - link

    You hit the nail on the head here - it's going to be *highly* dependent on how OEMs implement it. Still, good to see they finally sorted their process out - the efficiency of this is markedly improved, it's basically what I expected from Ice Lake in the first place.
  • AnarchoPrimitiv - Saturday, September 26, 2020 - link

    Maybe for literally 3 more weeks until Zen3 comes out, then it's just more embarrassment for Intel added to years of embarrassment... Being beaten by a company with less than a tenth of the resources, there's literally no excuse for it
  • Spunjji - Thursday, September 17, 2020 - link

    Came here to leave an identical comment before I've even read the article 😂
  • DigitalFreak - Thursday, September 17, 2020 - link

    The Tiger King puns are getting old.
  • huangcjz - Thursday, September 17, 2020 - link

    I still don't get it...
  • Luminar - Thursday, September 17, 2020 - link

    RIP AMD
  • tipoo - Friday, September 18, 2020 - link

    You haven't had the...Well I can't say pleasure, of watching Tiger King then
  • Flunk - Thursday, September 17, 2020 - link

    Wow, this naming scheme is even worse than the previous one. I've been patiently explaining to people for years that the number after the I is less important than that last letter.

    E.G. H > U > Y

    I can't even imagine how you'd explain this to someone who isn't a hardcore enthusiast. You basically need to look up each CPU number to know where in the stack it is. Might as well give up on the numbers entirely.
  • wr3zzz - Thursday, September 17, 2020 - link

    I am with you but it sounds like the 85 in 1185G7 is the new U.
  • ingwe - Thursday, September 17, 2020 - link

    Agree with Ian and Andrei. The power/naming shenanigans are just miserable.
  • Spunjji - Thursday, September 17, 2020 - link

    Intel's product naming division is its own circle of hell.
  • CajunArson - Thursday, September 17, 2020 - link

    You guys really REALLY need to update NAMD to the 2.14 nightly builds to get a real idea of what Willow Cove can do in a workload that is very heavily used in HPC: https://www.hpcwire.com/2020/08/12/intel-speeds-na...
  • IanCutress - Thursday, September 17, 2020 - link

    2.14 was NOT AVAILABLE as a mainline version when the test was built. It was recommended for stability that we used the 2.13 stable. REALLY
  • Luminar - Thursday, September 17, 2020 - link

    Chill brah
  • Spunjji - Friday, September 18, 2020 - link

    Are you incapable of making a useful post, or do you just choose not to?
  • Luminar - Saturday, September 19, 2020 - link

    https://www.anandtech.com/comments/16069/samsung-v...
  • Spunjji - Saturday, September 19, 2020 - link

    So you're choosing not to. Roger that.
  • Luminar - Saturday, September 19, 2020 - link

    Chill brah
  • HyperText - Thursday, September 17, 2020 - link

    Chill brah
  • Luminar - Saturday, September 19, 2020 - link

    Chill brah
  • Meteor2 - Thursday, October 15, 2020 - link

    Well I'm going to thank you for sharing a very interesting article.

    Reading that, seeing the AVX-512 results in the review, and reading about Larrabee (whose legacy is AVX-512), all underlines what a powerful addition AVX-512 is to x86.

    Even if it does need Intel engineers to code, just adding it to NAMD and Gromacs is huge.
  • shabby - Thursday, September 17, 2020 - link

    Intel: let's run our mobile cpu at 50watts, then we'll beat amd!
    What about battery life?
    Who cares!
  • Drumsticks - Thursday, September 17, 2020 - link

    This comment seems disingenuous. In the power consumption article, even AMD is boosting up to nearly 40W. It looks like Tiger Lake will be more power efficient than Renoir in lightly threaded workloads, and Renoir would be more efficient in heavily threaded ones that can use the entire SoC.
  • Spunjji - Thursday, September 17, 2020 - link

    Renoir boosts up to about 35W across 8 cores. Tiger Lake boosts to 50W across 4. That's a 42% difference. Even if Renoir actually hit 40W, that'd still be a 25% increase in power draw while boosting.
  • JayNor - Thursday, September 17, 2020 - link

    I usually have my laptop plugged in... don't really care how long the battery lasts then. Seems like the ability to choose higher performance is a nice feature.
  • ikjadoon - Thursday, September 17, 2020 - link

    Did we look at the same charts? Area under the curve, my friend. In extreme usages for thin-and-light laptops,

    15 W Renoir: 2842 seconds for 62660 joules
    15 W Ice Lake: 4733 seconds for 82344 joules
    15 W Tiger Lake: 4311 seconds for 64854 joules

    If we're looking at multi-threaded power consumption, Renoir & TGL should be close with a small lead for Renoir.

    Instantaneous power draw is higher for Tiger Lake, but that 43 W is for mere seconds and not indicative of actually how high it boosts for the entire period.
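    The area-under-the-curve point above is just energy divided by runtime. A minimal sketch, using the (seconds, joules) figures quoted in the comment:

```python
# Average power = total energy / time-to-completion, for the three 15 W
# configurations quoted above: (runtime in seconds, energy in joules).
runs = {
    "Renoir":     (2842, 62660),
    "Ice Lake":   (4733, 82344),
    "Tiger Lake": (4311, 64854),
}

for name, (seconds, joules) in runs.items():
    avg_watts = joules / seconds
    print(f"{name}: {avg_watts:.1f} W average over {seconds} s")
```

    Renoir finishes fastest but averages about 22 W; Tiger Lake averages roughly 15 W despite its brief 43 W spikes, which is why instantaneous draw alone is misleading.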
  • Spunjji - Friday, September 18, 2020 - link

    We did, I just hadn't had time to take the numbers in fully - and you're absolutely right.
  • RedOnlyFan - Friday, September 18, 2020 - link

    Lol 1st read the article before commenting. Take your fanboy stuff to wccftech you will fit in perfectly.
  • Spunjji - Friday, September 18, 2020 - link

    Whatever you say, buddy 🤷
  • Alistair - Thursday, September 17, 2020 - link

    So it is just Ice Lake again without any major improvements outside the integrated GPU people don't care about. Get double the cores for less money with Renoir.
  • Alistair - Thursday, September 17, 2020 - link

    This isn't 2019 anymore... we went from 4 to 8 cores for the same price and each core is +20 percent with AMD in the last 6 months.
  • Spunjji - Thursday, September 17, 2020 - link

    Bingo.
  • JayNor - Thursday, September 17, 2020 - link

    vs Ice Lake the new TGL architecture doubles the bandwidth of the ring bus, adds pcie4, lpddr5, thunderbolt4 and a much superior GPU ... When will AMD catch up? They still need to add avx512, dlboost, integrate wifi6 and now they are further behind. Or do they just add two more cores and declare it even since they can win at cinebench?
  • RSAUser - Thursday, September 17, 2020 - link

    PCIe 4 will be next gen, and for current parts it doesn't really matter: pretty much no consumer SSD can max it out, and the benefit for GPUs is questionable.

    LPDDR5 should be 2022 with their next gen, which also includes PCIe5 by then.

    AVX512 doesn't matter; it's not something you run on a laptop. DLBoost is Intel-trademarked, there are other ML libraries that AMD uses, and you're not really running ML training on a laptop CPU anyway, you'd use the GPU.

    The ring bus piece is an architecture difference, not sure why you're mentioning it? AMD's CCX design is better, Intel will be moving in that direction.

    In regards to integrated WiFi 6/802.11ax, that's a separate module added to the mobo, AMD is not a communication tech company.
  • JayNor - Saturday, September 19, 2020 - link

    Intel integrates the Wifi6 high speed digital components into the PCH chiplet in the same package with the cores on the laptop chips.

    They build a separate wifi6 chip that OEMs can use with the AMD chips.
  • JayNor - Saturday, September 19, 2020 - link

    Intel's ring bus wins...

    "We can also see that, even in the 15W configuration, Tiger Lake's dual ring bus delivers slightly more throughput than the 4800U's Infinity Fabric, and has 30% more throughput at 28W with dynamic tuning."

    https://www.tomshardware.com/features/intel-11th-g...
  • Rudde - Saturday, September 19, 2020 - link

    You mean Intel caught up with AMD? Intel had a little over half the throughput of Renoir, when Renoir came out. Now Intel has caught up with AMD with Tiger Lake. AMD will likely pull ahead with Cezanne, continuing the back and forth.
  • Spunjji - Saturday, September 19, 2020 - link

    Wow, parity at 15W and a win at nearly twice the TDP. Such wins.

    Seriously, why do you need to pathologically overstate their achievements?
  • MetaCube - Friday, October 23, 2020 - link

    "LPDDR5 should be 2022 with their next gen, which also includes PCIe5 by then." lmao
  • TheinsanegamerN - Thursday, September 17, 2020 - link

    Maybe you dont care about having a better iGPU, but clearly its a selling point.
  • huangcjz - Thursday, September 17, 2020 - link

    I care about the integrated GPU. I only buy MacBooks, so AMD isn't a choice. I can't afford £2,400 for the 16" MacBook Pro with discrete graphics.
  • playtech1 - Friday, September 18, 2020 - link

    I've got some bad news for you... chances of Apple releasing a Tiger Lake MacBook looks very very slim
  • tipoo - Friday, September 18, 2020 - link

    Sounds like their next Macbook releases are going to be Apple Silicon, not sure we'll ever see a TGL Apple system.
  • AMDSuperFan - Thursday, September 17, 2020 - link

    What worries me the most is that this Tiger is better than Renoir in every way possible. I feel like Intel is the Apple of laptops now and our AMD are some knockoff tablet with good specs but not up to snuff. This 4 core beating the 8 core Renoir is terrible. I know we have Big Navi coming and that should save us here, but right now the Nvidia and Intel products are really bad for us fans.
  • Spunjji - Friday, September 18, 2020 - link

    I worry about the mental health of the person running this account.
  • eddman - Thursday, September 17, 2020 - link

    Why intel didn't do 6-8 core low power models again? 10nm too power hungry? Low yields and/or low manufacturing capacity?
  • Spunjji - Thursday, September 17, 2020 - link

    Yes!

    But seriously, all of the above.
  • eek2121 - Thursday, September 17, 2020 - link

    Fab capacity.
  • RedOnlyFan - Friday, September 18, 2020 - link

    Hahaha. Fake information
  • Spunjji - Friday, September 18, 2020 - link

    What's your explanation then, Red? "They didn't want to"?

    They compete well with AMD at 15W but need 28W to get full performance from the design. Squeezing twice as many cores in would push them way, way off the bottom of their efficiency curve. They're running more complex cores than AMD and they require more power, no way around that.

    If yields were good enough they'd have had 8-core Ice Lake designs out taking the fight back to AMD on the desktop, but mysteriously they skipped those and rehashed Skylake again. It's almost like something was holding them back...
  • JayNor - Thursday, September 17, 2020 - link

    Intel chose to integrate high performance wifi6, thunderbolt 4, avx512, dlboost, and pcie4 features rather than the "more small hammers" approach.

    Alder Lake will have even smaller and lower power cores than AMD's, so perhaps next year the choice for Cinebench processing will get funny.
  • RSAUser - Thursday, September 17, 2020 - link

    You mentioned this again, so I'll comment again:

    WiFi 6/802.11ax: AMD does not do networking equipment, it's also not part of the CPU, it's an
    extra module attached to the mobo.

    PCIe 4: No benefit in laptops; there's no consumer SSD that can really max it out, and no benefit GPU-wise either. PCIe 4 also consumes a lot more power than gen 3.

    Thunderbolt 4: You actually mean USB 4.

    AVX512: Not many things actually use this, a majority of those use-cases can just go GPU, and you're not really running an AVX512 workload on a laptop.

    DLBoost: Intel's ML library, you're not training ML models on a laptop CPU, you'd near always want to use a GPU instead, plus that specific one is Intel's trademark one, you'd use open source alternatives.

    AMD's leaked roadmaps show USB 4 and PCIe 4 in 2022, and here you didn't mention LPDDR5, which is also included in that release.
  • ikjadoon - Thursday, September 17, 2020 - link

    You wrote this twice without any references, but I'll just write this once:

    AMD is literally moving to custom Wi-Fi 6 modems w/ Mediatek (e.g., like ASMedia and AMD chipsets): https://www.tomshardware.com/news/report-amd-taps-...

    PCIe4: it doesn't need to 'max out' a protocol to be beneficial and likewise allows fewer lanes for the same bandwidth (i.e., PCIe Gen4 also powers the DMI interface now, no?).

    Thunderbolt 4 is genuinely an improvement over USB4. Anandtech wrote an entire article about TB4: https://www.anandtech.com/show/15902/intel-thunder... (mandates unlike USB4, 40 Gbps, DMA protection, wake-up by dock, charging, daisychaining, etc). Anybody who's bought a laptop in the past two years knows that "USB type-C" is about as informative as "My computer runs an operating system."

    AVX512 / DLboost: fair, nobody cares on a thin-and-light laptop.

    LPDDR5 is likely coming in 2021 to a Tiger Lake refresh around CES. Open game how many OEMs will wait; noting very few of the 100s of laptop design wins have been released, I suspect many top-tier notebooks will wait.
  • Billy Tallis - Thursday, September 17, 2020 - link

    I'd be surprised if the chipset is using gen4 speeds for the DMI or whatever they call it in mobile configurations. The PCIe lanes downstream of the chipset are all still gen3 speed, so there's not much demand for increased IO bandwidth. And last time, Intel took a very long time to upgrade their chipsets and DMI after their CPUs started offering faster PCIe on the direct attached lanes.
  • JayNor - Saturday, September 19, 2020 - link

    4 lanes of pcie4 are on the cpu chiplet, as are the thunderbolt io. They can be used for GPU or SSD.
  • Billy Tallis - Saturday, September 19, 2020 - link

    Did you mean to reply to a different comment?
  • RedOnlyFan - Friday, September 18, 2020 - link

    Lol this is such an uneducated comment. Telling wrong stuff twice doesn't make it correct.

    PCIe 4 implemented properly should consume less power than PCIe 3.
    Thunderbolt 4 is not USB 4. Only TB3 was open-sourced into USB 4, so USB 4 will be a subset of TB3; thank Intel for that.

    There is more AI/ML used in the background than you realize. If you expect people to do highly multi-threaded rendering stuff, why not expect AI/ML stuff?

    And 2022 is still 1.5 years away. So AMD is entering the party after it's over.
  • JayNor - Saturday, September 19, 2020 - link

    Thunderbolt 4 doubles the pcie speed vs Thunderbolt 3 that was donated for USB. Intel has also now donated the Thunderbolt 4 spec.
  • Spunjji - Friday, September 18, 2020 - link

    They have 4 (four) lanes of PCIe 4.0 - that provides the same bandwidth as Renoir's 8 lanes of 3.0

    I get that you're one of those posters who just repeats a list of features that Intel has and AMD doesn't in order to declare a "win", but seriously, at least pick one that provides a benefit.
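    For anyone wanting to sanity-check the lane math, here's a quick back-of-the-envelope sketch (a rough illustration assuming the standard 8 GT/s and 16 GT/s per-lane raw rates and 128b/130b encoding for gens 3 and 4; the helper function name is my own):

    ```python
    def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
        """Usable bandwidth in GB/s for a PCIe link (gens 3 and 4 only)."""
        raw_gt_per_s = {3: 8.0, 4: 16.0}[gen]       # GT/s per lane
        encoding = 128 / 130                         # 128b/130b line encoding
        return raw_gt_per_s * encoding * lanes / 8   # bits -> bytes

    tgl = pcie_bandwidth_gbps(4, lanes=4)     # Tiger Lake: 4 lanes of gen 4
    renoir = pcie_bandwidth_gbps(3, lanes=8)  # Renoir: 8 lanes of gen 3
    print(f"TGL 4x gen4:    {tgl:.2f} GB/s")
    print(f"Renoir 8x gen3: {renoir:.2f} GB/s")
    ```

    Both come out to roughly 7.9 GB/s, since gen 4 exactly doubles the per-lane rate while keeping the same encoding.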
  • JayNor - Saturday, September 19, 2020 - link

    The m.2 pcie4 chips use 4 lanes. Seems like a good combo with Tiger Lake. AMD would need to use up 8 lanes to match it with their current laptop chips.
  • Rudde - Saturday, September 19, 2020 - link

    Problem is that there aren't any reasonable mobile PCIe 4 SSDs yet. Same problem with LPDDR5. Tiger Lake will get them when they become available. Renoir was released half a year ago; all AMD based laptops will wait for next gen before adopting these technologies anyway.

    If you want to argue that AMD is behind, highlight what Ice Lake has, but Renoir doesn't have.
  • Spunjji - Saturday, September 19, 2020 - link

    Why would they bother? There are no performance benefits to using a PCIe 4 SSD in the kinds of systems TGL will go into. You can't get data off it fast enough for the read speed to matter, and it has no effect on any of the applications anyone is likely to use on a laptop that has no GPU. This is aside from Rudde's point about there currently being no products that suit this use case.
  • JfromImaginstuff - Friday, September 18, 2020 - link

    Intel is planning to release an 8-core, 16-thread SKU, confirmed by one of their management (can't remember his name), but when that'll reach the market is a question mark
  • RedOnlyFan - Friday, September 18, 2020 - link

    With the space and power constraints you can choose to pack more cores or other features that are also very important.
    So Intel chose to add 4c + the best igpu + AI + neural engine + thunderbolt + Wi-Fi 6 + pcie4.
    Amd chose 8cores and a decent igpu.
    So we have to choose between raw power and a more useful package.

    For a normal everyday use an all round performance is more important. There are millions who don't even know what cinebench is for.
  • Spunjji - Friday, September 18, 2020 - link

    Weird that you're calling it "the best iGPU" when the benchmarks show that it's pretty much equivalent to Vega 8 in most tests at 15W with LPDDR4X, which is how it's going to be in most notebooks.

    Funny also that you're proclaiming PCIe 4 to be a "useful feature" when the only thing out there that will use it in current notebooks is the MX450, which obviates that iGPU.

    I could go on, but really, Thunderbolt is the only one I'd say is a reasonable argument. A bunch of AMD laptops already have Wi-Fi 6.
  • JayNor - Saturday, September 19, 2020 - link

    but Intel has lpddr5 support built in. Raising memory data rate by around 25% is something that should show up broadly as more performance in the benchmarks.

    Intel's Tiger Lake Blueprint Session benchmarks were run with lpddr4x, btw, so expect better performance when lpddr5 laptops become available.

    https://edc.intel.com/content/www/us/en/products/p...
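    As a rough check on that ~25% figure (a sketch assuming the commonly quoted LPDDR4X-4266 and LPDDR5-5400 speed grades and a typical 128-bit total bus width; the helper function is illustrative only):

    ```python
    def mem_bandwidth_gbs(mt_per_s: int, bus_bits: int = 128) -> float:
        """Peak bandwidth in GB/s for a memory bus at the given transfer rate."""
        return mt_per_s * 1e6 * bus_bits / 8 / 1e9   # transfers/s * bytes/transfer

    lp4x = mem_bandwidth_gbs(4266)  # LPDDR4X-4266, as tested here
    lp5 = mem_bandwidth_gbs(5400)   # LPDDR5-5400, per the platform spec sheet
    print(f"LPDDR4X-4266: {lp4x:.1f} GB/s")
    print(f"LPDDR5-5400:  {lp5:.1f} GB/s")
    print(f"uplift: {lp5 / lp4x - 1:.1%}")
    ```

    That works out to roughly a 27% peak bandwidth uplift, in line with the "around 25%" quoted above, though real-world gains depend on latency and the rest of the memory subsystem.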
  • Spunjji - Saturday, September 19, 2020 - link

    I understand and agree. My point was, what does "support" matter if it's not actually usable in the product? This will be an advantage when devices with it release. Right now, it's irrelevant.
  • abufrejoval - Friday, September 18, 2020 - link

    I'd say going for the biggest volume market (first).

    Adding cores costs silicon real-estate and profit per wafer and the bulk of the laptop market evidently doesn't want to pay double for eight cores at 15 Watts.

    Being a fab, Intel doesn't seem to mind doing lots of chip variants; for AMD it seems to make more sense to go for volume and fewer variants. The AMD 8-core APU covers a lot of desktop area, but also laptops, where Intel just does a distinct 8-core chip.

    Intel might even do distinct iGPU variants at higher CPU core counts (not just via binning), because the cost per SoC layout is calculated differently.... at least as long as they can keep up the volumes.

    I'm pretty sure they had a lot of smart guys run the numbers, doesn't mean things might not turn out differently.
  • Drumsticks - Thursday, September 17, 2020 - link

    Regarding:

    Compromises that had been made when increasing the cache by this great of an amount is in the associativity, which now increases from 8-way to a 20-way, which likely increases conflict misses for the structure.

    On the L3 side, there’s also been a change in the microarchitecture as the cache slice size per core now increases from 2MB to 3MB, totalling to 12MB for a 4-core Tiger Lake design. Here Intel was actually able to reduce the associativity from 16-way to 12-way, likely improving cache line conflict misses and improving access parallelism.

    ---

    Doesn't increasing cache associativity *decrease* conflict misses? Your maximum number of conflict misses would be a direct mapped cache, where everything can go into only one place, and your minimum number of conflict misses would be a fully associative cache, where everything can go everywhere.

    Also, isn't it weird that latency increases with the reduced associativity of the new L3? I guess the fact that it's 50% larger could have a larger impact, but I'd have thought reducing associativity should improve latency and vice versa, even if only slightly.
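    The conflict-miss point above can be illustrated with a toy LRU cache simulation; the set counts, way counts, and access pattern below are made up purely for illustration:

    ```python
    from collections import OrderedDict

    def count_misses(addresses, num_sets, ways):
        """Simulate an LRU set-associative cache of num_sets * ways lines;
        return the number of misses for the given address trace."""
        sets = [OrderedDict() for _ in range(num_sets)]
        misses = 0
        for addr in addresses:
            s = sets[addr % num_sets]       # simple modulo set indexing
            if addr in s:
                s.move_to_end(addr)          # hit: mark most-recently-used
            else:
                misses += 1
                if len(s) >= ways:
                    s.popitem(last=False)    # evict least-recently-used line
                s[addr] = True
        return misses

    # Four addresses that all collide on the same set index, looped repeatedly.
    pattern = [0, 16, 32, 48] * 100
    print(count_misses(pattern, num_sets=16, ways=1))  # direct-mapped: misses every access
    print(count_misses(pattern, num_sets=4, ways=4))   # same 16 lines, 4-way: only cold misses
    ```

    With the same total capacity, the direct-mapped layout thrashes on this pattern while the 4-way layout absorbs all four conflicting addresses, which is exactly why higher associativity should decrease conflict misses.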
  • Drumsticks - Thursday, September 17, 2020 - link

    Later on, there is:

    The L2 seemingly has gone up from 13 cycles to 14 cycles in Willow Cove, which isn’t all that bad considering it is now 2.5x larger, even though its associativity has gone down.

    ---

    But in the table, associativity is listed as going from 8 way to 20 way. Is something mixed up in the table?
  • AMDSuperFan - Thursday, September 17, 2020 - link

    How does this compare with Big Navi? It seems that Big Navi will be much faster than this right?
  • Spunjji - Friday, September 18, 2020 - link

    🤡
  • JayNor - Thursday, September 17, 2020 - link

    I noted from Intel's Thunderbolt 3 documents that the ports are bidirectional and can, for example, support pcie send while receiving display port on the same cable.

    Is it possible, for example, to use an external GPU card with one cable to display output on your laptop's display?
  • Spunjji - Friday, September 18, 2020 - link

    I understand that you can, but it drops performance significantly.
  • Spunjji - Thursday, September 17, 2020 - link

    Oof, those GPU benchmarks are painful. It's become clear that their predictions were all based on LPDDR5
  • PeachNCream - Thursday, September 17, 2020 - link

    Yes the GPU results are somewhat disappointing, but there is only so much you can do when sharing bandwidth to RAM with the CPU cores and everything else. Of course, there is also significant latency to contend with when you don't have GDDR5/6 available to the graphics processor.
  • Spunjji - Friday, September 18, 2020 - link

    Absolutely - it's become pretty clear that's why AMD decided to go with 8 CUs on Renoir. I wasn't expecting anything huge from Xe on TGL, but Intel were pushing it as a big win and really it's just not - at least, not in this form. A ~20% bandwidth boost might well translate into big gains on later devices.
  • JayNor - Saturday, September 19, 2020 - link

    false, they used lpddr4x for the benchmarks.
  • Spunjji - Saturday, September 19, 2020 - link

    I think perhaps you misread what I said. Intel were previously showing numbers suggesting a huge leap in performance for their Xe iGPU which isn't borne out by the testing done here. I'm suggesting that it's because Intel were quoting numbers based on an LPDDR5 implementation. They could have just been lying, though. You seem to be suggesting the latter?
  • undervolted_dc - Thursday, September 17, 2020 - link

    Ok, but why are there no power consumption figures for the Ryzen? I see a 28W Tiger Lake which has a peak of 50+ W and an average of 35 W, and another "28W" Tiger Lake which instead has a peak of 50+ W and an average of 38 W...
    Is the 4800U thermal throttling because the Tiger Lake is better cooled? This "reference" unit and the lack of a real power usage comparison are "strange" to me... We started from the Intel benchmarks... and now we are here... Where will we be in one more month, when Tiger Lake is in stores along with its prices? Will it still be comparable to the 4800U? (And I'm also sure that Asus will not cover the air intake on the Tiger Lake one to be able to sell them against the 4800U, given the higher prices they have to ask...)
    And where will we be when Zen 3/Cezanne is revealed? You will see the Zen 3 IPC and frequency gains in a month (maybe even before Tiger Lake reaches stores)... I see no Baskin here..
  • undervolted_dc - Friday, September 18, 2020 - link

    Also ram is unfair for this comparison:
    intel LPDDR4X-4266
    vs amd DDR4-3200

    but again.. at the end of the day the only thing that matters is the price/performance/power-usage balance for a laptop, and here I see only a performance comparison.. (in unknown thermal/power conditions for the tests) and with no word about price..

    we see here a coming-soon quad-core Intel with power usage higher than the 1-year-old 8-core AMD, which is also faster in full-speed tests (not in GPU, but that's the old Navi chip)

    yes, Intel wins the single-core benches.. but their high-frequency single cores run at 20% higher frequency than AMD's, are 20% faster, eat 50% more power, are ~1 year newer, and will probably cost 50% more.. so who is the real winner?
  • undervolted_dc - Friday, September 18, 2020 - link

    a mere 4500U with LPDDR4X shows an average +17% improvement in benchmarks..

    https://optocrypto.com/amd-ryzen-4000-adding-suppo...
  • Spunjji - Friday, September 18, 2020 - link

    Really not sure what you're chatting about here - the Renoir system on test uses LPDDR4X at the same frequency as the TGL system.

    The only really relevant query would be over the thermal design of the system, but even then, the power consumption charts (did you check them?) show that the TGL system sticks to its limits in 15W mode.

    Basically it feels like you're scrabbling for complaints that aren't really justified.
  • proflogic - Thursday, September 17, 2020 - link

    *yawn* Looks like I'll skip another generation. It looks like PCIE ACS started getting into root ports with ICL, so I was hoping for more impressive performance gains with TGL (probably shouldn't have expected much with still having 4 cores). For my intended workloads, this generation isn't worth the investment. YMMV.

    I'm probably waiting for whenever AMD gets USB4 into their platform.
  • ksec - Thursday, September 17, 2020 - link

    Those GPU benchmarks look very strange. And some are missing the AMD 15W variants, others are missing Intel's.....

    So the graph looks very unbalanced.
  • IanCutress - Thursday, September 17, 2020 - link

    Limited time to test. I set the gaming benchmark script to run overnight, and until I actually look at the output data, I won't know if a particular test has failed or not run properly. Then Intel asked for the system back on Sunday.

    I technically went on holiday from Monday (booked ages ago), had to take the Lenovo AMD laptop with me to test in the hotel room, and lost two days of holiday to writing up the review. Turns out in the middle of nowhere you can't really download borderlands 3 in a reasonable time scale.
  • ksec - Friday, September 18, 2020 - link

    Thanks Ian for the hard work. I don't know how the publisher or other people think. Personally I wish this had been the first thing spelled out in the article. It also feels like Intel is rushing things a little bit this time.
  • asfletch - Friday, September 18, 2020 - link

    Wow you can still travel? Not sure whether envious...(Australian here, locked down to 5km radius).
  • Spunjji - Friday, September 18, 2020 - link

    Damn! That's dedication, but I sincerely hope you get some proper time out soon to make up for it. 👍
  • MCPicoli - Thursday, September 17, 2020 - link

    Locking security fixes behind "premium" versions? How about NO?
  • ballsystemlord - Saturday, September 19, 2020 - link

    I agree. Security only for businesses is stupid.
  • WaltC - Thursday, September 17, 2020 - link

    Wake me in six months if and when you tag a retail unit to test...;) Intel is so thoughtful to send out unfinished units for Intel-managed reviews with pre-conditions and canned scenarios to try and make it appear to equal or exceed AMD's presently shipping, finished products. Yawn...what else is new?
  • IanCutress - Thursday, September 17, 2020 - link

    Our benchmarks are all our own. We decide what is relevant to test. From a microarchitecture standpoint, as long as the system doesn't fail and our tests run, it's good. We'd rather have an opportunity to test reference designs ahead of launch as a base comparison point rather than not at all. I actively encourage AMD, Intel, Qualcomm etc to do this.
  • Spunjji - Friday, September 18, 2020 - link

    This is a really rude and off-base comment. They've been totally up-front about the limitations of the review hardware and the tests are the same tests they always do.
  • SystemsBuilder - Thursday, September 17, 2020 - link

    Ian,

    Thank you for a deep, insightful and well written review!
    Despite Intel's best efforts to confuse us (biggest job growth at Intel must be marketing, with the possible exception of legal), you make it easier to get past the marketing BS and get to the core truths.
    This is why I come here.
  • surt - Thursday, September 17, 2020 - link

    Apologies for not knowing where to ask this, but why are we getting cpu reviews when other sites have had their 3080 benchmarks up for more than 48 hours now. Did those sites break embargo, or did Anandtech not get a review part .... what's going on?
  • IanCutress - Thursday, September 17, 2020 - link

    Follow us on twitter. Ryan is currently dealing with West Coast fires and a delayed test bed.
  • surt - Thursday, September 17, 2020 - link

    Ah thanks for the answer. I live in CA so yeah ... the fires are out of this world this year. Go team 2020!
  • IanCutress - Thursday, September 17, 2020 - link

    Also, this was an embargoed launch :) I cover CPUs, Ryan does GPUs, and we're on opposite ends of the world.
  • shabby - Friday, September 18, 2020 - link

    You're in Florida? 😂
  • PeachNCream - Friday, September 18, 2020 - link

    In this world, there is only Florida. Nothing else aside from the bug-infested swampland of theme parks matters so it may as well simply not exist at all.
  • DannyH246 - Thursday, September 17, 2020 - link

    Hahaha Another marketing presentation from www.inteltech.com. Wake me up when you have actual hardware.
  • IanCutress - Thursday, September 17, 2020 - link

    ... Did you actually read the review.
  • DannyH246 - Thursday, September 17, 2020 - link

    Ian - as you are fully aware, it's a reference unit supplied by Intel with various restrictions applied, i.e. no battery tests allowed.

    As you are also fully aware battery Performance is super important when it comes to this form factor of device.

    As such this ‘review’ serves no purpose apart from trying to influence people buying laptops to hold off on buying AMD and wait for ‘Intel's new super chip’, i.e. the same message they have been pumping out for the last 2+ years. Look what we've got coming, blah blah blah.

    So as I said - wake me up when there’s actual hardware available and we can have a proper test.
  • PeachNCream - Thursday, September 17, 2020 - link

    I want to see retail products as well, but we at least get an understanding of processor and graphics performance. Battery life will vary greatly from one laptop to another anyway given configurable TDP, screen resolution and size, battery capacity, and a bunch of other factors. In addition to that, the limitations in testing were disclosed at the beginning of the article so readers were advised had they any sort of reading comprehension.

    While there are some fair complaints to make about AT (where's the edit button?!), I don't think there is any sort of bias influencing the results of this article.
  • Spunjji - Friday, September 18, 2020 - link

    Some people just want things to complain about 🤷
  • IanCutress - Thursday, September 17, 2020 - link

    You're coming at it from the product point of view. We're coming at it from a semiconductor point of view. That's why we have details about the core, the cache, and raw performance on standardised metrics. Performance is one piece of the puzzle, I agree, for end products. But getting a chance to test one example of performance 6+ weeks before retail availability is something I've been pushing Intel and AMD to offer for years. Qualcomm already does with their Snapdragon reference designs. Intel and AMD are slowly getting on board.
  • PixyMisa - Thursday, September 17, 2020 - link

    The standout point of Tiger Lake is single-threaded performance, and that's unlikely to change much on production hardware. From that perspective this preview is great.
  • Spunjji - Friday, September 18, 2020 - link

    If you can't draw some obvious conclusions about likely battery life from the power charts, then maybe this isn't the site for you?

    The eventual products will all have different battery life based on individual implementations anyway, so testing a single unit and trying to extrapolate to others really wouldn't get you very far. This is a solid preview that sets us up to at least expect better efficiency from TGL.
  • GeoffreyA - Thursday, September 17, 2020 - link

    Surprised, but quite impressed with Tiger Lake. Good job, Intel.
  • bernstein - Thursday, September 17, 2020 - link

    a bit confused about the conclusion...

    looking at the tests I mostly see AMD's 4800U being much faster than Intel's 15W part... and for a given task also a bit better efficiency-wise (due to being faster). So it seems the 4800U is the better part.
  • Spunjji - Friday, September 18, 2020 - link

    Depends on your workload, basically. Same goes for GPU performance. Seems like a genuinely competitive situation for the first time in a long time!
  • isthisavailable - Thursday, September 17, 2020 - link

    You should really move to some newer, more popular games.
  • Roy2002 - Thursday, September 17, 2020 - link

    Do you game a lot on an ultra-thin laptop?
  • isthisavailable - Thursday, September 17, 2020 - link

    That's the goal, yeah. I definitely would if I could.
  • Spunjji - Friday, September 18, 2020 - link

    Must admit that I too would appreciate this, but I feel for Ian with the number of tests he's doing already.
  • yeeeeman - Thursday, September 17, 2020 - link

    The big positive from this review is the very promising showing of Xe architecture. 10nm looks OK now, but even if 10nm is sorted now, Intel is one node behind TSMC, so...they still have a lot of work to do. As for the cores, they need fatter cores with smaller cores together. Oh, that is what they will do with Alder Lake. Great.
  • Oxford Guy - Thursday, September 17, 2020 - link

    So, Intel used a giant refrigerator to showcase a part some time back and people justly ridiculed that. And yet, here we are with a mysterious clean and dagger laptop protected by asinine opacity conditions.

    Maybe it’s time for the tech press to stop enabling Intel’s shenanigans?
  • Oxford Guy - Thursday, September 17, 2020 - link

    Cloak and dagger. Apple’s autodefect didn’t like the phrase apparently.
  • Spunjji - Friday, September 18, 2020 - link

    It's not like we can buy anything based off this info yet anyway. As a tech fan I like to get an early idea, but I wouldn't be buying a product 'til I see that actual product reviewed.
  • Oxford Guy - Sunday, September 20, 2020 - link

    Vaporware doesn't excite me.
  • shoestring - Thursday, September 17, 2020 - link

    On the first page of the article, "The system we have to hand is one of..." should read "The system we have IN hand is one of..."? And "To complicate the issue, Intel by definition is only publically offering..." should be "...publicly offering..."
    Signed,
    Your cloud-based, crowd-sourced editorial staff
  • huangcjz - Thursday, September 17, 2020 - link

    No, you can say "have to hand" as in something which is available. E.g. "Do you have the presentation to hand?"
  • 29a - Thursday, September 17, 2020 - link

    Wouldn't a non-Iris chip be a fairer comparison to Renoir?
  • Kamen Rider Blade - Thursday, September 17, 2020 - link

    AMD's 4800U has a 25 watt mode, Hardware UnBoxed tested it against Intel.

    Why didn't you test it and put those results in the chart?

    Why the biased reviewing where one side gets 15 watt and 28 watt scores?

    Yet AMD isn't allowed to show 25 watt scores?

    What are you afraid of when comparing like for like?
  • IanCutress - Thursday, September 17, 2020 - link

    For us, the 15W to 15W results were the focal point. 28W is there to show a max Intel result and look at scaling. Also, the number of 4800U devices running at 25W is minimal.

    Not only that, I'm on holiday. I had to spend two days out, while in this lovely cottage in the countryside, to write 18k words, rather than spend time with my family. I had 4 days with the TGL laptop, and 8 days notice in advance to prepare before the deadline. Just me with a couple of pages from Andrei, no-one else. Still posted the review 30 minutes late, while writing it in a pub as my family had lunch. Had to take the amd laptop with me to test, and it turns out downloading Borderlands 3 in the middle of nowhere is a bad idea.

    Not only that, I've been finishing up other projects last week. I do what I can in the time I have. This review is 21k words and more detailed than anything else out there done by a single person currently in the middle of a vacation. If you have further complaints, our publisher's link is at the bottom of the webpage. Or roll your own. What are you afraid of? I stand by my results and my work ethic.
  • PixyMisa - Thursday, September 17, 2020 - link

    I really appreciate the effort. The individual SPEC results are vastly more useful than (for example) a single Geekbench score.
  • Spunjji - Friday, September 18, 2020 - link

    I can second that - I appreciate seeing a breakdown of the strengths/weaknesses of each core design.
  • Kamen Rider Blade - Friday, September 18, 2020 - link

    We appreciate your hard work; I do watch your YT channel Tech Tech Potato. That being said, if you knew about this issue of not comparing like for like, then just omit the 28 W scores from the Intel machine and focus on Intel's 15W vs AMD's 15W.

    Why even include the 28W on the chart? You know how this makes you and Anandtech look, right? The issues of bias towards or against any entity could've been easily avoided if you had like-for-like scores across the board. That's part of what Steve from Gamers Nexus and many of us enthusiasts see as "Bias Marketing" or "Paid Shilling" to manipulate results one way or another. Many people can easily interpret your data in many wrong ways when it doesn't show like for like and they have no context for it.

    If you didn't want to test AMD's 25 watt scores, nobody would care; just don't bring up Intel's equivalent 28 watt scores. A lot of the more casual readers won't look at the details, and they can easily misinterpret things. I prefer that your good name doesn't get dragged through the mud over a simple omission of certain benchmark figures. I know you wouldn't deliberately do that to show bias towards one entity or another, but will other folks know that?
  • Spunjji - Friday, September 18, 2020 - link

    Presenting the figures he has isn't bias. Bias would be proclaiming Intel to be the winner without noting the discrepancy, or specifically choosing tests to play to the strength of one architecture.

    As it is, the Lenovo device doesn't do a 25W mode, so you're asking him to add a full extra device's worth of testing to an already long review. That's a bit much.

    If you take a look at the 65W APU results and compare them, you'll see a familiar story for Renoir - there's not actually a whole lot of extra gas in the tank to be exploited by a marginally higher TDP. It performs spectacularly well at 15W, and that's that.
  • Kamen Rider Blade - Friday, September 18, 2020 - link

    You can literally just omit the 65W APU; it has no relevance on that chart.

    Ok, if that Lenovo laptop doesn't offer a 25W mode, fine. Maybe Hardware Unboxed got a different model of laptop for the 4800U. Then don't present Intel's 28W mode.

    That's how people misunderstand things: when there is a deliberate omission of information, or extra information that the other side doesn't have. The lack of pure like-for-like causes issues.
  • Spunjji - Saturday, September 19, 2020 - link

    You're *demanding bias*. They had the Intel device with a 28W mode, 28W figures are a big part of the TGL proposition, so they tested it and labelled it all appropriately. That isn't bias.

    The "lack of pure like for like" only causes issues if you don't really pay attention to what the article says about what they had and how they tested it.
  • huangcjz - Friday, September 18, 2020 - link

    I'm interested in the 28W data, having a laptop which uses the 28W Ice Lake i7-1068NG7, in order to be able to compare.
  • GeoffreyA - Friday, September 18, 2020 - link

    As always, Ian, we thank you for it and appreciate all the hard work put in. Also, this sort of detail is why people come to AnandTech. Hope you and the family enjoy the rest of the holiday. Please, forget technology for a few days!

    I enjoyed reading the review and was surprised that TL turned out to be pretty strong; but I suppose that's Sunny/Willow Cove showing its stuff at last. Admittedly, things are murky with power. On AMD's side, there are more cores at better power, an important point, and Zen 3 is on its way too, so all in all, exciting competition up ahead.
  • RedOnlyFan - Friday, September 18, 2020 - link

    Thank you Ian for your dedication and efforts. The only website I trust for everything semiconductor related.
  • DigitalFreak - Thursday, September 17, 2020 - link

    Intel is worried about AMD in a major way. Their cringe worthy "press day" for Tiger Lake was pathetic. They mentioned AMD more than their own products.
  • RedOnlyFan - Friday, September 18, 2020 - link

    Lol, still better than AMD running Intel vs AMD comparisons on center stage at every tech show... That's clingy for you.
    At least Intel has their previous gen products.. Lol, dare AMD to do that.
  • Spunjji - Friday, September 18, 2020 - link

    These two comments are like mirror images of each other. 😬😅
  • huangcjz - Thursday, September 17, 2020 - link

    FML, I literally just spent £2,000 on buying a 13" MacBook Pro with a 2.3 GHz - 4.1 GHz 28W Ice Lake i7-1068NG7 with Gen11 graphics a few weeks ago, and now there's a new 3.0 GHz - 4.8 GHz replacement processor, with graphics which are much faster?! This MacBook Pro model was only launched in May - that's only 4 months ago, and it's already outdated! I thought that it'd only be updated after a year... The only reason I got this one instead of a MacBook Air at half the price was because I occasionally play Civ VI on it, not enough to justify the 16” model with the discrete graphics, which I couldn't afford. Otherwise, I'm a very light user of the computer.
  • Spunjji - Friday, September 18, 2020 - link

    First point - TGL was announced at least a couple of months back.

    Second - as far as anybody knows the next MacBooks will have Apple-designed ARM processors, so you've probably got the highest-performing Intel-based MBP13 they'll ever release. 👍
  • Meteor2 - Thursday, October 15, 2020 - link

    Slightly confused that you seem to be involved with reviewing Intel components (see comments a little way below), but didn't know about TGL?
  • Meteor2 - Thursday, October 15, 2020 - link

    Oh you were quoting Jim Salter nvm 🤦
  • MadManMark - Thursday, September 17, 2020 - link

    I don't quite get the "sardine oil basting AMD" analogy?

    Is that some English thing, can you Britsplain the meaning for this ignorant Yank?
  • MamiyaOtaru - Thursday, September 17, 2020 - link

    ironically it's actually a super American pop culture reference: like some other things in the article it's a reference to something from Tiger King
  • huangcjz - Thursday, September 17, 2020 - link

    Ars Technica has disclosed that the system was made by MSI, and that it kinda resembles what will eventually become their Prestige 14 Evo system.
  • Oxford Guy - Thursday, September 17, 2020 - link

    "Kinda resembles" doesn't mean it might not have super special cooling to show the chip in an artificially good light.

    If that level of cooling isn't going to be in the market then the results are marketing distortion.
  • Oxford Guy - Thursday, September 17, 2020 - link

    Before anyone says it can't be special, ask yourself why there are special conditions, like the restrictions on photographing the inside, etc. etc. The review said the cooling is overbuilt and not something for the marketplace.

    Remember Intel's overbuilt fridge that it used to sucker people?
  • Spunjji - Friday, September 18, 2020 - link

    Better cooling won't change the fact that it has a 15W power limit imposed for those particular tests, which Ian confirmed through testing - it just means that temperature won't be the limit. I really, really don't think this is a distorting factor for this comparison.

    If the OEMs put it into laptops that can't actually cool 15W, that's kind of on them. I suspect it'll happen (though not as much as it happens with AMD designs, natch).
  • Oxford Guy - Sunday, September 20, 2020 - link

    I really think you're likely wrong.

    Lower temperature means more work for those watts.
  • Oxford Guy - Sunday, September 20, 2020 - link

    We are also familiar with Intel's "TDP" versus how much the machines actually draw.

    I don't trust any numerical claims from Intel. Verify then trust.
  • Spunjji - Sunday, September 20, 2020 - link

    They verified the numbers...

    Eh. I give up.
  • asfletch - Friday, September 18, 2020 - link

    Yeah, Dave Lee recognised it as a Prestige 14 shell too... wonder why the cloak and dagger...
  • huangcjz - Friday, September 18, 2020 - link

    Jim Salter, the author at Ars, replied in the comments on their article that the reason why they disclosed that it was MSI was because they specifically asked Intel to check with MSI whether they could disclose that it was made by them (because MSI might not want this to be compared to their finished products when this is a prototype), whereas other reviewers didn't explicitly ask Intel if they could do so:

    "I wonder why Anandtech felt the need to conceal the system manufacturer's name."

    "They were being respectful, since prototype recipients were asked not to take pictures of innards, not do battery tests, and a few other things due to this very much not being a production laptop.

    I would have done the same, except that I specifically asked my Intel rep whether MSI would prefer to be named or not. My rep took a day to find answers, then came back and said that naming MSI was fine as long as we made it clear that this wasn't a retail system."
  • Spunjji - Saturday, September 19, 2020 - link

    Nice! Thanks for the context.
  • Oxford Guy - Sunday, September 20, 2020 - link

    The name of the manufacturer isn't the point.
  • m53 - Friday, September 18, 2020 - link

    Intel doesn't want to provide free marketing to MSI, which might make the other OEMs unhappy. That's why they can't say that it is an MSI system.
  • wow&wow - Thursday, September 17, 2020 - link

    Two chips in a package, so it isn't a monolithic chip even with 10nm?
  • RedOnlyFan - Friday, September 18, 2020 - link

    Those are the SoC and PCH dies. The compute die is still monolithic.
  • Spunjji - Friday, September 18, 2020 - link

    But AMD have the PCH on-die... 😬
  • RedOnlyFan - Friday, September 18, 2020 - link

    Intel needed a kick where it hurts, now it's safe to put the stick back in the storeroom?
  • SplinesNS - Friday, September 18, 2020 - link

    The Sardine oil and Tiger King references make it hard for an international reader to actually make sense of content here. I am not sure who the target audience is for this website but I would kindly request you not to use very culture specific references on a technology website.
  • Bik - Friday, September 18, 2020 - link

    It is subjective, but I think it's fun, and I'm an international reader. Without these references the article would be too dry. I think one can still get 100% of the technical detail without knowing the puns.
  • Spunjji - Saturday, September 19, 2020 - link

    Seconded.
  • Samus - Friday, September 18, 2020 - link

    This is criminal. They are going to sell a CPU of the same model and allow OEMs to have it perform vastly differently without disclosing the actual performance? What's next, bring back the PR rating?
  • Spunjji - Friday, September 18, 2020 - link

    To be fair, this isn't new. Intel CPUs have differed significantly in performance depending on cooling implementation on the final product for a while, and AMD have similar issues now.
  • Oxford Guy - Sunday, September 20, 2020 - link

    How new this is is less important than the fact that it's a scam.
  • Spunjji - Sunday, September 20, 2020 - link

    It's relevant when someone's talking about it like it's a new problem..?
  • maroon1 - Friday, September 18, 2020 - link

    Clock-for-clock comparison is useless.

    The fact that Tiger Lake beats Ice Lake at the same power means that Tiger Lake is the superior of the two. Period.

    Also, the iGPU performance boost is huge. It beats a 65W APU in some cases. But there is some inconsistency, because in some cases it does not beat the 15W APU. It might be because of drivers??!
  • yeeeeman - Friday, September 18, 2020 - link

    Damn, Willow Cove is actually lower IPC vs Sunny??? Wow, that was unexpected! Intel needs to improve the IPC of the next-gen core massively if they want to stay on top. I know the rumours say 50% better than Skylake, but even that, if it happens, will not be sufficient.
  • m53 - Friday, September 18, 2020 - link

    Willow Cove has ~20% better IPC than Zen 2. Golden Cove is rumored to add another 25%, taking the lead to ~45% by mid-2021. Will Zen 3 be able to close the 45% IPC deficit? AMD says no. By their own best-case projection they expect 15% IPC. That would put Zen 3 at a 30% IPC deficit vs Golden Cove. As you can see, the IPC deficit keeps widening.
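    Worth noting that percentage gains like these compound multiplicatively rather than adding up, so taken at face value the quoted rumors would land a bit above 45%. A quick back-of-the-envelope sketch (the figures are just the rumors above, not measurements):

    ```python
    # Back-of-the-envelope IPC compounding. All ratios below are the
    # rumored/claimed figures from the comment above, not measured data.
    willow_vs_zen2 = 1.20    # Willow Cove ~20% ahead of Zen 2 (claimed)
    golden_vs_willow = 1.25  # Golden Cove rumored +25% over Willow Cove
    zen3_vs_zen2 = 1.15      # AMD's projected +15% for Zen 3

    # Gains compound by multiplication, not addition.
    golden_vs_zen2 = willow_vs_zen2 * golden_vs_willow  # 1.20 * 1.25 = 1.50
    golden_vs_zen3 = golden_vs_zen2 / zen3_vs_zen2      # 1.50 / 1.15 ≈ 1.304

    print(f"Golden Cove vs Zen 2: +{(golden_vs_zen2 - 1) * 100:.0f}%")   # +50%
    print(f"Golden Cove vs Zen 3: +{(golden_vs_zen3 - 1) * 100:.1f}%")   # +30.4%
    ```

    So on these rumors the multiplicative lead would be closer to ~50% over Zen 2 and ~30% over Zen 3.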
  • Spunjji - Friday, September 18, 2020 - link

    Yet actual performance isn't as far apart as IPC alone would indicate, because the designs differ in some fundamental ways. It'll be interesting to see how this shakes out in practice.
  • RedOnlyFan - Friday, September 18, 2020 - link

    Lol, you have got your info wrong. Sunny Cove and Willow Cove IPC is more or less the same: Sunny Cove is 18% higher IPC over Skylake, and Willow Cove performance is higher because of higher clocks, ~20% more. It's impossible to define what's "sufficient".
  • Rudde - Saturday, September 19, 2020 - link

    From Spec2017: Willow Cove has 15% higher integer IPC and 12% higher floating point IPC compared to Zen 2. Zen 3 should be on par with Willow Cove on IPC. Golden Cove will of course keep a gap to Zen 3.
  • zepi - Friday, September 18, 2020 - link

    You should also publish the frequency vs. time graph for some of the tests. This would make it much easier to estimate how the chips scale with TDP.
  • Sychonut - Friday, September 18, 2020 - link

    For me personally, the real stars of the show are those ARM processors (especially Apple's), performing so admirably at a smaller power envelope.
  • abufrejoval - Friday, September 18, 2020 - link

    For me the biggest motivator for getting one of these in a NUC would be to play with the shadow stack, control flow integrity (CFI) and memory encryption, because the ability to run secured corporate VMs on personal home-office hardware has a lot of appeal, even if it's originally a cloud issue.

    I'd obviously want those same features from AMD and wonder where they stand: Are their VM encryption mechanisms sufficiently similar to what Intel is pushing (would such things actually be covered under their intellectual property agreements)?

    Any word on CFI from AMD? Or actual implementation of similar extensions on the ARM side?
  • vinay001 - Friday, September 18, 2020 - link

    @Anandtech, What happened to 3080 review??
  • Rudde - Saturday, September 19, 2020 - link

    CA wildfires
  • andracass - Saturday, September 19, 2020 - link

    The side ports on that reference model and the way the screen props the laptop up when you open it makes the device VERY reminiscent of the MSI Prestige.
    Like, identical.
  • Ian Cutress - Sunday, September 20, 2020 - link

    It is. Intel initially asked the press not to put too much emphasis on the OEM they partnered with, as retail units will be different and more optimized. But a lot of press straight up mentioned it in their reviews, so I guess the cat is out of the bag.
  • MDD1963 - Saturday, September 19, 2020 - link

    Although equaling/exceeding 7700K-level of performance within a 50W envelope in a laptop is impressive, the 4c/8t design is going to cause at least one or two frowns/raised eyebrows...
  • ballsystemlord - Saturday, September 19, 2020 - link

    @Ian why do these companies always seem to have the worst timing on sending you stuff? Do you tell them when you'll be on vacation?

    Thanks for the review!
  • Ian Cutress - Sunday, September 20, 2020 - link

    It's happened a lot these past couple of years. The more segments of the tech industry you cover, the less downtime you have - my wife obviously has to book holiday months in advance, but companies very rarely tell you when launches are, or they offer surprise review samples a few days before you are set to leave. We do our best to predict when the downtime is - last year we had hands on with the Ice Lake Development system before the announcement of the hardware, and so with TGL CPUs being announced first on Sep 2nd, we weren't sure when the first units were coming in. We mistimed it. Of course with only two/three of us on staff, each with our own segments, it's hard to get substitutes in. It can be done, Gavin helped a lot with TR3 for example. But it depends on the segment.

    And thanks :)
  • qwertymac93 - Sunday, September 20, 2020 - link

    Finally a decent product from Intel. It's been a while. Those AVX512 numbers were impressive. Intel is also now able to compete toe to toe with AMD integrated graphics, trading blows. I feel that won't last, though. AMD is likely to at least double the GPU horsepower next gen with the move from a tweaked GCN5 to RDNA2 and I don't know if Intel will be able to keep up. Next year will be exciting in any case.
  • Spunjji - Sunday, September 20, 2020 - link

    It'll be a while before we get RDNA2 at the high end - looks like late 2021 or early 2022. Before that, it's only slated to arrive with Van Gogh at 7-15W.
  • efferz - Monday, September 21, 2020 - link

    It is very interesting to see that the Intel compiler makes the SPECint2017 scores 52% higher than other compilers without 462.libquantum.
  • helpMeImDying - Thursday, September 24, 2020 - link

    Hello, before ranting I want to know if the scores of SPEC2006 and SPEC2017 were adjusted/changed based on processor frequency (I read something like that in the article)? Because you can't do that. Frequency should be out of the topic here, unless comparing same-generation CPUs, and even then there are some nuances. What matters is the performance per watt when comparing low-power notebooks. It can be done mathematically if the TDP can't be capped at the same level all the time, like you did in the first few pages. I'm interested in scores at 15W and 25W. So you should monitor and publish the power consumed next to the scores, now and in the future.
    And if you are adjusting scores based on CPU frequencies, then they are void and incorrect.
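    For what it's worth, the perf-per-watt normalization being asked for is simple once measured power is published next to each score; a minimal sketch, with made-up placeholder numbers rather than actual review data:

    ```python
    # Sketch of the perf-per-watt normalization suggested above.
    # Scores and wattages are hypothetical placeholders, not review results.
    results = [
        {"chip": "Chip A", "score": 100.0, "avg_watts": 15.0},
        {"chip": "Chip B", "score": 130.0, "avg_watts": 25.0},
    ]

    # Divide each benchmark score by the average package power measured
    # during that same run, giving points per watt.
    for r in results:
        r["score_per_watt"] = r["score"] / r["avg_watts"]
        print(f'{r["chip"]}: {r["score_per_watt"]:.2f} points/W')
    ```

    The key requirement is that the power figure comes from the same run as the score; dividing by the nominal TDP instead would hide exactly the power-limit differences being discussed.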
  • helpMeImDying - Thursday, September 24, 2020 - link

    Btw, same with iGPUs.
  • beggerking@yahoo.com - Friday, September 25, 2020 - link

    None of the tests seem valid... some are Intel-based, others are AMD-based... I don't see a single test where Ryzen beats 10th gen but loses to 11th gen on the standard 15-watt profile...

    The speed difference between 10th and 11th gen Intel is approx 10-15%... it's good, but probably not worth the price premium, since Ryzen is already cheaper than 10th gen; I don't see how 11th gen would go cheaper than Ryzen...
  • legokangpalla - Monday, September 28, 2020 - link

    I always thought AVX-512 was a direct standoff against heterogeneous computing.
    I mean, isn't it a better idea to develop better integrations for GPGPU like SYCL, higher versions of OpenCL, etc.? Programming with vector instructions IMO is a lot more painful compared to writing GPU kernels, and tasks like SIMD should be offloaded to the GPU instead of being handled by CPU instructions (CPU instructions with poor portability).
  • Meteor2 - Thursday, October 15, 2020 - link

    Read up on the history of Larrabee
  • yankeeDDL - Tuesday, February 15, 2022 - link

    I just got a new laptop for work, with the i7-1165G.
    At home we have two laptops with Ryzen (a Zephyrus with 4800HS and a Lenovo Ideapad with 5500U).
    After 2 months with the 1165G I feel compelled to post a note on this 1.5y-old thread to share what an immense piece of garbage the 1165G is.
    I was coming from a 6-year-old Toshiba with Intel's i7-5500U (what a coincidence, no?) and the 6-year performance jump is ridiculous. The laptop with the 1165G is a hot mess, with barely more than 4hrs of real-life battery even in full battery-saving mode. In 2022 I find that not acceptable for a 1400eur laptop, especially as the Lenovo is snappier, has >6hrs battery, 2 more cores and half the price.

    What a major rip-off Tiger Lake is.
