
  • haukionkannel - Monday, October 4, 2021 - link

    Well, the A14 was already very fast, so making the A15 more efficient was the right call! IMHO
    I need longer battery life, and if these changes give that, all the better!
  • herozeros - Monday, October 4, 2021 - link

    Granted I’m coming from an Xs, but battery on my 13 pro is nothing short of incredible.
  • 13xforever - Monday, October 4, 2021 - link

    Same, upgraded from Xs to 13 Pro and now I charge once every two days on average, and Xs had amazing battery life compared to 6s before it as well, so let's hope the trend continues.
  • Kangal - Monday, October 4, 2021 - link

    I think ProMotion takes its toll, but the new (LTPO) display should be more efficient. And the previous 5G antenna has probably been improved for better reception and less power drain. Combine these with the efficiency gains from the CPU/GPU and it adds up. Then there's the noticeable increase in battery capacity.

    All in all, I'm expecting the iPhone 13 models to have slightly less battery life than the iPhone 11 and iPhone 12 models... but do it all while being faster and using the display in 120Hz mode. And when it comes to certain apps, I think it will scale down to 60Hz or 30Hz or even 1Hz and save plenty of power... which should see the iPhone 13 embarrass its siblings. These were my predictions during the announcement, before this article.

    But Andrei says he's got all the data, so we will see if my predictions/deductions were even remotely close in the coming week in the next article.
  • Trackster11230 - Monday, October 4, 2021 - link

    I'm expecting the battery life to be better than both. The LTPO and reduction in screen Hz should enable better battery savings (I'm assuming more time is spent at 10 Hz [the lowest; it can't do 1 Hz], than at 120 Hz).

    Either way, it's been a significant jump in battery life from my X.
  • markiz - Monday, October 11, 2021 - link

    Incredible compared to the XS, or in general?
    Why do iPhones have such poor battery life when put through a standardized workload, e.g. GSMArena's?
  • Wrs - Thursday, October 14, 2021 - link

    As GSMArena wrote in the iPhone 13 review, it's primarily because of low call/standby efficiency. That's understandable because the cellular chip is discrete; most of the comparison phones have it integrated, and some stick to LTE only. Inasmuch as 89h for the normal 13 is over 3 days and nothing to complain about, their model of 1 hour each of calls, web surfing, and video per day is also a tad unrealistic for many. With more active use on WiFi, the battery life becomes way more competitive.
  • michael2k - Friday, October 15, 2021 - link

    Anandtech's standardized workload disagrees with you:
    iPhone 13 Pro from 2021: 16.62 hours
    iPhone 11 Pro from 2019: 13.33 hours
    Xiaomi Mi 10 Pro from 2020: 12.96 hours
    iPhone 12 Pro from 2020: 12.95 hours
    iPhone XR from 2018: 12.95 hours
    Galaxy S21 from 2021: 12.86 hours
    iPhone 12 from 2020: 12.53 hours
    Xiaomi Mi 11 Ultra from 2021: 11.70 hours
    Xiaomi Mi 11 from 2020: 11.63 hours

    https://www.anandtech.com/show/17004/apples-iphone...
  • deil - Monday, October 4, 2021 - link

    I am still not happy about their battery life, as Android phones get 150% more even on cheap devices.
    When they give you 3500 mAh and Android can get up to 18000 mAh, something needs to be said.

    Interesting result from the S21U: 2.9W for 75 frames seems like a very good result. Is it locked by something? That chip can go just as high as the A15 can, wattage-wise, so what happened there?
  • Stuka87 - Monday, October 4, 2021 - link

    Deil, not sure where you saw a phone with an 18,000mAh battery, it would have to be the size of a tablet to get anywhere close to that big. Even the Galaxy Note, which is pretty large, has a 4000 mAh battery. iPhones consistently score well in battery life tests, even against Androids with much larger batteries.
  • Calin - Tuesday, October 5, 2021 - link

    There are plenty of Android phones with batteries around 5,000 mAh. However, I only remember one at 6,000 mAh and none higher than that.
    All in all, anything more than about 18-20 hours of actual battery life is excessive (considering the size and weight penalty versus actual time you could use it between recharges).
  • gund8912 - Monday, October 11, 2021 - link

    I care about battery life, not the size of the battery.
  • markiz - Monday, October 11, 2021 - link

    They really don't. They have been at the bottom of pretty much every list for many years now, compared to other high-end phones, and even more so compared to some cheaper phones.
    The 13 series does seem like a significant improvement though.
  • michael2k - Friday, October 15, 2021 - link

    Very significant:
    https://www.anandtech.com/show/17004/apples-iphone...
  • akdj - Wednesday, February 16, 2022 - link

    ???
    I know I’m late to the party but whaaaa…?
    As an owner of both, and since 2007 - it’s been a long time since I’ve seen/had/read about Android beating iPhone in any sort of battery or power longevity test… in real world usage.

    And I believe there’s a limit to the size of battery allowed in a cell phone if you need or want to fly with it.
    If I recall correctly it was, maybe still is 4,000mAh but I could be wrong

    That said, the 11, 12, and now 13 Pro Maxes I’ve owned have eaten the S20, S21 Ultra and the Pixel 6 for lunch when it comes to measuring battery life.
    I only mention these because I owned them simultaneously and used them similarly, so I speak from experience.
    That said, I don’t think it’s hard to believe a low powered cheap/burner Android phone has decent battery life. It probably forgoes a lot of radio power, cell bands and Wi-Fi options/radios. No NFC, LTE only with just local frequencies, maybe even 3g. A/B/G, maybe N Wi-Fi, small display and no always on. No storage for apps that eat your bandwidth and in turn your battery. Dim displays, tired SOCs, little RAM, little storage, and with the lack of power storage and memory comes the lack of desire to play games, buy decent apps, or even watch movies.
    So sure, you might get a couple extra standby hours on a cheap Android but not a flagship option. I think my S21U had 12GB RAM! That’s insane and needs continued low voltage to maintain the info on your immediate random access memory!

    The only options I’ve seen that truly do compete with the iPhone energy wise are either not available in America (only our problem), in some case Europe or are just not readily available in many western carriers.
    And the flagships are usually neck and neck until the recent iPhone updates (since it seems the X/XS or 11 series) where they have just taken over with few exceptions that are hard if not impossible to find - even worse to get support.

    Apple’s chip design is a massive achievement, and I believe one we’ve just begun to see the fruits of. E.g. the M1/M1 Pro/M1 Max and the soon-to-release M2’s architecture, and the scaling used from the iPhone silicon to the Mac Pros.

    Just the beginning… and if you want to game, all ya need now is an Xbox; leave the rest of the Wind-blows in the rear view. (I’m playing Microsoft Flight Simulator on the Series X and I have a desktop rig with a 3080. I notice almost no difference between them, other than options for third-party upgrades, liveries and high-density airports, which I believe will be here sooner than later on Xbox, as it, too, is running Windows at its core.)
    Why chase three- and four-thousand-dollar gaming rigs when they offer a $500 box that every Tom, Dick and Harry of a developer will be targeting for the next several years?
    Makes little to no sense. Not even in productivity or code compilation.

    Sure, Alder Lake unlocked and a 3090 ($2500-$3000 if you can find one, GPU only) can best the MacBook Pro M1 Max, but not by much (games aside, remember the Xbox) in the benchmarks or in the actual software that was tested for making and printing money - a la Photoshop, Premiere and After Effects, Audition and FCP/Logic. With massive spreadsheets or huge RAW file batch processing in Lightroom, not only does the M1 crush it in the laptop realm, but it’s a monster on the iPhone 12/13 and the 2021 iPad with M1! And you don’t need to worry, your battery will last all day.
  • Ppietra - Monday, October 4, 2021 - link

    You can see that S21U at 8.3W only achieves 120 fps, almost the same as the A15 at 3W.
  • melgross - Monday, October 4, 2021 - link

    You seem to be just making things up. Apple’s new phones are getting excellent battery life according to every review, with the 13 Pro Max being given the top rating for battery life.

    18,000mAh battery? What phone has that?
  • emn13 - Monday, October 4, 2021 - link

    The largest battery I can find is the one in the "Doogee S88 Pro", which is "just" 10000mAh.

    However, even if deil is exaggerating, there is a kernel of truth to the battery life complaints, and that's due to idle power draw, which apparently isn't as impressive.

    GSMArena (link unfortunately removed to satisfy the absurd AnandTech spam detector) has a battery life estimator based on sliding scales of how much legacy calling/web browsing/video watching you do, but critically it includes the standby power draw, and Apple devices are nowhere near the top. The best rated is still the 11 Pro Max, which beats the 13 and 13 Pro by a considerable margin.

    But even vs. more normally battery-endowed Androids (which often still tend to have 4000mAh+ batteries), the iPhones fare OK but are nothing special. The Sony Xperia 10 III scores particularly well (probably because it's much slower), at 137h vs. the iPhone 13's 89h.

    But you can fiddle the sliders to your heart's content. The higher your active usage, the better Apple does, but even the most extreme users will likely see longer battery life on the more efficiently tuned Androids.

    Of course, there are also Androids that do a lot worse, mind you; it's just that Apple's phenomenal efficiency under load doesn't translate trivially to battery life, because the batteries are still quite small, and perhaps because standby draw isn't anything particularly great.
  • The Garden Variety - Monday, October 4, 2021 - link

    There actually was an 18,000mAh phone. It was made by Energizer (yeah, that one) and called the Power Max P18K Pop. Announced in 2019, offered for sale only in Europe, dunno if it's still available. Some quick Googling would say no. Oh, and it was the size of a particularly beefy paperback book.
  • FunBunny2 - Monday, October 4, 2021 - link

    "Oh, and it was the size of a particularly beefy paperback book."

    dredge up a lower brain stem memory, OK! there was a flashback scene in a late "X Files" where Mulder pulls one of the vewy, vewy early mobiiiile phones from his trench coat. you remember, the ones the size of a baseball bat?
  • Daka - Tuesday, October 5, 2021 - link

    It has been tested, and the iPhone 13 Pro Max tops Samsung by 1h. The battery is bigger this year as well, at over 4300mAh. So however true it used to be, it isn't anymore, thanks to both increased battery capacity and market-leading efficiency.
  • Linustechtips12 - Wednesday, October 6, 2021 - link

    One thing I think is fair to mention is that Apple releases its phones in basically mid-September, while the Samsungs generally release in January or February. I generally compare something like the upcoming S22 to the 13 Pro, because once the new Galaxy comes out and is compared to the newest iPhones, the Galaxy generally does better or is within about 10 minutes either way.
  • michael2k - Monday, October 4, 2021 - link

    The Moto G Power 2021 has a 5000mAh battery for $249, and gets a 14 hour run time, one of the best out there. 150% would suggest 18 hour run time, and I don't see any indication of that being true. But we can say you're exaggerating and give you the point, and say the cheap $250 Android phone gets 118% battery life; at least it's $850 cheaper. Note you're getting a 720p display on a 6.6" screen, which surely helps with the battery life.

    The 13 Pro Max gets a 12 hour run time, just so it's clear, and outclasses the Moto G Power in CPU performance, GPU performance, camera, and display. If you don't need the extra performance or visual quality then it's a great deal, since you're also getting a few hours of battery life for the difference.
  • artifex - Tuesday, October 5, 2021 - link

    That's actually the phone I use. The 4/64 model. My MVNO had it on sale for $89. As a practical matter I don't notice it's only 720P, even coming from a 1080P phone. This screen doesn't wash out in sunlight like my Moto G6 did, and I can even use it with polarized lenses, which I couldn't do before. It's got enough power to run Genshin Impact, though that eats the battery quickly. But since I suck at phone games, I can let it go without charging for a couple days. Oh, and it charges fast, too; I think it's a 15W charger? The camera isn't exciting, but it works well enough. It's quite the bargain overall. And yes, it's got a headphone jack, and a microSD spot in the sim card slider thing.
  • tonidigital - Thursday, October 7, 2021 - link

    18000mAh is fake… Here you can see that battery compared to the 5000mAh one of a Samsung S21…
    https://www.youtube.com/watch?v=I41ntOTJsO4
  • Byte - Monday, October 4, 2021 - link

    I usually do a 3-year cadence on phones and went from an iPhone X to a 12 Pro last year. After 10 months my 12 Pro is already at 85% battery health, so I was looking to get it swapped out under warranty, or even pay the $69. But Best Buy had a crazy trade-in deal for $100 to upgrade to the 13 Pro, which is a no-brainer when you factor in the new battery. So far after a week, the battery feels about 30% better than my degraded 12 Pro, which is pretty much in line with what is advertised. I feel that if Apple gave a 25% battery improvement ALONE year over year, it would drive me to annual upgrades lol.
  • Prestissimo - Monday, October 4, 2021 - link

    I'm just waiting for MacBooks with M1X to arrive...
  • Alistair - Monday, October 4, 2021 - link

    Same. Will Apple give us 16GB/500GB and a GPU for the base price? Or will they just take all the cost savings from ARM and jack up the price on the consumer and I'll continue to pretend Apple doesn't exist... Can't wait to find out.
  • headeffects - Monday, October 4, 2021 - link

    The existing higher-end 13.3” and 16” MacBook Pros are 16GB/512GB, so I think it’s pretty much guaranteed. I don’t think there will be a price increase either. A dGPU is something I doubt, but it’s not something I even want. dGPUs are pretty finicky in laptops, and if the high-end M1X and its rumored 32 GPU cores beat the existing high-end graphics, I don’t see why it would matter. It would be faster and more efficient on battery to boot.
  • Alistair - Monday, October 4, 2021 - link

    We should get 16GB/500GB for the base MacBook Pro, then pay extra for the integrated GPU; that's the GPU I was talking about.
  • varase - Tuesday, October 5, 2021 - link

    According to rumor there will be 16- and 32-core GPU options... I expect if it stays at the same price it will come with the 16-core GPU.

    Remember, the 14" and 16" will have the same capabilities, so I would expect a price bump, since the old high-end 13" was nowhere near equivalent to the 16".
  • headeffects - Sunday, October 10, 2021 - link

    This is just speculation though, we still don’t know. Personally I think the 14” will get 16 cores and 16” will get 32 cores.
  • Daka - Tuesday, October 5, 2021 - link

    Me too, my M1 with 8GB is a bit too little
  • varase - Tuesday, October 5, 2021 - link

    You could have gotten that with 16 GB, you know.
  • Pneumothorax - Friday, October 8, 2021 - link

    Even with 16GB, my MacBook Pro M1 still swaps like crazy with my workflow.
  • Oxford Guy - Monday, October 18, 2021 - link

    The iMac is limited to 8 last I heard. Horrid.
  • Masterpgsxr - Wednesday, October 27, 2021 - link

    You heard wrong, as usual with any of your responses on Mac products. 16gb.
  • easp - Tuesday, October 5, 2021 - link

    M1X? That implies that it's a variant of the M1, using the same cores, which are at this point a generation old. We'll see if Apple skips generations on their "desktop" CPUs. I hope not. I hope they keep pace with the iPhone variety.
  • 5j3rul3 - Monday, October 4, 2021 - link

    The Best Toothpaste processor in the world.
  • williwgtr - Tuesday, October 5, 2021 - link

    Fast for 10 minutes; when playing for more than that, it's pure lag.
  • BillBear - Wednesday, October 6, 2021 - link

    You realize the sustained performance numbers are right there in the article, right?

    Samsung's flagship getting half the sustained performance is a poor showing for Apple?
  • Ppietra - Monday, October 4, 2021 - link

    So, can we conclude that these performance cores are actually new cores?
    Or did they obtain the increased efficiency through other means, like bigger caches, improved manufacturing and better voltage gating?
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    They are new, yes.
  • Ppietra - Monday, October 4, 2021 - link

    Thanks!
    It is a big increase in efficiency, though it would seem the performance cores' IPC doesn't increase much! 5% maybe?
    For a second I actually thought that the SPEC CPU 2017 scores were comparing Apple's performance cores with the Snapdragon!!!! That is impressive performance from the efficiency cores.
  • name99 - Monday, October 4, 2021 - link

    "New core" is a somewhat meaningless term :-(
    That is it can mean whatever you want it to mean.

    As far as we can tell right now, this is like an "Intel-level new core" (ie the sort of changes we have seen from one Cove to the next), so possibly some changes in the number/size of units.
    (Andrei mentioned 4 rather than 3 integer units for the E core) but probably no serious change in the algorithms used by the design.

    It is possible that some chicken bits were switched off so that functionality that was designed into the A14 but disabled (ie it failed in some unusual circumstances!) is now working. For example, as far as I could tell, none of the three Zero Cycle Load accelerators described in various patents were working in the M1, but it would be nice if we see them active in these P and E cores.

    This is the sort of thing that is much easier to investigate on macs than on phones, so we need to wait for new Macs (and then time to investigate carefully) before we can be sure.

    Another way you can ask the question is: is there new functionality here? And the answer to that appears to be yes, for example some hypervisor improvements and (apparently) a larger physical address space. But these CORE-SPECIFIC (as opposed to general SoC) improvements are small and not very visible.
  • Ppietra - Monday, October 4, 2021 - link

    New core as in something that is actually changed in a meaningful way, and not just an overclocked version of an A14 core.
    New functionality is a meaningful change, even if it’s small in importance. Higher efficiency would also be a meaningful change, though it’s not easy to know how much of it is a result of an improved core.
  • misan - Monday, October 4, 2021 - link

    In that sense, yes, it’s a new core. The caches have been increased, it is now more power-efficient and there are additional new features as mentioned above.
  • Ppietra - Tuesday, October 5, 2021 - link

    L2 and SLC sizes are not technically part of the core design. And like I said, power efficiency can increase through many different factors, so it isn't absolute proof.
  • cha0z_ - Monday, October 4, 2021 - link

    I know you kinda hinted at it in the article given the PCB design, but still - won't we see better sustained performance in the bigger 13 Pro Max model? Maybe Apple even allows higher sustained power consumption on purpose vs the smaller Pro model?
  • cha0z_ - Monday, October 4, 2021 - link

    Talking especially about the GPU, because basically 90% of the people who are serious about gaming on their phone will get the 13 Pro Max instead of the regular 13 Pro, for both the bigger display and the far better battery life.
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    I didn't see much difference on the Max. The issue is chip to body dissipation, not total body to ambient dissipation.
  • cha0z_ - Monday, October 4, 2021 - link

    Yeah, guessed that much, but still had hopes. Basically the sustained GPU performance is just a tad higher vs my 11 Pro Max and I am kinda sad about it, even with all the other improvements. :(

    There are super good but GPU-demanding games like XCOM 2 WOTC, not to mention the 120Hz scenario; even if it's more efficient, the FPS when you play for more than 10 minutes will be indistinguishable if no FPS counter is visible.

    Correct me if I am wrong, and only if it's not a big hassle - I really respect your opinion and work, plus you have experience with the 11 Pro Max as well - do you think it's a decent overall upgrade? (A simple yes/no will do; I am a power user and got the 2233RZ 120Hz at launch :) ). Especially by feel, how do you compare them in gaming?

    Also cheers for your great articles and deep dives! Love them all!
  • repoman27 - Monday, October 4, 2021 - link

    Not arguing one way or the other as to the merits of Apple's thermal solution, but the side of the A15 package which faces the interior of the PCB sandwich is a PoP with 4 SDRAM dies in it. The business side of the SoC is attached to a very thin PCB via InFO. The opposite side of the PCB in the region where the SoC is located has very little active circuitry other than the audio chips and secure element. However, it does have a can with thermal pads to help transfer the heat from the SoC upwards through the screen.

    In other words, I believe most of the heat from the SoC is radiated upwards through the screen / top of the device, while the heat from the modem / RF transceiver chips is radiated through the back glass / bottom of the device.
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    We can theorize, but at the end of the day it's got far lower sustained power than any other phone and there are thermal issues that Apple has encountered several times now, some not addressed in articles.
  • repoman27 - Monday, October 4, 2021 - link

    I have no idea if Apple made good decisions regarding thermals in this case or not, and I'm glad you're investigating / reporting on the topic. However, by constantly pushing density further than everyone else and using technologies like InFO and substrate-like PCBs, Apple may be solving for a slightly different set of problems than their competitors.
  • teldar - Wednesday, October 6, 2021 - link

    It's not really Apple pushing density. It's the processor manufacturer. That's a little misleading.
  • Ppietra - Wednesday, October 6, 2021 - link

    teldar, it’s both! It is up to Apple to decide which node it wants to use.
  • Spunjji - Friday, October 8, 2021 - link

    @Ppietra & teldar - I think repoman27 meant "pushing density" in terms of PCB layout and design, rather than the node the CPU is manufactured on.
  • unclevagz - Monday, October 4, 2021 - link

    How does the Spec 2017 performance here compare against x86 (Zen 3/RKL)?
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    Comparative subsets would be: 5950X 7.29 int / 9.79 fp, 11900K 6.61 int / 9.58 fp, versus 7.28 / 10.15 on the A15.
  • unclevagz - Monday, October 4, 2021 - link

    Thanks. Since AnandTech does have data on SPEC 2017 subtests with various x86 processors, it may also be helpful to show these results for selected x86 CPUs in the displayed graphs for ease of comparison.
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    I thought about it but didn't want to complicate it too much given the power disparity.
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    I added in performance marks for the x86 folks. Obviously no power data.
  • Kangal - Tuesday, October 5, 2021 - link

    Hey Andrei,
    The graphs for SPEC 2017 efficiency look quite off. They show the Cortex-A55 cores consuming considerably more energy than Apple's E-cores, and sometimes even more than the Cortex-A78 cores too. Whilst performance seems as expected.

    The worst offender seems to be 544.nab_r, with a discrepancy of 0.60 perf / 682 J = ~0.001 p/J compared to 2.70 perf / 280 J = ~0.01 p/J. So that's an efficiency difference of ~x10, which is massive. And the best case for the A55 seems to be the 541.leela_r test. Here we have 1.00 perf / 295 J = ~0.003 p/J compared to 2.49 perf / 264 J = ~0.009 p/J. So in this best-case scenario the efficiency difference is ~x3, which is still huge.

    I mean, I remember when Apple's E-cores were running slightly slower than the Cortex-A73 whilst using slightly more power than the Cortex-A53. But what we have here is just ridiculous. We have even less power draw than the Cortex-A55 or even the Cortex-A53, but performance is somewhere above the legendary Cortex-A76.

    I can't wrap my head around it. It feels like an impossibility. Is my maths checking out? Or does there seem to be an issue someplace in the data?
  • Andrei Frumusanu - Tuesday, October 5, 2021 - link

    Perf per joule is a bit of a weird metric that is superfluous, you want either perf/W or simply just Joules consumed for energy efficiency, so either 0.60 / 0.24W = 2.5ppW & 2.7 / 0.45W = 6ppW. You can argue about power curves and ISO-perf or ISO-power.

    In any case, the other thing to consider is that we're not just measuring the core, we're measuring the efficiency of the whole SoC, power delivery, DRAM as well. Some vendors aren't running things as efficiently as they should be, that's how you end up with those Exynos A55 results, contrasted for example to the MediaTek A55 results.
  • Kangal - Wednesday, October 6, 2021 - link

    I didn't know that; I thought the software just churned out how much power the module was using on its own. With that said, I don't think it would be a factor. Apple doesn't have anything special in the makeup of their silicon to make it more efficient than competitors'. And even if they did have a notable advantage in the make-up of their silicon, this would be against something like a RockChip SoC, and not against a flagship Qualcomm SoC. The more feasible explanation would be that the QSD chip might be activating other co-processors like its NPU, and the task isn't actually being hardware-accelerated by it, but "software-encoded" by its targeted CPU (e.g. A55). Thus it's still running slow, but now it's wasting power by having other co-processors become active without actually computing anything.
    .....Would something like this be a cause for concern, for future testing?

    Secondly, I used the Joules as that's what the graph was visually showing. I basically used it to find the best-case and worst-case scenario. I didn't really think hard about it. Since you've graphed it, and since you've recorded it, I figured you knew something that I didn't and prioritised Joules over Watts.

    Converting them to Watts, we instead get (see the quick check at the end of this comment):
    (nab_r) 2.70 / 0.45 = 6.00 vs 0.60 / 0.24 = 2.50 ---> a difference of ~x2.4
    (leela_r) 2.49 / 0.40 = 6.23 vs 1.00 / 0.18 = 5.56 ----> a difference of ~x1.1

    But now, the graphs themselves need to be switched. For instance, the New Worst-case scenario is now: 520.omnetpp_r (~x3.4) from what I can see. Maybe I'll go through these benchmark figures properly on a weekend or something, unless you guys plan on doing something of the sort.

    So yes, these ranges do seem more reasonable. For starters, here we see the "IceStorm v2" cores are actually using about double the power of the "Cortex-A55" on half of the tests. This shatters my previous impression that Apple's small cores were faster than the Cortex-A73 and used less power than the Cortex-A53. And that fits much more neatly into our general understanding, comparing small in-order cores versus medium out-of-order cores.

    Can we change how the graphs are displayed from now on? Plot the Watts on the Right/Second x-axis instead of Joules. Or better yet, let's just strip out Joules entirely. I mean the third graph, the Energy-Axis should probably be deleted, and just keep the Power-Axis there instead? No?
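
    (To make the conversion above reproducible, here is a tiny Python sketch that redoes the Joules-to-Watts arithmetic using only the figures quoted in this thread; nothing in it is new measurement data.)

      # Sanity check of the Joules -> perf/W conversion discussed above.
      # Inputs are the (score, watts) figures quoted in this thread.
      cases = {
          "544.nab_r":   {"A15 E-core": (2.70, 0.45), "Cortex-A55": (0.60, 0.24)},
          "541.leela_r": {"A15 E-core": (2.49, 0.40), "Cortex-A55": (1.00, 0.18)},
      }
      for test, cores in cases.items():
          ppw = {name: score / watts for name, (score, watts) in cores.items()}
          ratio = ppw["A15 E-core"] / ppw["Cortex-A55"]
          print(test, {k: round(v, 2) for k, v in ppw.items()}, f"-> ~x{ratio:.1f}")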
  • Ppietra - Wednesday, October 6, 2021 - link

    Kangal,
    Joules will always be the most correct parameter to assess efficiency, since it is the actual energy expended to do all the work.
    Power, on the other hand, can fluctuate through time while doing the work, so the Power value can be very deceiving, firstly because it might not be the actual average power usage, secondly because you need to do another calculation to actually measure efficiency.
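
    (A toy illustration of that point, with numbers invented purely for the example: a core that draws less power but takes much longer can still spend more energy on the same task, which is why the Joules figure is the fairer one.)

      # Made-up numbers: why average power alone is deceiving for efficiency.
      # Energy for a fixed task = average power * time needed to finish it.
      runs = {
          "slow core, low power":  {"avg_watts": 0.25, "seconds": 1200},
          "fast core, high power": {"avg_watts": 0.45, "seconds": 400},
      }
      for name, r in runs.items():
          print(f"{name}: {r['avg_watts'] * r['seconds']:.0f} J for the same task")
      # -> the "low power" core burns 300 J vs 180 J for the "high power" one.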
  • Kangal - Wednesday, October 6, 2021 - link

    Do you know how they calculated the Watts? And how they calculated the Joules?

    To me, Watts make much more sense in this context/comparison. Joules is a more "universal" measurement, and it might be useful in a niche, but I feel like it could be misused/abused easily when taken out of context.

    How do we explain the HUGE discrepancy in the measurements between Watts and Joules? There is something else here I am not understanding.
  • Ppietra - Wednesday, October 6, 2021 - link

    For that you need to understand what Power is and what Energy is.
    If there is one parameter that can be misused to assess efficiency while doing a task, it's Power, not Energy. What you don't seem to account for is the Time variable, which affects how you can interpret Power.
  • michael2k - Wednesday, October 6, 2021 - link

    I wanted to specifically bring something up:
    "Apple doesn't have anything special in the makeup of their silicon to make it more efficient than competitors."

    A14: TSMC 5nm (N5)
    A15: TSMC 5nm (N5P)
    D1200: TSMC 6nm (N6)
    SD888: TSMC 5nm (N5)

    Technically Apple is one year ahead of Qualcomm and two or so ahead of MediaTek in terms of process.

    Looking at the SPECint2017 Power Axis graph, we see on average that the A15 IceStorm v2 consumes 0.44W/2349J to achieve a 2.42 score, which puts it on par with the D1200 A78 with its 2.57 score, but at a far higher power cost of 1.13W/6048J.

    In other words the A78 and A15 have very similar performance, which makes sense since there are many similarities in terms of number of execution units, width, etc. If you look at the older style charts you can see that the efficiency cores were far closer in performance to the A76 'performance' cores on the Kirin 990:
    https://www.anandtech.com/show/14892/the-apple-iph...
    https://www.anandtech.com/show/14892/the-apple-iph...

    Long story short, there doesn't seem to be any surprises. Apple has a process advantage, uses cores similar to ARM's performance cores for efficiency purposes, and does so by clocking them at 3/4 the speed to dramatically reduce the power draw. The A15e only hits 2.016GHz and the A14e maxed at 1.823GHz, and the A13e at 1.728GHz
  • Ppietra - Wednesday, October 6, 2021 - link

    michael2k,
    Process node differences cannot account for the degree of difference in efficiency. N6 to N5P would account for less than a 30% reduction in power at the same clock speed (using the same design).
    What we observe is a 60% reduction in energy vs the D1200 while achieving almost the same performance at a lower clock speed. Apple’s design has better IPC while consuming much less. Even if we were to compensate for the process node advantage, we would still have around a 40% reduction in energy consumption.
    Anyway, Andrei has mentioned that these power consumption values are also affected by other things, not just the CPU cores, so clearly Apple is doing a better job at keeping power under control.
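
    (Rough back-of-the-envelope version of that, reusing the ~2349 J vs ~6048 J figures quoted earlier in this thread and treating the ~30% node saving as an assumption rather than a measured number:)

      # How much of the A15 E-core's energy advantage over the D1200 A78 could
      # process alone explain? Energies as quoted above; node factor assumed.
      a15_e_joules, d1200_a78_joules = 2349.0, 6048.0
      observed = a15_e_joules / d1200_a78_joules   # ~0.39, i.e. ~61% less energy
      node_factor = 0.70                           # assume N5P needs ~70% of N6 power
      design_only = observed / node_factor         # ~0.55, i.e. ~45% less from design
      print(f"observed: {1 - observed:.0%} lower energy")
      print(f"minus assumed node advantage: still ~{1 - design_only:.0%} lower")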
  • Fulljack - Thursday, October 7, 2021 - link

    The Snapdragon 888 is fabricated on Samsung 5nm LPE, NOT TSMC N5 (5nm).
  • erinadreno - Monday, October 4, 2021 - link

    Not exactly a fan of Apple's design choices for mobile phone SoCs. Yes, the performance is good, but there's no integrated modem, hence a larger PCB and/or smaller battery. They basically trade area for performance. Due to the lack of peripherals and the smaller screen, it's difficult to utilize the full potential while still burning the energy.

    The same design philosophy (M1) is a lot better on tablet and laptop where performance is less likely to be wasted.
  • melgross - Monday, October 4, 2021 - link

    That’s not an Apple design choice. That’s a Qualcomm limitation. Qualcomm won’t allow Apple (or anyone else) to integrate their modems on a chip not made by Qualcomm. Nevertheless, Apple’s overall phone designs are still more efficient.
  • michael2k - Monday, October 4, 2021 - link

    1) That's like complaining about Intel's lack of integrated NVIDIA GPU
    2) They increased the battery size year over year, so that claim is false
    3) They aren't trading area for performance, as indicated by their sandwich PCB; they're trading heat dissipation for reduced performance. The heat dissipation of the CPU + GPU is what limits the performance, but that probably also helps with increased battery life.
    4) Peripherals? WTF are you talking about
    5) Smaller screen? WTF? 5.4", 6.1", and 6.7" aren't generally smaller screens, especially when their performance is unrivaled, their battery life is good, and their energy efficiency is some of the best out there

    It's like you commented without even reading the article!
  • erinadreno - Tuesday, October 5, 2021 - link

    They increased battery size year over year, yet are still beaten by pretty much any other smartphone. The area means ASIC area and package size; Apple could step back their design with a smaller die size and lower performance, like a 1+3 CPU and 3-cluster GPU config for mobile. The sandwich PCB is the indication of trading area for performance. They just circumvent the area problem by having thermal issues. Peripherals are the hardware/software platform around the processor: OS, IO devices, etc. The iPhone pretty much lacks any real application (not apps) where this much processing power is needed, other than playing games for 5 minutes. I'd agree 6.7" is a large screen, but the other two are definitely small in today's market.
  • jospoortvliet - Tuesday, October 5, 2021 - link

    How are they beaten by almost every other phone? Their battery life is at the absolute top. Sure, they achieve that with a smaller battery and thus lower weight and size - to the benefit of their customers - but hey, that's what it means to be the perf-per-watt leader.
  • markiz - Friday, October 15, 2021 - link

    Meh. The iPhone never had good battery life. In fact, it was terrible for many years. And they had slow charging. Still do.
    That said, the Galaxy S line also had, until the S21, poor battery life, about the same.
    But there are numerous other high/er-end phones that fare much better. I guess in the USA only iPhone and Galaxy matter, so you look at it differently.

    It was also slightly heavier,
    e.g. iPhone 11 194g, S20 163g.

    So there was no benefit to the customer in these regards.
  • cha0z_ - Tuesday, October 5, 2021 - link

    my 11 pro max already sh*ts on all top current android phones and 13 pro max sh*ts on my phone on top of all top current gen android phones. They do it with smaller battery? This is not a bad thing, will let you guess why by yourself.
  • markiz - Friday, October 15, 2021 - link

    The iPhone 13 has worse battery life than the S21.
    The 13 Pro Max has a slight advantage, but the Galaxy is only half a step behind. Maybe they can do better with the S22. It's not at all correct to say that it shits on them in this particular regard.
  • repoman27 - Monday, October 4, 2021 - link

    By 2023 Apple SoCs will likely include integrated 5G, seeing as they spent $1B to acquire Intel’s modem division. Until then, Qualcomm discrete is really their only option.
  • cha0z_ - Tuesday, October 5, 2021 - link

    As said - the lack of an integrated modem is entirely Qualcomm's doing. Won't last long though, Apple is already deep into designing its own 5G modem ;)
  • 5j3rul3 - Monday, October 4, 2021 - link

    Will anandtech review iPad mini 2021 and iPad Pro 12.9 2021?
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    Currently we have no plans on the iPads, no.
  • 5j3rul3 - Monday, October 4, 2021 - link

    Thank you!
  • name99 - Monday, October 4, 2021 - link

    I would be curious if you at least ran the latency tests on an M1 device, to compare.
    That would allow us to perhaps understand how the L2 is split.

    Right now one can imagine at least three possibilities:
    - drowsy cache with three or four segments (usually you do this by sleeping some fraction of the ways), so that as you go larger some fraction of the time you are hitting a drowsy segment more often and taking an extra cycle

    - virtual L3. ie each core gets half the L2, and some fraction (again likely by way) of the L2 "attached" to the other core is treated as virtual L3

    - your hypothesis for the A13 that some fraction of the L2 was (either absolutely, or effectively in terms of the heuristics used) locked to use by the E cores

    If we had curves for the M1 (with 4 rather than 2 P clients) the relative fractions at each size might serve to strengthen vs weaken among these options.
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    We had run latency on M1 when I still had it; https://images.anandtech.com/doci/16252/latency-m1...

    It obviously looks quite different. I've determined before that Apple does some logical partitioning of the caches, it's a bit hard to measure one core while the other does something.
  • name99 - Monday, October 4, 2021 - link

    Thanks for the plot!
    That seems to show jumps at 3MB and 6MB, which does suggest a per-core split (whether logical or physical, who knows; does the question even have any real meaning?).
    I can make up a model for it (each core gets 3MB of L2, the other core's L2 can be used as virtual L3, each of the 3MB is split into three segments that are independently drowsy) which kinda fits what we see, and which one can kinda retrofit to the A14 graph.

    I'm always loath to blame "energy saving" for weird anomalies; in this case drowsy cache. But it's not a completely crazy hypothesis. On the other hand, we know that the SLC is also drowsy (though at a rather finer granularity) and yet we don't see an obvious jump signature of drowsiness there (though maybe we wouldn't, given the fine granularity; just a steady ramp in mean access time?).

    I could imagine that the way the split works is something like half the tags, and so half the ways, are "allocated" to one core rather than the other. If you find the result in "your" tag lookup, great; if not, lose a cycle and look in the tags of the other core(s)? Would mostly work well, uses lower energy, and you only have to pay the occasional extra tag lookup(s) when you're sharing data or code with another core.

    This would imply that you could see a signature of the effect by investigating how many ways the cache presents. It should appear to present say 4 fast ways and 4 slower ways (or 3 fast ways and 9 slower ways for 4 cores). One more thing to add to the list of stuff to experiment with!
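
    (Purely to sketch what that signature could look like, here is a toy model with invented numbers - own-way hits at a nominal latency, other-core ways costing one extra cycle - so the measured average should step up once the working set spills past the "own" ways.)

      # Toy model of the "own ways fast, other core's ways +1 cycle" idea.
      # All numbers are invented for illustration; nothing here is measured.
      TOTAL_WAYS, OWN_WAYS = 12, 6    # hypothetical L2 ways / ways owned by this core
      FAST_CYCLES, EXTRA = 16, 1      # hypothetical hit latency and cross-core penalty

      def avg_hit_latency(ways_used: int) -> float:
          """Expected hit latency if accesses spread evenly over ways_used ways."""
          ways_used = min(ways_used, TOTAL_WAYS)
          slow = max(0, ways_used - OWN_WAYS)
          fast = ways_used - slow
          return (fast * FAST_CYCLES + slow * (FAST_CYCLES + EXTRA)) / ways_used

      for ways in (2, 4, 6, 8, 10, 12):
          print(f"{ways:2d} ways touched -> ~{avg_hit_latency(ways):.2f} cycles")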
  • 5j3rul3 - Monday, October 4, 2021 - link

    Hope there are display efficiency measurements for the iPhone 13's and iPhone 13 Pro's displays.

    Also, I'm curious why there are no 60 Hz VRR smartphones?
  • 5j3rul3 - Monday, October 4, 2021 - link

    The iPhone's VRR is interesting, I think, and I hope for some detailed analysis of it.
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    I have the display efficiency and battery life data, will write it up in the next few days.
  • 5j3rul3 - Monday, October 4, 2021 - link

    👍👍
  • michael2k - Monday, October 4, 2021 - link

    There is probably a cost-benefit trade-off being made for VRR; competing solutions switch between 60 and 120 for example, or even down to 48, and there might not be enough benefit at that level to add it to the non-Pro models.

    Now, Apple’s solution goes down to 10Hz, and it’s possible the screen or display hardware to support that just costs too much. LTPO might make it into a future iPhone 14 or 15, but at that point it also allows for 120Hz.

    https://techunwrapped.com/iphone-13-what-is-ltpo-s...
  • Tigran - Monday, October 4, 2021 - link

    Any ideas why the Xiaomi Mi 11 (14.6 fps) differs so much from the Xiaomi 11T Pro (19.32 fps) in 3DMark Wild Life Unlimited (sustained)? They both have the Snapdragon 888, don't they?
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    Because shenanigans; https://www.anandtech.com/show/16935/the-xiaomi-11...
  • Tigran - Monday, October 4, 2021 - link

    Thanks a lot, I missed that article.
  • snowdrop - Monday, October 4, 2021 - link

    Does the iPhone 13 Pro Max gpu throttle any less than the 13 Pro?

    The chassis is bigger, but it seems like the component design issue preventing effective heat dissipation that was mentioned in the article would affect both phones.
  • cha0z_ - Tuesday, October 5, 2021 - link

    IMHO I suspect it will perform a little bit better, but nothing worth writing home about (literally a +2-3% performance uplift kind of little).
  • syxbit - Monday, October 4, 2021 - link

    Great writeup.
    Please do the same for the Google Tensor SoC. They've been touting it, and it's very, very unlikely to come anywhere near the perf of the A15.
  • Andrei Frumusanu - Monday, October 4, 2021 - link

    It's planned once we can buy the devices.
  • jarumbo13 - Monday, October 4, 2021 - link

    A real Apple only has one core and more pips than are mentioned in these specs.
  • melgross - Monday, October 4, 2021 - link

    Will you be able to test any of the other components in the chip? It seems to me that year after year, Apple is subordinating what were CPU functions to their other subsystems, such as the neural engine, machine learning modules, ISP, etc.

    It could be why Apple seems to be looking at bigger advances in these areas when compared to the CPUs.
  • chief-worminger - Monday, October 4, 2021 - link

    Excellent article, as objective and comprehensive as ever. Does anyone know what the general estimated % edge the N5P and/or N5 nodes have over Samsung's 5LPE (or any more recent) node? I'm trying to imagine what an X1/X2 core might be able to achieve on the various nodes, just to get a sense of the next couple of years' worth of potential.

    What might a Cortex X2 on N5P score on Geekbench5, for example?

    Also anyone know when Samsung's 3GAE or TSMC's next (3nm?) node is going to be available on flagship SoCs?
  • tipoo - Monday, October 4, 2021 - link

    I used to get excited by silicon improvements, I guess I still do to a lesser extent, but now it's like, all this buildup and what am I going to use it for? I still use the browser, Messages, some social media, and that's pretty well that.

    Nothing on Apple Arcade really drew me in more than your typical mobile fare. What I'd really like to see them do now is lean into the controller grip setup and fund some AAA tier exclusives for Apple Silicon, including Apple TV, iPads, iPhones, and Macbooks in that definition. Something that really stretches that silicon muscle, maybe A12+ required, the Switch is far weaker than any of these by now but that gets AAA titles still while iPhones usually don't.

    Be nice if Tim Apple splashed that dragon's hoard of gold around on some bespoke exclusives that really used their modern A/M chips.
  • michael2k - Monday, October 4, 2021 - link

    What do you suggest? From my perspective they’re selling you six years of OS updates, while the article already mentions GI. GI already has PS specific content so maybe that would work?
  • tipoo - Monday, October 4, 2021 - link

    Apple funded exclusives that take fuller advantage of the hardware, which I already suggested
  • michael2k - Monday, October 4, 2021 - link

    Right, but what game do you think will encourage people to buy more iPhones? That’s the kind of question I imagine Apple has to consider. I specifically mentioned Genshin Impact.
  • Fulljack - Wednesday, October 6, 2021 - link

    Why didn't I think of that? While the A15 is currently the fastest smartphone chip, it won't be in 6 years, but at least it'll still perform well enough, unlike the slugfest my 2-year-old SD855 phone is.
  • Alistair - Monday, October 4, 2021 - link

    100 percent agree with you. I love the silicon performance, but the problem is iOS. I love Fantasian, but I can't even back up my save files, or transfer them at will between devices. iOS sucks. That's the issue. One of the reasons people love Steam. How come 20 years later only Steam has a proper backup and restore function. Epic game store, Windows store, they've had decades, some are new, they still lack basic functionality. That's how I feel about iOS.

    iOS save files? Nope. iOS external 16:9 display support? Nope. 120hz external display support? Nope. Can I easily downgrade from the iOS beta? Nope. Does Apple sign your files and make them incompatible after using the beta? Yes. It's just a long list of annoyances, iOS.

    Get that CLOSED OS into a gaming device, and I'm not as irritated. The Switch is closed off also. Except you can actually export your save files on the Switch, but not with iOS. LOL.
  • misan - Monday, October 4, 2021 - link

    That’s not really an iOS problem though. The OS itself supports saving and exchanging files, but there is not much you can do if the developer of the game doesn’t support the file system.
  • Alistair - Monday, October 4, 2021 - link

    no, the OS does not support you transferring app data at all... it isn't the developer's fault
  • misan - Tuesday, October 5, 2021 - link

    Maybe I am misunderstanding something? I was under impression that you could use the FileManager APIs to write user-visible files. I see no reason why this couldn’t be used for saved game state.
  • Blark64 - Tuesday, October 5, 2021 - link

    Absolutely untrue. App data can be written to the user visible file system, or transferred via email, messaging, airdrop, third-party file storage like Dropbox, etc. etc.
  • Alistair - Tuesday, October 5, 2021 - link

    well none of my apps let that happen, so it's a bit weird to say iOS allows it but I can't do it, and why can't I just open a menu in settings and select the app data I want to transfer? saying "you can do it" is kind of misleading...
  • Lombo - Wednesday, October 6, 2021 - link

    I think that this is a PEBSAF (new version of the old PEBKAC bug)
    Please tell me what game are you referring to so I can test it and tell you how to do it.
  • cha0z_ - Tuesday, October 5, 2021 - link

    Depends how you use your phone. For general usage you will be hard-pressed to find a difference between the 11 Pro Max and 13 Pro Max performance-wise. Gaming is one area where even the 13 Pro Max is just not powerful enough for a handful of games, and there are quite serious iOS games like XCOM 2 WOTC, Civ VI, Northgard and so on (actually there are a lot of games that don't perform well enough on my 11 Pro Max!). Plus you can screen-mirror to your TV and use the phone as a console via a Bluetooth gamepad - it works and the lag is totally fine. Sound lags a little bit behind for something like Dead Cells, but it's totally OK for most games, and the controls lag is super low to the point you won't notice it.
  • lemurbutton - Monday, October 4, 2021 - link

    Good game AMD/Intel when M2X comes out later this year.
  • supdawgwtfd - Wednesday, October 6, 2021 - link

    What?

    Your comment makes no sense.

    Apple won't "good game" anyone.

    You can't run massive servers, high end GPUs or anything in an Apple.

    They will continue to be what they always have been. Consumer devices.
  • adda - Saturday, October 9, 2021 - link

    This is one of the most hilariously random "Apple bad" comments I've ever seen.
  • Hifihedgehog - Monday, October 4, 2021 - link

    Interesting. So presumably then, the A12->A14 (iPad Air) and A12->A15 (iPad Mini) having the same percentage increases is due to severe downclocking on the iPad Mini, likely to achieve insanely long battery life? Because the results here cast a totally different light on the A14->A15 year-over-year performance improvements than what the Apple presentation and press materials would have led technically minded viewers to believe.
  • Transistor Fist - Monday, October 4, 2021 - link

    The urge to implement better cooling solutions is only matched by their impressive y-o-y improvements. If they continue like this, their systems will melt like butter in Cupertino summer.
  • michael2k - Monday, October 4, 2021 - link

    What are you talking about? Did you read the same article I read? The power/performance of the chips are superb and should allow the Macs, with their larger heat absorbing and dissipating systems, to stay amazingly cool. It’s unlikely Apple will release a 5GHz MBP with this chip.
  • cha0z_ - Tuesday, October 5, 2021 - link

    With that PCB design it will be hard to implement any cooling solution that will provide any real difference, especially with the limited space in their phones to even fit said cooling. I don't really feel they are holding back out of laziness or to milk people; I really do feel they test a lot and the current solution provides the best balance between power/speed/thermals.
  • name99 - Monday, October 4, 2021 - link

    I know you, reader, are not so shallow! But for everyone else out there who just wants a straight-out cage match result, the SPEC2017 numbers for Rocket Lake (same methodology) are here:

    https://www.anandtech.com/show/16535/intel-core-i7...

    It's unfortunate that those results don't show us how much extra AVX-512 (compiler-generated) gives, but presumably not enough that anyone thinks doing things that way is an outrage. But, quite possibly, most of the action in compiler-generated AVX-512 would be in the Fortran benchmarks anyway, so...

    The other caveat in a comparison (apart from Andrei's point re Fortran) is that this is the iPhone chip being benchmarked; one suspects the Mac chip will, if nothing else, have a larger (shared) L2, perhaps also a larger SLC, perhaps (???) LP-DDR5, and probably another 10% or so higher GHz.
  • name99 - Monday, October 4, 2021 - link

    Someone shared with me some rough AVX512 results compared with AVX2, and there's nothing there. A few small improvements, mostly regressions (presumably frequency limiting).

    I was trying to be fair in the comment above, that possibly with AVX512 Intel would look a little better -- but it honestly doesn't seem that way; with compiler-generated AVX512 in fact Intel looks essentially the same to very slightly worse.
  • Spunjji - Friday, October 8, 2021 - link

    Apple are making the right choice moving to their own architectures.
  • TristanSDX - Monday, October 4, 2021 - link

    PLS, Alder Lake pre-release review
  • name99 - Monday, October 4, 2021 - link

    "we theorised that the company might possibly invested into energy efficiency rather than performance increases this year"

    Another way to phrase this which puts the emphasis slightly in a different place, is to assume that the A14 was in fact a rush job, and that in particular physical optimization for N5 was barely performed (hence both sub-optimal transistor usage, and sub-optimal density), and this year more such physical optimization could be performed.
    The Tech Insights high-res A15 die shots are now out, so people who like to do this sort of thing can get to pixel measuring and calculating relative sizes. My quick and dirty estimates (based on early numbers, not the current die shot) are that density increased about 7%. That hardly gets us to the maximum possible density TSMC suggests for N5, but does suggest that some fraction of the surprisingly low A14 density was simply lack of time to optimize.
  • mukiex - Monday, October 4, 2021 - link

    This is awesome! I was worried we wouldn't get an iPhone SoC review, but a review of JUST the SoC? I'm 100% on board with this being Anandtech's approach moving forward.
  • eastcoast_pete - Monday, October 4, 2021 - link

    Thanks Andrei! As a long-time Android user, one of the most frustrating aspects of current and future stock-ARM SoCs (currently, pretty much all Android phones) was the decision by ARM to keep the efficiency cores as in-order designs. Your tests of the current Apple SoC alongside the 888 and Exynos show just how much energy efficiency was left on the table with the A55. I know that ARM claims that their new efficiency core design is improved over the A55, but, as it remains in-order, I don't see how they can get even close to the efficiency cores in Apple's SoC.
    The simple truth is that being able to run mostly on the efficiency cores has great upsides for battery life. In this regard, I applaud Apple: have the high performance on the large cores when needed, keep the rest on the efficiency cores if possible without ruining the user experience.
  • michael2k - Monday, October 4, 2021 - link

    There's been work to document and improve on out-of-order vs in-order energy efficiency (roughly 150% of the energy consumption with a CG-OoO, and 270% with normal OoO):
    https://dl.acm.org/doi/pdf/10.1145/3151034

    So there really is an energy efficiency benefit to 'in order'; out-of-order gives you a 73% performance boost but roughly 2.7x the energy consumption:

    Performance:
    https://zilles.cs.illinois.edu/papers/mcfarlin_asp...

    Power:
    https://stacks.stanford.edu/file/druid:bp863pb8596...

    In other words, if Apple had an in-order design it would use even less power, but as I understand it they have never had an in-order design. Make lemonade out of lemons as it were.
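
    (Taking the figures quoted in this comment at face value - roughly a 73% performance gain for roughly 2.7x the energy, neither of which comes from the A15 data in this article - the perf-per-joule trade works out like this:)

      # Quick arithmetic on the cited numbers, not on A15 measurements.
      perf_gain, energy_cost = 1.73, 2.70
      ratio = perf_gain / energy_cost   # ~0.64
      print(f"OoO perf/joule is ~{ratio:.2f}x in-order,")
      print(f"i.e. ~{1 - ratio:.0%} worse energy efficiency in exchange for the speedup.")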
  • eastcoast_pete - Monday, October 4, 2021 - link

    I will take a look at the links you posted, but this is the first time I read about out-of-order execution being referred to as "lemon" vs. in-order. The out-of-order efficiency cores that Apple's SoC have had for a while now are generally a lot better on perf/W than in-order designs like the A55. And yes, a (much slower) in-order small core might consume less energy in absolute terms, but the performance per Watt is still significantly worse. Lastly, why would ARM design its big performance cores (A76/77/78, X2) as out-of-order, if in-order would make them so much more efficient?
  • jospoortvliet - Tuesday, October 5, 2021 - link

    Because they would be slower in absolute terms. In theory, all other things being equal, an in-order core should be more efficient than an out of order core. In practice, not everything is ever equal so just because apples small cores are so extremely efficient doesn't mean the theory is wrong.
  • michael2k - Wednesday, October 6, 2021 - link

    ARM can't hit the same performance using in-order, so if you need the performance you need to use an out-of-order design. In theory you could clock the in-order design faster, but the bottleneck isn't CPU performance but stalls when waiting on memory; with the out-of-order design the CPU can start working on a different chunk of instructions while still waiting on memory for the first chunk.

    It's essentially like having a left turn, forward, and right turn lane available, so that drivers can join different queues, vs all drivers forced to use a single lane for left, forward, and right. If the car turning left cannot move because of oncoming traffic, cars moving forward or right are blocked.

    As for your question regarding perf/W, you can see four different A55s all have the same perf but different W:
    https://www.anandtech.com/show/16983/the-apple-a15...

    This tells us that there is more to CPU energy use than the CPU itself, since you also have to include memory, memory controllers, storage controllers, and storage in the equation, given the CPU needs to access/power all those things just to operate. The D1200 has better p/W across all its cores despite being otherwise similar to the E2100 at the low and mid end (both have A55 and A78, but the E2100 uses far more power for those two cores).
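
    (A crude way to picture the stall-hiding argument from the lane analogy above, with completely made-up cycle counts: the in-order core serializes the cache miss and the independent work, while the out-of-order core can overlap them.)

      # Toy timeline, made-up cycle counts: one load misses to memory while
      # 150 cycles of work that doesn't depend on it are waiting in the queue.
      MISS_LATENCY, INDEPENDENT_WORK = 300, 150
      in_order = MISS_LATENCY + INDEPENDENT_WORK          # everything waits on the miss
      out_of_order = max(MISS_LATENCY, INDEPENDENT_WORK)  # work hides under the miss
      print(f"in-order: {in_order} cycles, out-of-order: {out_of_order} cycles")
      # The OoO core finishes ~1.5x sooner here, paid for with the tracking and
      # reordering hardware (and energy) needed to keep that work in flight.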
  • Ppietra - Tuesday, October 5, 2021 - link

    the thing is Apple’s efficiency cores are clearly far more efficient than ARM’s efficiency cores in most workloads...
    Just because someone is not able to make an OoO design more efficient is not proof that any OoO will inevitably be less efficient
  • Andrei Frumusanu - Tuesday, October 5, 2021 - link

    Discussions and papers like these about energy just on a core basis are basically irrelevant because you don't have a core in isolation, you have it within a SoC with power delivery and DRAM. The efficiency of the core here can be mostly overshadowed by the rest of the system; the Apple cores results here are extremely efficient not only because of the cores but because the whole system is incredibly efficient.

    Look at how Samsung massacres their A55 results, that's not because their implementation of the A55 is bad, it's because everything surrounding it is bad, and they're driving memory controllers and what not uselessly at higher power even though the active cores can't make use of any of it. It creates a gigantic overhead that massively overshadows the A55 cores.
  • name99 - Thursday, October 7, 2021 - link

    You are correct but as always the devil is in the details.
    (a) What EXACTLY is the goal? In-order works if the goal is minimal energy at some MINIMAL level of performance. But if you require performance above a certain level, in-order simply can't deliver (or, more precisely, the contortions required plus the frequency needed make this a silly exercise).
    For microcontroller levels of performance, in-order is fine. You can boost it to two-wide and still do well; you can augment it with some degree of speculation and still do OK, but that's about it. ARM imagined small cores as doing a certain minimal level of work; Apple imagined them doing a lot more, and Apple appears to (once again) have skated closer to where the puck was heading, not where it was.

    (b) Now microcontrollers are not nothing. Apple has their own controller core, Chinook, that's used all over the place (there are dozens of them on an M1) controlling things like GPU, NPU, ISP, ... Chinook is AArch64, at least v8.3 (maybe updated). It may be an in-order core, two or even one-wide; no one knows much about it [what we do know is mainly what we can get from looking at the binary blobs of the firmware that runs on it].

    Would it make sense for Apple to have a programmer visible core that was between Chinook and Blizzard? For example give Apple Watch two small cores (like today) and two "tiny" cores to handle everything but the UI? Maybe? Maybe not if the core power is simply not very much compared to everything else (always-on stuff, radios, display, sensors, ...)?
    Or maybe give something like Airpods a tiny core?

    (c) Take numbers for energy and performance for IO vs OoO with a massive grain of salt. There have been various traditional ways of doing things for years; but Apple has up-ended so much of that tradition, figuring out ways to get the performance value of OoO without paying nearly as much of the energy cost as people assumed.
    It's not that Apple invented all this stuff from scratch, more that there was a pool of, say, 200 good ideas out there, but every paper assumed "baseline+my one good idea", only Apple saw the tremendous *synergy* available in combining them all in a single design.
    We can see this payoff in the way that Apple's small cores get so much performance at such low energy cost compared to ARM's in-order orthodoxy cores. Apple just isn't paying much of an energy price for all its smarts.
    And yes, the cost of all this is more transistors and more area. To which the only logical response is "so what? area and transistors are cheap!"
  • Nicon0s - Tuesday, October 5, 2021 - link

    >I don't see how they can get even close to the efficiency cores in Apple's SoC.

    Very simple actually. By optimizing/modifying a Cortex A76.
    The latest A510 is very, very small and this limits the potential of such a core.
  • techconc - Monday, October 18, 2021 - link

    >Very simple actually. By optimizing/modifying a Cortex A76.

    LOL... That would take a hell of a lot of "optimization" and a bit of magic maybe. The A15 efficiency cores match the A76 in performance at about 1/4 the power.
  • Raqia - Monday, October 4, 2021 - link

    Interesting that the extra core on the 13 Pro GPU doesn't seem to add much performance over the 13's 4 cores even when unthrottled, certainly not 20%. Perhaps the bottleneck has to do with memory bandwidth.
  • name99 - Monday, October 4, 2021 - link

    You see it for GPU compute, eg

    https://browser.geekbench.com/v5/compute/compare/3...

    Unclear why you get even BETTER than 25% in that case (these were not cherry-picked results).
    Are there more differences than Apple has told us (like the Pro, i.e. 6GB, models using two DIMMs and having twice the bandwidth)?

    As for whether game results or Compute results better reflect the SoC, well...
    Obviously Apple is using all this GPU/NPU stuff in some places like computational photography, where people like it. The Siri image recognition stuff is definitely getting more valuable (I tried plant recognition this week and was pleasantly surprised, though the UI remains clumsy and sub-optimal). Likewise translation advances by fits and starts, though again hampered by lousy UI; likewise we'll see how well the Live Text stuff works (so far the one time I tried it, I was not impressed, but that was a very complex image so maybe I was hoping for too much).
    All these smarts are definitely valuable and, for many users, probably more valuable than a CPU 50% faster.

    On the other hand so many NPU-hooked up functions still seem so freaking dumb! Everyone hates the keyboard error correction stuff, things like choosing the appropriate contact when you have two with the same name seem to have zero intelligence behind them, I've even heard Maps Siri call a succession of streets of the form S Oak Ave "Sangrida Oak Ave". (N, W, E were correct. First time I had no idea what I heard so I listened carefully from that point on. All S were pronounced as something like Sangrida!)
    It's unclear (to me anyway) where this NPU-adjacent dumbness comes from. Poorly trained models? Not enough NPU on my hardware, so I should go out and get new hardware? Different Apple groups (especially teams like Contacts and Reminders) using the NPU APIs incorrectly because they have no in-team AI experience and are just guessing at what they are doing?
  • cha0z_ - Tuesday, October 5, 2021 - link

    Check the results again: it does provide a decent uplift in peak performance, but Apple decided to keep it at lower power figures for sustained performance, and while doing so they achieve slightly higher performance vs the 4-core GPU. Instead of faster performance, they decided to use the 5th GPU core for lower power draw in thermally limited scenarios (sustained performance).
  • name99 - Monday, October 4, 2021 - link

    It's worth comparing the SPEC2017 results with https://www.anandtech.com/show/16252/mac-mini-appl... which gives the M1 results; the simple summary comparison hides a lot.

    In particular we can see that most of the int benchmarks are much the same; in other words not much apparent change in IPC, with the A15 now matching the M1's frequency. We do see a few minor M1 wins because it has a wider path to DRAM.
    The interesting cases are the massive jumps -- omnetpp and xalanc. What's with those?

    I'm not wild about the methodology in this paper:
    https://dl.acm.org/doi/pdf/10.1145/3446200
    but it does have a few interesting plots. Of particular relevance is Fig 4, which (look at the red triangles) gives us the working set size of the SPEC2017 programs.
    Omnetpp is characterized as 64MB, but with enough locality (or the SoC doing a good job of detecting streaming data and not caching it) the difference between the previous cache space available and the current cache space may explain most of the boost.

    The other big change is xalanc, and we see that its working set is right at 8MB. You could try to make an argument about caches, but I don't think that's right. Instead I'd urge you to compare the A15 result, the A14 result (which I am guessing, Andrei can confirm, was measured this run, using Xcode 13), and the M1 result.
    The values for A14 xalanc (and the rather less interesting x264) are notably higher, like ~10..15% higher. This suggests a compiler (or, harder to imagine, an OS) change -- most likely something like one apparently small tweak in a loop that now allows a scalar loop to be vectorized, or (less likely, but not impossible) that restructures the direction of memory traversal.
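
    To illustrate the kind of "small tweak" meant here (a hypothetical C example, not the actual change in xalanc or x264): without the restrict qualifier the compiler must assume dst and src may overlap, so it either keeps the loop scalar or adds runtime overlap checks; with restrict it is free to emit straight NEON vector code.

    #include <stdio.h>
    #include <stddef.h>

    /* Possible aliasing between dst and src constrains auto-vectorization. */
    static void scale_maybe_aliased(float *dst, const float *src, size_t n, float k)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* The one-token tweak: restrict promises no overlap, so the compiler is
       free to vectorize the loop unconditionally. */
    static void scale_vectorizable(float *restrict dst, const float *restrict src,
                                   size_t n, float k)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    int main(void)
    {
        float src[4] = { 1.0f, 2.0f, 3.0f, 4.0f }, dst[4];
        scale_maybe_aliased(dst, src, 4, 2.0f);
        scale_vectorizable(dst, src, 4, 0.5f);
        printf("%f %f\n", dst[0], dst[3]);
        return 0;
    }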

    So I'd conclude that, in a way, we are ultimately back to where we were after the announcement and the first GB5 values!
    - performance essentially tracking the frequency improvement
    - for very particular pieces of code, which just happen to be larger than the previous L2+SLC could capture, but which now fit into L2+SLC, a better than expected boost (only really relevant to omnetpp)
    - for other very particular pieces of code which just happen to match the pattern, a nice boost from latest XCode (but not limited to just this CPU/SoC)

    But no evidence of anything but the most minor IPC-relevant modifications to the P core. Energy mods, of course, always desirable, and probably necessary to make that frequency boost useful rather than a gimmick, but not IPC boosts.

    It would be interesting if those who track these things were to report anything significant in code gen by the newest Xcode. Last time I looked at this stuff (not quite a year ago)
    - complex support was still in progress, with lousy use of the ARMv8 complex instructions (some use, but far from optimal). I'd like to hope that's all fixed, but it seems unlikely to be relevant to xalanc.
    - there was ongoing talk of compiler level support for matrices (not just AMX, but support for various TPUs, and for various matrix instructions being added across ISA's). Again, interesting and hopefully having made progress, but not relevant here.
    - the usual never-ending "better support, clean up and restructure nested loops" and "better vectorized code", and those two seem the most likely candidates?
  • Andrei Frumusanu - Tuesday, October 5, 2021 - link

    Please avoid using the M1 numbers here, those were on macOS and on a different compiler version.

    Xalanc is memory allocator sensitive and that's the major contributor to the M1 and A14 differences, as iOS is running some sort of aggregator allocator similar to jemalloc.

    The x264 differences are due to Xcode13 using a new LLVM 12 toolchain, Android NDKr23 had the same improvements, see : https://community.arm.com/developer/tools-software...
  • name99 - Tuesday, October 5, 2021 - link

    Thanks for the memory allocator detail!

    But basically the point remains -- everything converges on essentially the same IPC (modulo larger L2 and SLC); just substantially improved energy.

    Reason I went down this path was the *apparent* substantial jump between the M1 SPEC2017 numbers and the A15 numbers, which I wanted to resolve.
  • name99 - Monday, October 4, 2021 - link

    "This year’s CPU microarchitectures were a bit of a wildcard. Earlier this year, Arm had announced the new Armv9 ISA, predominantly defined by the new SVE2 SIMD instruction set, as well as the company’s new Cortex series CPU IP which employs the new architecture. Back in 2013, Apple was notorious for being the first on the market with an Armv8 CPU, the first 64-bit capable mobile design. Given that context, I had generally expected this year’s generation to introduce v9 as well, but however that doesn’t seem to be the case for the A15."

    One thing we all forgot, or overlooked, was the announcement earlier this year of SME (Scalable Matrix Extension), which, along with the other stuff it does, adds a wrinkle to SVE via the addition of SVE/2 Streaming Mode.
    Is it possible that Apple has decided to (for the second time) delay implementing SVE because these changes (the addition of Streaming Mode and SME) change things sufficiently that you might as well design for them from the start?

    There's obviously value in learning-by-doing, even if you can't ship the final product you want.
    But there's also obvious value in trying to avoid fragmenting the ISA base as much as possible.
    Is it possible that Apple have concluded (having fixed the immediate problems with v8 aggressively every year) that going forward a better model is something more like an ISA update every 4 or so years (and so fairly clearly differentiated classes of compiler target) rather than annual updates? Starting with delivering an SVE/SME that's fully featured (at least as of mid 2021) rather than two successive versions of SVE, the first without SME and SVE streaming?

    ARM seems to have decided to hell with it, they're going to accept this ISA incompatibility and ship V1 with SVE, and N2 with SVE2-lite (ie no SME/streaming). Probably an acceptable choice given those are data center designs.

    In Apple's world, ideally finalization of code within the App Store down to the precise CPU of each customer would solve this issue. But Apple may have concluded that some combination of the legal fights around the App Store, and perhaps the real-world difficulty of debugging by devs under circumstances where they can never be sure quite what binary each user has installed, has rendered this infeasible?
    (Honestly I'd hope that the legal issues force things the other way, including forcing the App Store to provide more developer value by doing a much better job of constant app improvement -- both per-CPU finalization, and constant recompilation of older code with newer compilers, along with much better support for debugging help. Well, we'll see. Maybe, with the current rickety state of compiler trustworthiness, that vision is still too much to hope for?)
  • OreoCookie - Tuesday, October 5, 2021 - link

    I think you are spot-on: I don’t think there would have been a similarly large payoff as compared with going from 32 bit to 64 bit. Given all the external parameters, pandemic, staff leaving, going with a tock cycle is a prudent choice. If anything, Apple undersold the improvements and could genuinely have made more of a deal about focussing on efficiency with this release. Given how much faster they are than their competition, I think focussing on efficiency is a good thing.

    Further, *if* Apple had decided on adopting a new instruction set, I would have expected to see traces of that in the toolchain, e. g. in llvm.
  • name99 - Tuesday, October 5, 2021 - link

    Yeah, the one thing one sees in the toolchain (eg Andrei's link above) https://community.arm.com/developer/tools-software...
    is just how immature SVE compiling still is.

    I don't want to complain about that -- compilers are HARD! But releasing HW on the hope that the compiler will get there is a tough sell.
    On the one hand, yes, it is harder for compiler devs (and everyone else, like those who write specialized optimized assembly) to make progress without HW.
    On the other hand, you only get one chance to make a first impression, and if you blow it with a fragmented ISA, a poor implementation, or unimpressive performance (*cough* AVX512 *cough*) it's hard to recover from that.
    I guess Apple see little downside in having ARM bear the costs of being the pioneer this time round.
  • OreoCookie - Thursday, October 7, 2021 - link

    Yes, the maturity of the toolchain is another major factor: part of Apple’s secret sauce is the tight integration of software and hardware. Its SoCs are designed to accelerate e. g. JavaScript and reference counting (https://twitter.com/Catfish_Man/status/13262384342...

    Another thing is that at least some of the new capabilities that SVE brings are probably — at least in part — covered by other specialized hardware on Apple’s SoCs.

    PS AVX512 pipelines are also massively power hungry, so that’s another trade-off to consider.
  • williwgtr - Tuesday, October 5, 2021 - link

    It may be faster, but what good is that if you want to play for 20 minutes and end up with low FPS? The CPU throttling is aggressive to prevent it from getting hot.
  • Zerrohero - Tuesday, October 5, 2021 - link

    I don’t know but it’s still better than anything else out there?
  • LiverpoolFC5903 - Tuesday, October 5, 2021 - link

    All that power, but what is the point? It's crippled by iOS and you can only do so much on an iPhone. Can you run emulators? Can you attach standard game controllers/peripherals like you can do on Android? Can you copy and paste media files from your PC onto your iPhone without going through a convoluted process?

    If this hardware was available to android phone manufacturers, you could actually see the potential of these chips.

    I believe Apple can mint money by selling their SOCs to Android smartphone manufacturers. It doesn't have to be the latest one; they could offer last year's SOCs at premium prices for high-end Android devices. Imagine running Dolphin on an A14 or A15 powered Android phone!
  • Zerrohero - Tuesday, October 5, 2021 - link

    You have never used an iPhone.

    So how do you know this HW is “crippled by iOS”?

    Apple itself is using this power for many very nice consumer facing features, like computational photography. Faster and more efficient processing, yes please.

    What are those amazing power user use cases that Android allows and iOS doesn’t?

    Android OEMs have shown year after year that they can’t unleash the potential of anything because they don’t own any of the relevant parts (chips, SW) and they don’t understand product design.

    And yes, PlayStation and Xbox controllers work just fine in iPhone and iPad.
  • LiverpoolFC5903 - Tuesday, October 5, 2021 - link

    Do what exactly? Browse the web? Listen to music? Do social media?

    I can connect controllers, mice, keyboards, external hard drives, pen drives and more via USB OTG. I can install any emulator I like from any source I want, not just the Play Store. I can download apps from anywhere, store them and send them to other phones. I can simply copy and paste my music collection in my desktop to my phone as opposed to going through a rigamarole. I can root, and install custom ROMs of any shape and form I like. I can completely alter the way my phone looks and functions. I can install browsers like Firefox with different engines, as opposed to WebKit-based browsers ONLY, all of which are simply clones.

    I can do so much more with my phone than you can possibly fathom as an iPhone user.

    An Android phone with an Apple SOC would be a billion times better than an iOS phone with the same hardware in terms of overall functionality.
  • Nozuka - Tuesday, October 5, 2021 - link

    All the things you listed will feel like a waste of time to most users, tbh. ;) (And can also cause a lot of problems.)
    I used to like these things too in the earlier Android years and I get how it can be fun, but now I just can't be bothered to spend the time. I just need a reliable and fast phone that does the most important tasks well, gets updates for a long time and stays fast. The A15 will be plenty fast for years.
    And if I ever get a new device, I just want to restore without any hassle. Or if a new OS version arrives, I want to install it without fear that any of the customizations will be broken.
    IMHO iOS still provides the most hassle free experience. And if the masses are missing some crucial feature it usually gets added.

    But if you like to tinker around, then iOS devices are definitely not the right devices for you.

    " I can simply copy and paste my music collection in my desktop to my phone as opposed to going through a rigamarole. "
    I would argue that this is way more tedious than just adding the music to your library once and then it is available on all your devices.
  • dontlistentome - Tuesday, October 5, 2021 - link

    Hey Siri/Google. Show me two people with opposing use-cases. :-)

    I call a draw (I can be judge, I have a 13 pro and a Pixel 5. Like both).
  • Spunjji - Friday, October 8, 2021 - link

    Your comment speaks to my own experiences. I used to be big into Android customisation in the early days, but I gave up around about Android 5 / Lollipop when the core OS included sufficient features to be satisfying. I got extremely tired of screwing around with hacking in custom software, and my experiences with the numerous things you can *theoretically* do with an Android device were - generally speaking - poor, and not worth the effort.
  • michael2k - Wednesday, October 6, 2021 - link

    6+ years of OS updates? And before you laugh, my sister-in-law and mother-in-law both rock iPhone 6S models getting iOS 15.

    Also you seem to be under the mistaken assumption that iOS doesn't support USB drives:
    https://www.amazon.com/SanDisk-iXpand-Flash-Drive-...

    External keyboards:
    https://www.amazon.com/Omars-Certified-Plug-n-Go-L...

    USB OTG:
    https://www.amazon.com/Adapter-Compatible-Portable...

    You are correct that Apple hasn't unlocked a lot of the iPhone's potential; but it's also correct that no one in the Android space is willing to pay the premium necessary to come close to the iPhone's processor either. There just aren't enough people like you willing to pay Qualcomm enough to cover the extra cost of developing a faster and more powerful CPU.
  • Nicon0s - Tuesday, October 5, 2021 - link

    The same tired "you have never used an iPhone" argument, actually showing a lack of arguments.

    The road for computational photography was paved by Android smartphones, not Apple.
    Computational photography on the Pixel 4a with the very old SD 730 is better than on an iPhone SE 2020, for example.

    >What are those amazing power user use cases that Android allows and iOS doesn’t?

    Well, it looks like you are quite unfamiliar with Android.
    Anyway, a simple example is DeX and similar implementations on other phones.
    Another example is being able to use emulators and turning an Android phone into a mini console.
  • Aq901_22 - Tuesday, October 5, 2021 - link

    > Computational photography on the Pixel 4a with the very old SD 730 is better than on an iPhone SE 2020, for example.

    A key difference is that the SE 2020 does computational photography/videography in real time, which necessitates a decently powerful processor to execute those tasks. The Pixel 4a doesn’t have Live HDR in preview/during recording when recording videos (only in stills), nor does it have real-time Portrait Mode/bokeh control simultaneously with Live HDR, nor something like Portrait Lighting control before taking a pic. The point is, the Pixel 4a has impressive computational features (like night mode, which the SE lacks) for its price.

    But its downside is that everything is done in post (minus HDR), and the Pixel 4a is notorious for having slower processing compared to its predecessors. So while the 4a is better than the SE 2 in stills (in low light specifically), the SE 2 has much better videography due to Apple’s obsession with doing everything in real time. And this doesn’t factor in that the SE is also better in slow-motion, panoramas and time-lapse, due to using its computational features and implementing them across the board.

    The 4a is great for the price and, despite using a much slower processor, it has a pretty good camera. But this also gives it disadvantages, and this is shown across the Pixel lineup, including the 5. I say this as a huge Pixel fan and a Pixel owner.
  • techconc - Tuesday, October 5, 2021 - link

    Agreed. Google did some early pioneering work with computational photography. However, unlike you, I don’t think most Android users understand just how far Apple has pushed in these areas, especially with regard to real time previews that require more processing power than is available on Android devices. This year’s “cinema mode” is just another example of that.

    Apple focuses on features and then designs silicon around that. Most others see what’s available in silicon and then decide which features they can add.
  • Nicon0s - Saturday, October 16, 2021 - link

    >I don’t think most Android users understand just how far Apple has pushed in these areas, especially with regard to real time previews that require more processing power than is available on Android devices.

    I don't think you understand what you are talking about. Real-time preview was implemented on the Pixel 4 with the old Snapdragon 855. You are just trying to make it seem a much bigger deal than it is.
    What Apple has pushed for is to match camera software features implemented by Google and other Android manufacturers.
  • techconc - Monday, October 18, 2021 - link

    Yeah, YEARS after iPhones have had this feature because Android phones have been anemic by comparison in terms of processing capabilities. The same with Apple adding this feature for video via Cinema mode. The point being, you're attempting to make it sound as if Android has completely led and pioneered computational photography and that's not true. Google has led in some areas, Apple has led in others. If you think computational photography is an area where Android devices currently lead, then you don't really know what you're talking about.
  • Nicon0s - Tuesday, October 19, 2021 - link

    "Yeah, YEARS after iPhones have had this feature because Android phones have been anemic by comparison in terms of processing capabilities. "

    That's only what you think. That live preview is mostly dependent on the ISP anyway, which is the one doing the processing.

    "The same with Apple adding this feature for video via Cinema mode."

    A boring, pointless feature most won't use.

    "The point being, you're attempting to make it sound as if Android has completely led and pioneered computational photography and that's not true. "

    It is true. The advancements in terms of computational photography that we get with modern smartphones today were led by Android manufacturers; Apple only followed. I still remember how Apple fanboys all over the internet claimed that Google faked the iPhone photo when they introduced Night Sight with the Pixel 3. Night Sight was better than it seemed possible, changing the paradigm for taking photos in low light.
    If you want to see another slew of new photo features, take a look at the Pixel 6 announcement. While Apple introduced what? Fake video blur? LoL

    " If you think computational photography is an area where Android devices currently lead, then don't really know what you're talking about."

    Actually I'm the only one that knows what he's talking about.
  • Nicon0s - Saturday, October 16, 2021 - link

    >A key difference is that the SE 2020 does computational photography/videography in real time, which necessitates a decently powerful processor to execute those tasks. The Pixel 4a doesn’t have Live HDR in preview/during recording when recording videos (only in stills), nor does it have real-time Portrait Mode/bokeh control simultaneously with Live HDR, nor something like Portrait Lighting control before taking a pic.

    What's most important is the results.
    Also, I'm pretty sure the 4a can approximate the HDR results in real time in the viewfinder, which is not really a big deal. I've seen it on other mid-range Androids as well.
    The idea is that you can have very decent computational photography even on phones that are slower in terms of CPU and GPU, while Apple does intentionally cripple the capabilities of some of their phones, like the lack of night mode on the SE; heck, even on the iPhone X night mode should be possible, no problem.

    >The 4a is great for the price and despite using a much slower processor, it has a pretty good camera. But this also makes it have disadvantages—and this is shown across the Pixel lineup, including the 5.

    Honestly I don't see any disadvantages because of the performance vs pretty much any phone around its price range, including the SE.
  • techconc - Monday, October 18, 2021 - link

    >What's most important is the results.

    Yeah, and seeing live previews helps with a photographer's composition and actually achieving those results. Without proper live previews, better results are more a matter of luck than skill.
  • Nicon0s - Tuesday, October 19, 2021 - link

    "Yeah, and seeing live previews helps with a photographer's composition and actually achieve those results. Without proper live previews, better results are more a matter of luck than skill."

    Nonsense, you don't really understand photography. Like I've said, what matters is the result. If I point my phone at the same subject and don't get an "approximated HDR result" in the live preview, it doesn't mean I'm going to take a worse photo or that I generally take worse photos.
  • Blark64 - Monday, October 11, 2021 - link

    >The road for computational photography was paved by Android smartphones, not Apple.
    Computational photography on the Pixel 4a with the very old SD 730 is better than on an iPhone SE 2020, for example.

    Your historic perspective on computational photography is, well, shortsighted. Computational photography as a discipline is decades old (emerging from the fields of computer vision and digital imaging), and I was using computational photography apps on my iPhone 4 in 2010.
  • Nicon0s - Saturday, October 16, 2021 - link

    We are talking about modern phones and modern solutions, not the start of computational photography. Apple's camera software evolved as a reaction to the excellent camera features implemented in Android phones. It's not your iPhone 4 that made computational photography popular and desirable, it's Android manufacturers.
  • Nicon0s - Tuesday, October 5, 2021 - link

    >I believe Apple can mint money by selling their SOCs to Android smartphone manufacturers.

    I would really like to see that, but more from the cost perspective.
    Things to consider: it doesn't have a modem, so that's an additional cost.
    Need for hardware support as it's a new platform, support for developing the motherboard.
    Need for support for software optimisations/camera optimisations etc.
    Need for support for drivers; when OEMs buy an SOC they buy it with driver support for a certain number of years and this influences the final price.

    All in all an A15 would probably cost an Android OEM a few times more than a Qualcomm SOC. So the real question is: would it be worth it?
  • Nicon0s - Tuesday, October 5, 2021 - link

    ... modem
  • LiverpoolFC5903 - Tuesday, October 5, 2021 - link

    I understand that an Apple SOC will probably cost 2-3 times as much at least, considering the points you mentioned, but will it be a deterrent for OEMs? High-end smartphones from the likes of Samsung and Sony are north of 1200 GBP, so they are pretty expensive anyway. Would a couple hundred quid added to the BOM and passed on to the customer dent the overall sales of these top-end smartphones? A customer who is paying 1400 GBP for the latest Note phone will probably be OK paying 1700 for a phone that's almost twice as fast. I personally would be happy to pay such prices for a high-end droid with an A15 or even A14 SOC, more so than paying that amount for a Z Flip or something.

    Imagine the immense potential of such a device. One can dream.

    To summarise, I don't believe cost will be a deterrent for OEMs, especially as customers are clearly willing to pay for cutting-edge tech, as indicated by steady smartphone sales year on year despite a steep increase in prices of premium phones over the last 10 years.
  • michael2k - Wednesday, October 6, 2021 - link

    I don't see how you could be close to correct.
    Worldwide Apple has 15% market share; let's pretend that of the remaining 85%, another 15% really would pay the extra an OEM would charge for the performance Apple offers.

    In other words, a good chunk of Samsung's user base (since they have 18% globally) would be willing to pay a premium for iPhone level performance, even if it's a one year old SoC instead of the current year SoC.

    The iPhone 11 Pro Max was estimated to have a $64 SoC; if Apple is to profit from selling it, let us assume they add a $50 margin, so that Apple sells each chip for $114:
    https://www.techinsights.com/blog/apple-iphone-11-...

    Qualcomm theoretically charges Samsung $57:
    https://www.gsmarena.com/samsung_galaxy_note20_ult...

    So if they can sell 200m chips at $114 with a $50 profit each, they get $10b in profit a year and $22b in revenue from chip sales. That's healthy, and relevant since their 2020 revenue was $274b!

    However, if they only got $5 per chip, their profit would only grow $1b and their revenue only by $2.2b, so clearly it wouldn't make sense to sell their chips for only $70. The question then is do you really think they could sell 220 million chips a year for $114?

    And don't forget that this isn't going to be free for them, since they have to provide the drivers and basic supporting hardware for OEMs to use (essentially a reference design).

    To compare to their other revenue streams, their smallest revenue stream right now is the iPad at roughly $30b a year:
    https://www.apple.com/newsroom/pdfs/FY21_Q3_Consol...

    So you have to expect Apple to aim for something comparable if they were to sell SoC (hence my original assumption of $50 profit per chip)
  • Speedfriend - Thursday, October 7, 2021 - link

    For what most people use a phone for, the difference in speed is unnoticeable. I have had both an iPhone and Android as daily phones for at least 5 years and never felt one was noticeably faster than the other
  • Nicon0s - Thursday, October 7, 2021 - link

    >The question then is do you really think they could sell 220 million chips a year for $114?

    Impossible. They couldn't even get 200 million chips to sell.
    Also, in order to have a pure $50 in profit, the price for the SOC would have to be higher than $114. Apple also has to offer support for these chips, drivers and so on. Those are additional costs, some of them long term.
  • Ppietra - Thursday, October 7, 2021 - link

    Nicon0s,
    Apple already "produces" around 250 million chips for itself, most of them usually at the most recent node process, leaving vacant most of the older production capacity... So it would be possible for Apple to produce extra 100-200 million older chips if it wanted to.
  • Nicon0s - Sunday, October 10, 2021 - link

    You are missing a few important details. It takes Apple 1 year or more to get their hands on over 200 million chips. It would be hard to double the production output at any given time in order to sell chips to other companies. Also, the older production capacity is never vacant; it's always booked in advance.
    Also, Apple wouldn't be able to sell an A13 at such a premium vs the latest Android SOCs. Maybe the SD898, Exynos 2200 or Dimensity 2000 won't be top in performance, but they will surely offer better performance than an A13 SOC, so I was obviously talking about the A15 in my comment.
  • Ppietra - Sunday, October 10, 2021 - link

    I don’t see what I am missing!
    The 200 million you commented about already represented 1 year of production; that means what would be produced would be dispersed throughout the year.
    Changing business models doesn’t happen overnight, so no one was talking about Apple selling 200 million by next year specifically. No one here was assuming, nor can say, that Apple would change things with no pre-established plan... that would be absolute nonsense.
    What people are talking about is Apple establishing a business where it would sell 200 million a year and reap profits from it - it’s a hypothetical.
    But I could put it in another perspective: if Apple were planning to sell old SoCs by next year, then it would already have planned its contracts to produce more.
  • Nicon0s - Saturday, October 16, 2021 - link

    At least follow the conversation if you want to join in. He was specifically talking about selling 200 million chips to other companies and I specifically responded to that. The quote and my answer are in the same place.
    Also the 200 million is obviously in addition to Apple's own chips which they use for their products.

    >But I could put it in another perspective: if Apple were planning to sell old SoCs by next year, then it would already have planned its contracts to produce more.

    Even if it did, volume would be much lower, as most of the capacity will already be booked, and those chips would still be quite expensive and obviously less competitive. The only way Apple could get more is if they outbid others. Realistically, TSMC would not simply turn their back on their other customers just to sell to Apple.
  • Ppietra - Saturday, October 16, 2021 - link

    Nicon0s, it’s Apple that is already using that near-200-million manufacturing capacity. Once a new year comes it will be adding more manufacturing capacity with a new node... So no, there is no problem with volume: if it planned for it, Apple would be able to have those contracts, because it would already be in the position of using that capacity. Apple already outbid others to be the first to use it.
  • Ppietra - Thursday, October 7, 2021 - link

    and support costs would be almost nothing compared with the extra revenue. Certainly you don’t expect billions of dollars in expenses just to keep drivers updated
  • michael2k - Thursday, October 7, 2021 - link

    You're assuming there exist 200 million customers willing to pay an extra $100 or so for Android smartphones. The cost isn't solely going to go to Apple; if Samsung used a premium part, they're going to want to profit from it too!

    ASP for Android phones is $261 or so, which means the vast majority of Android phones will be cheaper too:
    https://www.statista.com/statistics/951537/worldwi...
  • Nicon0s - Thursday, October 7, 2021 - link

    >but will it be a deterrent for OEMs?

    What type of question is that? Prices are extremely important, especially when we talk about products that are expensive from the start; being more expensive would definitely be a problem.

    >Would a couple hundred quid added to the BOM and passed on to the customer dent the overall sales of these top-end smartphones?

    Yes it definitely would.

    >will probably be OK paying 1700 for a phone that's almost twice as fast.

    Not really twice as fast, and the advantage would mostly be visible in benchmarks. In this case I would definitely buy the same phone with a Qualcomm SOC at a cheaper price. It's not like phones with Snapdragon SOCs can't handle the OS, photo processing and so on.

    >To summarise, I don't believe cost will be a deterrent for OEMs

    Taking into consideration how price-sensitive they are, they would definitely be discouraged.
  • techconc - Tuesday, October 5, 2021 - link

    “ All in all an A15 would probably cost an Android OEM a few times more than a Qualcomm SOC. So the real question is: would it be worth it?”

    Probably not. Most Android users don’t buy flagship level devices. Developers typically develop for the lowest common denominator. I suspect most of the benefits would go unused.
  • Jetcat3 - Tuesday, October 5, 2021 - link

    Thanks so much for this! It’s great to see Apple focus on improving efficiency across the board.

    I literally can’t wait to see the display and battery analysis as I’ve noticed much better touch sensitivity with the move to Y-Octa AMOLED panels with the 13 Pro specifically.
  • easp - Tuesday, October 5, 2021 - link

    I'm interested to see how this all plays out in their Desktop-class variants.
  • zodiacfml - Tuesday, October 5, 2021 - link

    No fan of Apple, but this is just one of the reasons Android devices should not be priced the same as Apple's. The SoC has plenty of potential, only for Apple to power- and TDP-limit it in the iPhone; check out the iPad Mini testing videos on YouTube yourself.
  • theblitz707 - Wednesday, October 6, 2021 - link

    We don't need to draw conclusions about SoCs when games are tested. Games are just games. And I bet most people looking at these sustained figures, efficiency figures etc. are just trying to understand how much better it will be in games. So you could include a few well-known games. Your audience would grow a lot too.
  • LiverpoolFC5903 - Wednesday, October 6, 2021 - link

    I think the issue is, you cannot reasonably draw a conclusion given the variables involved. For example, two devices may be able to run Genshin Impact at 60 fps, but are the visuals of the same quality?

    Emulators would be a good way to do an "apples vs apples" comparison, but then you cannot install emulators on iPhones.
  • six_tymes - Wednesday, October 6, 2021 - link

    maybe I missed it, but where in this article does it say whether the A15 is v8 or v9 based? I am yet to find that information. Does anyone know, and have sourcing?
  • name99 - Wednesday, October 6, 2021 - link

    v8 based.
    Essentially ARMv8.5 minus BTI.
  • name99 - Wednesday, October 6, 2021 - link

    https://community.arm.com/developer/ip-products/pr...
    lists what's new in 8.5

    Wiki still says A15 is essentially 8.4, but A14 is generally described as above, eg
    https://twitter.com/never_released/status/13610248...

    On the other hand, no-one has seen evidence of MTE usage in iOS (either iOS14 or 15). Which may reflect non-presence, or that compiler support isn't yet there?

    Mostly 8.5 is technical stuff that would be hard to test.
    One possibility would be the random number instructions. Maybe we'll get clarification of these over the next month?
  • name99 - Wednesday, October 6, 2021 - link

    We can see a little more detail here:
    https://github.com/llvm/llvm-project/blob/main/llv...

    We see that, among other things, A14 added
    - cache clean to deep persistence (basically instructions to support non-volatile-ram...)
    - security stuff to invalidate predictions
    - speculation barrier
    - and a few other (uninteresting to me) 8.5 security features

    Interestingly, on the performance side, it also claims that A14 added (over A13) fusion of literal generation instructions, something I did not see when I tried to test for it -- presumably you have to get the order of the literal instructions correct, and I used the incorrect order in my quick tests?
    There are also claims of a number of other instruction fusion patterns that I want to test at some point!

    This was added in late Jan 2021, which suggests we won't see the equivalent for A15 until beginning of next year :-(
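
    As an aside, for anyone wondering what a "literal generation" pair even is, here is a hypothetical illustration (the constant is arbitrary, and this is not taken from the LLVM target file): clang materializes a wide 64-bit constant like the one below as a short MOVZ + MOVK sequence on AArch64, and literal fusion means the core treats such an adjacent pair as a single operation -- presumably why the exact ordering matters when trying to detect it.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustration only: a wide immediate that clang typically materializes
       with a MOVZ followed by MOVK instructions on AArch64; a fusing core can
       merge the adjacent pair into one operation. */
    static uint64_t wide_literal(void)
    {
        return 0x0123456789ABCDEFULL;   /* arbitrary 64-bit constant */
    }

    int main(void)
    {
        printf("%llx\n", (unsigned long long)wide_literal());
        return 0;
    }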
  • OreoCookie - Thursday, October 7, 2021 - link

    My understanding is that ARM v9 essentially mandates parts of the ARM v8.x ISA that were optional and introduces SVE2. If I read your posts correctly (thanks for doing the checking, much appreciated), then it seems that Apple has implemented the “first half” of ARM v9 anyway and the only notable omission is SVE2.

    SVE2 sure sounds like a nice-to-have, but like you wrote the compiler will play a crucial role here. I reckon a proper implementation will eat up quite a bit of die area, and if you are not going to use it, what is the point?
  • RoyceTrentRolls - Wednesday, October 6, 2021 - link

    Hear me out:

    A13 - 14 cycles 8MB 2 cores/big cluster
    M1 - 16 cycles 12MB 4 cores
    M1X/M2? - 18 cycles 16MB 8 cores

    🤪
  • name99 - Wednesday, October 6, 2021 - link

    It's a reasonable hypothesis BUT a big problem with cores sharing an L2 is that they all have to sit on the same frequency plane. (They can have different voltages, which matters if one of them is eg engaged in heavy NEON work, while another is doing light integer work; but they must share frequencies.)

    This may be considered less of a problem for the target machines?
    Alternatively you just accept that life ain't perfect and provide two clusters of 4core+(?12?16MB L2)?
  • OreoCookie - Thursday, October 7, 2021 - link

    With 2+4 core SoCs, I don’t think this is that big of an issue, though. Of course, it gets trickier once you scale up to more than 8 performance cores, but we will have to see what Apple’s solution is here anyway (8-core chiplets perhaps?).

    Overall, though, it seems that massively increasing caches is a common trend, AMD has been going in that direction (including their Zen 3 with additional cache slated for later this year) and IBM will be using massive caches on their new CPUs that will power their Z15 mainframes. The drawbacks are pretty clear, but the potential upside is, too.
  • mixmaxmix - Thursday, October 7, 2021 - link

    battery life test result please
  • mixmaxmix - Saturday, October 9, 2021 - link

    please
  • Raqia - Thursday, October 7, 2021 - link

    Die shot now available:

    https://semianalysis.com/apple-a15-die-shot-and-an...

    More caches all around, and the GPU doubles the number of FP32 ALUs without adding much die area.
  • MobiusPizza - Thursday, October 7, 2021 - link

    Genshin Impact screenshot with 5 star characters Baal and Ganyu is stealth bragging by AnandTech.
  • GC2:CS - Sunday, October 10, 2021 - link

    I did not notice that.

    That has to be some serious luck by Andrei…
  • hmdqyg - Friday, October 8, 2021 - link

    In 2024, the Apple A18 might be the first phone SoC to conquer Genshin Impact.
  • GC2:CS - Sunday, October 10, 2021 - link

    The game is nice and really heavy. But the problem, as was written above, is that the real graphics fidelity is all over the place.

    If I remember correctly, the M1 iPads got a higher-resolution update that comfortably prevents them from hitting 60 fps, not even mentioning native 120 fps.
