23 Comments
melgross - Wednesday, May 10, 2017 - link
We really don't know whether this customer is Apple or not. In fact, there is a strong possibility that this will be the last Imagination IP Apple uses, not the previous one. If it's already rolling off, it's possible we'll see this. That one-to-two-year schedule Apple gave them is still a ways off.

For all we know, Apple is using some of it in this year's SoC. I wouldn't be surprised.
Yojimbo - Wednesday, May 10, 2017 - link
Judging by Imagination's history it will be over a year from now, perhaps 2. From that standpoint alone it is possible Apple will use it. However, wouldn't it be safe to assume that if this customer is using 2 clusters, it's probably not Apple? Imagination has Rogue-based GPUs (Series 8XE and Series 8XE Plus) newer than what Apple used in the iPhone 7. My guess is that it is these GPUs that Apple has plans to use over the next one to two years.
Yojimbo - Wednesday, May 10, 2017 - link
The first sentence was meant to say "Judging by Imagination's history it will be over a year from now until this Furian-based GPU is released in a product."
SydneyBlue120d - Thursday, May 11, 2017 - link
I bet time to market will be greatly improved with the new focus on GPU only...
Yojimbo - Thursday, May 11, 2017 - link
I doubt it. I think it's not that they are taking a long time to get to market, it's that they announce their upcoming products an extremely long time in advance. Plus, I have read reports that Apple is hiring away some of their GPU engineers.
Alexvrb - Thursday, May 11, 2017 - link
That's because what they're making is the design. They aren't producing Apple's chips, for example. They give them access to the latest design, Apple integrates it into their SoC, and has it fabbed. Obviously that's a gross simplification, but any SoC designer that uses their design will take a while to get it into products. With that being said, it may be possible for them to work more closely with these vendors to get product out a little bit faster.
vladx - Wednesday, May 10, 2017 - link
Yeah, it's most likely either MediaTek or Xiaomi.
mczak - Wednesday, May 10, 2017 - link
Though a 2-cluster design doesn't exactly sound like "high-end mobile" to me in 2018: midrange yes, high-end no. Even the 2017 MediaTek Helio X30 is using a four-cluster Rogue (and although it has rather high clocks, the GPU is unlikely to keep up with other current high-end chips), so despite the clusters being faster, this would be a step back.
vladx - Wednesday, May 10, 2017 - link
Average Joes don't look out for the cluster count of the GPU, but for the number of cores on the CPU side. That, and this is only the first Furian design; others with 4 or more clusters will surely follow soon enough.
Alexvrb - Wednesday, May 10, 2017 - link
I do take issue with them saying high-end mobile, because that's where the 2-cluster configuration will be when it ships. Furian should offer improved performance for the low to mid-range, while keeping die area down. With that being said, with higher clocks a 2-cluster Furian should match the Series 7 quad cluster in the X30. Either way, I know they're going for volume first but I am most interested in the 4+ cluster Furians.

If they offer a variant that is DX12 compatible, I'd like a 6+ cluster model in a high-end ARM SoC running Windows on ARM on a tablet. Like a non-Pro Surface class device.
Alexvrb - Wednesday, May 10, 2017 - link
that's not where*
I usually try to proofread. Sigh.
rahvin - Wednesday, May 10, 2017 - link
Apple announced they will stop using PowerVR within 2 years. It's expected that either the next iteration of their chips or the one after will be using an internally developed GPU.

This was all over the financial news because Apple accounts for 75% of Imagination's revenue. If Apple does switch as they've said and this is 2 years out, it will likely never see the light of day, since Imagination could go bankrupt before they can develop it.
Yojimbo - Wednesday, May 10, 2017 - link
I wonder what's going to happen to Imagination's ray tracing acceleration efforts. I know OTOY has shown interest in it. Although, OTOY promised to have CUDA cross-compiling support for non-CUDA architectures in OctaneRender by now, which is something they'd need in order to use PowerVR GPUs, and they haven't released it yet. I suppose there's a chance they might skip OctaneRender 3.1 completely and move straight to 4, releasing the cross-compiling support at that time.
StrangerGuy - Wednesday, May 10, 2017 - link
SoC CPU/GPU performance has reached a point where it is almost purely academic: the underlying economics of the software running on them simply doesn't demand much performance out of even budget SoCs. It doesn't matter whether it's Android or iOS either: it's either targeting the lowest-common-denominator hardware for maximum ad eyeballs or bust.
skavi - Wednesday, May 10, 2017 - link
Mobile VR.
osxandwindows - Wednesday, May 10, 2017 - link
Faster chips are still needed for local machine learning and AI. Local image recognition on the iPhone is still slow with the A10 Fusion.
darkich - Friday, May 12, 2017 - link
No, you're all simply clueless about the state of mobile gaming. If anything, the GPUs are still way underpowered for the demands they are facing.
I'll just go by examples: in order to provide 60fps at native high-end smartphone and tablet resolutions (1400p+), games such as CSR 2, Dawn of Titans, Legacy of Discord, Marvel Contest of Champions, Modern Combat Versus, Transformers: Forged to Fight, Battle Bay, and NBA 2K17 need AT LEAST 200-400 GFLOPS of raw GPU power.
Now consider that mobile SoCs typically throttle to about 60% of their theoretical performance during sustained gaming, and do the math.
Fun fact: ALL of the games mentioned above play at around 30fps at native resolution, even on the current highest-end hardware (iPad Pro).
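As a rough sketch of the arithmetic the comment above invites (taking the commenter's 200-400 GFLOPS sustained figures and the ~60% throttle ratio as given, not as verified numbers):

```python
# If sustained gaming needs 200-400 GFLOPS and a mobile SoC only
# delivers ~60% of its theoretical peak under sustained load, the
# peak rating the chip must advertise is sustained_need / 0.60.
THROTTLE_FACTOR = 0.60  # sustained/peak ratio claimed in the comment


def required_peak_gflops(sustained_need, throttle=THROTTLE_FACTOR):
    """Peak GFLOPS needed so the throttled rate still meets the game's demand."""
    return sustained_need / throttle


for need in (200, 300, 400):
    peak = required_peak_gflops(need)
    print(f"{need} GFLOPS sustained -> ~{peak:.0f} GFLOPS peak required")
```

By this estimate, meeting the upper 400 GFLOPS figure would take a chip rated around 667 GFLOPS at peak, which is consistent with the commenter's claim that even flagship SoCs of the era fall short at native resolution.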
Mat3 - Wednesday, May 10, 2017 - link
Was the Wizard GPU ever released?
Alexvrb - Thursday, May 11, 2017 - link
It was released, in the way they release all their designs. Now if you're asking whether an SoC manufacturer ever licensed and integrated Wizard into a product, I don't believe so. Without a major API vendor (Khronos, MS, Apple) supporting it, it's a niche feature.
Yojimbo - Thursday, May 11, 2017 - link
OTOY said they are interested in taking advantage of it, but I haven't seen anything about that recently.
vladx - Wednesday, May 10, 2017 - link
Oh yeah, how I'd love a phone with a HiSilicon+Furian SoC inside.
SydneyBlue120d - Thursday, May 11, 2017 - link
This should be the GPU launch platform for Google Fuchsia :)
MrPoletski - Thursday, January 25, 2018 - link
Give me an AIB GPU for my PC already.