AMD Kaveri Review: A8-7600 and A10-7850K Tested
by Ian Cutress & Rahul Garg on January 14, 2014 8:00 AM EST

Final Words
As with all previous AMD APU launches, we're going to have to break this one down into three parts: CPU, the promise of HSA and GPU.
In a vacuum where all that's available are other AMD parts, Kaveri and its Steamroller cores actually look pretty good. At identical frequencies there's a healthy increase in IPC, and AMD has worked very hard to move its Bulldozer family down to a substantially lower TDP. While Trinity/Richland were happy shipping at 100W, Kaveri is clearly optimized for a much more modern TDP. Performance gains at lower TDPs (45/65W) are significant. In nearly all of our GPU tests, a 45W Kaveri ends up delivering very similar gaming performance to a 100W Richland. The mainstream desktop market has clearly moved to smaller form factors and it's very important that AMD move there as well. Kaveri does just that.
In the broader sense however, Kaveri doesn't really change the CPU story for AMD. Steamroller comes with a good increase in IPC, but without a corresponding increase in frequency AMD fails to move the single threaded CPU performance needle. To make matters worse, Intel's dual-core Haswell parts are priced very aggressively and actually match Kaveri's CPU clocks. With a substantial advantage in IPC and shipping at similar frequencies, a dual-core Core i3 Haswell will deliver much better CPU performance than even the fastest Kaveri at a lower price.
The reality is quite clear by now: AMD isn't going to solve its CPU performance issues with anything from the Bulldozer family. What we need is a replacement architecture, one that I suspect we'll get after Excavator concludes the line in 2015.
In the past AMD has argued that for the majority of users, the CPU performance it delivers today is good enough. While true, it's a dangerous argument to make (one that eventually ends up with you recommending an iPad or Nexus 7). I have to applaud AMD's PR this time around as no one tried to make the argument that CPU performance was somehow irrelevant. Although we tend to keep PR critique off of AnandTech, the fact of the matter is that for every previous APU launch AMD tried its best to convince the press that the problem wasn't with its CPU performance but rather with how we benchmark. With Kaveri, the arguments more or less stopped. AMD has accepted its CPU performance is what it is and seems content to ride this one out. It's a tough position to be in, but it's really the only course of action until Bulldozer goes away.
It's a shame that the CPU story is what it is, because Kaveri finally delivers on the promise of the ATI acquisition from 2006. AMD has finally put forth a truly integrated APU/SoC, treating both CPU and GPU as first class citizens and allowing developers to harness both processors, cooperatively, to work on solving difficult problems and enabling new experiences. In tests where both the CPU and GPU are used, Kaveri looks great, as this is exactly the promise of HSA. The clock starts now. It'll still be a matter of years before we see widespread adoption of heterogeneous programming and software, but we finally have the necessary hardware, priced below $200.
Until then, outside of specific applications and GPU compute workloads, the killer app for Kaveri remains gaming. Here the story really isn't very different than it was with Trinity and Richland. With Haswell Intel went soft on (socketed) desktop graphics, and Kaveri continues to prey on that weakness. If you are building an entry level desktop PC where gaming is a focus, there really isn't a better option. I do wonder how AMD will address memory bandwidth requirements going forward. A dual-channel DDR3 memory interface works surprisingly well for Kaveri. We still see 10 - 30% GPU performance increases over Richland despite not having any increase in memory bandwidth. It's clear that AMD will have to look at something more exotic going forward though.
For casual gaming, AMD is hitting the nail square on the head in its quest for 1080p gaming at 30 frames per second, albeit generally at lower quality settings. There are still a few titles that are starting to stretch the legs of a decent APU (Company of Heroes is particularly brutal), but it all comes down to perspective. Let me introduce you to my Granddad. He’s an ex-aerospace engineer and likes fiddling with stuff. He got on board the ‘build-your-own’ PC train in about 2002 and stopped there – show him a processor newer than a Pentium 4 and he’ll shrug it off as something new-fangled. My grandfather has one amazing geeky quality that shines through though – he has played and completed every Tomb Raider game on the PC that he can get his hands on.
It all came to a head this holiday season when he was playing the latest Tomb Raider game. He was running it on a Pentium D with an NVIDIA 7200GT graphics card. His reactions are not the sharpest, and he did not seem to mind running at sub-5 FPS at 640x480. I can imagine many of our readers recoiling at the thought of playing a modern game at 480p and 5 FPS. In the true spirit of the season, I sent him an HD 6750, an identical model to the one in this review. Despite some issues finding drivers (his Google-fu needs a refresher), he loves his new card and can now play reasonably well at 1280x1024 on his old monitor.
The point I am making with this heart-warming/wrenching family story is that the Kaveri APU is probably the ideal fit for what he needs. Strap him up with an A8-7600 and away he goes. It will be faster than anything he has used before, it will play his games as well as that new HD 6750, and when my grandmother wants to surf the web or edit some older images, she will not have to wait around for it to happen. It should all come in on a budget they would like as well.
Comments
geniekid - Tuesday, January 14, 2014
Would've been nice to see a discrete GPU thrown in the mix - especially with all that talk about Dual Graphics.

Ryan Smith - Tuesday, January 14, 2014
Dual graphics is not yet up and running (and it would require a different card than the 6750 Ian had on hand).

Nenad - Wednesday, January 15, 2014
I wonder if Dual Graphics can work with HSA, although I doubt it, due to cache coherence if nothing else.

While on the subject of HSA, I must say that it looks very promising. I do not have experience with AMD-specific GPU programming, or with OpenCL, but I do with CUDA (and some AMP), and the ability to avoid CPU/GPU copies would be a great advantage in certain cases.
The interesting thing is that AMD now has HW that supports HSA but does not yet have the software tools (drivers, compilers...), while NVIDIA does not have the HW but does have the software: in the new CUDA you can use unified memory, even if the driver simulates the copies for you (which supposedly means that when NVIDIA delivers the HW, your unaltered app from last year will work and take advantage of it).
Also, while HSA is a great step ahead, I wonder if we will ever see one much more important thing if GPGPU is ever to become mainstream: PREEMPTIVE MULTITASKING. As it is now, the programmer/app still needs to spend time figuring out how to split work into small chunks for the GPU, in order not to take up too much GPU time at once. That increases the complexity of GPU code and relies on the good behavior of other GPU apps. Hopefully the next AMD 'unification' after HSA will be preemptive multitasking ;p
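For readers unfamiliar with the model Nenad describes: with CUDA 6-style unified memory, a single allocation is visible to both processors and the explicit copies disappear from the source code. A minimal sketch (illustrative only; the kernel and sizes are made up for this example):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Double every element in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *data = nullptr;

    // One allocation, visible to both CPU and GPU. On hardware without
    // truly shared memory the driver migrates pages behind the scenes,
    // but the explicit cudaMemcpy calls disappear from the source.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = float(i);   // CPU writes directly

    scale<<<(n + 255) / 256, 256>>>(data, n);         // GPU reads and writes the same pointer
    cudaDeviceSynchronize();                          // must finish before the CPU touches data again

    printf("data[1] = %f\n", data[1]);                // prints 2.000000
    cudaFree(data);
    return 0;
}
```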
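And the manual chunking he laments, where a long job must be sliced up by hand so it does not monopolize a non-preemptible GPU, looks roughly like this (again a hypothetical sketch, not code from any shipping app):

```cuda
#include <cuda_runtime.h>

// Process one slice of the array; the arithmetic is a stand-in for real work.
__global__ void processChunk(float *data, int offset, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) data[offset + i] = data[offset + i] * 0.5f + 1.0f;
}

// Without preemption, one huge launch would hog the GPU (and can even
// trip the display watchdog), so the host slices the job into many
// short launches and lets other GPU clients run in between them.
void processInChunks(float *data, int n, int chunk) {
    for (int offset = 0; offset < n; offset += chunk) {
        int count = (n - offset < chunk) ? (n - offset) : chunk;
        processChunk<<<(count + 255) / 256, 256>>>(data, offset, count);
    }
    cudaDeviceSynchronize();
}

int main() {
    const int n = 1 << 22;
    float *data = nullptr;
    cudaMalloc(&data, n * sizeof(float));
    processInChunks(data, n, 1 << 16);   // 64K-element slices
    cudaFree(data);
    return 0;
}
```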
tcube - Thursday, January 16, 2014
Preemption (dynamic context switching) is said to come with the Excavator core / Carrizo APU. And they do have the toolset for HSA/HSAIL; just look it up on AMD's site. Bolt, I think it's called; it is a C++ library.

Furthermore, Project Sumatra will make Java execute on the GPU: at first via an OpenCL wrapper, then via HSA, and in the end the JVM itself will do it for you via HSA. Oracle is pretty committed to this.
kazriko - Thursday, January 30, 2014
I think where multiple GPU and Dual Graphics setups will really shine is when we start getting more Mantle applications. With that, each GPU in the system can be controlled independently, and developers could put GPGPU processes that benefit from low latency to the CPU on the APU's built-in GPU, and graphics rendering that doesn't need latency that low on the discrete card.

Preemption would be interesting, but I'm not sure how game-changing it would be once you get into HSA's juggling of tasks back and forth between different processors. Right now there is multitasking they could do by having several queues going into the GPU, and you could have several tasks running from each queue across the different CUs on the chip. Not preemptive, but definitely multi-threaded.
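In CUDA terms, the multiple queues kazriko describes are streams: independent command queues whose kernels the hardware may run concurrently across compute units, though never preemptively. A minimal two-queue sketch (illustrative names and workloads):

```cuda
#include <cuda_runtime.h>

__global__ void taskA(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

__global__ void taskB(float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *x = nullptr, *y = nullptr;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    // Two independent queues: the scheduler may run kernels from both
    // concurrently across different compute units, but once a kernel
    // starts it runs to completion; nothing is preempted.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    taskA<<<(n + 255) / 256, 256, 0, s1>>>(x, n);
    taskB<<<(n + 255) / 256, 256, 0, s2>>>(y, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```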
MaRao - Thursday, January 16, 2014
Instead, AMD should create new chipsets with dual APU sockets. Two A8-7600 APUs could give tremendous CPU and GPU performance while maintaining 90-100W power usage.

PatHeist - Thursday, February 13, 2014
Making dual socket boards scale well is tremendously complex. You also need to increase things like the CPU cache by a lot. Not to mention that performance would tend to scale very badly with the additional CPU cores for things like gaming.

kzac - Monday, February 16, 2015
Having two or more APUs on a logic board would defeat the purpose of having an APU in the first place, which was to eliminate processing being handled by a logic board controller. With dual APU sockets, some controller would need to be interposed to direct work to the APUs, which could create a bottleneck in processing time (clock cycles). This is the very reason multi-core APUs and CPUs exist today.

It's my expectation that we will start to see much more memory being added to the APU at some point, to increase throughput. Essentially, think of future APUs becoming a mini computer within the computer; the only current limitations are heat extraction and power consumption.
5thaccount - Tuesday, January 21, 2014
I'm not so interested in dual graphics... I am really curious to see how it performs as a standard old-fashioned CPU. You could even bench it with an nVidia card. No one seems to be reviewing it as a processor. All reviews review it as an APU. Funny thing is, several people I work with use these, but they all have discrete graphics.

geniekid - Tuesday, January 14, 2014
Nvm. Too early!