Ever since Raja Koduri left his leadership post at AMD’s Radeon Technologies Group for a similar position at Intel, the big question on everyone’s mind has been what would become of the RTG and who would lead it. While AMD – and ATI before it – is far from a stranger to graphics, the formation of the RTG in 2015 marked an important point in the development of AMD’s graphics business, as it was finally re-made whole under the AMD umbrella. As a business-within-a-business, RTG was largely seen as Raja’s baby, so his departure was destined to have a large impact.

Now, just over two months later, we finally know the fate of RTG. Today AMD is announcing the new leadership for RTG, bringing on two new Senior Vice Presidents to run the division. Meanwhile the company is also making some organizational changes to RTG: while RTG will remain intact as a single entity, it will be internally organized into dedicated engineering and business teams, each in turn reporting to its respective SVP. This change means that RTG doesn’t have a true, singular leader – not that Raja had full oversight of RTG’s business operations, I’m told – but rather both of the new SVPs report to AMD’s CEO, Dr. Lisa Su, and together represent the business unit.

Finally, in the process of reorganizing and rebuilding the leadership of the RTG, AMD is announcing that they have also reassigned the semi-custom business unit. Semi-custom is now being folded into the business side of RTG, and while AMD’s announcement doesn’t indicate any changes in how the semi-custom unit will operate, it will now fall under the purview of the head of RTG’s business group.

RTG Engineering Group Leadership: David Wang, Senior Vice President of Engineering

First off then, let’s talk about the engineering side. The new head of RTG’s engineering efforts (and arguably the de facto successor to Raja) will be David Wang. Wang is a name that some long-time followers may be familiar with, as until earlier this decade he was a long-time AMD employee who rose through the ranks to become a Corporate Vice President of AMD’s GPU and SoC efforts. And as you might expect for someone hired for AMD’s top GPU engineering post, Wang has a long history in the graphics industry, working for SGI before joining ArtX and going through the ArtX-to-ATI-to-AMD acquisition chain. Specific to his experience at AMD, Wang worked on every AMD GPU between the R300 and the Southern Islands (GCN 1.0) family, so he’s seen the full spectrum of AMD’s GPU designs.

More recently, he has been serving as a Senior VP at Synaptics, where he was one of several former ATI/AMD employees to jump ship around the turn of the decade. So for David Wang, in a sense this is coming back home to AMD, which off the top of my head makes him the third such high-profile engineer to leave and return over the last decade, after Raja Koduri and CPU guru Jim Keller.

Wang re-joins AMD at a critical time for its engineering group. With the Vega launch behind it, RTG’s engineering staff is in the middle of developing the Navi GPU architecture and beyond, all the while putting the finishing touches on Vega Mobile for this year and squeezing in a 7nm Vega design for servers in 2019. Vega’s launch has been a contentious one – as much for engineering reasons as for business reasons, if not more so – so Wang may very well be a much-needed breath of fresh air for RTG’s engineering teams.

Officially, AMD is stating that in his position, Wang will be responsible for “graphics engineering, including technical strategy, architecture, hardware and software for AMD graphics products,” the last item in particular being notable, as this confirms that software is staying under the control of the engineering group rather than being more distributed through AMD as it once was.

RTG Business Group Leadership: Mike Rayfield, Senior Vice President and General Manager

David Wang’s counterpart on the business side of matters will be Mike Rayfield, who is being tapped to serve as the Senior VP and General Manager of the business group. Rayfield is another industry veteran, albeit one without Wang’s immense GPU pedigree. Instead, Rayfield’s history includes posts such as VP and General Manager of NVIDIA’s mobile business unit (Tegra) and director of business development at Cisco. Most recently (and for the past 5 years), Rayfield has been at Micron, serving as the memory manufacturer’s VP and GM for their mobile business unit, which houses Micron’s mobile-focused memory and storage product groups.

AMD notes that Rayfield has “30 years of technology industry experience, including a deep understanding of how to grow a business and drive results,” which says a lot about AMD’s direction in very few words. Instead of hiring another GPU insider for the post, AMD is bringing in someone from outside the industry altogether, someone with plenty of experience managing a growing business. RTG’s business struggles are of course well-known at this point, so this offers AMD the chance to reset matters and try to fully right the conflicted business unit.

Rayfield is definitely joining RTG and AMD at an interesting time. On the business side of matters, RTG is contending with the fact that Vega (and practically every other GPU AMD makes) is proving incredibly popular with cryptocurrency mining groups, to the point that, above the entry-level market, the North American consumer market has been entirely depleted of AMD-based video cards. In the short term this means that AMD is selling everything they can make, but Rayfield will have to help AMD grapple with the long-term effects of this shift, and figure out how to keep the newly minted mining customers happy without losing yet more disillusioned gaming customers.

AMD Semi-Custom Folded Into RTG

Meanwhile, along with overseeing the traditional business aspects of the GPU group, Rayfield is also inheriting a second job: overseeing AMD’s semi-custom business unit. Previously a part of the Enterprise & Embedded unit, semi-custom is being moved under the business side of RTG, putting it under Rayfield’s control. This reorganization, in turn, will see the Enterprise & Embedded unit separated from semi-custom to become the new Datacenter and Embedded Solutions Business Group, which will continue to be operated by SVP and GM Forrest Norrod.

Truth be told, I’m still not 100% sure what to make of this change. AMD’s semi-custom focus was a big deal a few years back, and while the business has done well for itself, the company’s focus seems to have shifted back to selling processors directly, built on the back of the technological and sales success of the Zen architecture and its derivative products. Meanwhile, AMD is citing the fact that graphics is a core part of semi-custom designs as the rationale for putting the unit under the RTG. AMD still believes that the semi-custom business is more than just providing processors to the game console industry, but after a few years of pursuing this route, I do think this amounts to a realization that game consoles are going to remain the predominant semi-custom customer. So in this respect it’s a bit of a return to the status quo: game consoles are once again part of the graphics business. And in the meantime semi-custom has certainly become a lower priority for AMD, as it’s a business that is very regular in volume but relatively low in gross margin compared to the company’s improving CPU fortunes.

Finally, speaking of finances, it’s worth noting that AMD is calling this an increase in the company’s investment in RTG. The company isn’t throwing around any hard numbers right now, but it sounds like the overall RTG budget has been increased. AMD has of course struggled mightily in this area as a consequence of their lean years – in particular, it seems like AMD can’t complete as many GPU die designs per year as they once did – so combined with the shift in leadership, I’m optimistic that this means we’re going to see AMD move into an even more competitive position in the GPU marketplace.

Source: AMD

Comments

  • jjj - Tuesday, January 23, 2018

    Don't forget that GPU also means servers, cars, and whatnot, and that requires additional investment as GPUs and machine learning diverge.

    Navi should tape out this year, so I doubt much can change - we can't blame the new guys for it, good or bad.
  • eva02langley - Tuesday, January 23, 2018

    Definitely good news; it gives a good impression that AMD is taking things more seriously. Let's just hope this proves right and gives AMD the opportunity to expand into broader horizons.
  • rahvin - Tuesday, January 23, 2018

    Not just diverge, completely separate. I'd be willing to bet that within 5 years machine learning and AI will be using custom CPUs and ASICs. No one will be doing it on graphics chips or general CPUs. There's a dozen firms working on AI chips; heck, Intel has their own version. In the end it's going to have its own accelerator, just like GPUs.
  • jjj - Tuesday, January 23, 2018

    Actually it's well beyond that in the long run, as Moore's Law and Von Neumann compute are reaching their limits. The end goal - nothing is really an end goal, everything just buys time - is a non-volatile switch for brain-like devices where compute and memory are intermixed.
  • Kevin G - Wednesday, January 24, 2018

    We could explore things like the classic Harvard architecture or twists on that. The big factors that contributed to the creation of the Von Neumann architecture don't carry the same pressure today as they once did. Instructions could not only be separated into their own memory domain and bus but could also use their own distinct memory type, like SRAM, to lower latency, while bulk data gets the benefit of greater DRAM density or even 3D XPoint.

    I've been curious what the effects would be of segmenting caches by data type. So a chip would have its traditional L1 instruction cache, but it would also have separate data caches for integer compute, scalar FP, and SIMD FP. Bandwidth and latency could be tuned for each particular data type.
  • mode_13h - Wednesday, January 24, 2018

    Cool, so you burn a lot of transistors on cache that's only used some of the time? I think that's probably why nobody does it.

    If you really wanted to tweak cache allocation between different data types, then the way to do it would be by tweaking the eviction policy. Then, you could still share the underlying data cache & potentially the whole thing could be used by any data type when no others are in use. It could also get around the implied requirement of each cacheline having to contain homogeneous data types.

    All that being said, I'm still damn skeptical.
  • Kevin G - Friday, January 26, 2018

    The reason why no one attempts a true Harvard architecture today is legacy software: nothing currently out there of any significance would work. Compilers would have to be rewritten, and even then, commodity open source software may need further modification just to work. Never underestimate the power of a legacy code base as resistance to change.

    Caching based upon data type was attempted in only one chip, which never saw commercial release: the Alpha EV8. Its 2048-bit-wide vector unit would pull data directly from the L2 cache due to the size of the data being moved. The L2 cache itself was tuned to the size of the SIMD unit but was not exclusively for SIMD usage. There were still L1 data caches to feed standard integer and float operations.
  • tuxRoller - Wednesday, January 24, 2018

    There was a paper from a few months ago that showed how a memristor-type memory could also be used for compute. Most importantly, they also indicated that such an architecture would massively increase both performance and efficiency.
    Now, all we need is the memristor....
  • mode_13h - Tuesday, January 23, 2018

    Except it's not like AI is a solved problem. To address the widest range of applications, you still need flexibility. And GPUs are great for that.

    I'm not saying there won't be purpose-built AI chips, but I think AMD and Nvidia are best-positioned to deliver on that. For instance, imagine V100 without the fp64 or graphics units. Perhaps you could simplify in a few other areas, as well, and maybe replace compressed texture support with support for compressed weights.

    Given the maturity of their toolchain, their installed base, and the solid foundation of an efficient, scalable, programmable architecture, GPUs are actually pretty hard to beat.
  • Kevin G - Wednesday, January 24, 2018

    The biggest thing going for GPUs in this context is that they rarely get true binary code explicitly written for them. Shaders get passed through a just-in-time compiler in the majority of cases. This has allowed GPU makers to change their designs seemingly on a whim. For example, AMD had several generations of VLIW5 chips, then two VLIW4 designs in the middle of a generation, before switching over to GCN 1.0.

    Adding dense processing units like NVIDIA's Tensor Cores for matrix multiplication can easily be done with the GPU philosophy, as the pressure to support legacy code isn't there like it is in the CPU world.
