This is something that caught me off-guard when I first realized it, but AMD historically hasn’t liked to talk about their GPU plans much in advance. On the CPU side we’ve heard about Carrizo and Zen years in advance, and AMD’s competitor in the world of GPUs, NVIDIA, releases basic architectural information over a year ahead of time as well. With AMD’s GPU technology, however, we typically don’t hear about it until the first products implementing the new technology are launched.

With AMD’s GPU assets having been reorganized under the Radeon Technologies Group (RTG) and led by Raja Koduri, RTG has recognized this as well. As a result, the new RTG is looking to chart a somewhat different course, aiming to be a bit more transparent and a bit more forthcoming than the group has been in the past. The end result isn’t quite like what AMD has done with their CPU division or what their competition has done with GPU architectures; RTG will talk about more or less depending on the subject. But among the several major shifts in appearance, development, and branding we’ve seen since the formation of RTG, this is another way in which RTG is trying to set itself apart from AMD’s earlier GPU groups.

As part of AMD’s RTG technology summit, I had the chance to sit down and hear about RTG’s plans for their visual technologies (display) group for 2016. Though RTG isn’t announcing any new architectures or chips at this time, the company has put together a roadmap for what it wants to do with both hardware and software for the rest of 2015 and into 2016. Much of what follows isn’t likely to surprise regular observers of the GPU world, but it nonetheless sets some clear expectations for what is in RTG’s future over much of the next year.

DisplayPort 1.3 & HDMI 2.0a: Support Coming In 2016

First and foremost then, let’s start with RTG’s hardware plans. As I mentioned before, RTG isn’t announcing any new architectures, but they are announcing some of the features that the 2016 Radeon GPUs will support. Among these changes is a new display controller block, upgrading the display I/O functionality that has been a cornerstone of AMD’s GPU designs since GCN 1.1 first launched in 2013.

The first addition here is that RTG’s 2016 GPUs will include support for DisplayPort 1.3. We’ve covered the announcement of DisplayPort 1.3 separately in the past, when the VESA released the 1.3 standard in 2014. DisplayPort 1.3 introduces a faster signaling mode, High Bit Rate 3 (HBR3), which in turn allows DisplayPort 1.3 to offer 50% more bandwidth than the current DisplayPort 1.2 and HBR2, boosting DisplayPort’s total bandwidth to 32.4 Gbps before overhead.

DisplayPort Supported Resolutions
Standard                  Max Resolution (RGB/4:4:4, 60Hz)    Max Resolution (4:2:0, 60Hz)
DisplayPort 1.1 (HBR1)    2560x1600                           N/A
DisplayPort 1.2 (HBR2)    3840x2160                           N/A
DisplayPort 1.3 (HBR3)    5120x2880                           7680x4320

The purpose of DisplayPort 1.3 is to offer the additional bandwidth necessary to support higher resolution and higher refresh rate monitors than the 4K@60Hz limit of DP1.2. This includes higher refresh rate 4K monitors (120Hz), 5K@60Hz monitors, and 4K@60Hz with color depths greater than 8 bits per channel (necessary for a good HDR implementation). DisplayPort’s scalability via tiling has meant that some of these monitor configurations have been possible even with DP1.2 by utilizing MST over multiple cables, but with DP1.3 it will now be possible to drive those configurations in a simpler SST setup over a single cable.
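To put some rough numbers on this, here is a minimal back-of-the-envelope sketch (in Python) comparing the display modes above against the usable payload of 4-lane HBR2 and HBR3 links. It deliberately ignores blanking intervals and protocol overhead, so the figures are approximations rather than official VESA calculations, and the 12 bpc HDR mode is just one illustrative deep-color case.

```python
# Back-of-the-envelope comparison of uncompressed video bandwidth against the
# usable payload of 4-lane DisplayPort links. Blanking intervals and protocol
# overhead are ignored, so all figures are approximate.

LANES = 4
ENCODING_EFFICIENCY = 0.8  # DP 1.1-1.3 use 8b/10b line coding

# Per-lane link rates in Gbps
LINK_RATES = {
    "HBR2 (DP 1.2)": 5.4,
    "HBR3 (DP 1.3)": 8.1,
}

def usable_gbps(per_lane_rate):
    """Payload bandwidth across all four lanes after 8b/10b encoding."""
    return per_lane_rate * LANES * ENCODING_EFFICIENCY

def video_gbps(width, height, refresh_hz, bits_per_channel):
    """Uncompressed RGB/4:4:4 bandwidth for a given mode, blanking ignored."""
    return width * height * refresh_hz * bits_per_channel * 3 / 1e9

MODES = [
    ("4K @ 60 Hz, 8 bpc",        video_gbps(3840, 2160, 60, 8)),
    ("4K @ 60 Hz, 12 bpc (HDR)", video_gbps(3840, 2160, 60, 12)),
    ("4K @ 120 Hz, 8 bpc",       video_gbps(3840, 2160, 120, 8)),
    ("5K @ 60 Hz, 8 bpc",        video_gbps(5120, 2880, 60, 8)),
]

for link, rate in LINK_RATES.items():
    payload = usable_gbps(rate)
    print(f"{link}: {rate * LANES:.1f} Gbps raw, ~{payload:.2f} Gbps usable")
    for mode, need in MODES:
        verdict = "fits" if need <= payload else "needs more bandwidth"
        print(f"  {mode}: ~{need:.1f} Gbps -> {verdict}")
```

Even with those simplifications the result lines up with the table above: 4K@120Hz, 5K@60Hz, and deep-color 4K all overflow HBR2’s roughly 17.3 Gbps of payload while fitting within HBR3’s roughly 25.9 Gbps.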

For RTG this is important on several levels. The first is very much pride: the company has always been the first GPU vendor to implement new DisplayPort standards. But at the same time DP1.3 is the cornerstone of multiple other efforts for the company. The additional bandwidth is necessary for the company’s HDR plans, and it’s also necessary to support the wider range of refresh rates at 4K required for RTG’s FreeSync Low Framerate Compensation (LFC) tech, which requires a maximum refresh rate at least 2.5 times the minimum to function. That in turn has meant that while RTG has been able to apply LFC to 1080p and 1440p monitors today, they won’t be able to do so with 4K monitors until DP1.3 gives them the bandwidth necessary to support 75Hz+ operation.
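That 2.5:1 requirement boils down to simple arithmetic, as the quick sketch below shows. The refresh ranges used here are hypothetical illustrations (a 30Hz minimum is assumed), not specific shipping monitors.

```python
# The LFC rule of thumb from the paragraph above: frame doubling only works if
# the display's maximum refresh rate is at least 2.5x its minimum. The refresh
# ranges below are hypothetical examples, not specific shipping monitors.

LFC_MIN_RATIO = 2.5

def lfc_capable(min_hz, max_hz):
    return max_hz / min_hz >= LFC_MIN_RATIO

EXAMPLE_RANGES = [
    ("1440p FreeSync panel, 30-144 Hz",      30, 144),
    ("4K panel capped at 60 Hz by DP 1.2",   30, 60),
    ("4K panel at 75 Hz over DP 1.3",        30, 75),
]

for name, lo, hi in EXAMPLE_RANGES:
    status = "LFC available" if lfc_capable(lo, hi) else "LFC not available"
    print(f"{name}: max/min = {hi / lo:.2f} -> {status}")
```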

Meanwhile DisplayPort 1.3 isn’t the only I/O standard planned for RTG’s 2016 GPUs. Also scheduled for 2016 is support for HDMI 2.0a, the latest revision of the HDMI standard. HDMI 2.0 launched in 2013, significantly increasing HDMI’s bandwidth to support 4Kp60 TVs and bringing it roughly on par with DisplayPort 1.2 in terms of total bandwidth. Along with the increase in bandwidth, HDMI 2.0/2.0a also introduced support for other new features such as the next-generation BT.2020 color space, 4:2:0 chroma subsampling, and, with the 2.0a revision, HDR video.
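The 4:2:0 chroma subsampling mentioned above is worth a quick illustration, since it is also what makes the 7680x4320 entry in the DisplayPort table possible: it keeps full-resolution luma but only one Cb and one Cr sample per 2x2 pixel block, halving the payload relative to full 4:4:4. The rough math in the sketch below again ignores blanking and protocol overhead.

```python
# Why 4:2:0 chroma subsampling halves bandwidth: in every 2x2 block of pixels,
# 4:4:4 carries 12 samples (4x Y, 4x Cb, 4x Cr) while 4:2:0 carries only 6
# (4x Y, 1x Cb, 1x Cr). Blanking and protocol overhead are ignored.

SAMPLES_PER_4_PIXELS = {"4:4:4": 12, "4:2:2": 8, "4:2:0": 6}

def bits_per_pixel(bits_per_sample, subsampling):
    return bits_per_sample * SAMPLES_PER_4_PIXELS[subsampling] / 4

def stream_gbps(width, height, refresh_hz, bits_per_sample, subsampling):
    return width * height * refresh_hz * bits_per_pixel(bits_per_sample, subsampling) / 1e9

for fmt in ("4:4:4", "4:2:0"):
    uhd    = stream_gbps(3840, 2160, 60, 8, fmt)
    eightk = stream_gbps(7680, 4320, 60, 8, fmt)
    print(f"{fmt}: 4Kp60 ~{uhd:.1f} Gbps, 8Kp60 ~{eightk:.1f} Gbps")
```

That roughly 24 Gbps figure for 8Kp60 at 4:2:0 is what lets it squeeze into HBR3’s payload bandwidth, and the same halving is what gives HDMI 2.0 headroom for 4Kp60 at higher bit depths.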

That HDMI has only recently caught up to DisplayPort 1.2 in bandwidth at a time when DisplayPort 1.3 is right around the corner is one of those consistent oddities in how the two standards are developed, but nonetheless this is important for RTG. HDMI is not only the outright standard for TVs, but it’s the de facto standard for PC monitors as well; while you can find DisplayPort in many monitors, you would be hard pressed not to find HDMI. So as 4K monitors become increasingly cheap, and likely start dropping DisplayPort in the process, supporting HDMI 2.0 will be important for RTG for monitors just as much as it is for TVs.

Unfortunately for RTG, they’re playing a bit of catch-up here, as the HDMI 2.0 standard is already more than two years old and has been supported by NVIDIA since the Maxwell 2 architecture in 2014. Though they didn’t go into detail, I was told that AMD/RTG’s plans for HDMI 2.0 support were impacted by the cancellation of the company’s 20nm planar GPUs, and as a result HDMI 2.0 support was pushed back to the company’s 2016 GPUs. The one bit of good news here for RTG is that HDMI 2.0 is still a bit of a mess, as not all HDMI 2.0 TVs actually support 4Kp60 with full chroma sampling (4:4:4), but that is quickly changing.

Comments

  • pt2501 - Tuesday, December 8, 2015 - link

    Agreed. My R9 280 (aka 7950 Boost) was sold to someone with an Alienware X51 that has only a 330W supply. I thought it was going to be a lost cause because the card's stated power requirement is a 500W power supply.

    Well, it turns out that if you underclock the R9 280 by at least 250 MHz it works fine, and it can still play World of Warships on ultra at 1080p.

    GCN is an impressive architecture that has scaled very well since its release.
  • ImSpartacus - Tuesday, December 8, 2015 - link

    You know that the 380 has a TDP of 190W, right? Toss in 90W for the CPU and you have 50-ish W left for other stuff on that 330W PSU.

    You could probably plug a 980 (advertised 165w) in there and it would do just fine.
  • Yorgos - Thursday, December 10, 2015 - link

    Or use a higher-tech and more efficient Fury Nano and try all the goodies for free, like FreeSync and the next-generation features that only GCN offers.
  • zodiacsoulmate - Tuesday, December 8, 2015 - link

    500W is the recommended spec considering other components also draw power from the PSU.
    The 280 itself should draw no more than 250W.
  • Mr Perfect - Tuesday, December 8, 2015 - link

    That's part of the problem though, isn't it? If a four-year-old card isn't much behind the current card, then things have stagnated. We need to start moving forward again, and AMD needs people to buy a new card more than twice a decade.
  • Samus - Tuesday, December 8, 2015 - link

    They can only milk it for so long. The R9 380 is basically the same die and configuration as the 7950, a 4-year-old card. NVIDIA is about to release their third architectural leap over AMD since GCN came out. Not good. GCN scales well, but not for performance. Fury is their future.
  • DragonJujo - Tuesday, December 8, 2015 - link

    The R9 380 is an update of the R9 285; the Tahiti Pro chip in the HD 7950 was rebadged as the R9 280 and then dropped in the 300 series.
  • e36Jeff - Tuesday, December 8, 2015 - link

    Fury is an updated GCN, the same generation as Tonga (GCN 1.2). The real issue has been that they've been stuck at 28nm for years now because the 20nm node fell through. This year, when they (and NVIDIA) move down to the 14/16nm node, we should see some pretty good jumps in performance from both sides of the aisle. Add HBM to that, and 2016 looks like a pretty sweet year for GPUs.
  • beginner99 - Wednesday, December 9, 2015 - link

    I doubt we will actually see these jumps in 2016, and my reference point of measurement is performance/$. You can get a 290 off eBay for $200, or if you bought one before the 300 series came out it was about that price new. You will never get that performance/$ at 14nm because the process is more expensive.
  • frenchy_2001 - Friday, December 11, 2015 - link

    The process is more expensive, but the die will be significantly smaller.
    We will have to wait and see how the 2016 crop of GPUs will end up, but 14/16nm FinFET will give:
    - smaller dies
    - lower power consumption
    - possibly faster clocks
    Now we have to see whether those process improvements can actually be harnessed by huge GPU dies (20nm planar, for example, did not scale well). We will have to wait, as there is currently no such big and power-hungry die made on the 14/16nm nodes (Samsung/TSMC).
