In a blink-and-you’ll-miss-it moment, tucked inside a larger blog post about announcements relating to this week’s FMX graphics conference, Intel has made its first official comments about hardware ray tracing support for their upcoming Xe GPUs. In short, the company will be offering some form of hardware ray tracing acceleration – but this announcement only covers their data center GPUs.

The announcement itself is not much longer than that, so rather than lead into it, I’ll simply repost it verbatim.

“I’m pleased to share today that the Intel® Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel® Rendering Framework family of API’s and libraries.”

Past that, there’s little concrete to be said, especially given the high-level nature of the announcement. It doesn’t specify whether ray tracing hardware support will be coming to the first generation of data center Xe GPUs, or if being on the roadmap means it will arrive in a future generation. Nor does Intel state with much clarity just what hardware acceleration entails. But since it’s specifically “hardware acceleration” rather than merely “support”, I would expect dedicated hardware for ray intersection testing, especially in a high-end product like a data center GPU.

Overall, Intel’s blog post notes that the company will be taking a “holistic platform” approach to ray tracing, tapping both CPUs and GPUs for the task. So while GPUs will be a big part of Intel’s efforts to grow their ray tracing performance, the company will be looking to leverage both those and their traditional CPUs for future ray tracing endeavors. Intel will of course be the new kid on the block as far as GPUs go, so it’s not surprising that the company is exploring how these (and other new processor technologies) can work in tandem with its CPUs.

Source: Intel


  • mode_13h - Thursday, May 2, 2019 - link

    I'm not sure about that. I think bifurcating their product line would reduce RTX-series volumes to the point that prices would rise even further, which could push the cards out of reach of even more consumers. Also, that would make it more expensive for the render farms and AI workloads they're trying to serve in the cloud. So, the best move is to get the largest group of users to help shoulder the cost.

    But, there's another reason not to do it... I think their strategy was to leverage their current market-dominance to force ray tracing hardware into the market. The goal being to raise the bar so that even if AMD and Intel caught up with conventional GPU performance, Nvidia would have an early lead in ray tracing and AI.

    Like it or not, they do seem to be succeeding in breaking the typical chicken/egg problem of technological change, in this instance.
  • Skeptical123 - Tuesday, May 7, 2019 - link

    I'm sure Nvidia had the money to create two separate high-end GPUs. I think it was simply a business decision to maximize profits not to do so. After all, the company has for years now used the same basic chip design for both home and industry uses, just "cut down" to tier its product line pricing. I assume any losses there, while less visible than, say, losses from reducing the scale of manufacturing, are far from inconsequential.
  • mode_13h - Tuesday, May 7, 2019 - link

    You might be right on the pricing point, but this was clearly a strategic move to build the installed base of Tensor and RT cores in gaming PCs. That installed base is needed for software developers to use their capabilities, which will give Nvidia a lead that's difficult for AMD and Intel to erode.

    Basically, Nvidia is trying to make the GPU race about something more than mere rasterization performance, since it's only a matter of time before Intel and/or AMD eventually catch up on that front.
  • HStewart - Wednesday, May 1, 2019 - link

    One thing to consider is that Intel typically puts its newer technology in enterprise components first before moving it to the mainstream. Gamers are a very small subset of Intel's business.
  • HStewart - Wednesday, May 1, 2019 - link

    BTW, Wccftech had no report of this, but they were quick to blab that Intel won't have 10nm on the desktop until 2021 or 2022, and only in limited quantities earlier. In truth, people really don't know what comes next. They also blab that AMD will be on 5nm before Intel is on 10nm or 7nm.
  • HStewart - Wednesday, May 1, 2019 - link

    This was for the wrong article - oh, I hate forums that don't let you delete comments. So be it.
  • sa666666 - Wednesday, May 1, 2019 - link

    It must *REALLY* grind your gears that many people aren't dyed-in-the-wool fans of Intel. I think you see it as an affront to your existence or something.
  • Korguz - Wednesday, May 1, 2019 - link

    sa666666, come on... leave the Intel fanboy fanatic HStewart alone... he's very upset because Intel fell asleep again... forgot how to innovate, left us stuck at quad core in the mainstream for what, 4 or 5 years, only gave us performance increases of what, 10% over the previous generation, and was the leader in manufacturing but now isn't... and now, having finally woken up, Intel is scrambling to catch up in almost everything... his beloved Intel is no longer the leader in CPUs or process tech, and he doesn't know how to deal with or handle it...
  • mode_13h - Wednesday, May 1, 2019 - link

    They didn't fall asleep - they spent too much of their profits on dividends and stock buybacks, instead of plowing it back into their manufacturing tech.
  • Korguz - Thursday, May 2, 2019 - link

    heh... same difference.. either way.. intel stumbled.. and is playing catch up again....
