In a blink-and-you’ll-miss-it moment, tucked inside a larger blog post about announcements relating to this week’s FMX graphics conference, Intel has made its first official comments about hardware ray tracing support for their upcoming Xe GPUs. In short, the company will be offering some form of hardware ray tracing acceleration – but this announcement only covers their data center GPUs.

The announcement itself is truly not much longer, so rather than lead into it I’ll just repost it verbatim.

“I’m pleased to share today that the Intel® Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel® Rendering Framework family of API’s and libraries.”

Past that, there’s little concrete to be said, especially given the high-level nature of the announcement. It doesn’t specify whether ray tracing hardware support will be coming to the first generation of data center Xe GPUs, or if being on the roadmap means it will arrive in a future generation. Nor does Intel state with much clarity just what hardware acceleration entails. But since it’s specifically “hardware acceleration” rather than merely “support”, I would expect actual hardware for casting rays and testing intersections, especially in a high-end product like a data center GPU.
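For readers unfamiliar with the workload in question, ray casting ultimately boils down to a very large number of intersection tests between rays and scene geometry – exactly the kind of repetitive math that fixed-function hardware is good at. As a purely illustrative sketch (and not anything Intel has described about its own hardware), the core of a single ray-sphere test looks like this:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along a normalized ray to the nearest
    sphere intersection, or None if the ray misses."""
    # Vector from the sphere center to the ray origin.
    oc = [o - c for o, c in zip(origin, center)]
    # Quadratic coefficients; a == 1 because the direction is normalized.
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # no real roots: the ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0.0 else None  # intersections behind the origin don't count

# A ray fired down the z-axis hits a unit sphere centered 5 units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A real renderer runs tests like this (against triangles and bounding volumes rather than spheres) millions of times per frame, which is why dedicating silicon to it pays off.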

Overall, Intel’s blog post notes that the company will be taking a “holistic platform” approach to ray tracing, tapping both CPUs and GPUs for the task. So while GPUs will be a big part of Intel’s efforts to grow their ray tracing performance, the company will be looking to leverage both those and their traditional CPUs for future ray tracing endeavors. Intel will of course be the new kid on the block as far as GPUs go, so it’s not surprising to see the company exploring how these GPUs (and other new processor technologies) can be paired with its CPUs.

Source: Intel

31 Comments

  • Lord of the Bored - Wednesday, May 1, 2019 - link

    More like Intel won't make something available on mainstream parts if they can fuse it off and sell the same part with the fuse unblown for a 10x markup.
  • Yojimbo - Wednesday, May 1, 2019 - link

    The FMX conference is about visual effects. Intel made this announcement letting those visual effects (film industry) customers know that they are planning on releasing GPU acceleration for render farms. It should not be taken to mean that Intel will not be coming out with ray tracing acceleration for consumer GPUs until later or not at all, since that seems to be out of the context of the announcement. In fact, it's a good guess that since Intel will have ray tracing acceleration capabilities then they will be interested in adding them to their consumer GPUs.
  • Yojimbo - Wednesday, May 1, 2019 - link

    Oh, and don't take the "holistic platform" thing too seriously. All the information Intel releases publicly is heavily influenced by marketing. Intel has a CPU and their main competitor in terms of render farms is NVIDIA, who doesn't have a CPU. So it's probably much like how AMD has been marketing their data center approach as "hey we have CPUs and GPUs so we can combine them" without saying anything specific and significant about why that makes any difference. I would certainly not assume that Intel has any superior way of combining resources when using their CPUs with their GPUs in comparison with what could be done with their CPUs and NVIDIA's GPUs, unless, of course, Intel restricts their competitors' access to a faster data pipeline between their CPU and their GPU. I'd imagine that would land them in antitrust trouble extremely quickly, though.
  • mode_13h - Wednesday, May 1, 2019 - link

    > I would certainly not assume that Intel has any superior way of combining resources when using their CPUs with their GPUs in comparison with what could be done with their CPUs and NVIDIA's GPUs

    It's called CXL, and Nvidia isn't (yet) a consortium member (Mellanox now is, but that might not mean anything for Nvidia's GPUs...):

    https://www.anandtech.com/show/14068/cxl-specifica...
  • Yojimbo - Thursday, May 2, 2019 - link

    I know about CXL but it was not important to the discussion. The point was any faster data pipeline. Intel most likely came out with CXL because of their GPU and FPGA strategy. They could have allowed it years ago, as there has been a clear market demand for it for a while. But Intel has had such a dominant market share in servers that they didn't really need to worry about lacking that capability.

    They cannot restrict it once they have it on their platform, as I said before. NVIDIA doesn't need to be a consortium member in order to use it. It's being delivered as an open industry standard, probably because once Intel decided to go down that road they saw it as advantageous to try to kill off CCIX so they have more control over the situation.
  • wumpus - Wednesday, May 1, 2019 - link

    I still wouldn't expect more than minimal support for hardware ray tracing (although it looks like the critical issues for the RTX units involve filtering the ray tracing effects, so presumably more support for that, especially if it includes additional machine learning bits [which have a much larger market anyway]).

    Then again, that assumes a non-raytracing graphical need for GPUs in the datacenter. Aside from Google wanting to move gaming there, I don't think there's much of a need. Perhaps they will be built entirely around raytracing (with the obvious ability to provide compute for all the non-graphics GPU needs in the datacenter; I'd think those needs should outweigh the need for raytracing, but can't be sure).
  • mode_13h - Wednesday, May 1, 2019 - link

    > it looks like the critical issues for the RTX units involve filtering the ray tracing effects

    That's just for de-noising global illumination, where they bring their tensor cores to bear on the problem.

    Tom's has a pretty good article discussing the different types of ray tracing effects and benchmarking games on various RTX and GTX cards to see how they handle the load. Definitely worth a look!

    https://www.tomshardware.com/reviews/nvidia-pascal...
  • Cullinaire - Wednesday, May 1, 2019 - link

    Forget ray tracing for a moment... Let's hope they get raster right first.
  • casperes1996 - Wednesday, May 1, 2019 - link

    Whilst I wouldn't say it's surprising for an Intel graph, the astonishingly meaningless amount of information in that graph at the top is quite spectacular. It can essentially be broken down to "hardware gets faster with time, by some unspecified value, and there'll be multiple levels of performance for some hardware"...
    It says teraflops to petaflops in the title, but the performance axis is unlabeled, so we have no idea where the divide is. And does the circle being bigger mean that the products within that family will range from the top to the bottom of the circle's radius relative to the performance axis? In the pop-out extending from the Xe family, do the product categories then correspond to performance within the radius of the circle relative to placement in the stack, or does the pop-out stack itself place the performance on the axis? Or is it all just nonsense anyway, with no meaningful placement on the performance axis beyond "faster per year"?
  • mode_13h - Thursday, May 2, 2019 - link

    You're over-thinking it. All you're supposed to get from that slide is:
    * launch in 2020
    * full product stack, from integrated + entry-level to enthusiast & data center
    * (presumably) up to teraflops per GPU, scaling up to petaflops per installation.

    That's it. Now calm down and stop trying to treat marketing material as though it's real technical literature. In my experience, it's only other marketing people who seem to have trouble seeing through the smokescreen and spin that pervade industry marketing material.
