69 Comments
sharathc - Monday, April 27, 2020 - link
Rest in Peace OpenCL
mode_13h - Wednesday, April 29, 2020 - link
That would've made more sense, *before* this move. I still think one of the best things for OpenCL would be Raspberry Pi supporting it.
NikosD - Friday, May 1, 2020 - link
The President of Khronos Group is the Vice President of nVidia. Or you could say that the Vice President of nVidia is the President of Khronos Group.
Or you could say RIP OpenCL but long live CUDA
You got the meaning...
mode_13h - Tuesday, May 5, 2020 - link
That's nothing new. Consider that Nvidia didn't appoint themselves to those positions. If the majority of Khronos members felt that Nvidia was derailing the body's activities, you'd think they'd get voted out, right?
ProDigit - Monday, April 27, 2020 - link
Shout out to all those who finished reading the entire article!
Valantar - Tuesday, April 28, 2020 - link
I made it! Barely.
Spunjji - Tuesday, April 28, 2020 - link
Woo! The nerdliest of nerds!
wr3zzz - Monday, April 27, 2020 - link
I don't know when it became a trend that a "standard" has features that are optional. It's not a standard if something can claim to be compliant and yet not be. USB-C compatibility is already a mess and HDMI 2.1 might be next. Are engineering schools nowadays taught by politicians?
drexnx - Monday, April 27, 2020 - link
my thoughts exactly, Khronos saw the "success" of the USB-IF group and what an absolute mess it is that 3.0, 3.1 Gen 1, and 3.2 Gen 1x1 are all the same port, and figured, hey, let's do that!
Deicidium369 - Tuesday, April 28, 2020 - link
No, they saw the market closing ranks, and are trying to get a foot in the door before it's completely closed off by CUDA and One API... nothing like the fiasco of the USB geniuses.
Valantar - Tuesday, April 28, 2020 - link
The only issue with USB-A standards is confusing naming. They all support exactly the same thing: data transfer, just at varying speeds, and they are 100% cross-compatible. For most applications any port will work, and even for high-bandwidth stuff anything faster than 2.0 is typically fast enough (few devices even saturate a 5Gbps link).
The OP here is referring to USB-C and its much more complex implementations: from USB-only at 2.0 speeds on the one hand to TB3 or USB 3.2G2x2 with DP alt mode and PD support on the other - the span in features and functionality is quite huge.
Still, this kind of flexibility is essential with modern do-it-all standards - it wouldn't make sense whatsoever if your laptop charger needed to support USB 3.2G2x2 data transfers or DP alt mode, after all, nor does it make sense for a phone to support all of that just to be able to use a Type-C port for charging. What makes it confusing is when logical-seeming features are skipped, such as laptops with Type-C ports without PD support or DP alt mode support.
edzieba - Monday, April 27, 2020 - link
Does your gizmo charging port need to implement HDMI output and Thunderbolt just because it is Type-C? Or should there be an exception for devices that just need to charge using the USB-PD standard? Maybe your keyboard and mouse don't need the full USB 3.1 Gen 2 bandwidth and can get away with just USB 2? Whoops, there's that 'optional standard' in action again!
It's nice in theory to demand that every Type-C port you come across be full-featured, all-singing-all-dancing, with support for every Alt Mode available, but that is not practical in the real world. Yes, manufacturers can do better at actually implementing the damned standardised port labelling and capability reporting (when you plug a USB device into a port that does not support that mode, the USB 2 channel should be used to flag up the feature incompatibility in an obvious "Hi, that port can't support HDMI" message, ideally with the device manufacturer indicating what ports, if any, would support that). But that's down to manufacturers dropping the ball in actually implementing the standard, rather than the standard itself.
wr3zzz - Monday, April 27, 2020 - link
The problem with "optional" features is that sellers often, if not always, don't label what is lacking.A core and "different" features are OK but a "standard" must have strict rules of labeling. Right now I have to check the technical specs just to see what kind of type-C device I am getting, and 9 out of 10 times a $5 USB-C cable does not have such info.
Deicidium369 - Tuesday, April 28, 2020 - link
Thunderbolt 4 is just that. TB3 + USB4, certified to be 100% standards-compliant at the highest spec for each. TB4 itself is just a certification process. With that cert you know that the TB3 is full-spec, high-perf, as is the USB4, with a common connector. You should not be buying a $5 USB-C cable - no need for Monster Cable money, but never cheap out on a cable - you can have the best host adapter in the world and the best removable HD interface and then monkey it up with a POS cable.
name99 - Monday, April 27, 2020 - link
RISC-V! RISC-V! RISC-V!
It's been this way for a while -- even something like JPEG has many parts to it, quite a few of which almost no-one uses. Same for something like WiFi.
The real issue is how "core" the optional functionality is.
One sort of optionality is: most people don't need to do X, but if you do need to do X you do it this ONE way.
A different sort of optionality is: everyone needs to do Y, but you can do it via method A or method B or method C, choose whichever you feel like.
The first is sometimes justifiable; the second is really not helpful.
Deicidium369 - Tuesday, April 28, 2020 - link
NEVER HAPPEN! NEVER HAPPEN! NEVER HAPPEN!
You kiddies don't have the historic view that would inform you that RISC-V is just the latest in a LONG string of "next greatest thing ever".
Will be GREAT as an HD controller - but much past that... ARM is well established in the world, and STILL is a 3rd-rate platform for anything other than a cell phone.
There will be nothing dislodging x86-64 anytime soon - it has been that way for ages, and there's ZERO reason to expect a shift - and the piddly number of desktop PCs Apple sells is irrelevant.
Spunjji - Tuesday, April 28, 2020 - link
"ARM... is a 3rd rate platform for anything other than a cell phone"I think you don't know as much about the tech world as you imply you do.
name99 - Tuesday, April 28, 2020 - link
You don't recognize sarcasm!
I actually agree with you completely about RISC-V. In part, though not only, because of its ISA mix-and-match approach.
bloodgain - Monday, April 27, 2020 - link
The rather successful MPI standard has had optional features since version 1. It's rather common, in fact, and not detrimental if done right. The point is that if you're going to implement a feature, it should meet a certain specification so that software (or hardware, etc.) designed for it will be portable and reusable. It's questionable whether OpenCL is doing this right, though.
bug77 - Tuesday, April 28, 2020 - link
It's the sane approach when the standard covers devices having different capabilities.
Of course, leaving everything at the mercy of the implementer seems a bit extreme. I think it's better when standards at least define several profiles so it's easy to tell what you're getting.
Just imagine working with OpenCL 3.0: I would have to query for each and every feature I want to use and, on top of that, implement an alternate code path for each unavailable feature. And (despite what the article/Khronos claims) it's not like developers were flocking towards OpenCL as it is.
But this is just my quick assessment; I'm hoping the heads that brought us OpenCL 3.0 knew better.
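Roughly what I mean, as a minimal sketch of my own (not from the article - assuming an OpenCL 3.0 header and driver, with device selection and error handling elided):

```cpp
#define CL_TARGET_OPENCL_VERSION 300
#include <CL/cl.h>

// OpenCL 3.0 makes formerly-core 2.x features optional, so each one has
// to be probed at runtime before use.
void pick_code_path(cl_device_id dev)
{
    // New 3.0-style boolean feature query (generic address space support)
    cl_bool generic_as = CL_FALSE;
    clGetDeviceInfo(dev, CL_DEVICE_GENERIC_ADDRESS_SPACE_SUPPORT,
                    sizeof(generic_as), &generic_as, nullptr);

    // SVM support is a bitfield; 0 means "not available on this device"
    cl_device_svm_capabilities svm = 0;
    clGetDeviceInfo(dev, CL_DEVICE_SVM_CAPABILITIES,
                    sizeof(svm), &svm, nullptr);

    if (svm & CL_DEVICE_SVM_COARSE_GRAIN_BUFFER) {
        // ...build the SVM-based code path...
    } else {
        // ...and maintain a separate clCreateBuffer/clEnqueueWriteBuffer
        //    fallback for devices that opted out.
    }
}
```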
mode_13h - Friday, May 1, 2020 - link
Wasn't OpenGL practically the original poster child for that?mooninite - Monday, April 27, 2020 - link
They may use NVIDIA's logo on their slides, but NVIDIA would rather you use CUDA. Khronos should concentrate on getting NVIDIA to abandon CUDA if they want to see OpenCL be used.
Kjella - Monday, April 27, 2020 - link
Hahaha sure nVidia will give up the core software that sells billions of dollars worth of deep learning hardware. There's not a snowflake's chance in hell of that happening.
Railander - Monday, April 27, 2020 - link
HAHAHAHAHAAHAHHAHAHAHAHAHAH
edzieba - Monday, April 27, 2020 - link
CUDA built up the GPGPU industry and underpins a very large proportion of code in industry and academia. Even if Nvidia decided they wanted to go OpenCL-only, they'd still have to support CUDA in devices and drivers for a good decade anyway.
name99 - Monday, April 27, 2020 - link
Good luck with that.
And don't be too harsh on nV, they were fully justified. Too many companies have concluded that standards should be a land grab, a way to force everyone to use their crappy idea because either they have a patent on it, or they already have it working in the lab so can productize it cheaply.
Apple did the same thing, and for the same reason: OpenGL was a dumpster fire and kept getting worse, so Apple, like nV, said "To hell with this", invented their own (covering both GL and CL), and, let's be honest, did a much better job.
Committees may be necessary, but even under the best conditions they have no clarity, no vision, no taste. And often they don't operate under the best conditions...
To be honest, there's value in companies occasionally and very publicly walking away from a spec, just to remind everyone that, no, you cannot pile the garbage indefinitely high and continue to expect everyone to keep eating it, forever and ever.
Deicidium369 - Tuesday, April 28, 2020 - link
OpenGL goes back to Silicon Graphics' IRIS GL - so it is the OG of the 3D APIs... There were tons of other GLs - but none of them had the success in the 3D world that Silicon Graphics at one time owned.
pjmlp - Tuesday, April 28, 2020 - link
Had it not been for Id Software, their miniGL driver for Doom, and how Carmack used to advocate OpenGL, Glide would have certainly won on the PC.
Klimax - Tuesday, April 28, 2020 - link
Correction: the miniGL driver was for Quake.
mode_13h - Wednesday, April 29, 2020 - link
Yeah, don't let facts get in the way of your narrative.
> so Apple, like nV, said "To hell with this"
Um, CUDA predated OpenCL. You can see a lot of similarities between the two, where CUDA (and its influences) shaped OpenCL. Except, with OpenCL, they got to clean out some of the cruft that built up in CUDA and make it simpler and more self-consistent.
> Committees may be necessary, but even under the best conditions they have no
> clarity, no vision, no taste. And often they don't operate under the best conditions...
Hmmm... what committee-designed standards might you be using, right now?
* Ethernet
* Wifi
* Bluetooth
* 4G cellphone
* PCIe
* USB
* HDMI
* DisplayPort
* DDR4
* HTTP, HTML, Javascript
* maybe OpenGL or Vulkan?
* XML
* JPEG
* MPEG-2/4, H.264/H.265, eventually AV1
Yeah, you're right. Standards committees don't work. F 'em.
whatthe123 - Monday, April 27, 2020 - link
Nvidia supports CUDA and OpenCL. Nvidia just develops CUDA libraries for their cards as a selling point. No company is attempting to push out OpenCL. AMD and Nvidia were literally part of the collaboration team designing OpenCL.
mode_13h - Wednesday, April 29, 2020 - link
Nvidia doesn't support OpenCL on their Tegra/Jetson platforms. No good technical reason, either.
Deicidium369 - Tuesday, April 28, 2020 - link
It is more likely you will abandon your body before Nvidia abandons CUDA - OpenCL is dead.
CUDA and One API are the only 2 APIs that matter - and it wouldn't be about getting Nvidia to abandon CUDA - it's the developers who standardized around CUDA a LONG TIME AGO. Even Intel will have some problems cracking that ecosystem.
mode_13h - Wednesday, April 29, 2020 - link
The joke is on you. oneAPI is *built* on OpenCL.
mode_13h - Wednesday, April 29, 2020 - link
> Khronos should concentrate on getting NVIDIA to abandon CUDA
Leaving aside Nvidia's leadership role in Khronos, there's what the article said: "the group can’t force technological change on anyone"
PeachNCream - Monday, April 27, 2020 - link
The solution to no one using OpenCL's current versions is to make everything optional since the last popular version, such that OpenCL's latest version looks widely used on paper now that a whole host of things are suddenly compliant. Bona fide brilliance!
Deicidium369 - Tuesday, April 28, 2020 - link
more of a weak "we support it" than anything for Nvidia. OpenCL will not have a significant impact. CUDA is entrenched because Nvidia did the work with outreach and support starting a long while back. Only Nvidia was looking forward to GPUs being compute systems - and almost overnight the TOP500 wasn't mostly CPU anymore. OpenCL has the disadvantage that 2 of the 3 major accelerator makers have their own API. So OpenCL is a product that doesn't have much of a home, or a home team supporting it.
soresu - Monday, May 4, 2020 - link
"Outreach" is a funny name for flinging grants and free hardware at potential developers and academic institutions.Any company can do that if they have the requisite funds to weather it for any length of time.
soresu - Monday, May 4, 2020 - link
As to 2 of the major companies having their own API...
Well, CUDA obviously. But if by the second you mean oneAPI? Is that a joke? The thing has no place in the market yet - it is still in the process of being built, not to mention it is built atop OpenCL or SYCL.
There's no reason in theory that you couldn't run oneAPI on nVidia or AMD hardware because unlike CUDA it is open source and doesn't require recompilation to work elsewhere.
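For reference, this is roughly what oneAPI/DPC++ code looks like - a minimal SYCL sketch of my own, assuming a SYCL 1.2.1-era compiler such as DPC++ or ComputeCpp, not taken from the article:

```cpp
#include <CL/sycl.hpp>
#include <vector>
namespace sycl = cl::sycl;

int main()
{
    // The runtime picks whatever device its backends expose - that's the
    // point: the same source isn't tied to one vendor's hardware.
    sycl::queue q{sycl::default_selector{}};

    std::vector<float> a(1024, 1.0f), b(1024, 2.0f);
    {
        // Buffers hand the data to the runtime for the scope below
        sycl::buffer<float, 1> A(a.data(), sycl::range<1>(a.size()));
        sycl::buffer<float, 1> B(b.data(), sycl::range<1>(b.size()));

        q.submit([&](sycl::handler& h) {
            auto ra = A.get_access<sycl::access::mode::read_write>(h);
            auto rb = B.get_access<sycl::access::mode::read>(h);
            h.parallel_for<class vec_add>(sycl::range<1>(1024),
                [=](sycl::id<1> i) { ra[i] += rb[i]; }); // a[i] += b[i] on the device
        });
    } // buffer destructors wait for the kernel and copy results back

    return 0;
}
```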
mode_13h - Tuesday, May 5, 2020 - link
There's a company that's actually porting oneAPI to run atop Nvidia hardware.
quorm - Monday, April 27, 2020 - link
If the problem is certain features not being worth implementing on certain platforms, why not split OpenCL into a few different versions that are appropriate for embedded, standard graphics cards, dedicated cloud compute, etc.?
There is nothing more frustrating than needing a particular feature a vendor has neglected to implement for no discernible reason, and this basically gives them permission to do so.
Deicidium369 - Tuesday, April 28, 2020 - link
Yeah, because what we NEED is more fragmentation... I guess the whole premise of the article didn't sink in.
First off, the main vendor is Nvidia - and most CUDA devs are quite happy with what Nvidia has provided and the support it has offered those devs.
So who are these "vendors" that now have permission? Nvidia - CUDA. Intel - One API. Maybe AMD - but like the article says, its OpenCL is a mess (surprise, AMD software not up to snuff)... So the market leader Nvidia is being joined by the 900# gorilla in this space - so 90%+ of the market is covered by these 2 APIs.
soresu - Monday, May 4, 2020 - link
No one is happy with vendor lock-in - it isn't a very good idea from a developer standpoint.
I guarantee you that if they could shift their implementations with ease to something that had feature and performance parity while being hardware-agnostic, they would do it.
The main problem with vendor lock-in is that it allows said vendor to monopolise the market, and therefore set prices as they see fit. The ridiculous cost of nVidia pro GPUs is the result of that lock-in - of the students in my 3D animation class, not a single one was willing to shell out for RTX hardware to boost rendering speed vs buying an AMD Threadripper to speed up CPU rendering.
mode_13h - Tuesday, May 5, 2020 - link
It's not only CUDA-dependence, though. Nvidia could also charge a premium because their hardware was consistently far ahead of anyone else in AI performance.
However, Habana Labs, recently bought by Intel, posted up some benchmarks that totally dominated Nvidia. So, that could be starting to change. Cerebras is another one to watch.
mode_13h - Wednesday, April 29, 2020 - link
OpenCL already has an embedded profile.
ZolaIII - Monday, April 27, 2020 - link
From mess to even bigger mess, seems like a graveyard for OpenCL.
Deicidium369 - Tuesday, April 28, 2020 - link
It's just an API for like 5% of the market... Even Apple has moved on.
At least Vulkan and VulkanRT are going strong and getting stronger - same Khronos group. OpenGL and DirectX are viable targets for a 3rd player - and that's where Vulkan fits in... OpenCL doesn't have the same favorable market segment.
pjmlp - Tuesday, April 28, 2020 - link
It remains to be seen how strong Vulkan will be.
Apple, Sony and Microsoft are not supporting it. On Windows it is only supported on classical Win32; sandboxed Win32 and UWP don't allow the ICD driver model used by OpenGL/Vulkan drivers.
On Switch it is available, but a large majority of game developers go with NVN or middleware.
On Android, it is only a required API as of Android 10. Between Android 7 and 10, only a couple of Samsung and Google flagship phones actually had proper support for it, which is why Google made it compulsory as of Android 10, given that adoption was going nowhere.
OctaneRender, one of the biggest rendering engines used in Hollywood VFX, just announced that they've decided to give up on the Vulkan backend and go with OptiX 7 (CUDA-based) instead.
mode_13h - Wednesday, April 29, 2020 - link
Also, I don't foresee Vulkan being embraced by FPGA vendors the same way they went for OpenCL.
soresu - Monday, May 4, 2020 - link
Clearly you didn't read the article very well - you can implement Vulkan on top of Metal through MoltenVK. There is nothing special about Metal that makes it superior to Vulkan; Apple are just being their usual pedantic walled-garden selves, to no benefit of the consumer, and especially not of the devs who have to wade through their continuing stream of refuse. The only light at the end of the tunnel is that they are not blocking use of MoltenVK - they know full well that devs may abandon them if they do so.
As to Sony and Microsoft - Sony have always used their own proprietary APIs in PlayStation development, so that is not a surprise.
On the MS side, they do support Vulkan on x86 Windows, which is not an inconsiderable market - the UWP marketplace is all but useless to most gamers on Windows, and even MS have begun to consider abandoning it altogether by merging UWP features into Win32.
For Windows on ARM they are planning to support OGL and OpenCL through DX12, and likely will do so for Vulkan too, as shown above in the table showing gfx-rs in that position.
As for OctaneRender - Otoy have a history of making promises to move away from nVidia and breaking them. It's a basic business practice they call "driving up the price" - do you seriously think they don't get a handsome deal from nVidia for keeping it that way? Otoy were just scaring nVidia into driving up the compensation in that deal; they have as much integrity as my rear end after a strong curry.
mode_13h - Tuesday, May 5, 2020 - link
I think the main reason Apple doesn't ban MoltenVK is because they know it incurs a performance penalty. In that sense, it doesn't eliminate the incentive to develop Metal-native apps.
mode_13h - Wednesday, May 6, 2020 - link
I think you're wrong about MS supporting Vulkan. I'm 99% certain the Vulkan support on Windows is entirely supplied by the GPU vendors, themselves. It's just that MS doesn't block them from doing so, the way that Apple does on Macs. On Macs, OpenGL is deprecated and the only other graphics API that a driver is allowed to natively support is Metal.
Geranium - Monday, April 27, 2020 - link
This is what happens when you put a competing company's employee at the head of your organisation. Nvidia is very good at destroying their competition, and Khronos put Nvidia's employee in charge of the OpenCL project. Good, very good.
OpenCL is now irrelevant, so Vulkan is next.
mode_13h - Wednesday, April 29, 2020 - link
Why would they kill Vulkan?
soresu - Monday, May 4, 2020 - link
They won't, there is no benefit to killing it at all - its compute is weaker in feature set than OpenCL, and on the gfx side it already has significant interest from DCC apps seeking cross-platform use.
soresu - Monday, May 4, 2020 - link
Vulkan is barely 4 years old and well used already - including on at least one project co-created by nVidia themselves, Quake2RTX, which currently uses their vendor-specific RT extension for Vulkan.
It's here to stay and will replace OpenGL completely in time - the advent of the DXVK layer has also dramatically increased compatibility and speed for many Windows games running on Linux. Often games are barely out and already supported by it, something not nearly close to what Wine on its own was managing.
mode_13h - Tuesday, May 5, 2020 - link
Saying Vulkan will replace OpenGL is like saying WebAssembly will replace Javascript, which is missing the point of what problem they were each designed to solve. Vulkan is harder to use than OpenGL, and much harder to use *well*. The OpenGL runtime actually does a lot of work to help an app make efficient use of the hardware, which a Vulkan-based app will have to do on its own.So, Vulkan won't, by itself, replace OpenGL. What will probably replace OpenGL is Vulkan + a higher-level library that's functionally-equivalent to OpenGL.
anonomouse - Monday, April 27, 2020 - link
Fascinating study in politics this is: a consortium of companies manages to create a standard that nobody in the consortium likes.
mode_13h - Wednesday, April 29, 2020 - link
It's not hard to understand why it went off into the weeds. To be truly successful, I think it needs involvement by influential customers, like Apple, Google, or MS (each of whom has gone off and done their own alternative). It seems to me that the OpenCL WG has been vendor-dominated, for most of its life.
soresu - Monday, May 4, 2020 - link
Apple do not work with others, full stop. MS are slowly getting better at it; this may be due to their new CEO mellowing the waters over there.
As to Google, they are happy with Vulkan, which has compute, if not the full sauce that hardcore OpenCL has.
mode_13h - Tuesday, May 5, 2020 - link
Apple basically instigated the creation of OpenCL. They just walked away from it when they realized they had enough clout to order around their suppliers to create and support their own proprietary APIs.
soresu - Monday, May 4, 2020 - link
What OpenCL really needs is input from the sort of academics nVidia used to prop up CUDA in the first place, before ML/AI took over interest in the GPGPU market.
Couple that with a rock-solid, performant and compliant software implementation on a performant hardware platform to give out to these developers, both academics and those in the general software marketplace.
mode_13h - Tuesday, May 5, 2020 - link
Huh? What input do you think it needs from academics?
No, what it needs is big, influential customers that can basically force HW vendors to support most or all of it. Also, a couple of big customers would drive the focus onto key feature sets, instead of various vendors going off and building whatever they believe might attract more users to their hardware.
bloodgain - Monday, April 27, 2020 - link
"Meanwhile a large, green discrete GPU developer may adopt most of OpenCL 2.x, but exclude support for that shared virtual memory, which isn’t very useful for a discrete accelerator."I'm pretty sure modern versions of CUDA support shared virtual memory natively. Why wouldn't/couldn't they support it in OpenCL? Unless they're doing something specific to NVIDIA and not what OpenCL's meaning for this is.
mode_13h - Wednesday, April 29, 2020 - link
I have a hunch that AMD's implementation of SVM is much better than Nvidia's. If Nvidia released official support for SVM, benchmarks would show the inferiority of their solution.
AndrewJacksonZA - Monday, April 27, 2020 - link
What a mess.
casperes1996 - Monday, April 27, 2020 - link
if(featureSupported)... I already hate this. I'm having PTSD from Android development, where I kept being told that either a feature was deprecated or it was too new for the minimum build target, and almost every piece of functionality had to have if-checks for the version, with almost identical code in each block, just using their new idea of what that API should look like.
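Something like this, generically (all the names here are made up for illustration; the shape is what matters):

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical "old" and "new" flavors of the same call - stand-ins for
// any API that reinvents its surface between versions.
void set_color_v1(uint32_t argb) { std::printf("legacy path: %08x\n", static_cast<unsigned>(argb)); }
void set_color_v2(uint32_t argb) { std::printf("shiny new path: %08x\n", static_cast<unsigned>(argb)); }

// Every call site grows one of these, with near-identical code per branch.
void set_color(uint32_t argb, int api_level)
{
    if (api_level >= 26)    // made-up version threshold
        set_color_v2(argb);
    else
        set_color_v1(argb);
}
```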
Deicidium369 - Tuesday, April 28, 2020 - link
The only thing that is constant is change, sometimes just for change's sake. I have been playing WoW since vanilla, and every expansion they feel the need to revamp everything - so the game I loved for the 1st 3 or 4 expansions is long gone, replaced with something that calls itself WoW.
And Android has been nothing but a huge mess - with the supposed fixes either not showing up or not being what they were sold to be. I despise Apple, Apple products are banned in my household - but when you control the hardware, software and OS, you get something that is almost like a console: a guaranteed minimum level of hardware/software compliance.
Deicidium369 - Tuesday, April 28, 2020 - link
Cool story, but in a world with the de facto standard CUDA and the new One API being pushed by the 900# gorilla, not sure there will be much in the way of even scraps for OpenCL. Maybe a year or two ago, but not now.