A better question is why there is no stronger push for a nicer GPU language that's not tied to any particular GPU and serves any purpose of GPU usage (whether it's compute or graphics).

I mean efforts like rust-gpu: https://github.com/Rust-GPU/rust-gpu/

Combine such a language with Vulkan (using Rust as well) and why would you need CUDA?

Mojo might be what you are looking for: https://docs.modular.com/mojo/manual/gpu/intro-tutorial/

The language is general, but the current focus is really on programming GPUs.

I think Intel Fortran has some ability to offload to their GPUs now. And Nvidia has CUDA Fortran for running CUDA code from Fortran.

Probably just needs a couple short decades of refinement…

One of the reasons CUDA won over OpenCL was that NVidia, contrary to Khronos, saw value in helping those HPC researchers move their Fortran code onto the GPU.

Hence they bought PGI, and improved their compiler.

Intel eventually did the same with oneAPI (which isn't plain OpenCL, but rather an extension with Intel goodies).

I was on a Khronos webinar where the panel showed disbelief that anyone would care about Fortran. Oh well.

It's insane how big the NVidia dev kit is. They've got a library for everything. It seems like they aim for the broadest software support possible.

That’s actually pretty surprising to me. Of course, there are always jokes about Fortran being some language that people don’t realize is still kicking. But I’d expect a standards group that is at least parallel computing adjacent to know that it is still around.

Yet not only did they joke about Fortran, it took CUDA's adoption success for them to take C++ seriously and come up with SPIR as a counterpoint to PTX.

Which in the end was worthless because both Intel and AMD botched all OpenCL 2.x efforts.

Hence OpenCL 3.0 is basically OpenCL 1.2 rebranded, and SYCL went its own way.

It took a commercial company, Codeplay, a former compiler vendor for games consoles, to actually come up with good tooling for SYCL.

Which Intel, in the middle of extending SYCL with their Data Parallel C++ (DPC++), eventually acquired.

Those products form the foundation of oneAPI, and naturally go beyond what barebones OpenCL happens to be.

The mismanagement of OpenCL by Khronos is one of the reasons Apple cut ties with Khronos.

I like Julia for this. Pretty language, layered on LLVM like most things. Modular are doing interesting things with Mojo too. People seem to like CUDA though.

CUDA is just DOA as a nice language, being Nvidia-only (not counting efforts like ZLUDA).

That's a compiler problem. One could start from clang -xcuda and hack onwards. Or work in the intersection of CUDA and HIP, which is relatively broad, if a bit of a porting nuisance.
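To make the "intersection of CUDA and HIP" point concrete, here's a minimal single-source sketch that builds with either nvcc or hipcc. The gpu* macro names are invented here for illustration (real projects often generate such a portability header with hipify or similar); the kernel, launch syntax, and the cuda*/hip* runtime calls are the actual shared subset.

```cuda
#include <cstdio>
#include <vector>

// hipcc defines __HIP_PLATFORM_AMD__ when targeting ROCm;
// otherwise we assume nvcc and the CUDA runtime.
#ifdef __HIP_PLATFORM_AMD__
  #include <hip/hip_runtime.h>
  #define gpuMalloc              hipMalloc
  #define gpuMemcpy              hipMemcpy
  #define gpuMemcpyHostToDevice  hipMemcpyHostToDevice
  #define gpuMemcpyDeviceToHost  hipMemcpyDeviceToHost
  #define gpuDeviceSynchronize   hipDeviceSynchronize
  #define gpuFree                hipFree
#else
  #include <cuda_runtime.h>
  #define gpuMalloc              cudaMalloc
  #define gpuMemcpy              cudaMemcpy
  #define gpuMemcpyHostToDevice  cudaMemcpyHostToDevice
  #define gpuMemcpyDeviceToHost  cudaMemcpyDeviceToHost
  #define gpuDeviceSynchronize   cudaDeviceSynchronize
  #define gpuFree                cudaFree
#endif

// saxpy: y = a*x + y, one element per GPU thread.
__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread id
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);

    float *dx, *dy;
    gpuMalloc(&dx, n * sizeof(float));
    gpuMalloc(&dy, n * sizeof(float));
    gpuMemcpy(dx, x.data(), n * sizeof(float), gpuMemcpyHostToDevice);
    gpuMemcpy(dy, y.data(), n * sizeof(float), gpuMemcpyHostToDevice);

    // The triple-chevron launch syntax is accepted by both nvcc and hipcc.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);
    gpuDeviceSynchronize();

    gpuMemcpy(y.data(), dy, n * sizeof(float), gpuMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);  // 3*1 + 2 = 5
    gpuFree(dx);
    gpuFree(dy);
    return 0;
}
```

The porting nuisance mentioned above is mostly this macro layer plus the bits outside the intersection (warp size, cooperative groups, vendor libraries), not the kernel code itself.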

Maybe, but who is working on that compiler? And the whole ecosystem is controlled by a nasty company. You don't want to deal with that.

Besides, I'd say Rust is a nicer language than CUDA dialects.

Chris and Nick originally, a few more of us these days. Spectral Compute. We might have a nicer world if people had backed OpenCL instead of CUDA, but whatever. Likewise, Rust has a serious edge over C++. But to the compiler hacker, this is all obfuscated SSA form anyway; it's hard to get too emotional about the variations.

Until Rust gets into any of the industry's compute standards, being a nicer language alone doesn't help.

Khronos standards, CUDA, ROCm, oneAPI, Metal: none of them has Rust in their sights.

The world did not back OpenCL because it was stuck on primitive C99 text-based tooling, without an ecosystem.

Also Google decided to push their Renderscript C99 dialect instead, while Intel and AMD were busy delivering janky tools and broken drivers.

That's simply not true, because standards should operate at the IR level, not the language level. You have to generate some IR from your language; at that level it makes sense to talk about standards. The only exception is probably WebGPU, where Apple pushed for a fixed language instead of an IR, which was a limiting idea.

None of those standards are about IR.

Also, SPIR worked so great for OpenCL 2.x that Khronos rebooted the whole mess back to OpenCL 1.x with the OpenCL 3.0 rebranding.

They are pretty much about IR when it comes to language interchange. SPIR-V is explicitly an IR that can be targeted from many different languages.

And so far not much has been happening, hence Shader Languages at Vulkanised 2026.

https://www.khronos.org/events/shading-languages-symposium-2...

These kinds of projects are exactly where it's happening.

The language would matter more for those who actually want to write programs in it. So I'd say rust-gpu is something that should get more backing.

Tooling and ecosystem, that is why.

Rust has great tooling and a great ecosystem. The point here is more about the interest of those who want better alternatives to CUDA. AMD would be an obvious beneficiary of backing the above, so I'm surprised at the lack of interest from the likes of them.

It has zero CUDA tooling, and that is what is relevant when positioning itself as an alternative to C, C++, Fortran, Python JITs, PTX-based compilers, compute libraries, Visual Studio and Eclipse integration, and a graphical debugger.

Cross-compiling Rust into PTX is not enough to make researchers leave CUDA.

And CUDA has zero non-CUDA tooling. That's a pointless circular argument which doesn't mean anything. Rust has Rust tooling, and it's very good.

Being language-agnostic is also not the task of the language, but the task of the IR. There is already a bunch of languages, such as Slang. The point is to use Rust itself for this.

Where is the graphical debugging experience for Rust, given that its tooling is so great?

Slang belonged to NVidia, and was nicely given to Khronos, because almost everyone had started relying on HLSL after Khronos decided not to spend any additional resources on GLSL.

Just like with Mantle and Vulkan, it seems Khronos hasn't been able to produce anything meaningful without external help since the Longs Peak days.