OpenGL in the form of WebGL is living its best life.
It's the only way to ship portable 3D software across desktop and mobile platforms without platform-specific code paths, thanks to the API fragmentation and proprietary platform antics from our beloved vendors.
In a few years WebGPU may mature and start gaining parity (WebGL took a looooong time to mature), and after that it'll still take more years for applications to switch, given older hardware, the software inertia of porting all the middleware over, etc.
There is also the problem that WebGPU doesn't really add much except compute shaders, so older WebGL apps have hardly any reason to port. The other problem is that WebGPU is even more outdated than WebGL was at its release. When WebGL was released, it was maybe 5 years behind the state of the art. WebGPU only really came out in major desktop browsers this year, and by now it's something like 15 years behind. OpenGL, which was de facto deprecated more than half a decade ago, is vastly more powerful with respect to hardware capabilities/features than WebGPU.
This comparison is kind of sloppy, though. OpenGL on the desktop needs to be compared to a concrete WebGPU implementation. While it still lags behind the state of the art, `wgpu` has many features on desktop that aren't in the standard. For example, they've started working on mesh shaders too: https://github.com/gfx-rs/wgpu/issues/7197. If you stuck to only what's compatible with WebGL2 on the desktop, you'd face similar limitations.
I'm of course talking about WebGPU in web browsers, and I'd rather not use a graphics API like wgpu with uncertain support for the latest GPU features. Especially since wgpu went for the same paradigm as Vulkan, so it's not even that much nicer to use, yet you sacrifice lots of features. Also, Vulkan finally seems to be fixing mistakes like render passes and monolithic pipelines (dynamic rendering, shader objects), whereas WebGPU (and I guess wgpu?) went all in on them.
Saying WebGPU "only" adds compute shaders is crazy reductive and entirely misses how valuable an addition that is, from general-purpose compute through to simplifying rendering pipelines with compute-based compositing passes, etc.
In any case, it's not true: WebGPU also does away with the global-state driver model, which has always been a productivity headache and source of bugs in OpenGL, and gives you better abstractions with pipelines and command buffers.
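To make the contrast concrete, a minimal sketch of both models (assuming `gl`, `device`, `pipeline`, `renderPassDescriptor`, and the various buffers already exist; not a full program):

```ts
// WebGL/OpenGL: hidden global state. gl.vertexAttribPointer() acts on
// whichever buffer happens to be bound to ARRAY_BUFFER at that moment,
// so a helper that rebinds the target earlier silently changes what
// this code does - a classic source of action-at-a-distance bugs.
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);

// WebGPU: state is explicit and scoped to the pass encoder; nothing
// leaks between draws, and the pipeline object is validated up front.
const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setPipeline(pipeline);
pass.setVertexBuffer(0, vertexBuffer);
pass.draw(vertexCount);
pass.end();
device.queue.submit([encoder.finish()]);
```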
I disagree. Yes, the global state is bad, but pipelines, render passes, and worst of all static bind groups and layouts, are by no means better. Why would I need to create bind groups and bind group layouts for storage buffers? They're just buffers and references to them, so let me do the draw call and pass references to the SSBOs as arguments, rather than having to create bindings up front and cache them because they are somehow expensive.
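Concretely, this is the ceremony the actual WebGPU JS API requires just to hand one storage buffer to a compute pass (assuming `device`, `buffer`, and a compute `pass` already exist; the single-call alternative at the end is hypothetical, not real WebGPU):

```ts
// First a layout object describing the binding slot...
const bindGroupLayout = device.createBindGroupLayout({
  entries: [{
    binding: 0,
    visibility: GPUShaderStage.COMPUTE,
    buffer: { type: "storage" },
  }],
});

// ...then a bind group object tying the actual buffer to that slot.
const bindGroup = device.createBindGroup({
  layout: bindGroupLayout,
  entries: [{ binding: 0, resource: { buffer } }],
});

// Only now can the dispatch happen:
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(64);

// What I'd want instead (hypothetical API, does not exist in WebGPU):
// pass.dispatchWorkgroups(64, [buffer]);
```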
Also, compute could easily have been added to WebGL, making WebGL pretty much on par with WebGPU, just 7 years earlier. It didn't happen because WebGPU was supposed to be the better replacement, which it never became. It just became something different-but-not-better.
If CUDA forced you to do even half of the completely unnecessary stuff that Vulkan forces you to do, it would never have become as popular as it is.
I agree with you in that I think there's a better programming model out there. But using a buffer in a CUDA kernel is the simple case; descriptors exist to bind general-purpose work to fixed-function hardware, and it gets much more complicated once we start talking about texture sampling. CUDA isn't exactly great here either: kernel launches are more heavyweight than calling draw precisely because they defer things like validation to the call site. Making descriptors explicit is verbose and annoying, but it keeps resource switching front of mind, which is a big concern for workloads that lean heavily on those fixed-function resources. The ultimate solution here is bindless, but that obviously presents its own problems for a nice general-purpose API, since you need to know all your resources up front. I do think CUDA is probably ideal for many users, but there are still trade-offs here.
It didn't happen because of Google. Intel actually did the work to make it happen (WebGL 2.0 Compute), but it was abandoned in favor of WebGPU.
Although I tend to bash WebGL and WebGPU for what they offer versus existing hardware - lagging a decade behind - they have one very important quality for me.
They are the only set of 3D APIs in mainstream computing that were designed for managed languages, instead of being yet another thing to be consumed by C.
Technically Metal is also used from a managed language, but it was designed for Objective-C/C++ first, with Swift as an official binding.
Microsoft gave up on Managed DirectX and XNA, and even with all the safety talk, the DirectX team doesn't care to provide official COM bindings for C#.
Thus that leaves WebGL and WebGPU for us managed-language fans, and even if they lag behind, PlayCanvas and ShaderToy show there is plenty of capability in these APIs and their shader languages that has not yet taken off.
D3D (up to D3D11 at least) is also a "managed" API, since it uses refcounting to keep resources alive for as long as they are used by the GPU; there really isn't much difference from garbage collection.
Metal allows disabling refcounted lifetime management when recording commands, since it actually adds significant overhead, and D3D12 and Vulkan removed it entirely.
Unfortunately WebGPU potentially produces even more garbage than WebGL2, and we have yet to see how that turns out. Some draw-call-heavy code actually runs faster on WebGL2 than on WebGPU, which really doesn't look great for a modern 3D API (not mainly because of GC, but every little bit of overhead counts).
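To illustrate where the garbage comes from: in a typical WebGPU render loop, the encoder, the pass, the descriptor literals, and the texture views are all freshly allocated JS objects every frame, whereas a WebGL2 loop can often get by with zero per-frame allocations. A minimal sketch (assuming `device`, `context`, and `pipeline` exist):

```ts
function frame() {
  // All of these are new garbage-collected JS objects, every frame:
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginRenderPass({
    colorAttachments: [{
      view: context.getCurrentTexture().createView(), // another fresh object
      loadOp: "clear",
      clearValue: { r: 0, g: 0, b: 0, a: 1 },
      storeOp: "store",
    }],
  });
  pass.setPipeline(pipeline);
  pass.draw(3);
  pass.end();
  device.queue.submit([encoder.finish()]); // one more transient object
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```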
The point is that those APIs were not designed with anything beyond C and C++ as consumers, and everyone else has to do their due diligence and build language bindings from scratch.
So we end up in an eternal cycle that we cannot get rid of.
Metal and the Web 3D APIs had other consumer languages in mind; you can also see this in how ANARI is being designed.
Yes, every little bit of performance counts, but it cannot be that APIs get designed as if everyone is still coding in assembly, and then it is up to whoever cares to actually build proper high-level abstractions on top; that is how we ended up with Vulkan.
> but it cannot be that APIs get designed as if everyone is still coding in assembly
Why not though? In the end an API call is an API call, and everything is compiled down to machine code no matter what the source language is.
FWIW, the high-level "OOP-isms" of the Metal API are also its biggest downside. Even simple create-option "structs" like MTLRenderPassDescriptor are fully lifetime-managed Objective-C objects where every field access is a method call - that's simply unnecessary overkill.
And ironically, the most binding-friendly API for high-level languages might still be OpenGL. It doesn't have any structs or 'objects with methods', only plain old function calls with primitive-type parameters, and the only use of pointers is for unstructured 'bulk data' like vertex-buffer or texture content. This maps very well even to entirely un-C-like languages - and the changes that WebGL made to the GL API (for instance, adding 'proper' JS objects for textures and buffers) are arguably a step back compared to native GL, where those resource objects are just opaque handles.
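Side by side, for the handle-vs-object point (WebGL shown in TypeScript with `gl` and a `vertexData` Float32Array assumed to exist; the native GL equivalent is in the trailing comment):

```ts
// WebGL: gl.createBuffer() returns an opaque WebGLBuffer object that
// the JS garbage collector has to track and eventually finalize.
const buf: WebGLBuffer | null = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

// Native GL: the same resource is just an integer handle, trivially
// representable in any language without wrapper objects:
//   GLuint buf;
//   glGenBuffers(1, &buf);
//   glBindBuffer(GL_ARRAY_BUFFER, buf);
//   glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);
```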
Because not everyone doing 3D graphics is implementing AAA rendering engines on RTX cards.
The ANARI effort was born exactly because of the visualisation industry's refusal to adopt Vulkan as-is.
Looking at the ANARI spec and SDK it looks pretty much like a typical C API to me, implementing an old-school scene-graph system. What am I missing - e.g. what makes it specifically well suited for non-C languages? :)
If anything it looks more like an admission by Khronos that Vulkan wasn't such a great idea (but a 3D API that's based on scene graphs isn't either, so I'm not sure what's so great about ANARI tbh).
Python is part of ANARI's value proposition, and the standard takes this into account.
https://github.com/KhronosGroup/ANARI-SDK/tree/next_release/...
Dumb question, but is there a way to use WebGL for a desktop app without doing Electron stuff?
...OpenGL?