Man, how I wish WebGPU hadn't gone all-in on the legacy Vulkan API model and had instead found a leaner approach to the same thing. Even Vulkan has stopped requiring pointless boilerplate like bindings and pipelines. Ditching vertex attribute bindings and going for programmable vertex fetch would have been nice.

WebGPU could also have introduced CUDA's simple launch model to graphics APIs. Instead of all that insane binding boilerplate, just provide the bindings as launch arguments to the draw call, like draw(numTriangles, args), with args being something like draw(numTriangles, {uniformBuffer, positions, uvs, samplers}), depending on whatever the shaders expect.

>Man, how I wish WebGPU didn't go all-in on legacy Vulkan API model

WebGPU doesn't talk to the GPU directly. It requires Vulkan/D3D/Metal underneath to actually implement itself.

>Even Vulkan stopped doing pointless boilerplate like bindings and pipelines.

Vulkan did no such thing. Dynamic rendering (VK_KHR_dynamic_rendering) was promoted to core in Vulkan 1.3, and VK_EXT_shader_object is still an extension; it is not required to be supported and must be queried for before use. The former gets rid of render pass objects and framebuffer objects in favor of vkCmdBeginRendering(), and WebGPU already abstracts those two away so you never see or deal with them. The latter gets rid of monolithic pipeline objects.

Many mobile GPUs still do not support VK_KHR_dynamic_rendering or VK_EXT_shader_object. Even my very own Samsung Galaxy S24 Ultra[1] doesn't support shaderObject.

Vulkan did not get rid of pipeline objects; it added extensions for modern desktop GPUs that don't need them. Even modern mobile GPUs still need them, and WebGPU isn't going to fragment its API to wall off mobile users.

[1] https://vulkan.gpuinfo.org/displayreport.php?id=44583

> WebGPU doesn't talk to the GPU directly. It requires Vulkan/D3D/Metal underneath to actually implement itself.

So does WebGL, and it's doing perfectly fine without pipelines. They were never necessary. Since WebGL can do without pipelines, WebGPU could too. Backends can implement them via pipelines, or they can take the modern route and ignore them.

They are an artificial problem that Vulkan created and WebGPU mistakenly adopted, and which is now being phased out. Some devices may refuse to implement pipeline-free drivers, which is okay; I will happily ignore them. Let's move on into the 21st century without that design mistake, and let legacy devices and the companies that refuse to adapt die with dignity. But let's not let them hold everyone else back.

My biggest issues with WebGPU are yet another shading language, and the fact that after 15 years, browser developers still don't care one bit about debugging tools.

It's either pixel debugging, or replicating everything in native code to get proper tooling.

Ironically, WebGPU was far more powerful about 5 years ago, before WGSL was made mandatory. Back then you could just use any SPIR-V with all sorts of extensions, including things like 64-bit types and atomics.

Then WGSL came along and crippled WebGPU.

My understanding is that pipelines in Vulkan still matter if you target certain GPUs though.

At some point we need to let legacy hardware go. Also, WebGL did just fine without pipelines, despite being mapped to Vulkan and DirectX code under the hood, meaning WebGPU could have worked without pipelines just fine as well. The backends can then map to whatever they want, using modern code paths for modern GPUs.

Quoting things I've only heard about, since I don't do enough development in this area, but I recall reading that it hurt performance on pretty much every mobile chip (discounting Apple's, because there you go through a completely different API, and they got to design the hardware together with the API).

Among other things, that covers everything running on non-Apple, non-NVIDIA ARM devices, including freshly bought ones.

After going through a bunch of docs and making sure I had the right reference:

The "legacy" part of Vulkan that everyone on desktop is itching to drop (including popular tutorials) is render passes... which remain critical for performance on tiled GPUs, where making use of subpasses means major performance differences (also, major mobile GPUs have considerable differences in command submission, which impacts this as well).

Also pipelines and bindings. Buffer device address (BDA), shader objects, and dynamic rendering are just way better than legacy Vulkan without these features.

> Also, WebGL did just fine without pipelines, despite being mapped to Vulkan and DirectX code under the hood.

...at the cost of creating PSOs at random times, which is an expensive operation :/

No longer an issue with dynamic rendering and shader objects, and it never was an issue with OpenGL. Static pipelines are an artificial problem that Vulkan imposed for no good reason, and one it has walked back in recent years.

That's not at all what dynamic rendering is for. Dynamic rendering avoids creating render pass objects; it does nothing to solve the problems with PSOs. We should be glad for the demise of render pass objects; they were truly a failed experiment and weren't even particularly effective at their original goal.

Claiming pipelines weren't solving a real problem in OpenGL is monumental levels of revisionism. Vulkan (and D3D12, and Metal) didn't invent them for no reason. OpenGL and DirectX drivers spent a substantial amount of effort trying to hide PSO compilation stutter, because they still had to compile shader bytecode to ISA all the same. They were often not successful, and developers had very limited tools to work around the stutter problems.

Older games would often issue dummy draw calls to an off-screen render target to force the driver to compile the shader during a loading screen instead of in the middle of a frame. The problem was always hard; the older APIs just let you ignore it. Pipelines exist to make it explicit.

The mistake Vulkan made was putting too much state into the pipeline, since much of that state is dynamic on modern hardware. But as long as we need to compile shader bytecode to ISA, we need some kind of state object to represent the compiled code, and APIs to control when that compilation happens.

Going entirely back to granular GL-style state soup would have significant usability problems. It's too easy to accidentally leak incorrect state from a previous draw call.

IMHO a small number of immutable state objects is the best middle ground (similar to D3D11 or Metal, but reshuffled as described in Seb's post).

Not using static pipelines does not imply a global state machine like OpenGL's. You could also make an API that puts the rasterizer config in a struct and passes it as an argument to a multi-draw call. I would actually have preferred that over all the individual state setters in Vulkan's dynamic-state approach.