This article already feels like it’s on the right track. DirectX 11 was perfectly fine, and DirectX 12 is great if you really want total control over the hardware, but I even remember an IHV saying that this level of control isn’t always a good thing.

When you look at the DirectX 12 documentation and best-practice guides, you’re constantly warned that certain techniques may perform well on one GPU but poorly on another, and vice versa. That alone shows how fragile this approach can be.

Which makes sense: GPU hardware keeps evolving and has become incredibly complex. Maybe graphics APIs should actually move further up the abstraction ladder again, to a point where you mainly upload models, textures, and a high-level description of what the scene and objects are supposed to do and how they relate to each other. The hardware (and its driver) could then decide what’s optimal and how to turn that into pixels on the screen.

Yes, game engines and (to some extent) RHIs already do this, but having such an approach as a standardized, optional graphics API would be interesting. It would allow GPU vendors to adapt their drivers closely to their hardware, because they arguably know best what their hardware can do and how to do it efficiently.
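Purely to make the shape of that idea concrete, here’s a rough sketch of what such a declarative API could look like. To be clear: every type and function below is invented for illustration, nothing like this exists today, and the point is only the level of description, not the specific names:

```cpp
// Hypothetical, invented API -- none of these types exist. The idea is that you
// describe scene content and intent, and the driver decides the "how":
// barrier placement, memory layout, pass ordering, shading strategy, etc.
scene::Scene scene;

auto mesh     = scene.addMesh("assets/character.gltf");
auto albedo   = scene.addTexture("assets/character_albedo.ktx2");
auto material = scene.addMaterial({ .albedo = albedo, .roughness = 0.6f });

scene.addInstance(mesh, material, Transform::translation(0.0f, 0.0f, -5.0f));
scene.addLight(DirectionalLight{ .direction = { 0.0f, -1.0f, 0.0f }, .intensity = 3.0f });

// "What", not "how": the vendor's driver is free to pick whatever render graph,
// residency scheme and synchronization works best on this particular GPU.
renderer.render(scene, camera, swapchain);
```

Whether something this coarse could ever be standardized is a separate question, but that’s roughly the abstraction level being argued for.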

> but I even remember some IHV saying that this level of control isn’t always a good thing.

Because that control is only as good as your ability to master it, and not all game developers do well on that front. Just check out enhanced barriers in DX12 and all of the rules around them as an example. You almost need to train as a lawyer to digest that clusterfuck.
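For anyone who hasn’t looked at them: this is roughly what a single render-target-to-shader-resource transition looks like with enhanced barriers (sketched from memory against the Agility SDK headers, so take the exact field names with a grain of salt):

```cpp
#include <d3d12.h>  // Agility SDK version; enhanced barriers need ID3D12GraphicsCommandList7

// One texture transition: finished rendering into it, now we want to sample it
// in a pixel shader. Sync, access AND layout all have to be specified on both
// sides, per subresource range, and only certain combinations are legal together.
D3D12_TEXTURE_BARRIER barrier = {};
barrier.SyncBefore    = D3D12_BARRIER_SYNC_RENDER_TARGET;
barrier.SyncAfter     = D3D12_BARRIER_SYNC_PIXEL_SHADING;
barrier.AccessBefore  = D3D12_BARRIER_ACCESS_RENDER_TARGET;
barrier.AccessAfter   = D3D12_BARRIER_ACCESS_SHADER_RESOURCE;
barrier.LayoutBefore  = D3D12_BARRIER_LAYOUT_RENDER_TARGET;
barrier.LayoutAfter   = D3D12_BARRIER_LAYOUT_SHADER_RESOURCE;
barrier.pResource     = texture;                 // the ID3D12Resource* just rendered into
barrier.Subresources  = { 0, 1, 0, 1, 0, 1 };    // mip 0, one mip, slice 0, one slice, plane 0, one plane
barrier.Flags         = D3D12_TEXTURE_BARRIER_FLAG_NONE;

D3D12_BARRIER_GROUP group = {};
group.Type             = D3D12_BARRIER_TYPE_TEXTURE;
group.NumBarriers      = 1;
group.pTextureBarriers = &barrier;

commandList->Barrier(1, &group);                 // ID3D12GraphicsCommandList7*
```

And that’s the easy case; the real fun is working out which sync/access/layout combinations are actually valid together on which queue type.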

> The hardware (and its driver) could then decide what’s optimal and how to turn that into pixels on the screen.

We should go in the other direction: have a goddamn ISA you can target across architectures, like an x86 for GPUs (though ideally not as encumbered by licensing), and let people write code against it. Get rid of the whole proprietary driver stack while you're at it.

The problem with DX12/Vulkan isn’t just that “low-level control is hard”; it’s that a lot of performance-critical decisions are now exposed at a level where they’re extremely GPU- and generation-specific. The same synchronization strategy, command ordering, or memory usage can work great on one GPU and badly on another.

A GPU ISA wouldn’t fix that; it would push even more of those decisions onto the developer.

An ISA only really helps if the underlying execution and memory model is reasonably stable and uniform. That’s true for CPUs, which is why x86 works. GPUs are the opposite: wave sizes, scheduling models, cache behavior, tiling, and memory hierarchies all differ, and they change all the time. If a GPU ISA is abstract enough to survive that, it’s no longer a useful performance target. If it’s concrete enough to matter for performance, it becomes brittle and quickly outdated.
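To pick the most basic of those: even the wave width isn’t something you can bake in. A D3D12 app has to ask each device at runtime (sketch below, assuming an existing `ID3D12Device*`), and the answer genuinely differs across vendors and generations:

```cpp
// Wave (subgroup) width varies across GPUs -- e.g. 32 lanes on one vendor,
// 64 on another -- so D3D12 exposes it as a per-device query, not a constant.
D3D12_FEATURE_DATA_D3D12_OPTIONS1 options1 = {};
if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS1,
                                          &options1, sizeof(options1))))
{
    UINT waveMin = options1.WaveLaneCountMin;  // e.g. 32 vs 64 depending on the GPU
    UINT waveMax = options1.WaveLaneCountMax;
    // Thread-group sizes, per-lane work splits, etc. should be derived from these
    // values rather than hard-coded the way an ISA-level binary would have to.
}
```

An ISA-level binary would have to commit to one answer at compile time, which is exactly the kind of assumption that stops being true a hardware generation later.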

DX12 already moved the abstraction line downward. A GPU ISA would move it even further down. The issues being discussed here are largely a consequence of that shift, not something solved by continuing it.

What the blog post is really arguing for is the opposite direction: higher-level, more declarative APIs, where you describe what you want rendered and let the driver/hardware decide how to execute it efficiently on a given GPU. That’s exactly what drivers are good at, and it’s what made older APIs more robust across vendors in the first place.

So while a GPU ISA is an interesting idea in general, it doesn’t really address the problem being discussed here.

But the driver can't decide how to execute things more efficiently than the application can; that's how we got the modern APIs. A declarative API would necessarily have to tackle very specific use cases, which again is what the older APIs did.

So I guess we're stuck with what exists today for a while.