Is Minecraft the only thing using OpenGL anymore?

What is the current state of OpenGL, I thought it had faded away?

It's officially deprecated in favor of Vulkan, but it will likely live on for decades to come due to legacy CAD software and a bunch of older games still using it. I don't share the distaste many have for it, it's good to have a cross-platform medium-complexity graphics API for doing the 90% of rendering that isn't cutting-edge AAA gaming.

> It's officially deprecated in favor of Vulkan

Can you provide a reference for this? I work in the GPU driver space (not on either of these APIs), but from my understanding Vulkan wasn't meant to replace OpenGL; it was only introduced to give developers the chance to get lower level in the hardware (while still staying hardware-agnostic, at least compared to compiling PTX/CUDA or targeting AMD's PAL directly; many still think they failed at that). I would still highly advocate for developers using OpenGL or DX11 if their game/software doesn't need the capabilities of Vulkan or DX12. And even if you did, you might be able to get away with interop and do small parts with the lower-level API and leave everything else in the higher-level API.

I will admit I don't like the trend of all the fancy new features only getting introduced into Vulkan and dx12, but I'm not sure how to change that trend.

I think Vulkan was originally called OpenGL Next. Furthermore, Vulkan's verbosity allows for a level of control over the graphics pipeline you simply can't have with OpenGL, on top of having built-in support for things like dynamic rendering, bindless descriptors, push constants, etc.

Those are the main reasons IMO why most people say it's deprecated.

I only play with this stuff as a hobbyist, but OpenGL is way simpler than Vulkan, I think. Vulkan is really, really complicated just to get some basic stuff going.

Which is as-designed. Vulkan (and DX12, and Metal) is a much more low-level API, precisely because that's what professional 3D engine developers asked for.

Closer to the hardware, more control, fewer workarounds because the driver is doing something "clever" hidden behind the scenes. The tradeoff is greater complexity.

Mere mortals are supposed to use a game engine, or a scene graph library (e.g. VulkanSceneGraph), or stick with OpenGL for now.

The long-term future for OpenGL is to be implemented on top of Vulkan (specifically the Mesa Zink driver that the blog post author is the main developer of).

> Closer to the hardware

To what hardware? Ancient desktop GPUs vs modern desktop GPUs? Ancient smartphones? Modern smartphones? Consoles? Vulkan is an abstraction of a huge set of diverging hardware architectures.

And a pretty bad one, in my opinion. If you need to make an abstraction due to fundamentally different hardware, then at least make an abstraction that isn't terribly overengineered for little to no gain.

Closer to AMD and mobile hardware. We got abominations like monolithic pipelines and layout transitions thanks to the former, and render passes thanks to the latter. Luckily all of these are gone or on their way out.

Not really, other than on desktops, because as we all know mobile hardware gets whatever drivers it ships with on release day, and that's it.

Hence why on Android, even with Google nowadays enforcing Vulkan, you're better off sticking with OpenGL ES if you want a less painful experience with driver bugs, outside of Pixel and Samsung phones.

Trying to fit both mobile and desktop in the same API was just a mistake. Even applications that target both desktop and mobile end up having significantly different render paths despite using the same API.

I fully expect it to be split into Vulkan ES sooner or later.

100%. Metal is actually self-described as a high-level graphics library for this very reason. I’ve never actually used it on non-Apple hardware, but the abstractions for vendor support are there. And they are definitely abstract: there is no real getting-your-hands-dirty exposure of the underlying hardware.

Metal does have to support AMD and Intel GPUs for another year after all, and had to support NVIDIA for a hot minute too.

Wow, what a brain fart. So much of Metal has improved since the M-series that I just forgot it was even the same framework. Even the stack is different now that we have metal-cpp and Swift/C++ interop with unified memory access.

> fewer workarounds because the driver is doing something "clever" hidden behind the scenes.

I would be very surprised if current Vulkan drivers are any different in this regard, and if yes then probably only because Vulkan isn't as popular as D3D for PC games.

Vulkan is in a weird place: it promised a low-level explicit API close to the hardware, but it still doesn't really match any concrete GPU architecture, and it still needs to abstract over very different GPU architectures.

At the very least there should have been different APIs for desktop and mobile GPUs (not that the GL vs GLES split was great, but at least that way the requirements for mobile GPUs don't hold back the desktop API).

And then there's the issue that also ruined OpenGL: the vendor extension mess.

> specifically the Mesa Zink driver

https://docs.mesa3d.org/drivers/zink.html

> Can you provide a reference for this?

The last OpenGL release, 4.6, was in 2017... I think that speaks for itself ;)

And at least on macOS, OpenGL is officially deprecated, stuck at 4.1 and is also quickly rotting (despite running on top of Metal now - but I don't think anybody at Apple is doing serious maintenance work on their OpenGL implementation).

That's not "OpenGL is officially deprecated".

In the end, if nobody is maintaining the OpenGL standard, implementations and tooling, it doesn't matter much whether it is officially deprecated or just abandoned.

...but people ARE maintaining the implementations and tooling, even if the spec might not be getting new features aside from extensions. There's a difference.

Look at Mesa release notes for example, there's a steady stream of driver feature work and bugfixes for GL: https://docs.mesa3d.org/relnotes/25.2.0.html (search for "gl_")

A slow moving graphics API is a good thing for many uses.

People are writing new OpenGL code all the time. See e.g. HN story submissions: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

> A slow moving graphics API is a good thing for many uses.

It's not slow moving. It's completely frozen. The Mesa guys are the only ones actually fixing bugs and improving implementations, but the spec is completely frozen and unmaintained. Apple, Microsoft and Google don't really care if OpenGL works well on their platforms.

> the spec is completely frozen and unmaintained.

but, literally this article is about something new that was added to the OpenGL spec

Well, not really to the OpenGL spec itself. It's about a new OpenGL extension being added to the extension registry. Vendors may implement it if they wish. AFAIK the core OpenGL spec hasn't been updated in years, so even though new extensions keep getting developed by vendors, the official baseline hasn't moved.

I suppose the same is true of Direct3D 11, though. Only the Direct3D 12 spec has been updated in recent years, from what I can tell. (I'm not a graphics programmer.)

A main reason to do new OpenGL releases was to roll existing extensions into the required features of a new OpenGL version, to give application programmers a cohesive target platform. The pace of API extensions has slowed down enough that it's not going to be a problem for a while.

> but I don't think anybody at Apple is doing serious maintenance work on their OpenGL implementation

In other words, nothing changed. The OpenGL standard had been well past 4.1 for years when Apple released Metal. People working with various 3D tools had to disable system integrity checks to install working drivers from NVIDIA to replace whatever Apple shipped by default.

I've never been able to successfully create a GL context > version 2.1, or invoke the GLSL compiler.

As a sidenote, I've very much enjoyed your blog, and developed a similar handle system as yours around the same time. Mine uses 32 bits though - 15 for index, 1 for misc stuff, 8 for random key, and 8 for object type :^)
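
For illustration, a minimal sketch of that kind of packed 32-bit handle (the field order below is just an assumption; only the widths match what I described):

    #include <cstdint>

    // 32-bit handle: 15-bit index | 1-bit misc flag | 8-bit random key | 8-bit object type
    struct Handle {
        uint32_t bits;

        static Handle make(uint32_t index, uint32_t misc, uint32_t key, uint32_t type) {
            return { (index & 0x7FFFu)
                   | ((misc & 0x1u)  << 15)
                   | ((key  & 0xFFu) << 16)
                   | ((type & 0xFFu) << 24) };
        }
        uint32_t index() const { return bits & 0x7FFFu; }
        uint32_t misc()  const { return (bits >> 15) & 0x1u; }
        uint32_t key()   const { return (bits >> 16) & 0xFFu; }
        uint32_t type()  const { return (bits >> 24) & 0xFFu; }
    };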

Recent versions of macOS will provide either an OpenGL 2.1 context or OpenGL 4.1 context, depending on how you request the context. You have to request a 3.2+ core profile, and not use X11 or the glX* functions.

From macOS 10.7 to 10.9, you'd get an OpenGL 3.2 context. As OpenGL 4.1 is backward compatible to OpenGL 3.2, it's fine that the same code gets OpenGL 4.1 now.

Basically, macOS will provide an "old" API to programs that need it, which is fixed at 2.1, and a "modern" API to programs that know how to ask for it, which has settled at 4.1 and is unlikely to change.
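
As an illustration of how you ask for the "modern" context, here's a sketch using GLFW (GLFW isn't part of the comment above, just a convenient example; on macOS the forward-compatible core profile hint is what unlocks 3.2+/4.1):

    #include <GLFW/glfw3.h>

    glfwInit();
    // Without these hints macOS hands you the legacy 2.1 context.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);  // required on macOS for core profiles
    GLFWwindow* win = glfwCreateWindow(800, 600, "GL on macOS", nullptr, nullptr);
    glfwMakeContextCurrent(win);
    // glGetString(GL_VERSION) then reports 4.1 on recent macOS, even though 3.2 was requested.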

OpenGL 4.1 is harmonised with OpenGL ES 2.0. Almost the same rendering model, features, extensions, etc. On iOS, iPadOS etc you can use OpenGL ES 2.0, and no version of OpenGL (non-ES), so my guess is that's why macOS settled on OpenGL 4.1. Both platforms offer the same OpenGL rendering features, but through slightly different APIs.

But if you request 4.1 over GLX (which uses X11/Xorg/XQuartz), the X11 code only supports OpenGL 2.1. For example, if you're porting some Linux code or other GLX examples over.

Unfortunately, the GLX limitation is probably just due to the Xorg-based XQuartz being open source but only minimally maintained since before OpenGL 3.2 was added to macOS. XQuartz uses Xorg and Mesa, which have all the bindings for 4.1, but some of them are not quite wired up.

The universal narrative around OpenGL is that it's deprecated, so I assumed that came with a thumbs-up from Khronos. In any case, I'm not holding my breath for GL > 4.6.

OpenGL in the form of WebGL is living its best life.

It's the only way to ship portable 3D software across the desktop and mobile platforms without platform specific code paths, thanks to the API fragmentation and proprietary platform antics from our beloved vendors.

In some years WebGPU may mature and start gaining parity (webgl took a looooong time to mature), and after that it'll still take more years for applications to switch given older hardware, the software inertia needed to port all the middleware over etc.

There is also the problem that WebGPU doesn't really add much except for compute shaders. Older WebGL apps have hardly any reason to port. The other problem is that WebGPU is even more outdated than WebGL was at its release. When WebGL was released, it was maybe 5 years behind; WebGPU only came out in major desktop browsers this year, and by now it's something like 15 years behind the state of the art. OpenGL, which became de facto deprecated more than half a decade ago, is orders of magnitude more powerful with respect to hardware capabilities/features than WebGPU.

This comparison is kind of sloppy, though. OpenGL on the desktop needs to be compared to a concrete WebGPU implementation. While it still lags behind state of the art, `wgpu` has many features on desktop that aren't in the standard. For example, they've started working on mesh shaders too: https://github.com/gfx-rs/wgpu/issues/7197. If you stick to only what's compatible with WebGL2 on the desktop you'd face similar limitations.

I'm of course talking about WebGPU for web browsers, and I'd rather not use a graphics API like wgpu with uncertain support for the latest GPU features. Especially since wgpu went for the same paradigm as Vulkan, so it's not even that much better to use but you sacrifice lots of features. Also Vulkan seems to finally start fixing mistakes like render passes and pipelines, whereas WebGPU (and I guess wgpu?) went all in.

Saying WebGPU “only” adds compute shaders is crazily reductive and entirely misses how valuable an addition this is, from general-purpose compute through to the simplification of rendering pipelines and compositing passes, etc.

In any case it’s not true anyway. WebGPU also does away with the global state driver, which has always been a productivity headache/source of bugs within OpenGL, and gives better abstractions with pipelines and command buffers.

I disagree. Yes, the global state is bad, but pipelines, render passes, and worst of all static bind groups and layouts, are by no means better. Why would I need to create bind groups and bind group layouts for storage buffers? They're buffers and references to them, so let me just do the draw call and pass references to the SSBOs as arguments, rather than having to first create expensive bindings, with the need to cache them because they are somehow expensive.

Also, compute could have easily been added to WebGL, making WebGL pretty much on-par with WebGPU, just 7 years earlier. It didn't happen because WebGPU was supposed to be a better replacement, which it never became. It just became something different-but-not-better.

If you'd have to do even half of all the completely unnecessary stuff that Vulkan forces you to do in CUDA, CUDA would have never become as popular as it is.

I agree with you in that I think there's a better programming model out there. But using a buffer in a CUDA kernel is the simple case. Descriptors exist to bind general-purpose work to fixed-function hardware. It's much more complicated when we start talking about texture sampling. CUDA isn't exactly great here either. Kernel launches are more heavyweight than calling draw precisely because they're deferring some things like validation to the call site. Making descriptors explicit is verbose and annoying, but it makes resource switching more front of mind, which for workloads primarily using those fixed-function resources is a big concern. The ultimate solution here is bindless, but that obviously presents its own problems for having a nice general-purpose API, since you need to know all your resources up front. I do think CUDA is probably ideal for many users, but there are trade-offs here still.

It didn't happen because of Google; Intel did the work to make it happen.

Although I tend to bash WebGL and WebGPU for what they offer versus existing hardware, lagging a decade behind, they have a very important quality for me.

They are the only set of 3D APIs adopted in mainstream computing that were designed for managed languages, instead of yet another thing to be consumed by C.

Technically Metal is also used by a managed language, but it was designed for Objective-C/C++ first, with Swift as official binding.

Microsoft gave up on Managed DirectX and XNA, and even with all the safety talk, the DirectX team doesn't care to provide official COM bindings for C#.

Thus that leaves WebGL and WebGPU for managed-language fans, which, even if lagging behind, still offer plenty of capabilities, as PlayCanvas and ShaderToy show, including shader-language features that have not yet taken off.

D3D (up to D3D11 at least) is also a "managed" API since it uses refcounting to keep resources alive for as long as they are used by the GPU, there really isn't much difference to garbage collection.

Metal allows disabling refcounted lifetime management when recording commands, since it actually adds significant overhead, and D3D12 and Vulkan removed it entirely.

Unfortunately WebGPU potentially produces even more garbage than WebGL2, and we have yet to see how this turns out. Some drawcall-heavy code actually runs faster on WebGL2 than WebGPU, which really doesn't look great for a modern 3D API (not mainly because of GC, but every little bit of overhead counts).

The point is that those APIs were not designed with anything beyond C and C++ as consumers, and everyone else has to do their due diligence and build language bindings from scratch.

So we end up in an eternal cycle that we cannot get rid of.

Metal and the Web 3D APIs had other consumer languages in mind; you also see this in how ANARI is being designed.

Yes, every little bit of performance counts, but it cannot be that APIs get designed as if everyone is still coding in Assembly, and then it is up to whoever cares to actually build proper high-level abstractions on top; that is how we end up with Vulkan.

> but it cannot be that APIs get designed as if everyone is still coding in Assembly

Why not though? In the end an API call is an API call, and everything is compiled down to machine code no matter what the source language is.

FWIW, the high-level "OOP-isms" of the Metal API are also its biggest downside. Even simple create-option "structs" like MTLRenderPassDescriptor are fully lifetime-managed Objective-C objects where every field access is a method call - that's simply unnecessary overkill.

And ironically, the most binding-friendly API for high-level languages might still be OpenGL. Since it doesn't have any structs or 'objects with methods', only plain old function calls with primitive-type parameters, and the only use of pointers is for unstructured 'bulk data' like vertex-buffer or texture content, it maps very well even to entirely un-C-like languages. The changes that WebGL made to the GL API (for instance adding 'proper' JS objects for textures and buffers) are arguably a step back compared to native GL, where those resource objects are just opaque handles.
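
For example (a small sketch, nothing beyond standard GL calls; `vertices` is just some array of floats assumed to exist): creating and filling a buffer is only integer handles and plain function calls, which is trivial to expose through almost any language's FFI:

    // GLuint is just an integer handle; no structs, no methods, no lifetimes to manage in the binding.
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    // The only pointer is the unstructured bulk data (vertices); everything else is primitive values.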

Because not everyone doing 3D graphics is implementing AAA rendering engines on RTX cards.

The ANARI effort was born exactly because of the visualisation industry's refusal to adopt Vulkan as-is.

Looking at the ANARI spec and SDK it looks pretty much like a typical C API to me, implementing an old-school scene-graph system. What am I missing - e.g. what makes it specifically well suited for non-C languages? :)

If anything it looks more like an admission by Khronos that Vulkan wasn't such a great idea (but a 3D API that's based on scene graphs isn't either, so I'm not sure what's so great about ANARI tbh).

Python is part of ANARI's value proposition, and the standard takes this into account.

https://github.com/KhronosGroup/ANARI-SDK/tree/next_release/...

Dumb question, but is there a way to use WebGL for a desktop app without doing Electron stuff?

...OpenGL?

OpenGL is going to live a long life simply because Vulkan is way more complex and overengineered than it needs to be.

Vulkan (1.0 at least) being a badly designed API doesn't mean that OpenGL will be maintained unfortunately. Work on OpenGL pretty much stopped in 2017.

I am sadly aware, but I won't switch until the complexity is fixed. Although I did kind of switch - to CUDA - because the overengineered complexity of Vulkan drove me away. I'm neither smart nor patient enough for that. What should be a malloc is a PhD thesis in Vulkan, what should be a memcpy is another thesis, and what should be a simple kernel launch is insanity.
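
To make the comparison concrete, a sketch of the CUDA side (error handling omitted); the Vulkan comment just lists the usual steps for the same thing:

    #include <cuda_runtime.h>

    float h_buf[1024];                                  // some host data
    float* d_buf = nullptr;
    cudaMalloc((void**)&d_buf, sizeof(h_buf));          // the "malloc"
    cudaMemcpy(d_buf, h_buf, sizeof(h_buf),
               cudaMemcpyHostToDevice);                 // the "memcpy"

    // Vulkan: the same thing means roughly
    //   vkCreateBuffer -> vkGetBufferMemoryRequirements -> pick a memory type ->
    //   vkAllocateMemory -> vkBindBufferMemory -> vkMapMemory / memcpy / vkUnmapMemory
    //   (or a staging buffer plus a command-buffer copy for device-local memory).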

> I am sadly aware, but I won't switch until the complexity is fixed

It pretty much is by now if you can use Vulkan 1.4 (or even 1.3). It's a pretty lean and mean API once you've got it bootstrapped.

There's still a lot of setup code to get off the ground (device enumeration, extensions and features, swapchain setup, pipeline layouts), but beyond that Vulkan is much nicer to work with than OpenGL. Just gotta get past the initial hurdle.

It's steadily getting better as they keep walking back aspects which turned out to be needlessly complex, or only needed to be complex for the sake of older hardware that hardly anyone cares about anymore, but yeah there's still a way to go. Those simpler ways of doing things are just grafted onto the side of the existing API surface so just knowing which parts you're supposed to use is a battle in itself. Hopefully they'll eventually do a clean-slate Vulkan 2.0 to tidy up the cruft, but I'm not getting my hopes up.

Might be getting better but just yesterday I dabbled in Vulkan again, digging through the examples from https://github.com/SaschaWillems/Vulkan, and the complexity is pure insanity. What should be a simple malloc ends up being 40 lines of code, what should be a simple memcpy is another 30 lines of code, and what should be a single-line kernel launch is another 50 lines of bindings, layouts, pipelines, etc.

Tbf, a lot of the complexity (also in the official Khronos samples) is caused by insane C++ abstraction layers and 'helper frameworks' on top of the actual Vulkan C API.

Just directly talking to the C API in the tutorials/examples instead of custom wrapper code would be a lot more helpful, since you don't need to sift through the custom abstraction layers (even if it would be slightly more code).

E.g. have a look at the code snippets in here and weep in despair ;)

https://docs.vulkan.org/tutorial/latest/03_Drawing_a_triangl...

Why should these things be simple? Graphics hardware varies greatly even across generations from the same vendors. Vulkan as an API is trying to offer the most functionality to as much of this hardware as possible. That means you have a lot of dials to tweak.

Trying to obfuscate all the options goes against what Vulkan was created for. Use OpenGL 4.6/WebGPU if you want simplicity.

A simple vkCreateSystemDefaultDevice() function like on Metal, instead of requiring hundreds of lines of boilerplate, would go a long way toward making Vulkan more ergonomic, without having to give up a more verbose fallback path for the handful of Vulkan applications that need to pick a very specific device (and then probably pick the wrong one on exotic hardware configs).

And the rest of the API is full of similar examples of wasting developer time for the common code path.

Metal is a great example of providing both: a convenient 'beaten path' for 90% of use cases but still offering more verbose fallbacks when flexibility is needed.
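
To illustrate the gap (a sketch only, with error handling left out and a previously created VkInstance `instance` assumed): in Metal the whole "pick a GPU" step is one call - MTLCreateSystemDefaultDevice(), or MTL::CreateSystemDefaultDevice() via metal-cpp - while in Vulkan you enumerate and pick a physical device yourself before logical-device creation even starts:

    #include <vulkan/vulkan.h>
    #include <vector>

    // Metal (for comparison): id<MTLDevice> dev = MTLCreateSystemDefaultDevice();

    // Vulkan: enumerate physical devices and pick one (before queue-family queries,
    // VkDeviceQueueCreateInfo, feature chains and vkCreateDevice even begin).
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> gpus(count);
    vkEnumeratePhysicalDevices(instance, &count, gpus.data());

    VkPhysicalDevice chosen = gpus[0];
    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);
        if (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU) { chosen = gpu; break; }
    }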

Arguably, the original idea to provide a low-level explicit API also didn't quite work. Since GPU architectures are still vastly different (especially across desktop and mobile GPUs), a slightly more abstract API would be able to provide more wiggle room for drivers to implement an API feature more efficiently under the hood, and without requiring users to write different code paths for each GPU vendor.

Metal has the benefit of being developed by Apple for Apple devices. I'd imagine that constraint allows them to simplify code paths in a way Vulkan can't/won't. Again, Metal doesn't have to deal with supporting dozens of different hardware systems like Vulkan does.

Metal also works for external GPUs like NVIDIA or AMD though (not sure how much effort Apple still puts into those use cases, but Metal itself is flexible enough to deal with non-Apple GPUs).

CUDA can be complex if you want, but it offers more powerful functionality as an option that you can choose, rather than mandating maximum complexity right from the start. This is where Vulkan absolutely fails. It makes everything maximum effort, rather than making the common things easy.

I think CUDA and Vulkan are two completely different beasts, so I don't believe this is a good comparison. One is for GPGPU, and the other is a graphics API with compute shaders.

Also, CUDA is targeting a single vendor, whereas Vulkan is targeting as many platforms as possible.

The point still stands: Vulkan chose to go all-in on mandatory maximum complexity, instead of providing less-complex routes for the common cases. Several extensions in recent years have reduced that burden because it was recognized that this is an actual issue, and it demonstrated that less complexity would have been possible right from the start. Still a long way to go, though.

Yes; a recent example is the board being released by Qualcomm after acquiring Arduino.

Between OpenGL ES 3.1 and Vulkan 1.1, I would certainly go with OpenGL ES.

Oh I didn't know the new Arduino board had a GPU. Do we know what kind?

I don't doubt OpenGL will live forever. But Vulkan 1.3/1.4 is not as bad as people make it out to be.

So I've been told, so I'm trying to take another look at it. At least the examples at https://github.com/SaschaWillems/Vulkan, which are probably not 1.3/1.4 yet except for the trianglevulkan13 example, are pure insanity. Coming from CUDA, I can't fathom why what should be simple things like malloc, memcpy and kernel launches end up needing 300x as many lines.

In part, because Vulkan is a graphics API, not a GPGPU framework like CUDA. They're entirely different beasts.

Vulkan is also trying to expose as many options as possible so as to be extensible on as many platforms as possible. Also, Vulkan isn't even trying to make it more complex than it needs to be--this is just how complex graphics programming is, period. The only reason people think Vulkan/DX12 are overly complicated is because they're used to APIs where the majority of the heavy lifting comes from the drivers.

No, it is overly complex for modern hardware (unless you use shader objects). Vulkan forces you to statically specify a ton of state that's actually dynamic on modern GPUs. You could cut things down a ton with a new API. Ofc you'd have to require a certain level of hardware support, but imo that will become natural going forward.

Actually, it would be kinda neat to see an API that's fully designed assuming a coherent, cached, shared memory space between device and host. Metal I guess is closest.

> Vulkan forces you to statically specify a ton of state that's actually dynamic on modern GPUs.

Desktop GPUs. Tiling GPUs are still in use on mobile and you can't use the tiling hardware effectively without baking the description into pipelines.

> You could cut things down a ton with a new API.

VK_KHR_dynamic_rendering is what you are looking for
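
Roughly what that looks like (a sketch; `cmd`, `swapchainImageView`, `width` and `height` are assumed to exist, and synchronization is omitted): no VkRenderPass or VkFramebuffer objects, you just describe the attachments at record time:

    VkRenderingAttachmentInfo color{};
    color.sType       = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
    color.imageView   = swapchainImageView;
    color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp     = VK_ATTACHMENT_STORE_OP_STORE;
    color.clearValue.color = {{0.f, 0.f, 0.f, 1.f}};

    VkRenderingInfo info{};
    info.sType                = VK_STRUCTURE_TYPE_RENDERING_INFO;
    info.renderArea           = { {0, 0}, {width, height} };
    info.layerCount           = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments    = &color;

    vkCmdBeginRendering(cmd, &info);   // vkCmdBeginRenderingKHR with the extension
    // ... bind pipeline, draw ...
    vkCmdEndRendering(cmd);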

> Actually, it would be kinda neat to see an API that's fully designed assuming a coherent, cached, shared memory space between device and host.

You can just ask for exactly that--even on Vulkan. If you don't want to support computer systems that don't support RBAR, you can do that.
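
Concretely, that comes down to picking a memory type that advertises all three flags (a sketch assuming a `physicalDevice` handle; on desktop GPUs with resizable BAR there is usually a DEVICE_LOCAL heap that is also host-visible and coherent):

    VkPhysicalDeviceMemoryProperties mem{};
    vkGetPhysicalDeviceMemoryProperties(physicalDevice, &mem);

    const VkMemoryPropertyFlags wanted =
        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT |
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;

    int typeIndex = -1;
    for (uint32_t i = 0; i < mem.memoryTypeCount; ++i) {
        if ((mem.memoryTypes[i].propertyFlags & wanted) == wanted) { typeIndex = i; break; }
    }
    // typeIndex >= 0 means you can vkMapMemory such an allocation once and write to it
    // directly from the CPU while the GPU reads it - no staging copies needed.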

>Ofc you'd have to require a certain level of hardware support

Have you used Vulkan? Specifying required hardware support for your physical device is literally one of the first things you do when setting up Vulkan.

> In part, because Vulkan is a graphics API, not a GPGPU framework like CUDA. They're entirely different beasts.

Tbf, the distinction between rendering and compute has been disappearing for quite a while now, apart from texture sampling there isn't much reason to have hardware that's dedicated for rendering tasks on GPUs, and when there's hardly any dedicated rendering hardware on GPUs, why still have dedicated rendering APIs?

And, mesh shading in particular is basically "what if we just deleted all that vertex specification crap and made you write a compute shader"

Note that it's not always better. The task shaders are quite hardware specific and it makes sense to ship defaults inside the driver.

Yes, I predict eventually we will be back at software rendering, with the difference that now it will be hardware accelerated due to running on compute hardware.

This is not a statement on the hardware, it's a statement on what the APIs are trying to achieve. In this regard, they are remarkably different.

The point is that a (self-declared) low-level API like Vulkan should just be a thin interface to GPU hardware features. For instance, the entire machinery for defining a vertex layout in the PSO is pretty much obsolete today; vertex pulling is much more flexible and requires less API surface. And this is just one example of the "disappearing 3D API".
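
To make that concrete, a sketch of what vertex pulling looks like on the API side (the GLSL is only shown as a comment; descriptor setup is omitted): the pipeline's vertex input state simply stays empty and the shader fetches vertices itself from a storage buffer:

    // Pipeline side: no vertex bindings or attributes at all.
    VkPipelineVertexInputStateCreateInfo vertexInput{};
    vertexInput.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
    vertexInput.vertexBindingDescriptionCount   = 0;
    vertexInput.vertexAttributeDescriptionCount = 0;

    // Shader side (GLSL), pulling vertices out of an SSBO by index:
    //   layout(std430, binding = 0) readonly buffer Vertices { float data[]; };
    //   void main() {
    //       vec3 pos = vec3(data[3*gl_VertexIndex + 0],
    //                       data[3*gl_VertexIndex + 1],
    //                       data[3*gl_VertexIndex + 2]);
    //       gl_Position = vec4(pos, 1.0);
    //   }
    // The vertex buffer becomes an ordinary storage buffer; vkCmdBindVertexBuffers is never called.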

More traditional rendering APIs can then be built on top of such a "compute-first" API, but that shouldn't be the job of Khronos.

Except that you also need to have it available on target systems, good luck on Android.

I'm fairly sure Vulkan runs just fine on Android? You won't have access to dynamic rendering, so you'll have to manage renderpasses, but I don't think you're going to have issues running Vulkan on a modern Android device.

Someone should tell Qualcomm then: https://www.qualcomm.com/developer/blog/2024/11/introducing-...

I believe the llama.cpp Vulkan backend is inoperable on Adreno GPUs

That's OpenCL, not OpenGL.

It's super frequently recommended as a starting point for learners because it's high level enough to get something on the screen in ten lines of code but low level enough to teach you the fundamentals of how the rendering pipeline works (even though GL's abstraction is rather anachronistic and differs from how modern GPUs actually work). Vulkan (requiring literally a thousand LoC worth of initialization to render a single triangle) is emphatically not any sort of replacement for that use case (and honestly not for 95% of hobbyist/indie use cases either unless you use a high-level abstraction on top of it).
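
For the sake of the "ten lines" claim, a minimal sketch (assuming GLFW and an old-style compatibility context; legacy glBegin/glEnd is exactly the anachronistic-but-teachable abstraction mentioned above):

    #include <GLFW/glfw3.h>

    int main() {
        glfwInit();
        GLFWwindow* win = glfwCreateWindow(640, 480, "triangle", nullptr, nullptr);
        glfwMakeContextCurrent(win);
        while (!glfwWindowShouldClose(win)) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_TRIANGLES);                 // fixed-function pipeline, compatibility profile only
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.0f,  0.5f);
            glEnd();
            glfwSwapBuffers(win);
            glfwPollEvents();
        }
    }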

The worst thing about OpenGL is probably the hilariously non-typesafe C API.

I believe the modern OpenGL replacement would be WebGPU, which is not just made for browsers, and which isn't as low-level as Vulkan or DirectX 12.

I don't think any major platform that ever supported OpenGL or OpenGL ES--including desktops, smartphones/tablets, and web browsers--has actually removed it yet. Apple will probably be the first to pull the plug, but they've only aggressively deprecated it so far.

How exactly is it aggressive? I'm selling games using OpenGL on iOS, iPadOS, tvOS and macOS, and it works with all of their latest hardware. I'm not getting a warning or any sign from them that they will remove support.

It was my understanding (which could definitely be wrong) that their OpenGL support is both behind the times--which is impressive since OpenGL has received no major new features AFAIK in the past decade, the topic of this HN post notwithstanding--and won't even get any bugfixes.

The last supported version they ship doesn't support compute, which is a pretty big limitation.