This sounds pretty cool, but can anyone dumb this down for me? Mesh shaders are good because they are more efficient than the general purpose triangle shaders? Or is this something else entirely?

It's essentially a replacement for vertex shaders that maps more closely to how GPUs actually process big, complex triangle meshes: as small packets of vertices ('meshlets') worked on in parallel. The splitting of a complex mesh into those small packets happens in an offline asset-pipeline job, instead of relying on 'hardware magic' like vertex caches at draw time.
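
To make that concrete, those 'small packets of vertices' (usually called meshlets) end up as plain little records in GPU buffers. A rough sketch of what such a record could look like in GLSL, with field names and limits made up for illustration:

    // Hypothetical meshlet record as it might land in a storage buffer
    // after the offline split; field names and limits are invented here.
    struct Meshlet {
        uint vertexOffset;    // first entry in the meshlet's vertex index list
        uint vertexCount;     // e.g. at most 64 vertices
        uint triangleOffset;  // first entry in the packed triangle list
        uint triangleCount;   // e.g. at most 124 triangles
    };

    layout(std430, binding = 0) readonly buffer Meshlets {
        Meshlet meshlets[];
    };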

AFAIK mesh shaders also get rid of (the ever troublesome) geometry shaders and hull shaders, but don't quote me on that :)

The vast majority of traditional triangle-rendering use cases should only see minimal performance improvements though; it's very much the definition of 'diminishing returns'.

It's definitely more straightforward and 'elegant' though.

PS: this is a pretty good introduction I think https://gpuopen.com/learn/mesh_shaders/mesh_shaders-from_ver...

Oh, awesome! Yeah, that's a great introduction. Seems like it introduces a new abstraction that maps a single mesh to much smaller groups of vertices, so you can do bounding-volume culling and that sort of thing at a more granular level, right in the shader code. Very cool stuff! Thanks for the info.

Fundamentally, for OpenGL, "getting shaders" meant moving from a fixed, built-in set of graphics effects to giving developers custom control over the graphics pipeline.

Imagine you hired a robot artist to draw.

Before Shaders (The Old Way): The robot had a fixed set of instructions. You could only tell it "draw a red circle here" or "draw a blue square there." You could change the colors and basic shapes, but you couldn't change how it drew them. This was called the fixed-function pipeline.

After Shaders (The New Way): You can now give the robot custom, programmable instructions, or shaders. You can write little programs that tell it exactly how to draw things.

The Two Original Shaders: This programmability was primarily split into two types of shaders:

Vertex Shader: This program runs for every single point (vertex) of a 3D model. Its job is to figure out where that point should be positioned on your 2D screen. You could now program custom effects like making a character model jiggle or a flag wave in the wind.
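
A minimal sketch of that 'flag waving in the wind' idea as a GLSL vertex shader; the attribute and uniform names here are made up:

    #version 330 core
    // Each vertex is displaced by a sine wave before being projected,
    // giving a cheap "waving" motion.
    layout(location = 0) in vec3 aPosition;

    uniform mat4 uMvp;    // combined model-view-projection matrix
    uniform float uTime;  // animation time in seconds

    void main() {
        vec3 p = aPosition;
        p.y += 0.1 * sin(p.x * 4.0 + uTime * 3.0);
        gl_Position = uMvp * vec4(p, 1.0);
    }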

Fragment (or Pixel) Shader: After the shape is positioned, this program runs for every single pixel inside that shape. Its job is to decide the final color of that pixel. This is where you program complex lighting, shadows, reflections, and surface textures like wood grain or rust.
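
And a matching fragment shader sketch, doing per-pixel diffuse lighting on top of a texture (again, all names are made up):

    #version 330 core
    // Runs once per covered pixel: samples a texture and applies
    // simple diffuse lighting.
    in vec3 vNormal;    // interpolated surface normal
    in vec2 vTexCoord;  // interpolated texture coordinate

    uniform sampler2D uAlbedo;  // e.g. a wood-grain or rust texture
    uniform vec3 uLightDir;     // direction towards the light, normalized

    out vec4 fragColor;

    void main() {
        float diffuse = max(dot(normalize(vNormal), uLightDir), 0.0);
        fragColor = vec4(texture(uAlbedo, vTexCoord).rgb * diffuse, 1.0);
    }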

What about bump mapping, where's that done? That's a texture that changes the geometry.

That's usually a job for the fragment shader.

It doesn't change the geometry, it just changes the lighting to give that appearance.
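
Roughly like this in a fragment shader: the mesh is never displaced, only the normal used for lighting gets perturbed by the texture. This sketch assumes a tangent-space normal map and a TBN matrix handed over from the vertex shader (made-up names):

    #version 330 core
    in mat3 vTBN;       // tangent/bitangent/normal basis from the vertex shader
    in vec2 vTexCoord;

    uniform sampler2D uNormalMap;  // tangent-space normal (bump) map
    uniform vec3 uLightDir;        // direction towards the light, normalized

    out vec4 fragColor;

    void main() {
        // Unpack the stored normal from [0,1] to [-1,1] and rotate it into
        // world space; the geometry itself stays exactly where it was.
        vec3 n = normalize(vTBN * (texture(uNormalMap, vTexCoord).xyz * 2.0 - 1.0));
        float diffuse = max(dot(n, uLightDir), 0.0);
        fragColor = vec4(vec3(diffuse), 1.0);
    }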

As far as I understand, mesh shaders allow you to generate arbitrary geometry on the GPU. That wasn't possible with the traditional vertex pipeline, which only allowed specialized mesh transformations like tessellation.

For example, hair meshes (lots of small strands) are usually generated on the CPU from some basic parameters (basic hairstyle shape, hair color, strand density, curliness, fuzziness etc) and then the generated mesh (which could be quite large) is loaded onto the GPU. But the GPU could do that itself using mesh shaders, saving a lot of memory bandwidth. Here is a paper about this idea: https://www.cemyuksel.com/research/hairmesh_rendering/Real-T...
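
As a very rough illustration of the idea, here's what a mesh shader that emits one ribbon-like strand per workgroup could look like with GL_EXT_mesh_shader. The strand math and parameter names are entirely made up, and a real version would spread the work over the workgroup's threads instead of looping in a single invocation:

    #version 460
    #extension GL_EXT_mesh_shader : require

    layout(local_size_x = 1) in;
    layout(triangles, max_vertices = 64, max_primitives = 62) out;

    layout(std140, binding = 0) uniform Params {
        mat4 uMvp;
        float uCurl;    // how strongly the strand spirals
        float uLength;  // strand length
    };

    void main() {
        const uint segments = 31u;
        // Declare how many vertices and triangles this workgroup emits.
        SetMeshOutputsEXT((segments + 1u) * 2u, segments * 2u);

        for (uint i = 0u; i <= segments; ++i) {
            float t = float(i) / float(segments);
            // Fake curl: spiral the strand sideways as it grows upwards.
            vec3 c = vec3(0.05 * sin(t * uCurl), t * uLength, 0.05 * cos(t * uCurl));
            gl_MeshVerticesEXT[i * 2u + 0u].gl_Position = uMvp * vec4(c + vec3(-0.01, 0.0, 0.0), 1.0);
            gl_MeshVerticesEXT[i * 2u + 1u].gl_Position = uMvp * vec4(c + vec3( 0.01, 0.0, 0.0), 1.0);
            if (i < segments) {
                uint v = i * 2u;
                gl_PrimitiveTriangleIndicesEXT[i * 2u + 0u] = uvec3(v, v + 1u, v + 2u);
                gl_PrimitiveTriangleIndicesEXT[i * 2u + 1u] = uvec3(v + 1u, v + 3u, v + 2u);
            }
        }
    }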

However, the main application of mesh shaders currently is more restricted: meshes are chunked into patches (meshlets), which allows for more fine-grained culling of occluded geometry.
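
One common way to do that is a task (a.k.a. amplification) shader in front of the mesh shader: one workgroup per meshlet tests a bounding volume and only launches mesh-shader work for the survivors. The sketch below does frustum culling for brevity (an occlusion test would slot into the same place), and the buffer layout and names are made up:

    #version 460
    #extension GL_EXT_mesh_shader : require

    layout(local_size_x = 1) in;

    struct MeshletBounds { vec4 sphere; };  // xyz = center, w = radius
    layout(std430, binding = 1) readonly buffer Bounds { MeshletBounds bounds[]; };

    layout(std140, binding = 0) uniform Cull { vec4 uFrustumPlanes[6]; };

    struct TaskPayload { uint meshletIndex; };
    taskPayloadSharedEXT TaskPayload payload;

    void main() {
        uint id = gl_WorkGroupID.x;
        vec4 s = bounds[id].sphere;

        // Keep the meshlet only if its bounding sphere is not fully
        // outside any of the six frustum planes.
        bool visible = true;
        for (int i = 0; i < 6; ++i)
            visible = visible && (dot(uFrustumPlanes[i].xyz, s.xyz) + uFrustumPlanes[i].w > -s.w);

        payload.meshletIndex = id;
        // Launch one mesh-shader workgroup for visible meshlets, none otherwise.
        EmitMeshTasksEXT(visible ? 1u : 0u, 1u, 1u);
    }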

Though most of these things, I believe, can already be done with compute shaders, although perhaps not as elegantly, or with some overhead.