As far as I understand, mesh shaders allow you to generate arbitrary geometry on the GPU. That wasn't possible with the traditional vertex pipeline, which only allowed specialized mesh transformations like tessellation.
For example, hair meshes (lots of small strands) are usually generated on the CPU from some basic parameters (basic hairstyle shape, hair color, strand density, curliness, fuzziness, etc.) and then the generated mesh (which could be quite large) is loaded onto the GPU. But the GPU could do that itself using mesh shaders, saving a lot of memory bandwidth. Here is a paper about this idea: https://www.cemyuksel.com/research/hairmesh_rendering/Real-T...
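To make the idea concrete, here is a minimal sketch of that parameter-to-geometry expansion, not the paper's actual method. All the names and the curl formula are made up for illustration; in a real renderer this logic would live in a mesh shader written in GLSL or HLSL, but CUDA shows the same per-strand expansion:

    // Hypothetical per-strand parameters; a few bytes expand into
    // many vertices on the GPU, which is the bandwidth win.
    #include <cuda_runtime.h>
    #include <math.h>

    struct StrandParams {
        float3 root;      // root position on the scalp mesh
        float3 dir;       // growth direction (normalized)
        float  length;    // total strand length
        float  curliness; // curl amplitude
    };

    // One thread per strand: expand the parameters into segments+1 vertices.
    __global__ void generateStrands(const StrandParams* params,
                                    float3* outVertices,
                                    int numStrands, int segments)
    {
        int s = blockIdx.x * blockDim.x + threadIdx.x;
        if (s >= numStrands) return;

        StrandParams p = params[s];
        float step = p.length / segments;

        for (int i = 0; i <= segments; ++i) {
            float t = (float)i * step;
            // Follow the growth direction, adding a simple helical curl.
            float3 v;
            v.x = p.root.x + p.dir.x * t + p.curliness * cosf(t * 10.0f);
            v.y = p.root.y + p.dir.y * t;
            v.z = p.root.z + p.dir.z * t + p.curliness * sinf(t * 10.0f);
            outVertices[s * (segments + 1) + i] = v;
        }
    }

The point is the ratio: the input is a handful of floats per strand, while the output is hundreds of vertices per strand that never have to cross the PCIe bus.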
However, the main application of mesh shaders currently is more restricted: meshes are chunked into small patches (meshlets), which allows occluded geometry to be culled at a much finer granularity.
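Here is a rough sketch of what that per-meshlet culling looks like, again in CUDA with a hypothetical Meshlet layout (real pipelines do this test in the task/amplification stage, and often add occlusion and backface-cone tests on top of the frustum test shown here):

    #include <cuda_runtime.h>

    struct Meshlet {
        float3 center;   // bounding-sphere center (world space)
        float  radius;   // bounding-sphere radius
        // ... offsets into the meshlet vertex/index buffers would go here
    };

    struct Plane { float3 n; float d; };  // plane: dot(n, x) + d = 0

    // One thread per meshlet: test its bounding sphere against the six
    // frustum planes and compact the indices of the survivors.
    __global__ void cullMeshlets(const Meshlet* meshlets, int numMeshlets,
                                 const Plane* frustum,       // 6 planes
                                 unsigned int* visibleIndices,
                                 unsigned int* visibleCount)
    {
        int m = blockIdx.x * blockDim.x + threadIdx.x;
        if (m >= numMeshlets) return;

        Meshlet ml = meshlets[m];
        for (int i = 0; i < 6; ++i) {
            float dist = frustum[i].n.x * ml.center.x
                       + frustum[i].n.y * ml.center.y
                       + frustum[i].n.z * ml.center.z
                       + frustum[i].d;
            if (dist < -ml.radius) return;  // fully outside: culled
        }
        // Visible: append this meshlet's index to the compacted list.
        unsigned int slot = atomicAdd(visibleCount, 1u);
        visibleIndices[slot] = m;
    }

Because each meshlet is only a few dozen triangles, culling at this granularity throws away far more hidden work than whole-object culling can.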
Though most of these things, I believe, can already be done with compute shaders, although perhaps not as elegantly, or with some overhead: a compute pass has to write the generated vertices or the surviving meshlet list back to memory and feed them through an indirect draw, whereas a mesh shader feeds the rasterizer directly.