Can anyone detail the use case for gaussian splatting to me? What are we trying to solve, or what direction are we trying to head in?

I'm more familiar with traditional 3D graphics, so this new wave of papers around gaussian splatting lies outside my wheelhouse.

Gaussian splatting models the scene as a bunch of normal distributions (fuzzy squished spheres) instead of triangles, then renders those with billboarded triangles. It has advantages (simpler representation, easy to automatically capture from a scan) and disadvantages (not what the hardware is designed for, not watertight). The biggest disadvantage is that most graphics techniques need to be reinvented for it, and it's not clear what the full list of advantages and disadvantages will be until people have done all of those. But that big disadvantage is also a great reason to make tons of papers.
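
If it helps to make that concrete, here's a rough numpy sketch of the render step. Everything here is my own simplification (splats are given directly as 2D Gaussians in pixel space; the real pipeline stores 3D covariances built from a rotation and per-axis scales, projects them each frame, and sorts by depth on the GPU):

```python
import numpy as np

def render_splats(means, covs, colors, opacities, width, height):
    """Alpha-composite 2D Gaussian splats into an RGB image, back to front.

    means:     (N, 2) splat centers in pixel coordinates (assumed pre-projected)
    covs:      (N, 2, 2) 2D covariances (this is the "fuzzy squished" shape)
    colors:    (N, 3) RGB in [0, 1]
    opacities: (N,) peak alpha of each splat
    """
    ys, xs = np.mgrid[0:height, 0:width]
    pix = np.stack([xs, ys], axis=-1).astype(float)   # (H, W, 2) pixel grid
    img = np.zeros((height, width, 3))

    # Assume the splats arrive sorted far-to-near; real renderers re-sort
    # by camera-space depth every frame.
    for mu, cov, rgb, op in zip(means, covs, colors, opacities):
        d = pix - mu                                   # offset from splat center
        m = np.einsum('hwi,ij,hwj->hw', d, np.linalg.inv(cov), d)
        alpha = (op * np.exp(-0.5 * m))[..., None]     # Gaussian falloff
        img = img * (1 - alpha) + rgb * alpha          # "over" compositing
    return img

# Two overlapping splats: a stretched red one behind, a round blue one in front.
img = render_splats(
    means=np.array([[32.0, 32.0], [40.0, 30.0]]),
    covs=np.array([[[60.0, 0.0], [0.0, 10.0]],
                   [[15.0, 0.0], [0.0, 15.0]]]),
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
    opacities=np.array([0.9, 0.8]),
    width=64, height=64,
)
```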

what does "not watertight" mean?

They don't form contiguous surfaces. GPUs are also optimized for sets of triangles that share vertices (a vertex is typically shared by four to six triangles), whereas these billboards share no vertices at all (see the sketch below).

"Watertight" is a actially a stronger criterion, which requires not only a contigous surface, but one which encloses a volume without any gaps, but "not watertight" suffices for this.

Interesting thank you!

AFAIK Gaussian splatting is closely related to NeRFs (neural radiance fields): both tackle the job of turning multiple 2D images into a 3D scene. I actually tried doing something like this recently for drone navigation (using older point-cloud methods), but no luck so far.

Can anyone reading this suggest something for scanning room geometry in real time using only a camera (with access to a beefy NVIDIA computer if needed), for drone navigation purposes?

Have you tried ORB-SLAM3?

I get the impression the goal is to save 3D environments with baked lighting without having to run raytracing, at a level above explicitly defined meshes with faces covered by 2D textures, which can't represent fog, translucency, reflection glints, etc. without a separate lighting pass. Basically, it's trying to get the look of raytracing without doing raytracing.
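
FWIW, the 3D Gaussian Splatting paper handles the view-dependent part of that (the reflection glints) by baking low-order spherical harmonics into each splat's color, so the stored color changes with viewing direction without any lighting pass. A hedged degree-1 sketch (the constants are the standard SH basis values; the coefficient layout is illustrative, not a drop-in for any codebase):

```python
import numpy as np

SH_C0 = 0.28209479177387814   # Y_0^0 basis constant, 1 / (2*sqrt(pi))
SH_C1 = 0.4886025119029199    # Y_1^m basis constant, sqrt(3 / (4*pi))

def splat_color(sh_coeffs, view_dir):
    """Evaluate a splat's view-dependent RGB from degree-0/1 SH coefficients.

    sh_coeffs: (4, 3) RGB coefficients for bands 0 and 1
    view_dir:  unit-length (3,) direction from the camera to the splat
    """
    x, y, z = view_dir
    color = SH_C0 * sh_coeffs[0]   # constant (view-independent) term
    color += SH_C1 * (-y * sh_coeffs[1] + z * sh_coeffs[2] - x * sh_coeffs[3])
    return np.clip(color + 0.5, 0.0, 1.0)   # +0.5 offset, as in common 3DGS code

# The same splat shows different colors from different directions.
coeffs = np.random.default_rng(0).normal(scale=0.2, size=(4, 3))
print(splat_color(coeffs, np.array([0.0, 0.0, 1.0])))
print(splat_color(coeffs, np.array([1.0, 0.0, 0.0])))
```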

I would say they're an attempt to extend the concept of a photograph into truly three dimensions (not just a 2D bitmap with a depth layer).