Practically, what differentiates a splat from standard photogrammetry is that it can capture things like reflections, transparency, and skies. A standard photogrammetry scan of (for example) a mirror would mistake the reflection in the mirror for a space behind the mirror. A scan of a sheet of glass would suffer similarly.
The problem is that any tool or process that converts splats into regular geometry produces plain old meshes and RGB textures, thus losing that advantage. For this reason splats are (in my opinion) a tool in search of an application. Doubtless some here will disagree.
I've never been quite clear on how splats encode specular (directional) effects. Are they made to be visible only from a narrow range of view angles (so you see a different splat at different angles), or do they encode the specular behavior internally somehow?
This is a good question. As I understand it, the only material parameters a splat carries are color and opacity, so the first of your two options would be the correct one.
You can use spherical harmonics to encode a few extra coefficients alongside the base RGB of each splat, so that the render-time view direction can be used to compute an output RGB. A "reflection" in 3DGS isn't a light ray being traced off a surface; it's a way of saying "viewed from this angle, the splat takes the object's base color, while from that angle it may be white because the input image had glare."
This ends up interpolating very effectively between known viewpoints, but extrapolating beyond them is hit-or-miss.
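To make that concrete, here's a minimal Python sketch of the forward evaluation, assuming the degree-2 real SH basis whose constants appear in the reference 3DGS renderer; the function name splat_rgb and the (3, 9) coefficient layout are my own illustration, not any library's API:

    import numpy as np

    # Real spherical-harmonic basis constants (degrees 0-2); these values
    # match the ones used in the reference 3DGS SH evaluation.
    C0 = 0.28209479177387814
    C1 = 0.4886025119029199
    C2 = (1.0925484305920792, -1.0925484305920792,
          0.31539156525252005, -1.0925484305920792, 0.5462742152960396)

    def splat_rgb(sh, view_dir):
        """sh: (3, 9) array, 9 SH coefficients per RGB channel (degree <= 2).
        view_dir: unit vector from the camera center to the splat center."""
        x, y, z = view_dir
        rgb = C0 * sh[:, 0]                                  # view-independent base
        rgb += -C1 * y * sh[:, 1] + C1 * z * sh[:, 2] - C1 * x * sh[:, 3]  # degree 1
        rgb += (C2[0] * x * y * sh[:, 4]                     # degree 2
                + C2[1] * y * z * sh[:, 5]
                + C2[2] * (2 * z * z - x * x - y * y) * sh[:, 6]
                + C2[3] * x * z * sh[:, 7]
                + C2[4] * (x * x - y * y) * sh[:, 8])
        return np.maximum(rgb + 0.5, 0.0)  # 3DGS offsets by 0.5 and clamps negatives

Degree 0 alone gives a flat color; the higher-degree terms are what let the same splat return different RGB values as the view direction changes.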
Because you have source imagery and colors (and therefore specular and reflective details) from different angles, you can add a view-location and view-direction component to the material/color function: the material is not just f(point in 3D space), it's f(pt, view loc, view dir). That function is made differentiable, so you get viewpoint-dependent colors for 'free'.
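Here's a toy sketch of that optimization in Python/PyTorch; this is a hypothetical single-splat setup for illustration, not the actual 3DGS training loop (which also optimizes positions, covariances, and opacities):

    import torch

    SH_C0, SH_C1 = 0.28209479177387814, 0.4886025119029199

    def sh_color(sh, d):
        # Degree-1 real SH evaluated at unit view direction d = (x, y, z).
        x, y, z = d
        return (SH_C0 * sh[:, 0]
                - SH_C1 * y * sh[:, 1]
                + SH_C1 * z * sh[:, 2]
                - SH_C1 * x * sh[:, 3])

    # Toy observations: the splat looks red head-on but white (glare) from the side.
    views = torch.tensor([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
    colors = torch.tensor([[0.8, 0.1, 0.1], [1.0, 1.0, 1.0]])

    sh = torch.zeros(3, 4, requires_grad=True)   # 3 channels x 4 coefficients
    opt = torch.optim.Adam([sh], lr=0.05)
    for _ in range(500):
        opt.zero_grad()
        pred = torch.stack([sh_color(sh, d) for d in views])
        loss = torch.nn.functional.mse_loss(pred, colors)  # photometric loss
        loss.backward()                                    # autodiff does the work
        opt.step()

After fitting, sh_color returns red for the head-on direction and white for the side view: the view dependence falls out of gradient descent, with no explicit reflection model anywhere.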