Mainly by having view-dependent material reflectance (diffuse colour and specular highlights), i.e. the appearance changes with the camera angle.
That is, the colour (and possibly other surface properties) varies with the viewing direction, and that variation is (or at least can be) encoded spherically, as spherical harmonics.
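For the curious, here's a minimal sketch of what that evaluation looks like, using just the degree-0 and degree-1 real SH bands (3DGS typically carries up to degree 3 per colour channel; the coefficient values below are made up for illustration):

```python
# Real SH basis constants for degree 0 and 1.
SH_C0 = 0.28209479177387814  # Y_0^0
SH_C1 = 0.4886025119029199   # |Y_1^m| prefactor

def sh_color(coeffs, direction):
    """Evaluate view-dependent colour from per-channel SH coefficients.

    coeffs: 4 RGB triples (one degree-0 band + three degree-1 terms).
    direction: unit view vector (x, y, z) from splat to camera.
    """
    x, y, z = direction
    basis = [SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x]
    return tuple(
        sum(b * c[ch] for b, c in zip(basis, coeffs))
        for ch in range(3)
    )

# Hypothetical coefficients: a grey base colour that picks up a red
# tint when viewed from the +x side.
coeffs = [
    (1.0, 1.0, 1.0),   # degree 0: base colour
    (0.0, 0.0, 0.0),   # degree 1, y term
    (0.0, 0.0, 0.0),   # degree 1, z term
    (-0.3, 0.0, 0.0),  # degree 1, x term
]

front = sh_color(coeffs, (1.0, 0.0, 0.0))  # viewed along +x: redder
side = sh_color(coeffs, (0.0, 1.0, 0.0))   # viewed along +y: neutral grey
```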
The width/size of each point/splat is also not just a radius: it can be anisotropic and have an orientation in space, so again, its rendered footprint varies with the direction it is viewed from.
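A rough sketch of how such an oriented, anisotropic splat can be represented: a covariance matrix built from a per-splat scale and rotation, Sigma = R S S^T R^T, which is how 3DGS parameterises it (the scale and quaternion values here are made up):

```python
import math

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def covariance(scale, quat):
    """Sigma = R S S^T R^T: an oriented, anisotropic 3D Gaussian."""
    R = quat_to_rot(quat)
    # M = R * S (scale the columns of R by the per-axis extents)
    M = [[R[i][j] * scale[j] for j in range(3)] for i in range(3)]
    # Sigma = M M^T
    return [[sum(M[i][k] * M[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

# A splat stretched along x, rotated 90 degrees about z: its long
# axis ends up pointing along y.
h = math.pi / 4
cov = covariance((2.0, 0.5, 0.5), (math.cos(h), 0.0, 0.0, math.sin(h)))
```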
It has been mildly amusing watching the AI crowd learn about point clouds though, and use things the VFX industry was using in the early 00s (spherical harmonic encoded materials - we had light-dependent as well for relighting - points with direction and anisotropic widths, etc)...
> spherical harmonic encoded materials
This in particular has been hilarious for the exact reason you mentioned. For anybody curious, here's a paper from 2008 about this technique:
https://www.ppsloan.org/publications/StupidSH36.pdf
Ah, so 3DGS is a Neural method?
Not quite: 3DGS is fitted to video or a series of photographs by gradient-based optimisation (the same machinery used to train neural networks, though the scene representation itself is an explicit set of Gaussians, not a network). Rendering 3DGS uses no neural networks as far as I know.
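To illustrate, the core of the rasteriser for a single pixel is roughly just depth sorting plus front-to-back alpha blending (toy data; a real renderer would also weight each splat's alpha by its projected 2D Gaussian falloff at the pixel):

```python
def composite(splats):
    """Front-to-back alpha blending of depth-sorted splat contributions.

    splats: list of (rgb, alpha, depth) tuples covering one pixel.
    """
    splats = sorted(splats, key=lambda s: s[2])  # nearest first
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still passing through
    for rgb, alpha, _depth in splats:
        w = alpha * transmittance
        for ch in range(3):
            color[ch] += w * rgb[ch]
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # early exit once effectively opaque
            break
    return color, transmittance

# Two overlapping splats covering the same pixel.
pixel, t = composite([
    ((1.0, 0.0, 0.0), 0.6, 2.0),  # red splat, nearer
    ((0.0, 0.0, 1.0), 0.8, 5.0),  # blue splat, farther
])
```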