I've always wondered to what extent these culling techniques still work with ray tracing. A reflective surface can bring a bunch of otherwise-offscreen geometry into the scene. It's part of what makes screen-space reflections look so bad sometimes: they can't reflect what's not on screen.
I remember a Tiny Glade talk where they explained that they did reflections by first doing a screen-space pass, and then a ray-tracing pass for all the pixels that didn't get a "hit" in screen space (i.e., the reflection needed to show something offscreen for that pixel).
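To make that concrete, here's a rough sketch of the hybrid scheme (not Tiny Glade's actual code, just the shape of it): try a cheap screen-space march first, and only fire an expensive world-space ray for pixels where the screen-space trace runs off the screen. All function names and colors here are made up for illustration.

```cpp
#include <optional>

struct Color { float r, g, b; };

// Screen-space trace: returns a color only if the reflected ray resolves
// to something actually visible on screen. Stubbed out here: pretend any
// sample landing inside [0,1]^2 UV space is a screen-space hit.
std::optional<Color> traceScreenSpace(float u, float v) {
    if (u >= 0.f && u <= 1.f && v >= 0.f && v <= 1.f)
        return Color{0.2f, 0.4f, 0.8f};
    return std::nullopt;  // reflection points offscreen: no data available
}

// Expensive fallback: a real implementation would traverse a TLAS/BLAS
// or software BVH against world geometry.
Color traceWorldSpace(float, float) {
    return Color{0.9f, 0.9f, 0.9f};
}

Color shadeReflection(float u, float v) {
    if (auto hit = traceScreenSpace(u, v))
        return *hit;               // cheap path: most pixels end here
    return traceWorldSpace(u, v);  // ray-traced fallback for the misses
}
```

The nice property is that the expensive path only runs for the subset of pixels where screen-space data genuinely doesn't exist, rather than for the whole frame.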
On top of this, with reflections you often end up with a slightly different representation of your world. RTX uses a TLAS and BLAS for ray traversal, while your own tracing can be based on your own BVH acceleration structure or an SDF form of the world. So you're right, you can't properly cull the world, but you can have an optimized version by having:

1. A good acceleration structure (hardware TLAS/BLAS, or a software BVH/octree; IIRC octrees are cache-unfriendly)
2. A simplified material representation to sample
3. Short rays
4. An initial cheap result, like screen-space reflections
5. Half-resolution tracing (fewer rays) and temporal accumulation (even fewer rays!)
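Point 5 is worth a small sketch, assuming a simple checkerboard scheme and an exponential moving average for the temporal blend (names and parameters are mine, not from any particular engine): each frame you re-trace only half the pixels and blend the fresh samples into a history buffer, so the full image converges over a few frames.

```cpp
#include <cstddef>
#include <vector>

struct ReflectionBuffer {
    std::vector<float> history;  // accumulated result from previous frames
    explicit ReflectionBuffer(std::size_t n) : history(n, 0.f) {}

    // traceRay is whatever expensive path produces a fresh sample for a pixel.
    template <class TraceFn>
    void accumulate(int frame, TraceFn traceRay, float alpha = 0.1f) {
        for (std::size_t i = 0; i < history.size(); ++i) {
            // Checkerboard in 1D: each pixel is re-traced every other frame,
            // so per-frame ray count is halved.
            if ((i + static_cast<std::size_t>(frame)) % 2 != 0) continue;
            float fresh = traceRay(i);
            // Exponential moving average toward the fresh sample; a real
            // implementation would also reproject history under camera motion.
            history[i] += alpha * (fresh - history[i]);
        }
    }
};
```

The trade-off is latency: the buffer lags behind scene changes by however many frames the blend factor implies, which is why production versions pair this with motion-vector reprojection and history rejection.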