My aha moment in this direction happened when I got XR glasses and tried embracing voice agents to see where it leads. I did not expect the HUD aspect of this setup to be so compelling. At first I tried using the glasses as a replacement for my main screen and voice agents as a replacement for the sidebars and chat windows we are used to, which worked ok. But then I went back to the main screen and the main agent interfaces, and kept the glasses as a second screen on a different visual focal plane, connected to the voice agents.

Dark mode takes on a whole new importance there, because black is transparent in XR, which matters for not obscuring the main screen. I can switch between screens by shifting eye focus and get a far better sense of what the other screen is doing than with a physical second screen sitting in my visual periphery. When I need to combine views more, I move the focal planes closer together so I can see both screens at the same time. The voice agents can answer side questions, start research needed for the next step of the main workflow, or fix minor issues that are not important enough to interrupt it.

It is so obvious how HUDs, voice AI, XR and agents can grow together into this new computing environment, but I am afraid of what happens once Android and iOS shape that reality. I want this to be part of the web.

What glasses are you using?

Viture Pro XR, but connected just as an external display to my MacBook. The experimental HUD software is built from scratch for my use case, using the web Presentation API and a custom Svelte framework that does stereoscopic rendering with CSS matrix transformations.
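For anyone curious how the pieces might fit, here is a minimal sketch of the idea, not the actual framework: the function names, the disparity math, and the `/hud.html` URL are made up for illustration. The controller page on the laptop opens the HUD page on the glasses' display via the Presentation API, and the HUD page draws each panel twice, once per eye half, shifting the copies with a CSS matrix to set the perceived focal plane. A pure black background keeps everything outside the panels transparent on the glasses.

```ts
// Controller page (main screen): present the HUD page on the glasses'
// display via the Presentation API. '/hud.html' is a placeholder URL.
const hudRequest = new PresentationRequest(['/hud.html']);

async function openHud(): Promise<PresentationConnection> {
  // start() has to be called from a user gesture (e.g. a click handler).
  const connection = await hudRequest.start();
  // Forward a note from a voice agent to the HUD as a JSON message.
  connection.send(JSON.stringify({ type: 'agent-status', text: 'research started' }));
  return connection;
}

// HUD page (shown on the glasses in side-by-side 3D mode): draw each
// panel once per eye half and offset the copies horizontally with a
// CSS matrix; the offset (disparity) sets the perceived focal plane.
type Eye = 'left' | 'right';

function eyeTransform(eye: Eye, disparityPx: number): string {
  // matrix(a, b, c, d, tx, ty): identity scale/rotation, translate on x only.
  const tx = eye === 'left' ? disparityPx : -disparityPx;
  return `matrix(1, 0, 0, 1, ${tx}, 0)`;
}

function renderStereoPanel(panel: HTMLElement, disparityPx: number): void {
  // Pure black stays transparent on the glasses, so only the panel
  // content appears to float over the main screen.
  document.body.style.background = '#000';
  document.querySelectorAll<HTMLElement>('.eye-half').forEach((half, i) => {
    const copy = panel.cloneNode(true) as HTMLElement;
    copy.style.transform = eyeTransform(i === 0 ? 'left' : 'right', disparityPx);
    half.appendChild(copy);
  });
}
```

Roughly speaking, increasing the disparity converges the two copies and pulls the panel toward you, while shrinking it pushes the panel back, which is how the "move the focal planes closer together" trick above would work in this sketch.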