Hi HN, I'm Sten, one of the creators.

We built this because we wanted 3D experiences without needing a VR headset. The approach: use your webcam to track where you're looking, and adjust the 3D perspective to match.

Demo: https://portality.io/dragoncourtyard/ (Allow camera, move your head left/right)

It creates motion parallax - the same depth cue your brain uses when you look around a real object. Feels like looking through a window instead of at a flat screen.
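Roughly, the core loop is: read the head position from a face tracker, then offset the virtual camera to match. A simplified sketch (not our production code; the tracker interface and constant here are hypothetical), using three.js:

    import * as THREE from "three";

    // Hypothetical tracker output: head position in normalized [-1, 1]
    // screen coordinates, from any webcam face-tracking library.
    interface HeadPose { x: number; y: number }

    const PARALLAX_STRENGTH = 0.5; // world units of camera travel at full offset

    function updateCamera(camera: THREE.PerspectiveCamera, head: HeadPose): void {
      // Move the camera with the head: lean left and the scene is seen
      // from further left, which is what produces the motion parallax.
      camera.position.x = head.x * PARALLAX_STRENGTH;
      camera.position.y = head.y * PARALLAX_STRENGTH;
      camera.lookAt(0, 0, 0); // keep the subject framed
    }

Calling this every frame with fresh tracker data is what sells the effect; a true "window" also swaps the symmetric frustum for an off-axis one, but the offset alone already reads as depth.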

Known limitations:

- Only works for one viewer at a time
- Needs decent lighting
- Currently WebGL only

We're still figuring out where this is genuinely useful vs just a novelty. Gaming seems promising, also exploring education and product visualization.

Happy to answer questions!

It was definitely useful and appreciated on the "New" Nintendo 3DS XL, which also used a camera to track your eye movements and adjust the divergence accordingly. I hate the fact that Nintendo abandoned this technology because experiencing Ocarina of Time and Star Fox 64 in 3D was world-changing to me.

I'd say I'm not the only one who misses this technology in games, because a used New 3DS XL costs at least $200 on eBay right now, which is more than what I paid new.

I always thought 3D would combine really nicely with ray-traced graphics full of bright colors and reflections, similar to all those ray-tracing demos with dozens of glossy marbles.

The 3DS is different: it uses a parallax barrier screen, so each eye actually sees a different image! The eye tracking lets it keep working even when the position of your eyes changes (i.e. because you moved your head).

Presumably developers could have combined this with parallax head tracking for an even stronger effect when you move your head (or the console), but as far as I know no one did.

Do you mean combining the eye tracking with the parallax barrier screen? What do you think the use cases would be?

Well, there are two different ways (among others) that your brain detects depth:

1. Each eye sees the object from a different angle.

2. Both eyes see the object from a changing angle as your head moves relative to it.

The 3DS does only #1. TFA does only #2. Presumably if you did both, you could get a much stronger effect.

I think the New 3DS had the hardware to do both in theory, but it probably would have made development and backwards compatibility overly complicated!

Yeah, the Nintendo 3DS XL was awesome, but even then, you'd have to use that one specific console to play that game.

What we're thinking is to make this technology instantly accessible across billions of devices - anywhere there's a camera and a screen.

That means the 3D effect would apply not only to games built for one specific console, but to any and all games already rendered in a 3D environment.

The technology is still alive and well in some genres, particularly flight sims. One common free solution is to run OpenTrack with the included neural net webcam tracker, which plugs into TrackIR-enabled apps and works like a charm.

FYI, I believe Samsung recently released a monitor with similar technology.

I'm surprised there's still a market for non-VR consumer 3D! I remember the post-Avatar rush of 3D-related products that never quite panned out.

I remember the 3D glasses that you could plug into the Sega Master System in the mid-80s. They took what would be interlaced frames and rendered them to different eyes instead (which made the version getting shown on the connected TV pretty trippy too).

And then there was the time travel arcade game (also by Sega) that used a kind of Pepper's Ghost effect to give the appearance of 3D without glasses. That was in the early 90s.

I think the idea of 3D displays keeps resurfacing because there's always a chance the tech has caught up to people's dreams. VR displays have certainly brought the latency down a lot, but even the lightest headsets are still pretty uncomfortable after extended use. Maybe in another few generations... but it will still feel limiting until we have holodeck-style environments, IMO.

I wasn't aware of all of those, will check them out - thanks for sharing!

Yes, I believe you're right that the tech is catching up with concepts that seemed futuristic in the past. Today's hardware, for example, supports much more than it could, say, 5-10 years ago.

Our hypothesis is that the current solutions out there still require the consumer to buy something, wear something, install something, etc. - while we want to build something that's instantly accessible across billions of devices, with no friction for the actual consumer.

VR has taken over this market. Get a VR headset; you won't be disappointed.

Other than DCS, Skyrim, and that one Star Wars game at Dave and Buster's where you duel Darth Vader, I don't see a lot that sings to me just yet. Granted, I could easily get 2,000 hours out of DCS over the span of a decade just flying every third and fourth generation fighter jet ever made.

Maybe VR doesn't need that many games because the small handful of good ones have so much depth and replay value. I guess I just talked myself into a $700 VR kit and possibly a $700 GPU upgrade, depending on whether or not my RTX 3060 is up to the task.

Not sure what VR kit you're looking at, but if it's a 4K headset to push at 90 fps, you'll want something more like a 3080 or 4070. If it's lower resolution, it won't need quite so much power.

Has it though? And what "market" are you referring to here?

Fully agreed that if you want 100% full 6DOF immersion, go pay hundreds or even thousands of dollars to wear a heavy, cumbersome headset. We're not disputing that or thinking of competing with it.

What we're saying is that there may be a much larger market of people who aren't ready to commit that much money to wearing something that gives them motion sickness after 10 minutes.

If you're developing a VR game, your market is the 50 million people around the world who own a VR headset. That's great. But since you've already built the VR world in 3D, you could also open up the market to billions of people who want to play your game on their own devices.

Admittedly, it won't be the same experience, but it could be a "midpoint". Not everyone can afford, or is willing to pay for, a VR headset.

Back in college (~2008) we implemented this with a 7 foot tall back-projected screen and a couple of Wii remotes after seeing Johnny Lee’s video. The nice thing with that screen was that you could stand so close to it you couldn’t really see the edges.

We had as many people come test it as we could, and we found that 90% of them didn't get a sense of depth, likely because it lacked stereo-vision cues. It only worked for folks with some form of monocular vision, including myself, who were used to relying primarily on other cues like parallax.

That's interesting! Did you continue to play around with it and take it further?

We did not, no. Just wrote up the report and moved on.

I don't know if you designed it for a specific monitor, but here's some feedback. I tried using it on my M1 Mac.

First, there's no loading indicator and it takes too long to start, so I thought it was broken a few times before I realized I had to wait longer.

Second, although it was clearly tracking my head and moving the camera, it didn't make me feel like I was looking through a window into a 3D scene behind the monitor.

These kinds of demos have been around before. I don't know why some work better than others.

Some others:

https://discourse.threejs.org/t/parallax-effect-using-face-t...
https://www.anxious-bored.com/blog/2018/2/25/theparallaxview...

Isn't it because the webcam FOV is unknown? It's needed to estimate distance from pixels (along with face size, though that should vary less). The three.js demo had a strength parameter that can be used to calibrate. The iPhone app is pre-calibrated for the most common devices, I believe.
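For reference, the pinhole-camera math behind that: given a guessed horizontal FOV you get the focal length in pixels, and with an assumed real-world face width you can back out distance. A sketch (the 16 cm face width and 60° default FOV are assumptions):

    // Estimate head distance from the apparent face width in a webcam frame.
    const ASSUMED_FACE_WIDTH_M = 0.16; // rough adult face width, an assumption

    function headDistanceMeters(
      faceWidthPx: number,  // detected face bounding-box width, pixels
      imageWidthPx: number, // camera frame width, pixels
      hFovDegrees = 60      // horizontal FOV: unknown per device, hence guessed
    ): number {
      const halfFov = (hFovDegrees * Math.PI) / 360;
      const focalPx = imageWidthPx / 2 / Math.tan(halfFov);
      // Pinhole model: faceWidthPx / focalPx = ASSUMED_FACE_WIDTH_M / distance
      return (ASSUMED_FACE_WIDTH_M * focalPx) / faceWidthPx;
    }

A wrong FOV guess scales every distance estimate by the same factor, which is exactly what a manual strength slider compensates for.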

Thanks for sharing all this feedback!

And we will check out these links - appreciate you sharing them!

I can confirm that it works decently well with a sunny roof window in the background, which is normally enough for people to complain that my face is too dark.

8-year-old me, who instinctively tried to look beyond the display's field of view during intense gaming sessions, would appreciate this feature very much. My belief is that if it shifted the POV to a lesser degree than in the demo, people generally wouldn't notice, but would still subconsciously register it as a more immersive experience.

I'm also glad that the web version doesn't try to cook my laptop - good work.

Thanks!

If you click "Menu" and then "Settings" you can play around with e.g. the sensitivity. Ideally we'd automatically optimize the calibration based on, for example, the device you're using, but that's something we'd do a bit more long-term.

Appreciate it!

Very cool!

I can see this being quite useful for educational demonstrations of physics situations and mechanical systems (complex gearing, etc.). Also maybe for product simulations/demonstrations in the design phase - take output from CAD files and make a nice little 3D demo.

Maybe have an "inertia(?)" setting that makes it keep turning when you move far enough off center, as if you were continuing to walk around it.

The single-viewer limitation seems obvious and fundamental, and maybe a bit problematic for the above use cases, such as when showing something to a small group of people. One key may be to take steps to ensure it robustly locks onto and follows only one face/set of eyes. It would be jarring to have it locking onto different faces as conditions or positions subtly change.

Good ideas - we've considered these as well actually!

The inertia idea wouldn't be too difficult to implement, but its usefulness would probably depend on the application area.
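For anyone curious, a rough sketch of how such an inertia mode could work - head offset past a dead zone becomes angular velocity, which decays when you re-center (all constants here are made up):

    const DEAD_ZONE = 0.6;  // normalized head offset where inertia kicks in
    const SPIN_GAIN = 1.5;  // rad/s per unit of offset beyond the dead zone
    const DAMPING = 0.92;   // per-frame decay once the head re-centers

    let angularVelocity = 0; // rad/s applied to the model's Y rotation

    function updateInertia(headX: number, dt: number, model: { rotationY: number }): void {
      const beyond = Math.max(0, Math.abs(headX) - DEAD_ZONE);
      if (beyond > 0) {
        // Head pushed past the dead zone: keep spinning in that direction,
        // as if the viewer kept walking around the object.
        angularVelocity = Math.sign(headX) * beyond * SPIN_GAIN;
      } else {
        angularVelocity *= DAMPING; // coast back to a stop
      }
      model.rotationY += angularVelocity * dt;
    }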

Yep exactly. Usually it locks onto one person's face but it can also jump around, so there are still optimizations we can do there - but generally it's supposed to be for one person. If you compare to VR headsets, two people can't wear the same VR headset anyway!

The latency is unfortunately bad enough that it prevents me from getting any depth illusion :( I've seen other implementations where it works really well though.

May I ask what device you're trying it on? In our experience it can vary depending on the device - something we're looking to improve further down the line.

Do you remember which other implementations you've seen that worked really well?

The obvious use case would be to replace the clunky head-tracking systems often used in simulator games.

Systems like TrackIR, which require dedicated hardware.

You can do this today with OpenTrack: https://github.com/opentrack/opentrack

Also, TrackIR is just an IR webcam, IR LEDs, and a hat with reflectors. You can DIY the exact same setup easily with OpenTrack, but OpenTrack also has a neural-net webcam-only tracker which is, AFAIK, pretty much state of the art. At any rate, it works incredibly robustly.

Actually, I have already used it to implement the same idea as the post, with the added feature of anaglyph (red/blue) glasses 3D. The way I did it, I put an entire lightfield into a texture and rendered it with a shader. Then I just piped the output of OpenTrack directly into the shader, and Bob's your proverbial uncle. The latency isn't quite up to VR standard (the old term for this is "fishtank VR"), but it's still quite convincing if you don't move your head too fast.
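If anyone wants to reproduce this: OpenTrack's "UDP over network" output sends each pose as six little-endian doubles (x, y, z, then yaw, pitch, roll), so consuming it takes a few lines. A Node sketch - check the port and translation units against your own OpenTrack settings:

    import dgram from "node:dgram";

    function feedShader(x: number, y: number, z: number,
                        yaw: number, pitch: number, roll: number): void {
      // Placeholder: push the pose into your renderer's uniforms here.
    }

    const socket = dgram.createSocket("udp4");

    socket.on("message", (buf) => {
      if (buf.length < 48) return; // 6 doubles = 48 bytes
      const pose = Array.from({ length: 6 }, (_, i) => buf.readDoubleLE(i * 8));
      const [x, y, z, yaw, pitch, roll] = pose;
      // Translations are in cm by default (I believe); angles in degrees.
      feedShader(x, y, z, yaw, pitch, roll);
    });

    socket.bind(4242); // OpenTrack's default output port, if memory serves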

There's already a wide variety of OpenTrack plugins that use everything from off-the-shelf webcams to DIY infrared trackers to an iPhone app with Face ID/AirPods.

TrackIR is just a camera with an infrared LED.

How about contributing that to Godot?

Definitely! Our current focus is on Unity, because that's what we're most used to, but we'd build the solution for at least Unreal and Godot as well!

This is fun! But it shows me a close-up view (smaller FOV) if I move my head back, and a wider view (larger FOV) if I bring my head closer. This is the opposite of what I'd expect: if I bring my head nearer to the screen, I should see more detail, closer up (narrower FOV).

Your expectation doesn't match real life! Try it with a window. If you bring your head closer to the window, your FOV increases, i.e. you can see more of the scene outside the window.

But there is no window. There is a box with a dragon in it. When I bring my eyes closer to the box, I expect to see more of the box, and more of the dragon. You can try it with any real box; substitute any appropriate object for the dragon. Pick your favorite 3D environment that features walls and dragons; notice how both grow, covering more of your FOV. This is what I expect to see as I draw closer to the screen.

The screen is the window.

Exactly what Wowfunhappy and jeffhuys are saying!
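To make it concrete: with an off-axis projection, the screen rectangle itself defines the view frustum, so leaning in genuinely widens the field of view, just like a real window. A simplified sketch of the frustum math (illustrative names, not our exact code):

    // Head position relative to the screen center, in meters;
    // head.z > 0 is the viewer's distance from the screen plane.
    interface Head { x: number; y: number; z: number }

    function offAxisFrustum(head: Head, halfW: number, halfH: number, near: number) {
      // Project the screen's edges onto the near plane as seen from the head.
      // As head.z shrinks, (right - left) grows: a wider FOV, exactly like
      // leaning closer to a real window.
      const s = near / head.z;
      return {
        left:   s * (-halfW - head.x),
        right:  s * ( halfW - head.x),
        bottom: s * (-halfH - head.y),
        top:    s * ( halfH - head.y),
      };
    }

In three.js, these extents would go into camera.projectionMatrix.makePerspective(left, right, top, bottom, near, far).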

The demo just gives a blank screen on Android Firefox, Kiwi, and Chrome.

It works for me, it just needs some time to load.

Ah, thanks! Got it now on a better connection. A loading indicator would ease confusion.

It seems to vary a lot depending on device. It's just a "basic demo" for now, but yes good idea, thanks!

How long, and on what internet connection? I'm at 1 min and counting on 50 Mbit. But maybe it doesn't work on Ubuntu 24 + Firefox? It should be WebGL-capable, though.

OK, it took around 2 min to load for me, then it works.

Devtools say 40.23 MB / 13 MB transferred. Even at 1000 Mbps it needed a moment. Works for me on Ubuntu 24 and Firefox.

Definitely laggy, but it works even under low lighting conditions and with a camera that isn't facing straight ahead.

May I ask what device and browser you are using?