Hi HN, author here. SHARP is Apple's recent single-image 3D Gaussian splatting model (https://arxiv.org/abs/2512.10685). Their reference code is PyTorch + a pretty heavy pipeline; I wanted to see if it could run in a browser with no server hop, so I exported the predictor to ONNX and ran it via onnxruntime-web with the WebGPU execution provider (EP).
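
Roughly, the in-browser half boils down to the sketch below. Treat it as illustrative only: the model path and the tensor names ('image', 'gaussians') are placeholders, not SHARP's actual graph I/O.

    import * as ort from 'onnxruntime-web';

    // Minimal sketch of the inference path; recent onnxruntime-web builds
    // ship the WebGPU EP in the main bundle.
    async function predict(pixels: Float32Array, height: number, width: number) {
      const session = await ort.InferenceSession.create('/models/sharp_predictor.onnx', {
        executionProviders: ['webgpu'],   // EPs are tried in priority order
      });
      const image = new ort.Tensor('float32', pixels, [1, 3, height, width]); // NCHW
      const outputs = await session.run({ image });   // placeholder input name
      return outputs['gaussians'];                    // placeholder output name
    }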

What works: drop in an image, get a .ply you can download or preview live, all on your machine — your image never leaves the tab. The model is large (~2.4 GB sidecar) so first load is slow on a cold cache, but inference itself is a few seconds on a recent Mac.

Caveats: SHARP's released weights are research-use only (Apple's model license, not the code's). I host the exported ONNX on R2 so the demo "just works", but you can also export your own from the upstream Apple repo and load it locally.

Happy to talk about it in the comments :)

I vibecoded a simple web app using Sharp that allowed me to quickly browse any local image folder and view the photos as "almost" volumetric 3d scenes in a VR headset.

I precomputed and cached each one so it was nearly instant. The effect - although only a crude wrapper around what Sharp already does - was quite transformative and mesmerising. Just the ease of pointing it at any folder of photos and viewing them fully spatially.

It was a bit of a mess code-wise and kinda specific to my local setup - but I should really clean it up and deploy it somewhere for other people to try. Although I keep assuming someone else will do it before me and make a better job of it.

Nice, would love to see it, feel free to link it here <3

I would love to try that out, if you ever make it let me know.

My email is in my profile - ping me and I'll be much more likely to remember to do it.

A *2.4gb* ONNX? That is wild. This format continues to impress me. ONNX uses 32bit single-precision floats I believe, so that's something like ~644m float params/constants. I recently dove deep into the 'traditional ML' side of the ONNX serialization format for the purposes of writing a JVM ML compiler for trees and regressions. ONNX is actually quite clever in the way it serializes trees into parallel arrays (which are then serialized using protobuf). My trees have capped out at < 32mb. I haven't dug into the neural net side of things yet, mainly because I don't have any models to run in prod. (https://github.com/exabrial/petrify if anyone is interested.)
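
To make "parallel arrays" concrete - a hedged sketch of how a single decision stump ("if x[0] <= 0.5 then 1.0 else 2.0") lands in ai.onnx.ml's TreeEnsembleRegressor attributes, written out as a plain object (the real thing is protobuf attributes, and I'm going from memory of the spec):

    // One entry per node in each array; (treeid, nodeid) pairs tie them together.
    const stump = {
      nodes_treeids:      [0, 0, 0],               // which tree each node belongs to
      nodes_nodeids:      [0, 1, 2],               // node index within that tree
      nodes_modes:        ['BRANCH_LEQ', 'LEAF', 'LEAF'],
      nodes_featureids:   [0, 0, 0],               // feature tested at branch nodes
      nodes_values:       [0.5, 0, 0],             // split threshold (unused at leaves)
      nodes_truenodeids:  [1, 0, 0],               // jump target when the test passes
      nodes_falsenodeids: [2, 0, 0],
      // Leaf outputs get their own parallel arrays, keyed by (tree, node):
      target_treeids: [0, 0],
      target_nodeids: [1, 2],
      target_ids:     [0, 0],                      // output slot; 0 for single-target
      target_weights: [1.0, 2.0],
    };

Evaluation is then just index-chasing through those arrays, which is part of why it compiles down so nicely.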

Same, I really like the ONNX format. I only wish that they weren't so frustratingly difficult to use on Apple iOS. Their browser engine, WebKit, has become annoyingly restrictive over the years in terms of the working memory cap.

I ran into quite a few out-of-memory iOS safari issues when I was building continuous voice recognition for my blind chess game, so people could play while on the go.

Interesting - what use cases are you using ONNX for, btw?

So I use a VAD ONNX model (Silero [1]) to automatically detect when someone is talking, and then it sends the audio into one of the voice recognition libraries (rough sketch of the VAD loop after the links below).

I originally tried to get away with just Whisper Tiny in the chess game [2], but it performs worse on the kinds of short phrases (knight E4, c takes d5, etc) used to dictate chess notation. Even with hotword-based phrasing and corrections, I found its accuracy on brief inputs noticeably poorer. So I switched over to Sherpa [3] trained on gigaspeech. It’s significantly more accurate, but it also comes with a correspondingly larger memory footprint.

Ideally, I would have used just one engine, but I needed a fallback for iOS devices (especially older ones) which can easily OOM.

[1] - https://github.com/snakers4/silero-vad

[2] - https://shahkur.specr.net

[3] - https://github.com/k2-fsa/sherpa-onnx
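
The VAD loop itself is tiny. A hedged onnxruntime-web sketch - the tensor names here match Silero v4's graph as I remember it (v5 merged h/c into a single 'state' tensor), so check your model's actual I/O:

    import * as ort from 'onnxruntime-web';

    const vad = await ort.InferenceSession.create('/models/silero_vad.onnx', {
      executionProviders: ['wasm'],   // tiny model; wasm keeps iOS memory pressure low
    });
    let h = new ort.Tensor('float32', new Float32Array(2 * 64), [2, 1, 64]);
    let c = new ort.Tensor('float32', new Float32Array(2 * 64), [2, 1, 64]);
    const sr = new ort.Tensor('int64', BigInt64Array.from([16000n]), [1]);

    // True when a frame looks like speech; the 0.5 threshold is app-specific.
    async function isSpeech(frame: Float32Array): Promise<boolean> {
      const input = new ort.Tensor('float32', frame, [1, frame.length]);
      const out = await vad.run({ input, sr, h, c });
      h = out['hn'] as ort.Tensor;    // carry the RNN state across frames
      c = out['cn'] as ort.Tensor;
      return (out['output'].data as Float32Array)[0] > 0.5;
    }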

Most ONNX files are fp32, but the ONNX format actually allows fp16, int8, etc. as well (see onnx.proto for the full list of dtypes [1] - they even have fp8/fp4 these days!). I ended up switching over to fp16 ONNX models for my own web-based inference project since the quality is ~identical and page loads get 2x faster.

[1] https://github.com/onnx/onnx/blob/main/onnx/onnx.proto#L605

Yeah it's pretty cool what a 2gb NN can do from a single image

I've been poking at running LLMs in the browser. It feels like we're definitely close (<1 year) to seeing real use cases there.

Ubiquity and coverage of devices is what will take longest, largely dependent on how well we can shrink models while keeping similar performance and how much we can accelerate mobile devices. That feels a bit further out (<3 years?).

Nice, I've also been doing some similarly neat things via ONNX web at https://intabai.dev (caution, just PoC tools atm, only Chrome tested, only some mobile devices work, no filters).

I think all-client-side in-browser AI imagery is becoming very doable and has lots of privacy benefits. However, ONNX web leaves a lot to be desired (I had to proto-patch many pytorch conversions because things like Conv3D ops had webgpu issues IIRC). I have yet to try Apache TVM webgpu approaches or any others, but I feel that if the webgpu space were more invested in, running these models would be even more feasible.
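
The blunt mitigation, when a kernel is simply missing from the WebGPU EP, is to give ORT an EP priority list so those nodes land on wasm instead of failing session creation - a sketch (this won't help when a kernel exists but misbehaves, which is where the graph surgery came in):

    import * as ort from 'onnxruntime-web';

    // EPs are tried in priority order at session-build time; nodes the
    // WebGPU EP can't take are assigned to the wasm (CPU) EP instead.
    const session = await ort.InferenceSession.create('/models/model.onnx', {
      executionProviders: ['webgpu', 'wasm'],
    });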

Interesting. Yeah in-browser is not the best, but getting much easier over time!

I don't like that it uses only a single photo. This means it is going to make up a lot of stuff. E.g. if I show it a photo of a poster, then it will make that poster 3D. With only two photos that problem would already be solved.

Yeah I completely agree, but I think this model solves a different problem. AFAIK it's specifically there for the case where you only have one photo, but still need a 3D gaussian splat scene.

I haven't tried that specific case but - are you sure? It does get a lot of stuff right from context. I think it would probably depend on how much of the frame the poster took up.

More reference images from different angles are always going to give more accurate information in 3D. From a single 2D image there is a lot of ambiguity in the context: several different shapes in 3D can be represented in identical ways in 2D. Additional context like lighting, shadows, etc. helps. But more real signal from more images will always be better.

I'm not saying it wouldn't be - because that's obvious.

Maybe, but what is wrong with wanting real depth instead of "made up depth"? One extra photo mostly solves that.

1. There are many use cases where only a single photo is available

2. There are many models similar to Sharp that do accept multiple photos - but Sharp is trying to solve a specific problem. If you have multiple photos, don't use Sharp.

What are the requirements for running this? Chrome throws a whole bunch of "out of memory" errors into the console when I try to execute these. I'm guessing 4GiB of VRAM is not enough?

Ahh, yeah, I forgot to mention it. The model is 2.5gb, so I assume you'd need at least 3gb free with all the surrounding stuff. With the rest of your system using RAM too, I'd guess 4gb is way too low - in some cases even 8gb might not be enough.

I personally tested it on a 32gb Apple M2, and it's able to run much heavier stuff.

This is cool. For practitioners, what's the current state of the art for free-form multi-picture to splat? The last time I looked at it the pipeline was pretty janky and included a few separate steps.

For multi-photo, the go-to is still the original 3D Gaussian Splatting (Kerbl et al., 2023) - most consumer tools like Polycam, Luma, and Postshot wrap that under the hood.

Did not work in Firefox on Linux, but it runs on Chrome.

Have to admit, I don't get it. I tried it with 3 landscape photos I have and the results were nowhere close to the results in the demo, but that just speaks to the model.

Regardless, it's very cool as a browser tech showcase.

Thanks for trying it out! How much ram do you have? Pretty sure that's the only issue that can occur. The quality varies depending on the image too, so it might have been unlucky photos :(

Are there any examples one could view before downloading?

There's nothing to download - everything runs in your browser, and the photo you pick is never uploaded anywhere; it stays in your browser.

Loading the model crashes my browser tab from memory usage :/

Yeah, I think you need at least 8gb ram unfortunately, but I tested it only on a 32gb M2, so 8gb might also not be enough.

I might create a compressed version of the model that would work on low-RAM machines.

I've worked around lower-RAM machines with ONNX web models by first separating the .onnx from the .onnx_data, and second having scripts that split up the "layers" and shard the run (e.g. https://huggingface.co/cretz/Z-Image-Turbo-ONNX-sharded). Then you can have the runtime only run one at a time. I don't understand the details too deeply, but Claude is good at writing scripts to shard onnx protos.
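
Related trick: recent onnxruntime-web also lets you hand the weights in separately via the externalData session option, so the graph proto and the big sidecar never have to live in one contiguous buffer. Hedged sketch - the exact option shape may differ across versions, so check the typings of the build you're on:

    import * as ort from 'onnxruntime-web';

    // 'path' must match the external-data location recorded inside the .onnx
    // when the weights were split out.
    const [graph, weights] = await Promise.all([
      fetch('/models/model.onnx').then(r => r.arrayBuffer()),
      fetch('/models/model.onnx_data').then(r => r.arrayBuffer()),
    ]);
    const session = await ort.InferenceSession.create(new Uint8Array(graph), {
      executionProviders: ['webgpu'],
      externalData: [{ data: new Uint8Array(weights), path: 'model.onnx_data' }],
    });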

It froze up my computer, had to hard-boot lol

(16GB M1 MacBook, Chrome)

> inference itself is a few seconds on a recent Mac

This is impressive as hell

Very cool demo. It works in ~9 seconds on my machine.

A few asks if you're going to devote more time to the project: can you make a full orbital camera? It doesn't seem to be able to orbit a full 360. Also, can you use double-click drag to move the camera in non-orbiting mode, for view refinement? (Super minor nitpicks - this demo is really cool.)

> Caveats: SHARP's released weights are research-use only (Apple's model license, not the code's).

Nobody should GAF about this. We have all the major players distilling each other in the open. This gives Apple the ability to slap you with lawyers, but in practice you'll often get more done if you just break the rules.

Do you know of any other image-to-splat models? WorldLabs has a few versions of their Marble model, and the Tencent Hunyuan team just released HyWorld as open weights:

https://github.com/Tencent-Hunyuan/HY-World-2.0

HyWorld looks to be SOTA and better than all the other players.

Apple's Sharp is awesome in that it is fast, but it only generates a small depth sample from the image. There are no back faces or splats, so if you move the camera even slightly from the original perspective, you'll see lots of holes.

Why is it so large? Is it the same model used to create 3D effects on iOS lockscreen?

I think it's essentially a transformer, so it just stores a bunch of weights. For a model that's supposed to be able to convert any image to a 3d scene, it's a pretty nice size actually.

Regarding the iOS lock screen - I believe they are different models. I think Apple uses this one to generate those Vision Pro 3d photos though, but I'm not too sure.

No, the 3D effects for the lock screen are much simpler - more akin to old-school animation via layers.
