This app is cool and showcases some nice use cases, but it still undersells what the E2B model can do.
I just made a real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B. I posted it on /r/LocalLLaMA a few hours ago and it's gaining some traction [0]. Here's the repo [1].
I'm running it on a MacBook instead of an iPhone, but based on the benchmark here [2], you should be able to run the same thing on an iPhone 17 Pro.
[0] https://www.reddit.com/r/LocalLLaMA/comments/1sda3r6/realtim...
[1] https://github.com/fikrikarim/parlor
[2] https://huggingface.co/litert-community/gemma-4-E2B-it-liter...
Re-upped here:
Show HN: Real-time AI (audio/video in, voice out) on an M3 Pro with Gemma E2B - https://news.ycombinator.com/item?id=47652007
Oh wow, that's awesome. Thanks a lot, dang!
That's cool! You can add SoulX-FlashHead for real-time AI head animation as well if you want to simulate a teacher.
Thanks for sharing! I'm still torn about it. Sure, it'll feel more natural with the AI head animation, but I don't want people to get attached to it. I don't want to make the loneliness epidemic even worse.
Parlor is so cool, especially since you're offering it for free. And it's a great use case for local LLMs.
Thanks! Although I can't claim any credit for it. I just spent a day gluing together what other people have built. Huge props to the Gemma team for building an amazing model, and also an inference engine focused on edge devices [0].
[0] https://github.com/google-ai-edge/LiteRT-LM