How much computing power would one need to get this working completely locally, running a half-decent LLM fine-tuned to sound like Santa, with all the TTS, STT, and Pipecat plumbing in between?

I started looking into this with a Pi 5. It seemed like it was not quite performant enough. But I'm not an expert with these things and maybe someone else could make it work. We definitely have the technology to pull this off in this form factor. It would just be really expensive (maybe $500) and might also get a little hot.

If I was building it to be 'local only' I would run the inference on a remote host in my house.
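Roughly, 'local only' just means the Pipecat LLM service points at an OpenAI-compatible server (Ollama, llama.cpp, vLLM, whatever) running on a box on the LAN instead of a cloud API. A minimal sketch, assuming an Ollama server at 192.168.1.50 and local Whisper for STT; import paths and the available TTS services vary by Pipecat version, and I've left out the transport and context aggregators:

```python
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.pipeline.runner import PipelineRunner
from pipecat.services.whisper import WhisperSTTService   # local STT
from pipecat.services.openai import OpenAILLMService     # OpenAI-compatible client

# LLM inference runs on a separate machine in the house, not in the phone.
# The IP and model name are placeholders for your own LAN host and model.
llm = OpenAILLMService(
    base_url="http://192.168.1.50:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="not-needed-locally",
    model="llama3.1:8b",
)

stt = WhisperSTTService()  # runs Whisper locally on the same box as Pipecat

# tts = <your local TTS service here>  # class name depends on your Pipecat version

pipeline = Pipeline([
    # transport.input(),   # audio in from the phone / browser
    stt,
    llm,
    # tts,
    # transport.output(),  # audio back out
])

async def main():
    await PipelineRunner().run(PipelineTask(pipeline))
```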

Having a microcontroller in the phone is nice because it is WAY less likely to break. I love being able to flash a simple firmware and change things without fighting it too much.

Oh! Also, I do all the 'WebRTC/AI dev' in the browser. Only when I get it working how I like do I switch over to the microcontroller stuff.

This repo is one possible starting point for tinkering with local agents on macOS. I've got versions of this for NVIDIA platforms but I tend to gravitate to using LLMs that are too big to fit on most NVIDIA consumer cards.

https://github.com/kwindla/macos-local-voice-agents
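If you just want to sanity-check that a model fits and talks fast enough on Apple silicon before wiring up a full agent, something like this works. Not code from that repo, just a minimal mlx-lm sketch; the model ID is a placeholder for whatever quantized checkpoint you pick:

```python
# Quick fit/latency check for a local model on Apple silicon (mlx-lm).
import time
from mlx_lm import load, generate

# Placeholder model ID; any MLX-quantized chat model works.
model, tokenizer = load("mlx-community/Llama-3.1-8B-Instruct-4bit")

prompt = tokenizer.apply_chat_template(
    [{"role": "system", "content": "You are Santa Claus. Keep replies short and warm."},
     {"role": "user", "content": "What do reindeer eat?"}],
    tokenize=False,
    add_generation_prompt=True,
)

start = time.time()
reply = generate(model, tokenizer, prompt=prompt, max_tokens=80)
print(reply)
print(f"{time.time() - start:.2f}s for ~80 tokens")
```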


More than you can physically fit in a phone like that. Many hundreds if not thousands of watts of GPU.

That's not true. You could run such an LLM on a lower end laptop GPU, or a phone GPU. Very low power and low space. This isn't 2023 anymore, a Santa-specific LLM would not be so intensive.
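Back-of-envelope: quantized weights take roughly parameters × bits-per-weight / 8 bytes, plus some headroom for the KV cache and activations:

```python
# Weights-only memory for quantized LLMs (ignores KV cache and activations).
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for params, bits in [(3, 4), (8, 4), (8, 8), (70, 4)]:
    print(f"{params}B @ {bits}-bit ≈ {weight_gb(params, bits):.1f} GB")

# 3B @ 4-bit ≈ 1.5 GB and 8B @ 4-bit ≈ 4.0 GB: laptop (and some phone) territory.
# 70B @ 4-bit ≈ 35 GB: no longer something you fit in a handset.
```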

I've done a fair amount of fine-tuning for conversational voice use cases. Smaller models can do a really good job on a few things: routing to bigger models, constrained scenarios (think ordering food items from a specific and known menu), and focused tool use.
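To make "routing to bigger models" concrete, the pattern is basically: a small local model labels each turn, and only open-ended turns get escalated. A hand-wavy sketch using the OpenAI client against two local OpenAI-compatible endpoints; the URLs, model names, and labels are all placeholders:

```python
# Small-model routing sketch: a 3B router classifies the turn, and only
# "open_ended" turns go to the bigger (slower) model on another box.
from openai import OpenAI

small = OpenAI(base_url="http://localhost:11434/v1", api_key="local")   # 3B router
big = OpenAI(base_url="http://192.168.1.50:8000/v1", api_key="local")   # larger model

ROUTER_PROMPT = (
    "Label the user's message with exactly one word: "
    "'wishlist', 'smalltalk', or 'open_ended'."
)

def respond(user_text: str) -> str:
    label = small.chat.completions.create(
        model="llama3.2:3b",
        messages=[{"role": "system", "content": ROUTER_PROMPT},
                  {"role": "user", "content": user_text}],
        max_tokens=3,
    ).choices[0].message.content.strip().lower()

    client, model = (big, "llama-3.3-70b") if "open_ended" in label else (small, "llama3.2:3b")
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": "You are Santa. Be brief and kind."},
                  {"role": "user", "content": user_text}],
    )
    return reply.choices[0].message.content
```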

But medium-sized and small models never hit that sweet spot between open-ended conversation and reasonably on-the-rails responsiveness to what the user has just said. We don't yet know how to build models under 100B parameters that do that. Seems pretty clear that we'll get there, given the pace of improvement. But we're not there yet.

Now maybe you could argue that a kid is going to be happy with a model that you train to be relatively limited and predictable. And given that kids will talk for hours to a stuffie that doesn't talk back at all, on some level this is a fair point! But you can also argue the other side: kids are the very best open-ended conversationalists in the world. They'll take a conversation anywhere! So giving them an 8B parameter, 4-bit quantized Santa would be a shame.

But on that compute budget it’s gonna sound so stupid. Oh right. Santa.

It's a children's toy; how nuanced do its responses need to be?

I agree. It just took me a while to figure it out. A 3B param LLM would do perfectly well.
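For scale, a 3B model at 4-bit is only a couple of GB of weights, so something like this runs comfortably on very modest hardware. Just a sketch; the GGUF filename is a placeholder for whatever small instruct checkpoint you download:

```python
# Minimal local Santa on a 3B model via llama-cpp-python.
from llama_cpp import Llama

# Placeholder path; any small instruct model in GGUF format works.
llm = Llama(model_path="llama-3.2-3b-instruct-q4_k_m.gguf", n_ctx=2048)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Santa Claus talking to a child. Two sentences max."},
        {"role": "user", "content": "Are the elves real?"},
    ],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```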

I've been running LLMs and TTS capable of this on my laptop since last year.
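For the TTS half, one laptop-friendly option is Coqui's TTS package; a sketch using one of its stock English voices, not necessarily what anyone in this thread actually runs:

```python
# Laptop-class local TTS sketch using Coqui TTS (one option among several).
from TTS.api import TTS

# Stock single-speaker Coqui voice, used here purely as an example.
tts = TTS("tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(text="Ho ho ho! Merry Christmas!", file_path="santa.wav")
```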