What exactly are they self-hosting here? Probably not the model, right? So just the harness?
That does sound like the worst of both worlds: You get the dependency and data protection issues of a cloud solution, but you also have to maintain a home server to keep the agent running on?
"maintain a home server" in this case roughly means "park a headless Mac mini (or laptop or RPi) on your desk"
And you can use a local LLM if you want to eliminate the cloud dependency.
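E.g., a rough sketch, assuming the harness can talk to any OpenAI-compatible endpoint and you're running something like Ollama locally (the URL, model name, and prompt below are placeholders, not anything specific to this project):

    # Point an OpenAI-compatible client at a local server instead of a cloud API.
    # The endpoint shown is Ollama's default OpenAI-compatible one; adjust to taste.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # local model server, no cloud dependency
        api_key="unused",                      # local servers typically ignore the key
    )

    resp = client.chat.completions.create(
        model="llama3.1:8b",  # whatever model you've pulled locally
        messages=[{"role": "user", "content": "Summarize today's inbox."}],
    )
    print(resp.choices[0].message.content)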
You have to spend tens of thousands of dollars on hardware to approach the reasoning and tool-calling levels of SOTA models... so the casually mentioned "just use a local LLM" is out of reach for the common man.
That's pretty much how it was in the 90s with computer tech. 10 years later we were watching cat videos on machines that dwarfed the computing power of what used to be servers.
> And you can use a local LLM
That ship sailed a long time ago. It's of course possible, if you're willing to invest a few thousand dollars extra in a graphics-card rig and pay for the power.
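Back-of-envelope on the power part (every number below is an assumption for illustration; plug in your own):

    # Rough monthly electricity cost of running a local-LLM GPU rig.
    gpu_watts = 350        # assumed draw under inference load
    hours_per_day = 8      # assumed duty cycle
    price_per_kwh = 0.30   # assumed electricity price, USD

    kwh_per_month = gpu_watts / 1000 * hours_per_day * 30
    print(f"~{kwh_per_month:.0f} kWh/month, ~${kwh_per_month * price_per_kwh:.0f}/month")
    # -> ~84 kWh/month, ~$25/month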
Wait, why would you still need a home server if the harness (aka the agent) is hosted in the cloud?
> but you also have to maintain a home server to keep the agent running on
I'm fascinated by the idea that a lot of people here don't have multiple Mac minis or Minisforum or Beelink systems running at home. That's been a constant I've seen in tech since the 90s.