Yeah. If you want your own AI, the solution is to put a box online or rent cloud inference, and access it through a browser or a phone app.

We have on-prem AI for my microgrid community, but it’s a nascent effort and we can only run <100B models. Even at that size they’re extremely useful for most stuff, and we have a selection of models to choose from behind OpenAI/Ollama-compatible API endpoints.
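
For anyone curious, talking to one of those endpoints is just the standard OpenAI chat-completions call pointed at your own box instead of openai.com. A minimal sketch in Python (the host, port, model name, and key below are placeholders, not our actual setup; swap in whatever your own server exposes):

    # Minimal sketch: querying a self-hosted, OpenAI-compatible endpoint
    # (e.g. an Ollama server exposing /v1). Host, model, and key are
    # hypothetical placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://llm.local:11434/v1",  # hypothetical on-prem server
        api_key="not-needed-locally",          # Ollama ignores the key, but the client requires one
    )

    resp = client.chat.completions.create(
        model="llama3.1:70b",  # any sub-100B model the server has pulled
        messages=[{"role": "user", "content": "Summarize today's battery stats."}],
    )
    print(resp.choices[0].message.content)

The nice part of the OpenAI-compatible convention is that the same client code works whether the backend is a rented GPU in the cloud or the box in your garage; only base_url changes.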