I will say, an underrated use case for even small, local LLMs is making command line tools drastically more accessible to laypeople
At this point there's no one I know who I think shouldn't use linux, and the people I know run quite a gamut of technical know-how compared to the circles of most other technical folks I know
Having an LLM directly and autonomously drive command line tools outside of a strict sandbox sounds like a ticking time bomb.
Thinking tokens: "The files I'm trying to read are missing, I need to figure out why. I see the problem, I accidentally ran rm -rf /home/user. Let me run git restore. No that didn't work. Let me try git reset --hard origin/HEAD. That still didn't work. I should inform the user."
Output: "I was unable to complete the task you requested. Restore /home/user and I will try again"
I tend to set people up with a chat interface, which is pretty good for asking for commands or scripts that the user then copies into their terminal. Most people I've gotten to try linux do pretty well with just a wiki, but once they run into something idiosyncratic they want to do, they tend to ask me for help. While I think giving models shell access is dangerous and should be handled carefully, the fact that they've been trained for that use case means they're pretty good at shell commands and can give you a working one a decent chunk of the time.

I'm never willing to inject an external dependency controlled by a company into people's computing unless they specifically ask for it, so this is usually a lightweight local model specialized in tool use, but not given shell access. It isn't much different from how they'd use a search engine for this purpose these days, but run locally it's more fault-tolerant to issues that affect their internet access and offers better privacy guarantees, albeit obviously a little less capable
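To make the pattern concrete, here's a minimal sketch of the "suggest, don't execute" setup: it asks a local Ollama server for a single shell command and only prints it for the user to review and copy. The endpoint and request shape are Ollama's standard generate API; the model name is just an example of a small tool-use-oriented model, swap in whatever you run locally.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

SYSTEM = (
    "You are a shell assistant. Reply with a single shell command "
    "that accomplishes the user's request, and nothing else."
)

def build_payload(request: str, model: str = "qwen2.5-coder:3b") -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,       # example model; use whatever small local model you have
        "system": SYSTEM,
        "prompt": request,
        "stream": False,      # get one complete JSON response instead of a stream
    }

def suggest_command(request: str) -> str:
    """Ask the local model for a command suggestion. Crucially, this only
    returns text for the user to read and copy -- it never runs anything."""
    body = json.dumps(build_payload(request)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

if __name__ == "__main__":
    # The user reads the suggestion and decides whether to paste it into a terminal.
    print(suggest_command("find all files over 100MB in my home directory"))
```

The key design choice is that the model's output never touches a shell: the human stays in the loop as the only thing that can actually execute a command, which sidesteps the rm -rf scenario above entirely.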