I just finished an LLM-enabled shell: https://yoshell.ai/
And now I’m thinking about ways to make it even better
It’s rad already though. I’m super proud of it
This looks great. Could it use some built-in protection against destructive actions by the LLM?
That’s exactly how it’s designed. The flow is:
- you type a command that starts with yo
- the clanker thinks
- the clanker comes back with a shell command that fits your yo command and fills it in as if you had retrieved it from your shell history by pressing the up arrow
- nothing runs until you press enter. You can also edit the command first, just like you can edit commands retrieved from your shell history (there's a rough sketch of this pre-fill trick after the list)
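To make the pre-fill step concrete, here's a minimal sketch in Python using the standard readline module. This is not yosh's actual implementation, and suggest_command is a hypothetical stand-in for the LLM call; it just shows how a suggestion can land in an editable input buffer instead of executing directly:

    import readline
    import subprocess

    def suggest_command(request: str) -> str:
        # Hypothetical stand-in for the LLM call: map a natural-language
        # request to a candidate shell command.
        return "ls -la"

    def yo(request: str) -> None:
        suggestion = suggest_command(request)
        # Pre-fill the line buffer so the suggestion appears already typed,
        # as if it had been recalled from history with the up arrow.
        readline.set_startup_hook(lambda: readline.insert_text(suggestion))
        try:
            # Nothing executes until the user presses enter; they can edit
            # or clear the line first.
            command = input("$ ")
        finally:
            readline.set_startup_hook(None)
        if command.strip():
            subprocess.run(command, shell=True)

    yo("list everything in this directory")

The key point is that the enter key itself is the approval gate: the LLM's output sits in an editable buffer, and the user decides whether and in what form it runs.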
I personally find this approval flow to spark more joy than what the other agent TUIs and CLIs do - they usually pop a modal yes/no/something-else dialog, and that's jarring, because modality is jarring UX. What yosh does feels groovy because it's so much like just retrieving something from history, or like a speedrun of opening a browser, asking Google or a clanker, and copy-pasting.