The readme (and probably most of the project) is likely generated by an LLM - chances are we'll learn more reading the prompts than the readme.
I actually tried this a few days back, before the Claude Code EULA enforcement, and went through the same thing.
1. I honestly had a hard time parsing from the readme what this is supposed to do or provide over a standard opencode setup. It is rather long-winded and has a lot of bombastic claims, but it doesn't really explain what it does.
2. Regardless, the claims are pretty enticing. Since I was in experiment mode and already had a VM running to try out some other stuff, I gave it a try.
3. From what I can tell, it's basically a set of configs and plugins to make opencode behave a certain way. Kinda like what lazyvim/astronvim are to neovim.
4. But for all its claims, it had a lot of issues. The setup is rather brittle and was hard to get working out of the box (and this is from someone who is pretty comfortable tinkering with vim configs). When I did manage to get it working (at least I think it's working), it's kinda meh? It uses up way more tokens than default opencode, for worse (or at least less consistent) results.
5. FWIW, I don't find the multi/sub-agent workflow to be all that useful for most tasks, or at the very least it's still too early to really be useful IMO, kinda like the function-calling phase of ChatGPT.
6. I was actually able to grok most of Steve Yegge's gastown post from the other day. He made up a lot of terms that I think made things even more confusing, but I recognized many of the concepts as things I had also thought of in an "it would be cool if we could do X/Y/Z" manner. Not so with this project.
TBH, at this point I'm not sure if I'm using it wrong, if I'm missing something, or if this is just how people market their projects in the age of LLMs.
edit: what I tried the other day was code-yeongyu/oh-my-opencode, not this (fork?) project
Re point 5, the simplest argument in favor of sub-agent workflows is that they keep the main agent's context free of a large amount of task-specific working context. This lets the main context survive longer before you need compaction. Compaction in CC is a major loss of context IME; it's generally the point where I reset the conversation, since the compacted conversation is practically as bad as a new one but already has a bunch of wasted space.
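To make that concrete, here's a toy sketch in plain Python (nothing to do with opencode's or CC's real APIs; all the names are made up) of why delegation helps: the sub-agent chews through the verbose intermediate output in a throwaway scope, and only a short summary ever lands in the main history.

    # Toy illustration only; not a real agent framework.
    def run_subagent(task: str) -> str:
        """Hypothetical sub-agent: piles up verbose working context
        internally, but returns only a compact result to the caller."""
        working_context = []
        for step in range(5):
            # Stand-in for long tool outputs, diffs, file dumps, etc.
            working_context.append(f"[{task}] step {step}: " + "x" * 2000)
        # Everything above is discarded when this function returns.
        return f"'{task}' done in {len(working_context)} steps."

    main_context = ["system prompt", "user: refactor modules A and B"]

    for task in ("refactor module A", "refactor module B"):
        result = run_subagent(task)      # verbose work happens elsewhere
        main_context.append(f"delegated: {task}")
        main_context.append(f"result: {result}")

    # The main context grew by a couple of short lines per task, so it hits
    # the compaction threshold much later than if every tool call's output
    # had been appended to it directly.
    print(sum(len(m) for m in main_context), "chars in main context")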
How I wish we could just see and patch up the raw context before it goes out. If I could hand-edit a compaction, it would result in better execution going forward and a better mental model on my end. It's such a small feature, but Anthropic would never give it to us.
Thanks for pointing me to the Gas Town blog post. That was...a lot. I'm going to need a lot of time to digest everything that was in there.