Just putting it out there that OpenCode lets you edit your system prompt, and choose a model that isn't bonkers expensive.

  {
    "agent": {
      "subagent-coder-mini": {
        "description": "Assign this subagent for small, well-defined tasks performed quickly",
        "mode": "primary",
        "prompt": "{file:./prompts/my-custom-prompt.md}",
        "model": "deepseek-v4-flash"
      }
    }
  }
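For reference, the {file:...} syntax pulls the file's contents in as the agent's system prompt, so the referenced file is just plain markdown. A minimal sketch of what ./prompts/my-custom-prompt.md could contain (contents here are purely illustrative, not from any real setup):

  You are a focused coding assistant for small, well-defined tasks.

  - Make the smallest change that satisfies the request.
  - Do not refactor unrelated code.
  - If the task is ambiguous, ask one clarifying question before editing.

Keeping the prompt in its own file means you can version it and reuse it across agent definitions.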
(I actually think OpenCode's UX sucks, but there isn't much else out there that's better. Aider has been virtually abandoned by its one maintainer (no shade intended, it just is what it is); a fork of Aider looks promising, but it's not necessarily the experience you want; there are a dozen VSCode plugins, but we don't all wanna use VSCode. I expected there'd be way more usable agents out there, but there aren't.)

You might like to try some Pi [0]

[0] https://pi.dev/

same, i really dislike opencode's UX. there are actually a lot of agent harnesses. check out terminal bench 2.0 for example. dirac.run seemed to be making the rounds earlier

The hashing and other optimizations in Dirac seem kind of brilliant in an "it was obvious (once someone already thought of it)" kind of way, but the active avoidance of MCP seems weird when that and agent plugins are by far the easiest ways to reuse skills now.

will using claude via opencode get me banned this week or is that not until next week?

You will not get banned if you use the API. AFAIK you can't use the subscription with other harnesses. That is how I understood it.

OpenAI subscriptions are allowed with OpenCode, Anthropic subscriptions are not

local agentic coding context windows are too small, and default opencode tries to scan every file, which uses up all the context and messes things up

local is a pipe dream at the moment

I’m glad some people get utility out of it, though. If this were still 2023-2024 I would mess around and make it work, but corporate policies in enough places have been updated to allow the leading closed-source models and clouds for agentic coding.

Deepseek 4 Flash isn't a local model, unless you've got a dozen high-end GPUs running.