No, but yes? OmniCoder 9B at Q6 fits on my 9070 XT with 200k+ tokens of context, and it works pretty well with OpenCode. It is for sure the best local model that I've managed to squeeze onto my GPU, and it even works at 120k context at Q3 on an 8GB RX 580 GPU.

I can't imagine trying to use this model on either GPU for real work. I can use much bigger and faster models on the $3 Chutes subscription or the $10 OpenCode Go subscription.

Even so, I am still excited. I don't feel like there was even a model worth using with a tool like OpenCode 6 to 9 months ago. I like the way things are heading, and I am looking forward to seeing how capable coding models of this size are in another 6 to 9 months!

You can cram absurd context into a card now, but none of that matters once you hit the VRAM wall and the whole thing slows to a crawl. Cloud is cheaper. Local still matters for privacy and weird adapter stuff, but 'usable for work' is a much higher bar than 'looks decent on benchmarks' when the task is chewing through a repo without latency going to hell.
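For anyone curious where that wall actually sits, here's a rough back-of-envelope sketch: weights plus KV cache. All of the architecture numbers here (layer count, GQA heads, head dim, KV precision) are made-up illustrative assumptions, not OmniCoder 9B's real config, so treat the totals as ballpark only.

```python
# Rough VRAM estimate: quantized weights + KV cache.
# Architecture numbers are ASSUMPTIONS for illustration,
# not the real OmniCoder 9B config.

def weight_gib(params_billions, bits_per_weight):
    """Approx GiB for quantized weights."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

def kv_cache_gib(ctx_tokens, n_layers, n_kv_heads, head_dim, bytes_per_elem):
    """Approx GiB for KV cache: K and V each store
    n_kv_heads * head_dim values per layer per token."""
    return 2 * ctx_tokens * n_layers * n_kv_heads * head_dim * bytes_per_elem / 2**30

w = weight_gib(9, 6.5)  # Q6-ish quant, ~6.5 effective bits/weight
kv = kv_cache_gib(200_000, 36, 8, 128, 1)  # assumed GQA config, 8-bit KV
print(f"weights ~{w:.1f} GiB, KV ~{kv:.1f} GiB, total ~{w + kv:.1f} GiB")
```

Under these assumptions the KV cache at 200k tokens is roughly double the weights, which is why KV quantization and GQA layout matter so much more than the headline model size once you push context this far.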