I mean, the other obvious answer is to plug into the Claude Code proxies that other model companies have already built for you:

https://docs.z.ai/devpack/tool/claude

https://www.cerebras.ai/blog/introducing-cerebras-code

or, I guess, one of the hosted GPU providers. (Rough sketch of the wiring below.)
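
For anyone unfamiliar, these all work the same way: Claude Code can be pointed at any Anthropic-compatible endpoint through environment variables. A minimal sketch, using z.ai as the example; the base URL and key here are placeholders, check each provider's docs for the real values:

    # point Claude Code at a third-party Anthropic-compatible endpoint
    # instead of the official API (values per your provider's docs)
    export ANTHROPIC_BASE_URL="https://api.z.ai/api/anthropic"
    export ANTHROPIC_AUTH_TOKEN="your-provider-api-key"
    claude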

If you're basically a homelabber and want an excuse to run quantized models on your own hardware, go for it, but don't lie and mutter under your tinfoil hat that it's a realistic replacement.

It's definitely a backup solution, but even while I was drafting the post, Qwen3 Coder Next was released. It's a functional stopgap if you want to keep things local. I do try to be up front in the post, telling people to "Reduce your expectations about speed and performance!"
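
For anyone who does want the local route, it's the same trick in miniature: serve a quantized model locally, put an Anthropic-Messages-API shim in front of it, and point Claude Code at the shim. A rough sketch only; the model filename, ports, and the choice of llama.cpp plus a LiteLLM-style proxy as the shim are my assumptions, not something from the post:

    # 1) serve a quantized model with an OpenAI-compatible API
    #    (the GGUF filename is a placeholder for whatever you pulled)
    llama-server -m Qwen3-Coder-Next-Q4_K_M.gguf --port 8080

    # 2) run a shim that translates Anthropic's Messages API to the
    #    OpenAI-style server above (e.g. a LiteLLM proxy; config omitted)

    # 3) point Claude Code at the shim
    export ANTHROPIC_BASE_URL="http://localhost:4000"
    claude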

(Also, I love your podcast!)