Do you use OpenRouter models with Continue?

No, but only because our work has to be done on-prem, so we use our self-hosted LLM deployments. If that weren't a limitation, we would for sure.