I just so happen to be doing a price comparison for different cloud LLM providers right now. It turns out some of the cheapest providers with the highest limits are ones you might not have heard of.

OpenCode Go has the simplest plan and the highest rate limits of any subscription plan with multiple model families, and it's $10/month ($5 for the first month). With the cheapest model in the plan (MiniMax M2.5), you get 13x the rate of Claude Max at 1/10th the price. The most expensive model (GLM 5.1) gives you a rate of 880 per 5h, which is more than any other $10 plan. I don't expect this price to last; it makes no sense. OpenCode also has a very generous free tier with higher rates than some paid plans, but the free models do collect data.

The cheapest plan of all is free and unlimited - GitHub Copilot. They offer 3 models for free with (supposedly) no limit: GPT-4o, GPT-4.1, and GPT-5-mini. I would not suggest coding with them, but for really basic stuff, you can't beat free. I would not recommend their paid plans; they actually have the lowest limits of any provider, plus the most obtuse per-token pricing. (FYI, GitHub Copilot OAuth is officially supported in OpenCode)

The next cheapest unlimited plan is BlackBox Pro. Their $10/month Pro plan provides unlimited access to MiniMax M2.5. This model is good enough for coding, and the unlimited number of requests means you can keep churning with subagents long after other providers have hit a limit.

The next cheapest is MiniMax Max, a plan from the makers of MiniMax. For $50/month, you get 15,000 requests per 5 hours to MiniMax M2.7. This is not as cheap as OpenCode Go, which gives you 20,000 requests of MiniMax M2.5 for $10, but you are getting the newer model.
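For what it's worth, the dollars-per-request gap between those two plans is easy to sanity-check. This is just a quick sketch using the numbers quoted above; the prices and limits are a snapshot from this thread, not authoritative pricing:

```python
# Rough cost math for the two MiniMax options discussed above.
# Figures are the ones quoted in this thread and may be out of date.
plans = {
    "MiniMax Max (M2.7)": {"price_per_month": 50, "requests_per_window": 15_000},
    "OpenCode Go (M2.5)": {"price_per_month": 10, "requests_per_window": 20_000},
}

for name, p in plans.items():
    # Dollars per 1,000 requests, within a single 5-hour window
    cost_per_1k = p["price_per_month"] / (p["requests_per_window"] / 1_000)
    print(f"{name}: ${cost_per_1k:.2f} per 1k requests")
```

By that measure you're paying roughly 6-7x more per request on MiniMax Max, which is the premium for the newer model.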

If you don't want to use MiniMax, the next cheapest is Chutes Pro. For $20/month, you get a monthly limit of 5,000 requests.

I'll be adding more of these as I find them to this spreadsheet: https://codeberg.org/mutablecc/calculate-ai-cost/src/branch/...

Note: This calculation is inaccurate, for multiple reasons. For one, it's entirely predicated on working 8 hours a day, 22 days a month; I'll recalculate at some point to find the cheapest option if you wanted to churn 24/7. For another, some providers (coughANTHROPIC) don't actually tell you what their limits are, so we have to guess and use an average. But based on my research, the calculations seem to match up with the per-request API cost reported at OpenRouter. Happy to take suggestions on improvements.
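To make the methodology concrete, here's roughly how I'd reproduce the spreadsheet's numbers. This is a sketch under the 8h/day, 22-day assumption stated above; the ceil-based window count is my own guess at how partial rate-limit windows should be handled:

```python
import math

# Assumptions from the thread: 8 working hours/day, 22 days/month,
# and rate limits that reset on a 5-hour window.
HOURS_PER_DAY = 8
DAYS_PER_MONTH = 22
WINDOW_HOURS = 5

def monthly_requests(requests_per_window: int) -> int:
    # Each working day spans ceil(8/5) = 2 rate-limit windows.
    windows_per_day = math.ceil(HOURS_PER_DAY / WINDOW_HOURS)
    return requests_per_window * windows_per_day * DAYS_PER_MONTH

def cost_per_request(price_per_month: float, requests_per_window: int) -> float:
    return price_per_month / monthly_requests(requests_per_window)

# Example: OpenCode Go at $10/month with 20,000 requests per 5h window
print(f"${cost_per_request(10, 20_000):.7f} per request")
```

Switching to a 24/7 churn model would just mean replacing the windows-per-day term with 24/5 and using every day of the month.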

I have been happy so far with the value Copilot has brought, but for the past few weeks I have felt the chokehold on the number of requests.

I have had the chance to test the main Chinese models through OpenRouter, but the pay-as-you-go model is expensive compared to a subscription. Still, I don't want to be married to a single provider.

Thanks for bringing OpenCode Go to my attention. Your comparison is the research I didn't know I needed, and I will be cancelling my Copilot subscription to replace it with OpenCode Go right away.

Yep, I was comparing opencode go ($10/month) with copilot pro ($10/month) this morning.

opencode go gives about 14x the requests of copilot pro. I was like, there must be something not right.

Then I compared the best model on opencode go, GLM 5.1, with anthropic's opus 4.6. Yes, opus is better on most benchmarks, but glm 5.1 is not too far behind.