What a way to ruin goodwill with the very community they are trying to court. I am on a Pro subscription to use with Claude Code, but it sounds like the days of using it are numbered. I guess I will be trying the latest offerings from OpenAI and Google tomorrow, and if they are satisfactory I might just switch. Moreover, I have been recommending Anthropic's API solutions to friends and clients up to now. After this dumb move, I will now be starting with this anecdote and then giving a very hedged recommendation.

Realistically, the future of all this is that open models become good enough that LLM-as-a-service becomes a commodity, with a race to the bottom on cost. Given where we are today, I can easily see open-weight models making Anthropic and OpenAI irrelevant for everyday development work in 2-3 years. (My reasoning: if my coding agent is 10x smarter than I am, how would I know whether it did all the right things? I want something of roughly my own intelligence for coding. I can see use cases, like independent pharma work, where supergenius-level intelligence is justified, but for coding, the ability of mere mortals to reason about the code is probably more important.)

I am on Google's $20/month plan, and I usually get about three half-hour coding sessions a week with AntiGravity using the Claude models. The limit using Gemini Pro models is much higher. I am retired so Google's $20 plan is sufficient for me, but I understand that people who are still working would need higher limits.

I am also on a $10/month plan with Nous Research, which supplies open models for their open-source Hermes Agent. I run Hermes inside a container on a dedicated VPS as a coding agent for complex tasks, and so far I find the $10/month plan is enough for about five to ten major tasks a month. I think it is also a good deal.

> the very community they are trying to court

After all, we may be just a data source and not their intended demographic all along.

The valuation is obviously based on the premise of their capturing the white-collar economy. OpenAI's charter says so openly. And Chinese robots will come for blue-collar workers next.

The economy, not the workers :) It feels like pretty soon white-collar workers will be in a "You have nothing to lose but your chains" situation. Except we are not as fit as the proletariat of the past.

In my experience, Codex is better than Claude Code in every way, and GPT-5.4 is on par with or better than Opus 4.6 at every coding task I ask of it.

You're really not going to miss CC. And OpenAI actually had the foresight to invest massively in compute, so they don't constantly run into usage and rate limits the way Anthropic does. I couldn't even use CC for more than a couple of complex tasks before I ran out of extra usage or session usage. It was a maddening productivity killer, and I just switched to Codex full time.

If I could get the equivalent of GPT-4 running locally, that would cover like 95% of what I need an LLM for. Tweak this dockerfile, gimme a bash script. I guess the context window probably isn't sufficient for the agent stuff, but I'm sure more context-efficient harnesses will be coming down the line.
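To make the "context-efficient harness" idea concrete, here is a toy sketch of one trick such harnesses use: keeping only the most recent conversation turns that fit a token budget. The helper name, the whitespace token count, and the sample history are all my own illustration, not how any real harness actually counts tokens:

```python
def trim_history(turns, budget):
    """Keep the most recent turns whose naive whitespace-token
    count fits within `budget`. Purely illustrative: real
    harnesses use proper tokenizers and smarter summarization."""
    kept = []
    used = 0
    # Walk backwards from the newest turn, stopping when the budget fills up.
    for turn in reversed(turns):
        cost = len(turn.split())
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [
    "user: tweak this dockerfile",
    "assistant: here is the change",
    "user: now gimme a bash script",
]
# With a tight budget, only the newest turn survives.
print(trim_history(history, 10))
```

A real harness would summarize the dropped turns rather than discard them, but the budget-driven selection loop is the same basic shape.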

I have an old Mac Mini with 32 GB of unified memory, and the following works for me for small local code changes:

    ollama run qwen3.6:35b-a3b-nvfp4

In addition to lacking an integrated web search tool, one drawback is that it runs more slowly than the cloud services. I find myself asking for a code or documentation change, and then spending two minutes on my deck getting fresh air while I wait for the slower response. When using a fast cloud service I can be a coding slave, glued to my computer. Still, I like running local when I can!


> I guess I will be trying the latest offerings from OpenAI and Google tomorrow, and if they are satisfactory I might just switch.

If Anthropic’s move is confirmed, my guess is other coding-agent providers might end up making similar moves.

GPT on xhigh isn't that bad.

This is the definition of a cartel.

Kimi K2.6 is supposedly good: https://www.kimi.com/blog/kimi-k2-6


GPT-5.4 has been performing great in my harness.

I have Codex and Gemini for spillover; they work well.