This made me want to laugh so hard. I think this idea came from the same place as beta testing “Full Autopilot” with human guinea pigs. Great minds…

Jokes aside, Anthropic's CEO commands a tad more respect from me for taking a more principled approach and sticking to it (at least better than their biggest rival). Also for inventing the code agent in the terminal category.

All things considered, Anthropic seems to be doing most things the right way, and they seem more focused on professional use than OpenAI and Grok. Opus 4.5 is really an incredibly good model.

Yes, they know how to use their safety research as marketing, and yes, they got a big DoD contract, but I don’t think that fundamentally conflicts with their core mission.

And honestly, some of the research they publish is genuinely interesting.

Dario is definitely more grounded than Sam. I thought Anthropic would get crowded out between Google and the Chinese labs, but they might be able to carve out a decent niche as the business-focused AI for people who are paranoid about China.

They didn't really invent terminal agents, though. Aider was the pioneer there; they just made it more autonomous (Aider could do multiple turns with some config, but it was designed to have a short leash since models weren't so capable when it was released).

I acknowledged the point about Aider being the first terminal agent in a different comment. I am equally surprised at how well Anthropic has done compared to the rest of the pack (Mistral comes to mind: it had a head start but seems to have lost its way).

They have definitely found a good product-market fit with white-collar professionals. Opus 4.5 strikes the best balance between smarts and speed.

> Also for inventing the code agent in the terminal category.

Maybe I am wrong, but wasn't Aider first?

They are not at all the same thing. For starters, even to this day, Aider doesn't support ReAct-based tool calling.

It’s more like an assistant that advises you rather than a tool that you hand full control to.

Not saying that either is better, but they’re not the same thing.

Aider was designed to do single turns because LLMs were way worse when it was created. That being said, Aider could do multiple turns of tool calling if command confirmation was turned off, and it was trivial to configure Aider to do multiple turns of code generation by having a test suite that runs automatically on changes and telling Aider to implement functionality to get the tests to pass. It's hard-coded to only do 3 autonomous turns by default, but you can edit that.
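For reference, the kind of setup I mean looked roughly like this (from memory, so double-check the flag names against aider --help):

    aider --yes --test-cmd "pytest" --auto-test

That auto-confirms edits, runs the test suite after each change, and feeds any failures back to the model so it keeps iterating until the tests pass.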

Yes, but unfortunately it appears that Aider development has completely stopped. An MCP support PR was open for over half a year; many people validated it and worked on it, but the project owner never responded.

It’s a bit of a shame, as there are plenty of people that would love to help maintain it.

I guess sometimes that’s just how things go.

Aider wasn't really an agentic loop before Claude Code came along

I would love to know more. I used Aider with local models and it behaved like Cursor in agent mode. Unfortunately I don't remember exactly when (at least 6+ months ago). What was your experience with it?

I was a heavy user, but stopped using it mid-2024. It was essentially providing codebase context and editing and writing code as you instructed - a decent step up from copy/paste to ChatGPT, but not working in an agentic loop. There was also logic to retry code edits if they failed to apply.

Edit: I stand corrected, though. I did a bit of research, and Aider was considered an agentic tool by late 2023, with auto lint/test steps that feed back to the LLM. My apologies.

Plenty of Aider-era tools were, though, like my own gptme, which is about as old as Aider.

> Also for inventing the code agent in the terminal category.

Not even close. That distinction belongs to Aider, which was released nearly two years before Claude Code.

Oh cool, I didn’t know that.

Let me be a date-time nerd for a split second:

- Claude Code: the “Introducing Claude Code” video was published on 24 Feb 2025 [0]

- Aider's oldest known GitHub release, v0.5.0, is dated 8 Jun 2025 [1]

[0]: https://www.youtube.com/watch?v=AJpK3YTTKZ4

[1]: https://github.com/Aider-AI/aider/releases/tag/v0.5.0

That’s 8 June 2023, not 2025... almost 2 years before Claude Code was released.

I remember evaluating Aider and Cursor side by side before Claude Code existed.

Hey, your dates are wildly wrong... It’s important people know Aider is from 2023, 2 years before CC.

Wrong. So wrong, in fact, that I’m wondering if it’s intentional. Aider was June 2023.

Sorry, editing it out! Thanks for pointing it out.

EDIT: I was too late to edit it. I have to keep an eye on what I type...

Anthropic isn't any more moral or principled than the other labs; they just saw the writing on the wall that they can't win, so they decided to focus purely on coding and then sell their shortcomings as some kind of socially conscious effort.

It's a bit like the poorest billionaire flexing how environmentally aware they are because they don't have a 300ft yacht.

Maybe - they’ve certainly fooled me if that’s the case. I took them at face value, and so far they haven’t done anything out of character that would make me wary of them.

Their models are good. They did not train on user prompts from day one (Google is the worst offender here among the three). They have been shockingly effective with “Claude Skills”. They contributed MCP to the world and encouraged its adoption, and now they have done the same for Skills, turning it into a standard.

They are happy to be just the tool that helps people get the job done.

How do you know?