Many, many companies can afford to hire a junior engineer for $150k/year (plus employer payroll taxes, employee benefits, etc.).
Are you spending more than $150k per year on AI?
(Also, you're talking about the cost of your Cursor subscription, when the article is about Claude Code. Maybe try Claude Max instead?)
If it could do anything that a junior dev could, that’d be a valid point of comparison. But every time I’ve tried, it has been wildly slower and has fallen short.
A. You're working on some really deep thing that only world-class experts can do, like optimizing graphics engines for AAA games.
B. You're using a language that isn't in the top ~10 most popular in AI models' training sets.
C. You have an opportunity to improve your ability to use the tools effectively.
How many hours have you spent using Claude Code?
> A. You're working on some really deep thing that only world-class experts can do, like optimizing graphics engines for AAA games.
This is a relatively common skill. One thing I always notice about the video game industry is that it's much more globally distributed than the rest of the software industry.
Being bad at writing software is Japan's whole thing, but they still make optimized video games.
It’s a simple compiler optimization over Bayesian statistics. It’s master’s-level stuff at best, given that I’m on it instead of some expert. The codebase is mixed Python and Rust, neither of which is uncommon.
The issues I ran into are primarily “tail-chasing” ones - it gets into some attractor that doesn’t suit the test case and fails to find its way out. I re-benchmark every few months, but so far none of the frontier models have been able to make changes that solve the issue without bloating the codebase and failing the perf tests.
It’s fine for some boilerplate dedup or spinning up some web API or whatever, but it’s still not suitable for serious work.
Would you expect a junior engineer to perform better than this?
> like optimizing graphics engines for AAA games.
Claude would be worse than an expert at this, but it is a benchmarkable task. Claude can run experiments a lot quicker than a human can. The hard part would be ensuring that the results aren't just gaming your benchmark.
So the possibility that these tools still don't perform at the level some people need them to isn't an option?
It's insulting that criticism is often met with superficial excuses and insinuation that the user lacks the required skills.
That possibility is covered by A and B.
GP said 'falls short every time I’ve tried'. Note the word 'every'.
Companies are not comparing it straight to juniors. They're comparing a senior with the assistance of one or more juniors against a senior with the assistance of AI agents.
I feel like the comparison to a junior developer is also becoming fairly outdated. Yes, it is worse in some ways, but it is also VASTLY superior in others.
It’s funny that so many companies are making people RTO and spending all this money on offices to get “hallway” moments of innovation, while emptying those offices of the people most likely to bring a new perspective.
I am way more productive with $200/month of AI than I would be with $5,000/month of junior developer. And it isn’t close.