Excellent article! Aptly describes what I have been feeling and thinking about the claims many AI optimists make.

---

> He produced a great deal of code, [...] He could not, when asked, explain how any of it actually worked. [...] When opinions were voiced even as high as a V.P., he fought back.

AI has democratized coding, but people have yet to understand that it takes expertise to design a system that can actually handle scale. Of course, you can build a PoC in a few hours with Claude Code, but that alone doesn't generate value.

One reason we see such examples in the workplace is the false marketing by CEOs and wrapper companies. It gives people false hope that they can "just build things," when in reality they can only build demos.

Another reason is that the incentives in almost every company have shifted to favour people who use AI. It's as if companies are purposefully pushing us to use AI in order to show demand for it, so that they can get the green light to build more data centers.

---

> So you have overconfident, novices able to improve their individual productivity in an area of expertise they are unable to review for correctness. What could go wrong?

This is a much-needed point to raise.

Many people around me claim that people my age are using AI to get 10x or 100x better at things. But how are they evaluating whether someone has actually improved that much?

I have seen this constantly on Twitter over the last few months. It's like a cult: someone with a large following builds something with AI, and people go mad and treat that person as some kind of god. I honestly don't understand it.

Just as an example: after Karpathy open-sourced autoresearch, you might have seen a variety of flavors applying the same idea across different domains. But I believe a Meta researcher pointed out that it is essentially a type of search method, much like what Optuna does for hyperparameter search.

Basically, people should think from first principles. But the current state of tech Twitter is pathetic; any lame idea + genAI goes viral, without even the slightest thought about whether genAI actually helps solve the problem or improves on the existing solution.

(Side note: I saw a blog post from someone at a top US university writing about OpenClaw x AutoResearch, and I was like, WTF?! Because, as we all know, OpenClaw was just hype that aged like milk.)

---

> The slowness was not a tax on the real work; the slowness was the real work.

Well said! People should understand that learning takes time, building takes time, and understanding things deeply takes time.

Someone building a web app with AI in 10 minutes is not ahead of, but behind, the person who goes one or two levels of abstraction deeper to understand how HTML/JS/Next.js works.

I strongly believe the tech industry will realise sooner or later that AI doesn't make people learn faster; it just speeds up repetitive manual tasks. And that is the only regard in which people should use it.

The (real) cognitive work of learning is still in human hands, and it is slow. That slowness is not a bottleneck; it's just how we humans are, and it should be respected.