> Then when Claude is down for an hour, they get visibly angry and don't remember how to do anything pre-Claude :)

The drug is scary when everyone is depending on it. I wonder what the future is like.

The future is perpetually dealing with the fallout from all the vibe coding as the pool of people who'd have a shot at fixing it gets smaller and smaller. Shitty will be the new normal.

I feel like it will be like going back to the 80s, when PCs became the norm and most programmers and hobbyists could code without the need for a University or a Corporation. Thousands of shareware apps you had to navigate, everyone trying to solve the same problems from different angles...

I do agree quality will be missed, and shadow IT will again be a big issue, like it was at the end of the 80s and early 90s.

I imagine a much darker future, where almost every enterprise system known for stability is now unstable.

Planes falling out of the sky, trains crashing into each other, pacemakers downloading updates and freezing.

Coding on 8 and 16 bit home computers still required some skills that most vibe coders certainly lack.

> most programmers and hobbyists could code without the need for a University or a Corporation.

I don't think so. Back then, the pool of people doing such a thing basically self-selected for intelligent, motivated types who were capable of learning on their own. The new "programmers" "programming" via Claude Code are going to be very different from those hobbyists you're talking about.

This is a comically self-absorbed perspective.

Why are people making things with Claude Code if not because they’re motivated?

I think the point is that you had to be deeply curious and more of a "hacker" or "computer nerd" type to be able to figure things out.

But I think the same applies to not just AI but various tools that have abstracted away the complexity of things over the years.

For example, I would imagine the average person deploying some sort of web app or API today knows far less about networking and infrastructure than someone doing it 10 to 20 years ago.

Yes, exactly. I'm reminded of the articles detailing how Gen Z has fewer computer skills than previous generations because computing has become so abstracted -- turn on iPhone, tap button. "What's a directory?" -- files just kind of exist on these devices without any real notion of _where_, as far as the user knows. Stuff like that.

Compare that to, say, 30-ish years ago. If you wanted to do something as simple as play a computer game, you had to know how to navigate a command line, know about device drivers, make a boot disk, etc. Users were a whole lot closer to the realities of what makes computing work. And there was no internet, at least as we know it now. You really had to have a certain mindset to be a developer.

It's a far cry from "hey Claude make an app."

Knowing that genuine, un-incentivized creativity is exceedingly rare (especially in the West), you can assume that the answer looks something like a carrot or a stick.

Because it's "easy"?

Because it's "easy" (until they hit a wall)

Once they hit a wall, that is where you find out whether they are motivated or not

> Once they hit a wall, that is where you find out whether they are motivated or not

Yep. That has to happen first.

Eventually there will be an incident with bad software at a hospital or bank that leaves some people dead or broke.

Then regulators will take things seriously.

This is exactly what Uncle Bob predicted in his talk "The Future Of Programming" [0] 10 years ago, way before LLMs.

[0] https://www.youtube.com/watch?v=ecIWPzGEbFc

Which is why the medical device software industry is so heavily regulated after the Therac-25 incident. Oh, wait, it's not.

https://en.wikipedia.org/wiki/Therac-25

What regulators?

> as the pool of people who'd have a shot at fixing it gets smaller and smaller

Sounds like job prospects to me.

> Shitty will be the new normal.

I’ve heard the same from the best devs I’ve known, and from some who merely thought themselves the best, long before LLMs were ever a thing.

I’m sure others heard the same when JavaScript and Python became near ubiquitous. When PHP emerged. When C supplanted Fortran and COBOL. When these two took over from Assembly. When punch cards went the way of the dodo.

There’s always someone for whom shitty is becoming the new normal. If that makes it a rule, what do we make of that rule?

There are different magnitudes of shitty.

Also, we went from a compiler and IDE with a debugger, profiler, and built-in help that fit on a 3.5" disk and loaded on machines with 640 KiB of RAM (Turbo Pascal) to chat apps and password managers that are hundreds of megabytes and regularly gobble up more than a gigabyte of memory because they ship with their own browser.

Something is lost along the way.

> I’m sure others heard the same when JavaScript and Python became near ubiquitous. When PHP emerged.

You heard right! Most JavaScript and PHP in the world _is_ profoundly shitty. It's taken 20 years of intense research to make JavaScript compilers that are almost good enough to mostly optimize away the design foibles of the language.

To be fair, with how powerful our computers are, it's a pity that Electron apps like Bitwarden and Spotify are so slow and consume so many resources. I do miss the time when a lot of apps were snappy.

In the future I'll be able to argue that, because of LLMs, there's no longer any reason to pick JavaScript, Java, Python, etc. just for the available workforce. Then the technology itself can be chosen because it's fit for the job.

As you say - "good enough" is always the normal.

Maybe it’s a process. Many of the transitions you mentioned did bring shitty apps (not all of them, the ones replacing tech for tech were mostly ok, the ones democratizing dev did come with a quality drop), but eventually Darwinism will take effect and trim the long tail.

Coding per se is not hard. Proper engineering is. I do hope this change brings a change in focus (people training in algorithms, efficiency, solid development patterns), but I am afraid it won’t be the case.

"With a punchcard at least, I can verify what the input is! Unlike those new 'transistors' that are so unreliable!"

What do you think a transistor is?

I'm working on a possibly-quixotic tool to mitigate the "cognitive debt" from AI-assisted development. Not everybody agrees that this is a problem. Maybe some teams that are only writing specs and reviewing plans still understand their products adequately. If you have an opinion either way, I'd appreciate hearing from you.

I think there are some pretty good ways to understand it now.

When the electricity goes out, (most) people get similarly upset. No electricity means no internet, and all of a sudden everything that people had planned to do can’t be done until the power returns.

Same as anything else. It’ll go down sometimes, people will take a break and chat, then it will come back up.

Like Slack or GitHub or AWS or whatever. It’s almost always a net positive to wait vs do it yourself.

I'm more scared at everyone outsourcing their thinking to a private, for-profit company.

What could possibly go wrong.

Thinking, yes, but also secrets, access and effective control of important services in every country and company worldwide, centralized in the US (or anywhere else) where the NSA can take the driver's seat at any time. "AI" is the ultimate sleeper agent.

I have been saying things to this effect for a few years now, and have literally been laughed at. I feel like that guy who suggested that doctors should wash their hands before operating on patients -- they laughed at him too, before they put him in an asylum. What's going to happen is that everyone who realizes these policies are a mistake will quietly retcon their own role in that mistake, while scapegoating everyone they don't like.

Also, would bet money that the derived data from the meeting-summarizers is being sold to hedge-funds, to give them a bit of an edge.

> Also, would bet money that the derived data from the meeting-summarizers is being sold to hedge-funds, to give them a bit of an edge.

And if it isn't already, you can bet that they're probably about to start.

All those "difficult to program but easy-if-time-consuming-for-human" tasks, will 1000% be farmed out to models at unprecedented scales.

Yeah. I mean, I think (as someone similar to you) the truth is not rewarded because we are in an age where deception is the norm. Or maybe that's how it has always been for humans, and we were simply too naive and gullible to notice before?

The incentives reward this kind of behavior. I wonder, then, how to operate in a world so low on moral values and ethics. Does it mean I have to act the same way to have a fair shot? I'd like to think not.

I think the scenario was more: if everyone really depends on Claude, then nothing critical (medical software, aviation, traffic control, ...) had better break while Claude is offline.

The good thing is we've learned this already from cloud. When one AWS region is degraded we all failover to other regions, and then other cloud providers, right? ...right?
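To be fair, the "failover" most of us actually have is a client-side retry across regional endpoints, something like the sketch below (the URLs are hypothetical, and it conveniently ignores the hard parts: replicated state, DNS TTLs, sessions):

    import requests

    # Hypothetical regional endpoints, tried in order of preference.
    REGIONAL_ENDPOINTS = [
        "https://api.us-east-1.example.com",
        "https://api.eu-west-1.example.com",
        "https://api.ap-southeast-2.example.com",
    ]

    def fetch_with_failover(path: str, timeout: float = 2.0) -> requests.Response:
        """Try each regional endpoint until one answers; raise if all are down."""
        last_error = None
        for base in REGIONAL_ENDPOINTS:
            try:
                resp = requests.get(f"{base}{path}", timeout=timeout)
                resp.raise_for_status()  # treat HTTP errors as a degraded region
                return resp
            except requests.RequestException as err:
                last_error = err  # region degraded; fall through to the next one
        raise RuntimeError("all regions appear to be down") from last_error

Of course, what makes real failover hard is the state behind those endpoints, not the retry loop.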

At least some of the projects in these industries now specify strict no-AI-use policies in contracts. I participate in a few of these, and it’s becoming a bit of a pain, because all dev tool vendors insist on adding AI features, and if there’s no way to turn them off completely we have to migrate away.

However, the temptation of productivity gains is strong, and a few of the customers are looking into relaxing these rules.

What about when you work at Anthropic?

> The drug is scary when everyone is depending on it. I wonder what the future is like.

I can't wait for a Hollywood blockbuster that'll pretty much be science non-fiction.

> wonder what the future is like

Probably "don't do anything to upset AI companies or you will effectively become a handicapped person"

Not that different from life in China: "don't do anything to upset Tencent and AliPay or you will become an outcast"

Or life in the US if you're a content creator: "don't do anything to upset Meta or YouTube or you will not be able to pay your rent"

The future: ToS basically becomes law, and you will be stripped of your own second brain if you violate it or say anything they deem "sensitive"

Full of security holes

Seems far less scary to me than, say, building an electrical grid in a cold climate, where if it fails for a few days people start to die. Oh wait...

Why would they die in a cold climate? I would expect them to die in a hot climate (no AC: heat stroke; no refrigerator: food poisoning), not in the cold, where they would have wood or gas heating.

Electricity is very predictable and not under the control of one or two nations.

Which is more likely once they start vibe-coding grid managers.

It's the same, on steroids.

The same was said about electricity.

Imagine what happens if computers stop working* and you have to go back to pen and paper for a few days.

* ransomware attack, fire in the server room, database HDD crash, car accident takes out the internet connection, ...