I'll take the other side of this.

Professional software engineers, like many of us, have a big blind spot when it comes to AI coding, and that's a fixation on code quality.

It makes sense to focus on code quality. We're not wrong. After all, we've spent our entire careers in the code. Bad code quality slows us down and makes things slow/insecure/unreliable/etc for end users.

However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

There are two forces contributing to this: (1) more people coding smaller apps, and (2) improvements in coding models and agentic tools.

We are increasingly moving toward a world where people who aren't sophisticated programmers are "building" their own apps with a user base of just one person. In many cases, these apps are simple and effective and come without the bloat that larger software suites have subjected users to for years. The code is simple, and even when it's not, nobody will ever have to maintain it, so it doesn't matter. Some apps will be unreliable, some will get hacked, some will be slow and inefficient, and it won't matter. This trend will continue to grow.

At the same time, technology is improving, and the AI is increasingly good at designing and architecting software. We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving. And even when it finally does, if such a point comes, there will still be many years of improvements in tooling, as humanity's ability to make effective use of a technology always lags far behind the invention of the technology itself.

So I'm right there with you in being annoyed by all the hype and exaggerated claims. But the "truth" about AI-assisted coding is changing every year, every quarter, every month. It's only trending in one direction. And it isn't going to stop.

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

Strongly disagree with this thesis, and in fact I'd go completely the opposite: code quality is more important than ever thanks to AI.

LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

I'm using AI on a pretty shitty legacy area of a Python codebase right now (like, literally right now, Claude is running while I type this) and it's struggling for the same reason a human would struggle. What are the columns in this DataFrame? Who knows, because the DataFrame is getting mutated depending on the function calls! Oh yeah and someone thought they could be "clever" and assemble function names via strings and dynamically call them to save a few lines of code, awesome! An LLM is going to struggle deciphering this disasterpiece, same as anyone.
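To make that concrete, here's a minimal hypothetical sketch of the pattern I mean (function and column names are invented, not the real codebase): in-place DataFrame mutation plus string-assembled dynamic dispatch. Neither grep nor an LLM can see what shape the data has without mentally executing the whole chain:

```python
import pandas as pd

def _clean_orders(df: pd.DataFrame) -> None:
    # Mutates the caller's DataFrame in place: renames and drops columns.
    df.rename(columns={"amt": "amount"}, inplace=True)
    df.drop(columns=["legacy_id"], inplace=True, errors="ignore")

def _enrich_orders(df: pd.DataFrame) -> None:
    # More in-place mutation: adds a column later code silently depends on.
    df["amount_usd"] = df["amount"] * df["fx_rate"]

def process(df: pd.DataFrame, steps: list[str]) -> pd.DataFrame:
    # The "clever" part: function names assembled from strings and
    # dispatched dynamically, so no static reading of the code reveals
    # which functions run or what columns df ends up with.
    for step in steps:
        globals()[f"_{step}_orders"](df)
    return df

df = pd.DataFrame({"amt": [10.0], "fx_rate": [1.1], "legacy_id": [7]})
df = process(df, steps=["clean", "enrich"])
print(df.columns.tolist())  # ['amount', 'fx_rate', 'amount_usd']
```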

Meanwhile for newer areas of the code with strict typing and a sensible architecture, Claude will usually just one-shot whatever I ask.
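For contrast, a sketch of the style that tends to one-shot (again illustrative, not the actual code): frozen dataclasses and pure functions make the data's shape explicit in every signature, so there's nothing to guess:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    amount: float
    fx_rate: float

@dataclass(frozen=True)
class EnrichedOrder:
    amount: float
    amount_usd: float

def enrich(order: Order) -> EnrichedOrder:
    # Pure function: no mutation, and the input/output shapes are
    # fully declared in the signatures, so nothing has to be inferred.
    return EnrichedOrder(amount=order.amount,
                         amount_usd=order.amount * order.fx_rate)

orders = [Order(amount=10.0, fx_rate=2.0)]
enriched = [enrich(o) for o in orders]
print(enriched[0].amount_usd)  # 20.0
```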

edit: I see most replies are saying basically the same thing here, which is an indicator.

I agree entirely with your statement that structure makes things easier for both LLMs and humans, but I'd gently push back on the mutation point. Just as mutation is fine for humans, it also seems to be fine for LLMs: structured mutation (we know what we can change, where we can change it, and to what) works just fine.

Your example with the dataframes is completely unstructured mutation typical of a dynamic language and its sensibilities.

I know from experience that none of the modern models (even cheap ones) have issues dealing with global or near-global state and mutating it, even navigating mutexes/mutices, conds, and so on.
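To illustrate what I mean by structured mutation, a hypothetical sketch: the shared state is mutable, but every change goes through one narrow, lock-guarded interface, so it's always clear what can change, where, and to what:

```python
import threading

class Counter:
    """Shared mutable state, but structured: the only way to change
    _value is through these methods, and always under the lock."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._value = 0

    def increment(self) -> None:
        with self._lock:
            self._value += 1

    def value(self) -> int:
        with self._lock:
            return self._value

counter = Counter()
threads = [threading.Thread(target=counter.increment) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value() == 8  # deterministic despite concurrent mutation
```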

> LLM-assisted coding is most successful in codebases with attributes strongly associated with high code quality: predictable patterns, well-named variables, use of a type system, no global mutable state, very low mutability in general, etc.

That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter cares about the style of their prose.

We don't see these apps because we're professional software engineers working on the other stuff. But we're rapidly approaching a world where more and more software is created by non-professionals.

> That's all very true, but what you're missing is that the proportion of codebases that need this is shrinking relative to the total number of codebases. There's an incredible proliferation of very small, bespoke, simple, AI-coded apps that are nonetheless quite useful. Most are being created by people who have never written a line of code in their life, who will do no maintenance, and who will not give two craps how the code looks, any more than the average YouTuber cares about the aperture of their lens or the average forum commenter cares about the style of their prose.

I agree that there will be more small, single-use utilities, but you seem to believe that this will decrease the number or importance of traditional long-lived codebases, which doesn't make sense. The fact that Jane Q. Notadeveloper can vibe code an app for tracking household chores is great, but it does not change the fact that she needs to use her operating system (a massive codebase) to open Google Chrome (a massive codebase) and go to her bank's website (a massive codebase) to transfer money to her landlord for rent (a process which involves many massive software systems interacting with each other, hopefully none of which are vibe coded).

The average YouTuber not caring about the aperture of their lens is an apt comparison: the median YouTube video has 35 views[0]. These people likely do not care about their camera or audio setup, it's true. The question is, how is that relevant to the actual professional YouTubers, MrBeast et al, who actually do care about their AV setup?

[0] https://www.intotheminds.com/blog/en/research-youtube-stats/

This is where I get into much more speculative land, but I think people are underestimating the degree to which AI assistant apps are going to eat much of the traditional software industry, the same way smartphones ate so many individual tools: calculators, stopwatches, iPods, etc.

It takes a long time for humanity to adjust to a new technology. First, the technology needs to improve for years. Then it needs to be adopted and reach near ubiquity. And then the slower-moving parts of society need to converge and rearrange around it. For example, the web was technically ready for apps like Airbnb in the mid 90s, but the adoption+culture+infra was not.

In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it. Google already correctly realizes this as an existential threat, as do many SaaS companies.

AI assistants are already good enough to create ephemeral applications on the fly in response to certain questions. And we're in the very, very early days of people building businesses and infra meant to be consumed by LLMs.

> In 5, maybe 10, certainly 15 years, I don't think as many people are going to want to learn, browse, and click through a gazillion complex websites and apps and flows when they can easily just tell their assistant to do most of it.

And how do you think their assistant will interact with external systems? If I tell my AI assistant "pay my rent" or "book my flight" do you think it's going to ephemerally vibe code something on the banks' and airlines' servers to make this happen?

You're only thinking of the tip of the iceberg which is the last mile of client-facing software. 90%+ of software development is the rest of the iceberg, unseen beneath the surface.

I agree there will be more of this, but again, that does not preclude more of the big backend systems existing.

I don't think we disagree. We still have big mainframe systems from the 70s and beyond that are powering parts of society. I don't think all current software systems are just going to die or disappear, especially not the big ones. But I do think significant double-digit percentages of software engineers are working on other types of software that are at risk of becoming first-, second-, or third-order casualties in a world where ephemeral AI assistant-generated software and vibe-coded bespoke software becomes increasingly popular.

You are vastly overstating the capabilities of LLMs and the capacity and desire of non-technical individuals to use them to create applications.

What's even the point of vague replies like this that disagree with no real evidence, arguments, or examples?

The thing is, everything you describe may be easy for an average person in the future. But just having your single AI agent do all of that will be even easier, and that seems like where things will go.

Just like everyone has a 3D printer at home?

People want convenience, not a way to generate an application that creates convenience.

And perhaps they'll get that convenience from an application that they don't even know came into existence because they asked their agent to do something.

What, in practice, is the difference between AGI and what you’re suggesting will exist in terms of agent automation?

> However, code quality is becoming less and less relevant in the age of AI coding

It actually becomes more and more relevant. AI constantly needs to reread its own code and fit it into its limited context, in order to take it as a reference for writing out new stuff. This means that every single code smell, and every instance of needless code bloat, actually becomes a grievous hazard to further progress. Arguably, you should in fact be quite obsessed about refactoring and cleaning up what the AI has come up with, even more so than if you were coding purely for humans.

Even non-frontier models now offer a context window of 1 million tokens. That's 100K-300K LOCs. I would not call that a limited context.
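The back-of-envelope behind that estimate, assuming roughly 3-10 tokens per line of code (a rough figure; real code varies by language and formatting):

```python
# Assumption: one line of code averages roughly 3-10 tokens.
context_tokens = 1_000_000
low_loc = context_tokens // 10   # at 10 tokens/line -> ~100K LOC
high_loc = context_tokens // 3   # at 3 tokens/line -> ~333K LOC
print(f"~{low_loc:,} to ~{high_loc:,} LOC")  # ~100,000 to ~333,333 LOC
```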

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

Strong disagree. I just watched a team spend weeks trying to make a piece of code work with AI because the vibe-coded code was such spaghetti garbage that even the AI couldn’t tell what needed to be done and was basically playing ineffective whack-a-mole: it would fix the bug you asked about by reintroducing an old bug or introducing a new one, because no one understood what was happening. And humans couldn’t even step in like normal, because no one understood what was going on.

Okay, so you observed one team that had an issue with AI code quality. What's your point?

In 1998, I'm sure there were newspaper companies that failed at transitioning online, didn't get any web traffic, had unreliable servers that crashed, etc. This says very little about what life would be like for the newspaper industry in 1999, 2000, 2005, 2010, and beyond.

I'm arguing that code quality very much still matters and will only continue to matter.

AI will get better at making good maintainable and explainable code because that’s what it takes to actually solve problems tractably. But saying “code quality doesn’t matter because AI” is definitely not true both experientially and as a prediction. Will AI do a better job in the future? Sure. But because their code quality improves not because it’s less important.

Well then sure, we can agree there, it's just a matter of phrasing then.

Then you may want to clarify what your phrasing meant because I couldn’t find a more charitable interpretation

More and more software will be built by non-experts, software that has smaller user bases and simpler use cases and doesn't need to be maintained as much if at all. "Poor AI code quality" matters much less for these than for say, software written by developers at FAANG companies, since literally nobody will ever even look at the code.

Where we're headed is toward a world where a ton of software is ephemeral, apps literally created by AI out of thin air for a single use, and then gone.

Ephemeral in the same way the electrical wiring in an old house is ephemeral.

Which is to say, not at all.

Original wiring done by a professional, later changes by “vibe electrician” homeowners.

Every circuit might be a custom job, but they all accumulate into something a SWE calls “technical debt”.

Don’t like how the toaster and the microwave are on the same circuit even though they are in different parts of the kitchen? You’re lucky if you can even follow the wiring back to the circuit box to see how it was done. The electrical box is such a mess, where would you even run a new circuit?

That’s the future we’re looking at.

No, ephemeral as in: I'll ask the AI to check my email, and it'll create a bespoke table UI on the fly right inside my AI assistant, and populate it with relevant email data. And I'll use it, and then it will disappear. Software created and destroyed in a moment.

Not all software is meant to be some permanent building block upon which other software sits.

When new technology arrives that makes earlier ways of doing things obsolete, the consistent pattern throughout history has been that existing experts and professionals significantly underestimate the changes to come, in large part because (a) they don't like those changes, and (b) they're too used to various constraints and priorities that used to be important but no longer are. In other words, they're judging the new tech through the lens of an older world, rather than through the lens of a newer world created by the new tech.

Yeah, I’ve built many one-off scripts in my day, and these days they take 100x less time.

There's almost no point in arguing about this anymore. Neither you nor the other person are going to be convinced. We just have to wait and see if a new crop of 100x productivity AI believer companies come along and unseat all the incumbents.

It seems that your opinion is based on expectations for the future then, which is notoriously difficult to predict.

It's not that hard to predict that obviously useful new technology is going to improve over time.

Guns, wheels, cars, ships, batteries, televisions, the internet, smartphones, airplanes, refrigeration, electric lighting, semiconductors, GPS, solar panels, antibiotics, printing presses, steam engines, radio, etc. The pattern is obvious, the forces are clear and well-studied.

If there is (1) a big gap between current capabilities and theoretical limits, (2) huge incentives for those who improve things, (3) no alternative tech that will replace or outcompete it, (4) broad social acceptance and adoption, and (5) no chance of the tech being lost or forgotten, then technological improvement is basically a guarantee.

These are all obviously true of AI coding.

That list cherry picks all the successful cases where the technology improved while ignoring the many, many others where it didn't and the technology improved no further. That's dishonest.

It isn't even a good job of cherry picking: we never got mainstream supersonic passenger aircraft after the Concorde because aerospace technology hasn't advanced far enough to make it economically viable and the decrease in progress and massively increasing costs in semiconductors for cutting edge processes is very well known.

You're not factoring in the list of constraints I provided.

There's no broad social acceptance of supersonic flight because it creates incredibly loud sonic booms that the public doesn't want to deal with. And despite that, it's still a bad counterexample, as companies continue to innovate in this area e.g. Boom Supersonic.

At best you can say, "It's taking longer than expected," but my point was never that it will happen on any specific schedule. It took 400 years for guns to advance from the primitive fire lances in China to weapons with lock mechanisms in the 1400s. Those long time frames only prove my point even more strongly. Progress WILL happen, when there is appetite and acceptance and incentive and room to grow, and time is no obstacle. It's one of the more certain things in human history, and the forces behind it have been well studied.

Just as certain: the people and jobs who are obsoleted by these new technologies often remain in denial until they are forgotten.

If code quality only stops mattering in 400 years (whatever that definition happens to be), then the prediction is worthless in terms of what you should do today. You use it to argue that code quality is unimportant to deal with, but if it’s a 400-year payoff, you’ve made the wrong bet.

Surely you don't think AI coding technology will be as slow to develop as guns were.

We're obviously talking about 1-10 years here, not 100-1000 years.

It’s really hard to predict where exponential progress will freeze. I was reading the other day that the field seems to have stagnated again, with no really meaningful ideas to overcome the inherent bottlenecks we’ve hit in terms of diminishing returns from scaling. I’m not a pessimist or an unbridled optimist, but I think it’s fundamentally difficult to predict, and the law of averages suggests someone will end up crowing about being right.

In contrast to AI/AI companies, which have no negative externalities?

[deleted]

But hindsight is 20/20 as they say. In 2020 people predicted that Facebook Horizon would only go one direction, always improve and become as pervasive as the internet. So when you predict that the design and architecture capabilities of models will continue to improve, thus making code quality irrelevant, you sound very confident. And if in five years you are right, you will brag about it here. If not, well I for one will not track you down and rub it in your face. Peace out.

You're confusing betting on a company/product vs betting on technological improvement in general.

It is absolutely the case that virtual reality technology will only get better over time. Maybe it'll take 5, or 10, or 20, or 40 years, but it's almost a certainty that we'll eventually see better AR/VR tech in the future than we have in the past.

Would you bet against that? You'd be crazy to imo.

There's a kid outside the window of the place I'm staying who's been in the yard playing and talking with people online through his VR headset for like 2+ hours. He's living in the future. Whatever happens, he and his friends are going to continue to be interested in more of this.

Whether what they're using in 20 years is produced by the company formerly known as Facebook or not is a whole different question.

The newspaper industry is the perfect analogy, because it is effectively dead. Wholesale dead. Here and there, the biggest, most world-renowned papers are still alive, on life-support... NYT, WSJ, etc. But they're all dead. Their death has caused the absolute destruction of an entire industry sector and has given gangrene to adjacent industries that they will soon succumb to. The point about 1998 wasn't that there was this transition that demanded careful attention and wise strategy, but that death was coming for it no matter what anyone did to stop it.

The death of newspapers is quite the spectacle too. No one seems to understand how bad it is... the youngest generation can't even seem to recognize that anything is missing. We've effectively amateurized journalism so that only grifters and talentless hacks want to attempt it, and only in tiny little soundbites on Twitter or other social media (and they're quickly finding out how it might be more lucrative to do propaganda for foreign governments or MLM charlatanism). When the death of the software industry is complete, it too will have been completely amateurized, the youngest generation will not even appreciate that people used to make it for a living, and the few amateurs doing it will start to comprehend how much more lucrative it will be to just make poorly disguised malware.

I don't buy this at all. Code quality will always matter. Context is king with LLMs, and when you fill that context up with thousands of lines of spaghetti, the LLM will (and does) perform worse. Garbage in, garbage out, that's still the truth from my experience.

Spaghetti code is still spaghetti code. Something that should be a small change ends up touching multiple parts of the codebase. Not only does this increase costs, it just compounds the next time you need to change this feature.

I don't see why this would be a reality that anyone wants. Why would you want an agent going in circles, burning money and eventually finding the answer, if simpler code could get it there faster and cheaper?

Maybe one day it'll change. Maybe there will be a new AI technology which shakes up the whole way we do it. But if the architecture of LLMs stays as it is, I don't see why you wouldn't want to make efficient use of the context window.

I didn't say that you "want" spaghetti code or that spaghetti code is good.

I said that (a) apps are getting simpler and smaller in scope and so their code quality matters less, and (b) AI is getting better at writing good code.

Apps are getting bigger and more ambitious in scope as developers try to take advantage of any boost in production LLMs provide them.

Every metric I've seen points to there being an explosion in (a) the number of apps that exist and (b) the number of people making applications.

What relevance do either of those claims have to the claim of the comment you are responding to?

Are you trying to imply that having more things means that each of them will be smaller? There are more people than there were 500 years ago - are they smaller, or larger?

Also, the printing press did lead to much longer works. There are many continuous book series that have run for decades, with dozens of volumes and millions of words. This is a direct result of the printing press. Just as there are television shows that have run with continuous plots for thousands of hours. This is a consequence of video recording and production technologies; you couldn't do that with stage plays.

You seem to be trying to slip "smaller in scope" into your statement without backing, even though I'd insist that applications individuals wrote being "smaller in scope" was an obvious consequence of the tooling available. I can't know everything, so I have to keep the languages and techniques limited to the ones that I do know, and I can't write fast enough to make things huge. The problems I choose to tackle are based on those restrictions.

Those are the exact things that LLMs are meant to change.

The average piece written and published today is much shorter than the average piece from the past. Look at Twitter. Social media in general. Internet forums. Blog posts. Emails. Chats. Etc. The amount of this content DWARFS other content.

The same is true of most things that get democratized. Look at video. TikTok, YouTube, YouTube shorts.

Look at all the apps people are building for themselves with AI. They are typically not building Microsoft Word.

Of course there will be some apps that are bigger and more ambitious than ever. I myself am currently building an app that's bigger and more ambitious than I would have tried to build without AI. I'm well aware of this use case.

But as many have pointed out, AI is worse at these than at smaller apps. And pretending that these are the only apps that matter is, imo, what's leading developers to over-value the importance of code quality. What's happening right now, invisible to most professional engineers, is an explosion in the number of tiny, bespoke personal applications being quickly built by non-developers that are going to chip away at people's reasons to buy and use large, bloated, professional software with hundreds of thousands of users.

> Look at all the apps people are building for themselves with AI.

The apps those people were making before LLMs became ubiquitous were no apps. So by definition they are now larger and more ambitious.

There's already been an explosion of apps - and most of them suck, are spam, or worse, will steal your data.

We don't need more slop apps, we already have that and have for years.

The Jevons paradox says otherwise. As producing apps becomes cheaper, we will not be able to help ourselves: we will make them larger until they fill all available space and cost just as much to produce and maintain.

That's the incorrect application of the Jevons Paradox. We won't get bigger apps, we'll get more apps.

Think about what happened to writing when we went from scribes to the printing press, and from the printing press to the web. Books and essays didn't get bigger. We just got more people writing.

I’ve been told repeatedly now that if AI coding isn’t working for me it’s because my projects code quality is too poor so the agents can’t understand it.

Now I’m being told code quality doesn’t matter at all.

Controversy much :-)

I completely agree. Just going through the beginner & hobbyist forums, the change from "can you help me with code to do X" to "I used ChatGPT/Claude/Copilot to write code to do X" happened with absolutely startling speed, and it's not slowing down. There was clearly a pent-up demand here that wasn't being met otherwise.

People are using AI to get code written. They have no idea what code quality is and only care that what they built works.

AFAICT, every time technology has allowed non-technical people to do more, it's opened up new opportunities for programmers. I don't expect this to be any different, I just want to know where the opportunities are.

Nothing you wrote seems to support what you said at the start there. Why is the importance of code quality decreasing?

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

> [...]

> We are increasingly moving toward a world where people who aren't sophisticated programmers are "building" their own apps with a user base of just one person. In many cases, these apps are simple and effective and come without the bloat that larger software suites have subjected users to for years. The code is simple, and even when it's not, nobody will ever have to maintain it, so it doesn't matter. Some apps will be unreliable, some will get hacked, some will be slow and inefficient, and it won't matter. This trend will continue to grow.

I do agree with the fact that more and more people are going to take advantage of agentic coding to write their own tools/apps to make their lives easier. And I genuinely see it as a good thing: computers were always supposed to make our lives easier.

But I don't see how it can be used as an argument for "code quality is becoming less and less relevant".

If AI is producing 10 times more lines than are necessary to achieve the goal, that's more resources used. With the prices of RAM and SSD skyrocketing, I don't see it as a positive for regular users. If they need to buy a new computer to run their vibecoded app, are they really reaping the benefits?

But what's more concerning to me is: where do we draw the line?

Let's say it's fine to have a garbage vibecoded app running only on its "creator's" computer. Even if it gobbles gigabytes of RAM and is absolutely not secured. Good.

But then, if "code quality is becoming less and less relevant", does this also apply to public/professional apps?

In our modern societies we HAVE to use dozens of pieces of software every day, whether we want to or not, whether we actually directly interact with them or not.

Are you okay with your power company cutting your power because their vibecoded monitoring software mistakenly thought you didn't pay your bills?

Are you okay with an autonomous car driving over your kid because its vibecoded software didn't see them?

Are you okay with cops coming to your door at 5AM because a vibecoded tool reported you as a terrorist?

Personally, I'm not.

People can produce all the trash they want on their own hardware. But I don't want my life to be ruled by software that was never given the quality controls it should have had.

> If AI is producing 10 times more lines than are necessary to achieve the goal, that's more resources used. With the prices of RAM and SSD skyrocketing, I don't see it as a positive for regular users. If they need to buy a new computer to run their vibecoded app, are they really reaping the benefits?

I mean, I agree, but you could say this at any point in time throughout history. An engineer from the 1960s could scoff at the web, the explosion in the number of programs, and the decline in efficiency of the average program.

An artist from the 1700s would scoff at the lack of training and precision of the average artist/designer of today, because the explosion in numbers has certainly translated to a decline in the average quality of art.

A film producer from the 1940s would scoff at the lack of quality of the average YouTuber's videography skills. But we still have millions of YouTubers and they're racking up trillions of views.

Etc.

To me, the chief lesson is that when we democratize technology and put it in the hands of more people, the tradeoff in quality is something that society is ready to accept. Whether this is depressing (bc less quality) or empowering (bc more people) is a matter of perspective.

We're entering a world where FAR more people will be able to casually create and edit the software they want to see. It's going to be a messier world for sure. And that bothers us as engineers. But just because something bothers us doesn't mean it bothers the rest of the world.

> But then, if "code quality is becoming less and less relevant", does this also applies to public/professional apps?

No, I think these will always have a higher bar for reliability and security. But even in our pre-vibe coded era, how many massive brandname companies have had outages and hacks and shitty UIs? Our tolerance for these things is quite high.

Of course the bigger more visible and important applications will be the slowest to adopt risky tech and will have more guardrails up. That's a good thing.

But it's still just a matter of time, especially as the tools improve and get better at writing code that's less wasteful, more secure, etc. And as our skills improve, and we get better at using AI.

If strongly typed languages are preferred for AI coding, maybe the fixation on code quality makes LLMs produce better code.

Maybe, but how exactly are you defining "code quality" ?

> nobody will ever have to maintain it, so it doesn't matter

I'm curious about software that's actively used but nobody maintains it. If it's a personal anecdote, that's fine as well

I mean I've written some scripts and cron jobs for websites that I manage that have continued trucking for years with no changes or monitoring on my end. I suppose it's a bit easier on the web.

> However, code quality is becoming less and less relevant in the age of AI coding, and to ignore that is to have our heads stuck in the sand. Just because we don't like it doesn't mean it's not true.

It's the opposite, code quality is becoming more and more relevant. Before now you could only neglect quality for so long before the time to implement any change became so long as to completely stall out a project.

That's still true, the only thing AI has changed is it's let you charge further and further into technical debt before you see the problems. But now instead of the problems being a gradual ramp up it's a cliff, the moment you hit the point where the current crop of models can't operate on it effectively any more you're completely lost.

> We are in the very earliest months of AI actually being somewhat competent at this. It's unlikely that it will plateau and stop improving.

We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase of money poured into it.

> It's only trending in one direction. And it isn't going to stop.

Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.

Think about it this way: If your job survived the popularity of offshoring to engineers paid 10% of your salary, why would AI tooling kill it?

> That's still true, the only thing AI has changed is it's let you charge further and further into technical debt before you see the problems. But now instead of the problems being a gradual ramp up it's a cliff, the moment you hit the point where the current crop of models can't operate on it effectively any more you're completely lost.

What you're missing is that fewer and fewer projects are going to need a ton of technical depth.

I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.

> We hit the plateau on model improvement a few years back. We've only continued to see any improvement at all because of the exponential increase of money poured into it.

The genie is out of the bottle. Humanity is not going to stop pouring more and more money into AI.

> Sure it can. When the bubble pops there will be a question: is using an agent cost effective? Even if you think it is at $200/month/user, we'll see how that holds up once the cost skyrockets after OpenAI and Anthropic run out of money to burn and their investors want some returns.

The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999. Maybe you will be right about short term economic trends, but the underlying technology is here to stay and will only trend in one direction: better, cheaper, faster, more available, more widely adopted, etc.

> What you're missing is that fewer and fewer projects are going to need a ton of technical depth.

> I have friends who'd never written a line of code in their lives who now use multiple simple vibe-coded apps at work daily.

Again, it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you do more than buy the functionality: you buy the assurance that it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to be paying millions of dollars a year for vibe-coded slop apps, and apps that command that kind of money are what keep the tech industry afloat.

> Humanity is not going to stop pouring more and more money into AI.

There's no more money to pour into it. Even if there were, we're out of GPU capacity and we're running low on the power and infrastructure to run these giant data centres, and it takes decades to bring new fabs or power plants online. It is physically impossible to continue this level of growth in AI investment. Every company that's invested into AI has done so on the promise of increased improvement, but the moment that stops being true everything shifts.

> The AI bubble isn't going to pop. This is like saying the internet bubble is going to pop in 1999.

The internet bubble did pop. What happened after was an assessment of how much the tech was actually worth, and the future we have now, 26 years later, bears little resemblance to the hype in 1999. What makes you think this will be different?

Once the hype fades, the long-term unsuitability for large projects becomes obvious, and token costs increase by ten or one hundred times, are businesses really going to pay thousands of dollars a month on agent subscriptions to vibe code little apps here and there?

> Again, it's the opposite. A landscape of vibe-coded micro apps is a landscape of buggy, vulnerable points of failure. When you buy a product, software or hardware, you do more than buy the functionality: you buy the assurance that it will work. AI does not change this. Vibe code an app to automate your lightbulbs all you like, but nobody is going to be paying millions of dollars a year for vibe-coded slop apps, and apps that command that kind of money are what keep the tech industry afloat.

This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.

When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.

When the web was new, the news media complained about the same thing. A landscape of poorly researched error-ridden microblogs with spelling mistakes and inaccurate information. And you know what? They were right. That's exactly what the internet led to. And now that's the world we live in, and 90% of those news media companies are dead or irrelevant.

And here you are continuing the tradition of discussing a new landscape of buggy, vulnerable products. And the same thing will happen and already is happening. People don't care. When you democratize technology and you give people the ability to do something useful they never could do before without having to spend years becoming an expert, they do it en masse, and they accept the tradeoffs. This has happened time and time again.

> The internet bubble did pop... the future we have now 26 years later bears little resemblance to the hype in 1999. What makes you think this will be different?

You cut out the part where I said it only popped economically, but the technology continued to improve. And the situation we have now is even better than the hype in 1999:

They predicted video on demand over the internet. They predicted the expansion of broadband. They predicted the dominance of e-commerce. They predicted incumbents being disrupted. All of this happened. Look at the most valuable companies on earth right now.

If anything, their predictions were understated. They didn't predict mobile, or social media. They thought that people would never trust SaaS because it's insecure. They didn't predict Netflix dominating Hollywood. The internet ate MORE than they thought it would.

Your whole argument is based on 'the technology improves'.

Ok, so another fundamental proposition is that monetary resources are needed to fund said technology improvement.

What's wrong with LLMs? They require immense monetary resources.

Is that a problem for now? No, because lots of private money is flowing in, and Google et al have the blessing of their shareholders to pump up the amount of cash flowing into LLM-based projects.

Could all this stop? Absolutely, many are already fearing the returns will not come. What happens then? No more huge technology leaps.

This has literally never happened in the history of humanity. Name one technology where development permanently stopped due to lack of funding, despite there being...

1. lots of room for progress, i.e. the theoretical ceiling dwarfed the current capabilities

2. strong incentives to continue development, i.e. monetary or military success

3. no obviously better competitors/alternatives

4. social/cultural tolerance from the public

Literally hasn't happened. Even if you can find 1 or 2 examples, they are dwarfed by the hundreds of counter examples. But more than likely, you won't find any examples, or you'll just find something recent where progress is ongoing.

Useful technology with room to improve almost always improves, as people find ways to make it better and cheaper. AI costs have already fallen dramatically since LLMs first burst on the scene a few years back, yet demand is higher than ever, as consumers and businesses are willing to pay top dollar for smarter and better models.

AI has none of these things.

1. As I said before, we've long since reached diminishing returns on models. We simply don't have enough compute or training data left to make them dramatically better.

2. This is only true if it actually pans out, which is still an unknown question.

3. Just... not using it? It has to justify its existence. If it's not of benefit vs. the cost then why bother.

4. The public hates AI. The proliferation of "AI slop" makes people despise the technology wholesale.

1. Saying that AI will never approach its theoretical limits because XYZ tech is approaching diminishing returns, is like saying guns would never get better than the fire sticks of China in 1000 AD because the then-current methods hit their theoretical limits. You're betting against tens of thousands of the smartest minds of a generation across the entire planet. I will happily take the other side of this bet.

2. Sure, depends on #1. But the incentive is undeniable.

3. It has. Do you think people are using Claude Code in incredible numbers for no reason?

4. The public and businesses are adopting AI en masse. It's incredibly useful. Demand is skyrocketing. I don't think you could show that negative public sentiment has been sufficient to stop this, any more than negative sentiment about TVs, headphones, bicycles, etc (which was significant).

With the exception of #1, I feel like you're arguing that things won't happen, where the numbers show they've already happened and are accelerating.

Thanks for jumping in fella. Agree on all points.

> This is what everyone says when technology democratizes something that was previously reserved for a small number of experts.

What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.

Over the past 20 years software engineering has become something that just about anyone can do with little more than a shitty laptop, the time and effort, and an internet connection. How is a world where that ability is rented out to only those that can pay "democratic"?

> When the printing press was invented, scribes complained that it would lead to a flood of poorly written, untrustworthy information. And you know what? It did. And nobody cares.

A bad book is just a bad book. If a novel is $10 at the airport and it's complete garbage then I'm out $10 and a couple of hours. As you say, who cares. A bad vibe coded app and you've leaked your email inbox and bank account and you're out way more than $10. The risk profile from AI is way higher.

The same is even more true for businesses. The cost of a cyberattack or an outage is measured in the millions of dollars. It's simple maths: the cost of the risk of compromise far outweighs the cost of cheaper upfront software.

> You cut out the part where I said it only popped economically, but the technology continued to improve.

The improvement in AI models requires billions of dollars a year in hardware, infrastructure, and energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.

Don't waste your time on him. He reminds me of people who are so concentrated on one part of the picture that they can't see the whole damn thing and how all the pieces fit and interact with each other.

You're describing yourself imo. Your point ignores hundreds of years of history and says zero about the forces that shape technological development and progress, which have been studied fairly exhaustively.

[flagged]

"Thousands of dollars of GPU" as a one-time expense (not ongoing token spend) is dirt cheap if it meaningfully improves productivity for a dev. And your shitty laptop can probably run local AI that's good enough for Q&A chat.

On a SWE salary maybe. If the baseline cost of doing business is a $5k GPU you've excluded like a quarter of the US working population immediately.

> What part of renting your ability to do your job is "democratizing"? The current state of AI is the literal opposite. Same for local models that require thousands of dollars of GPUs to run.

"Renting your ability to do your job"?

I think you're misunderstanding the definition of democratization. This has nothing to do with programmers. It has nothing to do with people's jobs. Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."

In other words, democratizing is not about people who have jobs as programmers. It's about the people who don't know how to code, who are not software engineers, who are suddenly gaining the ability to produce software.

Three years ago, you could not pay money to produce software yourself. You either had to learn and develop expertise yourself, or hire someone else. Today, any random person can sit down and build a custom to-do list app for herself, for free, almost instantly, with no experience.

> The improvement in AI models requires billions of dollars a year in hardware, infrastructure, end energy. Do you think that investors will continue to pour that level of investment into improving AI models for a payout that might only come ten to fifteen years down the road? Once the economic bubble pops, the models we have are the end of the road.

10-15 year payouts? Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW. Many tens of thousands have already gotten insanely rich: three years ago, and two years ago, and last year, and this year. If you think investors won't be motivated, and there aren't people currently in line to throw their money into the ring, you're extremely uninformed about investor sentiment and returns lol.

You can predict that the music will stop. That's fair. But to say that investors are worried about long payout times is factually inaccurate. The money is coming in faster and harder than ever.

I have no idea what this flood of personal-use software is that you think normal people want to produce. Normal people don't even think about software doing a thing until they see an advertisement about software that does a thing. And then they'd rather pay 10 bucks for it than to invent a shittier version of it themselves for $500.

And I'm not being condescending about normal people. Developers often don't think about the possibility of making software that does a particular thing until they actually see software that does that thing. And they're also going to prefer to buy rather than vibe code unless the program is small and insignificant.

Go look at the numbers from Lovable and Replit and Claude Code and similar companies. Quite staggering.

I myself have run an online community for early-stage startup founders for over a decade. The number of ambitious people who would love to build something but don't know how to code and in the last year or two have started cranking out applications is tremendous. That number is far higher than the number of software engineers who existed before.

That's very much an echo chamber you find yourself in. I'm far away from any technological center and the main use of LLM for people is the web search widget, spell checking and generating letters. Also kids cheating on their homework.

> Democratizing is defined as "the process of making technology, information, or power accessible, available, or appealing to everyone, rather than just experts or elites."

Your definition only supports my point. The transfer of skill from something you learn to something you pay for is the exact and complete opposite of your stated definition. It turns the activity from something anyone can learn into something only those who can afford to pay can do.

It is quite literally making this technology, information, and power available to only the elite.

> Uhhh. Maybe you don't know any AI investors, but the payout is coming NOW.

What payout? Zero AI companies are profitable. If you're invested in one of these companies you could be a billionaire on paper, but until it's liquid it's meaningless. There's plenty of investors who stand to make a lot of money if these big companies exit, but there's no guarantee that will happen.

The only people making money at the moment are either taking cash salaries from AI labs or speculating on Nvidia stock. Neither of which have much do with the tech itself and everything to do with the hype.

> It is quite literally making this technology, information, and power available to only the elite.

I don't know what to say to you. More people are coding now with AI than ever coded before. If your argument was true, then that would just mean that there are more elites than ever. Obviously that's not what's happening.

> What payout? Zero AI companies are profitable.

Because they're reinvesting profits into continued R&D, not because their current products are unprofitable. You're failing to understand basic high-growth business models.

> If you're invested in one of these companies you could be a billionaire on paper, but until it's liquid it's meaningless.

Plenty of AI companies have exited, and plenty of other AI companies offer tender offers where shareholders have been able to sell their shares to new investors. Again, it sounds like you just aren't really educated on what's happening. Plenty of people are millionaires in real life, not just on paper. You're massively incorrect about the payout landscape that investors are considering.

> The only people making money at the moment are either taking cash salaries from AI labs or speculating on Nvidia stock.

No, founders, early-stage investors, and employees with stock have cashed out in many cases. Again, it just feels like you're not aware of what's happening on the ground.

> Neither of which have much do with the tech itself and everything to do with the hype.

That's a very different argument. If you want to say that the investment is unsound, then fine, that's your opinion, but trying to say that investors have no appetite because they have to wait 10 to 15 years for a payout is incredibly incorrect.

> I don't know what to say to you. More people are coding now with AI than ever coded before. If your argument was true, then that would just mean that there are more elites than ever. Obviously that's not what's happening.

I don't know how I can explain this any more clearly.

If you need AI to create software, and the cost of AI is $200/month, then only people who can afford $200/month can create software.

Costs will increase. The current cost is subsidized by investor funding. Sell at a loss to get people hooked on the product, and then raise the price to make money: a "high-growth business model", as you say.

The cost to make a competitor to Anthropic or OpenAI is tens or hundreds of billions of dollars upfront. There will be few competitors and minimal market pressure to reduce prices, even if the unit costs of inference are low.

$200/month is already out of reach of the majority of the population. Increases from here means only a small percentage of the richest people can afford it.

I don't know what definition of "elite" you're using but, "technology limited so that only a small percentage of the population can afford it" is... an elite group.

This is fun and all, but I think we've reached the end of the productive discussion to be had, and I don't have much more to say. Charitably, we're living in completely different realities. I just hope when the bubble pops the fall isn't too hard for you.

> I don't know how I can explain this any more clearly. If you need AI to create software, and the cost of AI is $200/month, then only people who can afford $200/month can create software.

Your entire hypothetical is based on "ifs" that aren't true. Nothing in this sentence is true. You don't need AI to create software, the cost of AI development is much less than $200/month on average, and many more people can afford AI dev than programming bootcamps or classes or degrees.

> Costs will increase. The current cost is substituted by investor funding. Sell at a loss to get people hooked on the product and then raise the price to make money, a "high-growth business model" as you say.

Inference is already profitable at current pricing. Most funding goes toward R&D for new model training, not inference.

Also, inference costs dropped over 280x between Nov 2022 and Oct 2024. Inference will continue to get cheaper as we develop more specialized hardware and efficient models.

This is not Uber, subsidizing the cost of human drivers. This is real tech, chips and servers and software. Costs fall over time, not rise. Innovation does not go backwards.

> $200/month is already out of reach of the majority of the population.

1. You can build small applications with the $20/month sub, much more with the $100/month. Competition and technology improvements will inevitably improve the price to value ratio.

2. Cable sports subscriptions are in a similar price range. Expensive, but not exclusive to “the elites”.

The median per capita income in the United States is $37,683/year.[0] Depending on your state, after taxes, that's something like ~$2,600/month. You're asking people to put almost 10% of their post-tax income toward this, just for the opportunity to create software. With rent, food, and other living expenses, many households at that income level simply cannot afford it.

This is the median income. If it's a struggle for someone on this income then it's worse for half of all Americans, and American incomes are higher than most of the rest of the world.

[0]: https://en.wikipedia.org/wiki/Per_capita_personal_income_in_...

The bar for "create software" up to this last year or so was "learn software development" or "pay someone else".

Personally, I think millions more people having the ability to create some subset of software is an incredible shift.

[flagged]

> $200/month is already out of reach of the majority of the population. Increases from here means only a small percentage of the richest people can afford it.

This is an absurd claim. There are many things the majority of the population spends money on that cost more than this.

I'm going to take your comment at face value, and I'm also going to assume that you're US-based.

You need to take a step back and look at the economic reality of the majority of Americans today. Many live paycheck-to-paycheck, even those with "middle class" incomes. For many, a $200 one-off bill is debilitating, let alone a recurring subscription. If you don't know that, you have a dangerously narrow view of the economy.

If you think that a $200/month subscription is "out of reach" for the majority of Americans, you are just plainly and simply wrong about that. They might have to make some tradeoffs by reducing spending in other areas, but that's part of life.