The blog post has a bunch of charts, which gives it a veneer of objectivity and rigor, but in reality it's all vibes and conjecture. Meanwhile, recent empirical studies actually point in the opposite direction, showing that AI use increases inequality rather than decreasing it.

https://www.economist.com/content-assets/images/20250215_FNC...

https://www.economist.com/finance-and-economics/2025/02/13/h...

Of course AI increases inequality. It's automated ladder pulling technology.

To become good at something you have to work through the lower rungs and acquire skill. AI does all those lower level jobs, puts the people who need those jobs for experience on the street, and robs us of future experts.

The people who benefit the most are those who are already up on top of the ladder investing billions to make the ladder raise faster and faster.

AI has been extremely useful at teaching me things. Granted I needed to already know how to learn and work through the math myself, but when I get stuck it is more helpful than any other resource on the internet.

> To become good at something you have to work through the lower rungs and acquire skill. AI does all those lower level jobs, puts the people who need those jobs for experience on the street, and robs us of future experts.

You can still do that with AI: you give yourself assignments and then use the AI as a resource when you get stuck. As you get better you ask the AI less and less. The fact that the AI is sometimes wrong acts like a test that lets you evaluate whether you are internalizing the skills or just trusting the AI.

If we ever have AIs which don't hallucinate, I'd want that added back in as a feature.

Not everyone has the privilege of learning for free, or the time; many need that lower-level job that makes it possible to get paid and learn at the same time.

Whether ladder raising is benefitting people now or later or by how much - I don't know.

But I share your concerns that:

AI doing the lesser tasks of [whatever] ->

fewer (or no?) humans will do those tasks ->

fewer (or no?) experienced humans to further the state of the art ->

automation-but-stagnation.

But tragedy of the commons says I have to teach my kid to use AI!

You could just teach them to be gardeners or carpenters

They would still need to use AI to run their work with higher profit margin ;)

When you have an unfair system, every technology advancement will benefit the few more than the many.

So of course AI falls into this realm.

Definitely. I think it's worse than that too. I have a feeling it's going to expose some people higher up that ladder who really shouldn't be. So it won't just be junior people who struggle but also "senior" people as well. I think that only deepens the inequality.

It's the trajectory of automation for the past few decades. Automate many jobs out of existence, and add a much smaller set of higher-skill jobs.

Yep but this time is different because it's going to completely displace what is left of the middle class and this time the people being fucked have six figures of college debt.

Centuries, surely? "In the year of eighteen and two, peg and awl..."

AI can teach you the lower rungs more effectively than what existed before.

Honestly not sure it is easier to learn coding today than before. In theory maybe, but in reality 99% of people will use AI as a crutch - half of learning is when you have to struggle a bit with something. If all the answers are always in front of you it will be harder to learn. I know it would be hard for me to learn if I could just ask for the code all the time.

It is, but it requires discipline.

I've been coding for 15 years but I find I'm able to learn new languages and concepts faster by asking questions to ChatGPT.

It takes discipline. I have to turn off cursor tab when doing coding exercises. I have to take the time to ask questions and follow-up questions.

But yes I worry it's too easy to use AI as a crutch

It's much, much, much easier.

I've been coding for decades already, but if I need to put something together in an unfamiliar language? I can just ask AI about any stupid noob mistake I make.

It knows every single stupid noob mistake, it knows every "how do I sort an array", and it explains well, with examples. Like StackOverflow on steroids.

The caveat is that you need to WANT to learn. If you don't, then not learning is easier than ever too.

> I've been coding for decades already, but if I need to put something together in an unfamiliar language? I can just ask AI about any stupid noob mistake I make.

So you aren’t still learning foundational concepts or how to think about problems, you are using it as a translation tool. Very different, in my opinion.

And yet it's not used that way in the vast majority of cases. Most people don't want to learn. They want to get a result quickly, and move on.

There is a difference between pulling up a ladder and people choosing not to climb it.

I agree with you - I learned to program because I found it fascinating, and wanted to know how my computer worked, not because it was the only option available to me at the time...

There are always people willing to take shortcuts at long-term expense. Frankly I'm fine with the selection pressure changing in our industry. Those who want to learn will still find a way to do it.

It’s a very small difference. People would rather line up for the elevator than take the stairs. That’s just human nature.

Yeah, the graphs make some really big assumptions that don't seem to be backed up anywhere except AI maximalist head canon.

There's also a gap in addressing vibe coded "side projects" that get deployed online as a business. Is the code base super large and complex? No. Is AI capable of taking input from a novice and making something "good enough" in this space? Also no.

The latter remarks rest on very strong assumptions that underestimate the power AI tools offer.

AI tools are great at unblocking and helping their users explore beyond their own understanding. The tokens in are limited to the users' comprehension, but the tokens out are generated from a vast collection of greater comprehension.

For the novice, it's great at unblocking and expanding capabilities. "Good enough" results from novices are tangible. There is no doubt the volume of "good enough" is perceived as very low by many.

For large and complex codebases, unfortunately, the effects of tech debt (read: objectively subpar practices) translate into context rot at development time. A properly architected and documented codebase that adheres to common, well-structured patterns can easily be broken down into small, easily digestible contexts. In other words, a fragmented codebase does not scale well with LLMs, because the fragmentation seeds the context for the model. The model reflects, and amplifies, what it's fed.

> For the novice, it's great at unblocking and expanding capabilities. "Good enough" results from novices are tangible. There is no doubt the volume of "good enough" is perceived as very low by many.

For personal tools or whatever, sure. And the tooling or infrastructure might get there for real projects eventually, but it's not there currently. The prospect of someone naively vibe coding a side business that includes a payment or authentication system, or something that stores PII (all areas whose dangers developers learn only through hard-won experience), sends shivers down my spine. Even amateur coders trying that stuff the old-fashioned way must read their code, the docs, and info on the net, and will likely get some sense of the danger. Yesterday I saw someone here recounting a disastrous data breach of their friend's vibe-coded side hustle.

The big problem I see here is people not knowing enough to realize that something functioning is almost never a sign that it is "good enough" for many of the things they might assume it is. The amount of base knowledge needed to evaluate things like form security nearly makes the idea of vibe coding useless for anything more than hobby or personal utility projects.

> For large and complex codebases, unfortunately, the effects of tech debt (read: objectively subpar practices) translate into context rot at development time. A properly architected and documented codebase that adheres to common, well-structured patterns can easily be broken down into small, easily digestible contexts. In other words, a fragmented codebase does not scale well with LLMs, because the fragmentation seeds the context for the model. The model reflects, and amplifies, what it's fed.

It seems like you're claiming complex codebases are hard for LLMs because of human skill issues. IME it's rather the opposite - an LLM makes it easier for a human to ramp up on what a messy codebase is actually doing, in a standard request/response model or in terms of looking at one call path (however messy) at a time. The models are well trained on such things and are much faster at deciphering what all the random branches and nested bits and pieces do.

But complex codebases actually usually arise because of changing business requirements, changing market conditions, and iteration on features and offerings. Execution quality of this varies but a "properly architected and documented codebase" is rare in any industry with (a) competitive pressure and (b) tolerance for occasional bugs. LLMs do not make the need to serve those varied business goals go away, nor do they remove the competitive pressure to move rapidly vs gardening your codebase.

And if you're working in an area with extreme quality requirements that have forced you into doing more internal maintenance and better codebase hygiene then you find yourself with very different problems with unleashing LLMs into that code. Most of your time was never spent writing new features anyway, and LLM-driven insight into rare or complex bugs, interactions, and performance still appears quite hit or miss. Sometimes it saves me a bunch of time. Sometimes it goes in entirely wrong directions. Asking it to make major changes, vs just investigate/explain things, has an even lower hit rate.

I'm stating that a lack of codebase hygiene introduces context rot and substantially reduces the efficacy of working with an LLM.

Too wide a surface area in one context also causes efficiency issues. If the context lacks definition, you'll get fewer and lower-quality results.

Do keep in mind the code being read and written is intrinsically added to context.

In a sense I agree. I don't necessarily think it has to be the case, but I got that same feeling that it was wearing a white lab coat to play scientist. I think it was an honest attempt to express the relationships as they perceive them.

I think this could still be used as a valuable form of communication if you can clearly express the idea that it is representing a hypothesis rather than a measurement. The simplest approach would be to label the graphs as "hypothesis", but a subtle yet easily identifiable visual change might be better.

Wavy lines for the axes spring to mind as one way to express that. I would worry about the ability to express hypotheses about definitive events that happen when a value crosses an axis, though; you'd probably want a straight line for that. Perhaps it would be sufficient to just have wavy lines at the ends of the axes, beyond the point at which the plot appears.

Beyond that, I think the article presumes the curve flattens as mastery is achieved. I'm not sure that's a given; perhaps it only seems that way because we evaluate proportional improvement, implicitly placing skill on a logarithmic scale.

I'd still consider the author's post to have been made in better faith than the Economist links.

I'd like to know what people think, and for them to say it honestly. If they have hard data, they should show it and how it confirms their hypothesis. At the other end of the scale is gathering data and exposing only the measurements that imply a hypothesis you're not brave enough to state explicitly.

The graphic has four studies that show increased inequality and six that show reduced inequality.

> The graphic has four studies that show increased inequality

Three, since Toner-Rodgers 2024 currently seems to be a total fabrication.

https://archive.is/Ql1lQ

Read my comment again; the keyword here is "recent". The second link also expands on why it's relevant. It's best to read the whole article, but here's a paragraph that captures the argument:

>The shift in recent economic research supports his observation. Although early studies suggested that lower performers could benefit simply by copying AI outputs, newer studies look at more complex tasks, such as scientific research, running a business and investing money. In these contexts, high performers benefit far more than their lower-performing peers. In some cases, less productive workers see no improvement, or even lose ground.

All of the studies were done 2023-2024 and are not listed in order that they were conducted. The studies showing reduced equality all apply to uncommon tasks like material discovery and debate points, whereas the ones showing increased equality are broader and more commonly applicable, like writing, customer interaction, and coding.

>All of the studies were done 2023-2024 and are not listed in order that they were conducted

Right, the reason why I pointed out "recent" is that it's new evidence that people might not be aware of, given that there were also earlier studies showing AI had the opposite effect on inequality. The "recent" studies also had varied methodology compared to the earlier studies.

>The studies showing reduced equality all apply to uncommon tasks like material discovery and debate points

"Debating points" is uncommon? Maybe not everyone was in the high school debate club, but "debating points" is something that anyone in a leadership position does on a daily basis. You're also conveniently omitting "investment decisions" and "profits and revenue", which basically everyone is trying to optimize. You might be tempted to think "Coding efficiency" represents a high complexity task, but the abstract says the test involved "Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible". The same is true of the task used in the "legal analysis" study, which involved drafting contracts or complaints. This seems exactly like the type of cookie cutter tasks that the article describes would become like cashiers and have their wages stagnate. Meanwhile the studies with negative results were far more realistic and measured actual results. Otis et al 2023 measured profits and revenue of actual Kenyan SMBs. Roldan-Mones measured debate performance as judged by humans.

> Right, the reason why I pointed out "recent" is that it's new evidence that people might not be aware of, given that there were also earlier studies showing AI had the opposite effect on inequality.

Okay, well the majority of this "recent" evidence agrees with the pre-existing evidence that inequality is reduced.

> "Debating points" is uncommon?

Yes. That is nobody's job. Maybe every now and then you might need to come up with some arguments to support a position, but that's not what you get paid to do day to day.

> You're also conveniently omitting "investment decisions" and "profits and revenue", which basically everyone is trying to optimize.

Very few people are making investment decisions as part of their day to day job. Hedge funds may experience increasing inequality, but that kinda seems on brand.

On the other hand "profits and revenue" is not a task.

> You might be tempted to think "Coding efficiency" represents a high complexity task, but the abstract says the test involved "Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible". The same is true of the task used in the "legal analysis" study, which involved drafting contracts or complaints.

These sound like real tasks that a decent number of people have to do on a regular basis.

> Meanwhile the studies with negative results were far more realistic and measured actual results. Otis et al 2023 measured profits and revenue of actual Kenyan SMBs. Roldan-Mones measured debate performance as judged by humans.

These sound like niche activities that are not widely applicable.

[deleted]

Thanks for the links. That should be obvious to anyone who believes that $70 billion datacenters (Meta) are needed and the investment will be amortized by subscriptions (in the case of Meta also by enhanced user surveillance).

The means of production are concentrated in a small oligopoly; the rest will be redundant, or exploitable sharecroppers.

(All this under the assumption that "AI" works, which its proponents affirm in public at least.)

Yup. As a retired mathematician who craves the productivity of an obsessed 28-year-old, I've been all in on AI in 2025. I'm now on Claude's $200/month Max plan in order to use Claude Code Opus 4 without restraint. I still hit limits, usually when I run parallel sessions to review a 57-file legacy code base.

For a time I refused to talk with anybody or read anything about AI, because it was all noise that didn't match my hard-earned experience. Recently HN has included some fascinating takes. This isn't one.

I have the opinion that neurodivergents are more successful using AI. This is so easily dismissed as hollow blather, but I have a precise theory backing this opinion.

AI is a giant association engine. Linear encoding (the "King - Man + Woman = Queen" thing) is linear algebra. I taught linear algebra for decades.
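A minimal sketch of what I mean by linear encoding, assuming gensim and its downloadable pretrained GloVe vectors are available (my own illustration, nothing specific to the chat models themselves):

    # "King - Man + Woman ≈ Queen" as plain vector arithmetic over word embeddings
    import gensim.downloader as api

    vectors = api.load("glove-wiki-gigaword-50")  # 50-dimensional GloVe word vectors

    # Ask for the word whose vector is closest to (king - man + woman)
    print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
    # "queen" is typically the top hit -- the analogy falls out of the linear structure

The arithmetic is nothing more than adding and subtracting vectors and taking nearest neighbors, i.e. linear algebra.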

As I explained to my optometrist today, if you're trying to balance a plate (define a hyperplane) with three fingers, it works better if your fingers are farther apart.

My whole life people have rolled their eyes when I categorize a situation using analogies that are too far flung for their tolerances.

Now I spend most of my time coding with AI, and it responds very well to my "fingers farther apart" far reaching analogies for what I'm trying to focus on. It's an association engine based on linear algebra, and I have an astounding knack for describing subspaces.

AI is raising the ceiling, not the floor.

Can you explain your finger analogy a little more? What do the fingers represent?

Would you sit on a stool with legs three inches apart?

For a statistician, determining a plane from three approximate points on the plane is far more accurate if the points aren't next to each other.
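A toy numerical sketch of that point, with made-up numbers: fit a plane from three noisy measurements, once with the points clustered together and once spread apart, and compare how badly the noise corrupts the recovered plane.

    # Recover the plane z = 1.0*x + 2.0*y + 3.0 from three noisy points,
    # with the (x, y) points either clustered together or spread far apart.
    import numpy as np

    rng = np.random.default_rng(0)
    true = np.array([1.0, 2.0, 3.0])  # coefficients a, b, c of z = a*x + b*y + c

    def mean_coef_error(xy, trials=2000, noise=0.05):
        # Average error in the recovered coefficients over many noisy trials
        errs = []
        for _ in range(trials):
            z = xy @ true[:2] + true[2] + rng.normal(0.0, noise, size=len(xy))
            A = np.column_stack([xy, np.ones(len(xy))])   # design matrix [x, y, 1]
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)  # least-squares plane fit
            errs.append(np.linalg.norm(coef - true))
        return float(np.mean(errs))

    clustered = np.array([[0.00, 0.00], [0.01, 0.00], [0.00, 0.01]])  # fingers close together
    spread    = np.array([[0.00, 0.00], [1.00, 0.00], [0.00, 1.00]])  # fingers far apart

    print("clustered points:", mean_coef_error(clustered))  # large error in the recovered plane
    print("spread points:   ", mean_coef_error(spread))     # far smaller error, same noise

Same noise, same number of points; the spread configuration recovers the plane roughly a hundred times more accurately. That's just the conditioning of the design matrix at work.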

When we offer examples or associations in a prompt, we experience a similar effect in coaxing a response from AI. This is counter-intuitive.

I'm fully aware that most of what I post on HN is intended for each future AI training corpus. If what I have to say was already understood I wouldn't say it.

> Now I spend most of my time coding with AI, and it responds very well to my "fingers farther apart" far reaching analogies for what I'm trying to focus on.

If you made analogies based on Warhammer 40k or species of mosquitoes it would have reacted exactly the same.

Maybe for you.

[dead]

> inequality

It's free for everyone with a phone or a laptop.