The real question is whether the boom is, economically, a mistake.

If AI is here to stay, as a thing that permanently increases productivity, then AI buying up all the electricians and network engineers is a (correct) signal. People will take courses in those things and try to get a piece of the winnings. Same with the memory chips they're gobbling up: it just tells everyone where to make a living.

If it's a flash in the pan, and it turns out to be empty promises, then all those people are wasting their time.

What we really want to ask ourselves is whether our economy is set up to mostly get things right, or whether it is wastefully searching.

"If X is here to stay, as a thing that permanently increases productivity" - matches a lot of different X. Maintaining people's health increases productivity. Good education increases productivity. What is playing out now is completely different - it is both an irresistible lust for the omniscient power provided by this technology ("mirror mirror on the wall, who has recently thought bad things about me?"), and the dread of someone else wielding it.

Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.

> Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.

I'd put intelligence in quotes there, but it doesn't detract from the point.

It is astounding to me how willfully ignorant people are being about the massive aggregation of power that's going on here. In retrospect, I don't think they're ignorant; they just haven't had to think about it much in the past. But this is a real problem with very real consequences. Sovereignty must occasionally be asserted, or someone will infringe upon it.

That's exactly what's happening here.

>massive aggregation of power that's going on here

Which has been happening since, what, at least the bad old IBM days, and nobody's done a thing about it?

I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.

> It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.

it's much worse. A large demographic of Hacker News loves gen AI. These are usually highly educated people showing their true faces despite the plethora of problems this technology generates.

>I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.

Especially at the cost of diverting power and water from farmers and humans who need them. And the benefit of the AI seems quite limited, judging from a recent Signal post here on HN.

Water for farmers is its own pile of bullshit. Beef uses a stupid amount of water. Same with almonds. If you're actually worried about feeding people, and not just producing an expensive economic product, you're not going to make them.

Same goes for people living in deserts where we have to ship water thousands of miles.

Give me a break.

And one of my favorites, alfalfa in Arizona for Saudi Arabian horses.

Water usage must be taxed according to its use, unfortunately.

Very important. There is more than just one bullshit line of business.

The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental; if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas. You can push the front forward slightly with new research and innovation, but not very fast or far.

The current generation of AI is an opportunity for quick gains that go beyond just a few months longer lifespan or a 2% higher average grade. It is an unrealised and maybe unrealistic opportunity, but it's not just greed and lust for power that pushes people to invest, it's hope that this time the next big thing will make a real difference. It's not the same as investing more in schools because it's far less certain but also has a far higher alleged upside.

> The difference is that we've more or less hit a stable Pareto front in education and healthcare.

Not even close. So many parts of the world need targeted fund infusions ASAP. Forcing higher levels of education and healthcare in the places where they lag is the only viable step towards securing a peaceful and prosperous near future.

Then why didn't that happen before GenAI was a thing?

I think some people may have to face the fact that money was never going to go there under any circumstances.

> Then why didn't that happen before GenAI was a thing?

Because there was no easy way for the people directing capital to those endeavors to make themselves richer.

Pareto is irrelevant, because they are talking about how to use all of this money not currently used in healthcare or education.

> if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas

"Marginal cost barrier" hit, then?

> The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental;

You probably mean gains for someone receiving healthcare and education now, as compared to 10 years ago, or maybe you mean the year-to-year average across every man alive.

You certainly do not mean that a person receiving appropriate healthcare is only 2% better off than one not receiving it, or that an educated person is only 2% better off than an uneducated one?

Because I find such a notion highly unlikely. So, here you have vast amounts of people you can mine for productivity increases, simply by providing things that already exist and are available in unlimited supply to anyone who can produce money at will. Instead, let's build warehouses and fill them with obsolete tech, power it all up using a tiny Sun and... what exactly?

This seems like a thinly disguised act of an obsessed person that will stop at nothing to satisfy their fantasies.

[deleted]

> Finally intelligence can also be controlled by capital

The relationship between capital and AI is a fascinating topic. The contemporary philosopher who has thought most intensely about this is probably Nick Land (who is heavily inspired by Eugen von Böhm-Bawerk and Friedrich Hayek). For Land, intelligence has always been immanent in capitalism and capitalism is actively producing it. As we get closer to the realization of capitalism's telos/attractor (technological singularity), this becomes more and more obvious (intelligible).

In 2024, global GDP was $111 trillion.[1] Investing 1 or 2% of that to improve global productivity via AI does not seem exaggerated to me.

[1] https://data.worldbank.org/indicator/NY.GDP.MKTP.CD

2% is a lot! There are only fifty things you can invest 2% of GDP in before you occupy the entire economy. But there are a lot of things on the list of services people need: food, water, shelter, heating, transportation, education, healthcare, communications, entertainment, mining, materials, construction, research, maintenance, legal services... To allocate each one 1% or 2% of the economy may seem small, but pretty quickly you hit 100%.

Most of what you have mentioned is not investment, but consumption. Investment means using money to make more money in the future. Global investment rates are around 25% of global GDP. The average return on investment is about 10% per year. In other words: using 1% or 2% of GDP would count as a success if it leads to an improvement in GDP of more than 0.1% or 0.2% next year. I think expecting a productivity gain on this scale due to AI is not unrealistic for 2026.
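That break-even arithmetic can be sketched in a few lines (the figures are the commenter's assumptions — $111T global GDP, a 1-2% investment share, and a ~10% benchmark return — not authoritative data):

```python
# Break-even sketch: at a ~10% benchmark return, an investment "pays off"
# if next year's GDP rises by at least 10% of the invested amount.
gdp = 111e12  # assumed global GDP in USD (the 2024 World Bank figure cited above)

for share in (0.01, 0.02):
    invested = share * gdp
    breakeven_gain = 0.10 * invested  # required GDP improvement next year
    print(f"invest {share:.0%} of GDP = ${invested / 1e12:.2f}T, "
          f"break-even GDP gain = ${breakeven_gain / 1e12:.3f}T "
          f"({breakeven_gain / gdp:.2%} of GDP)")
```

So a 2% outlay only needs to lift next year's GDP by about 0.2% to clear the hurdle rate, which is the comparison the comment is making.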

AI is a big deal though.

I will put it differently,

Investing 1 or 2% of global GDP to widen the wealth gap by 50% and make the top 1% unbelievably rich, while everyone else is looking for jobs or taking out 50-year mortgages, seems like a very bad idea to me.

This problem is not specific to AI, but a matter of social policy.

For example, here in Germany, the Gini index, an indicator of equality/inequality, has been oscillating around 29.75 +/- 1.45 since 2011.[1] In other words, the wealth distribution was more or less stable over the last 15 years, and is less extreme than in the USA, where it was 41.8 in 2023.[2]

[1] https://de.statista.com/statistik/daten/studie/1184266/

[2] https://fred.stlouisfed.org/series/SIPOVGINIUSA

It can be both? Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.

If some one were to say to you - you can have 10,000 more iPhones to play with but your friends would get 100,000 iPhones, would you reject the deal?

A century ago people in the US started to tax the rich much more heavily than we do now. They didn't believe that increasing inequality was necessary - or even actually that helpful - for improving their real livelihood.

Don't be shocked if that comes back. (And that was the mild sort of reaction.)

If you have billions and all the power associated with it, why are you shooting for personal trillions instead of actually, directly improving the day to day for everyone else without even losing your status as an elite, just diminishing it by a little bit of marginal utility? Especially if you read history about when people make that same decision?

I don't think that is scalable to infinite iPhones, since the input materials are finite. If all your friends get 100,000 iPhones, and an EV battery that previously cost 5,000 iPhones now costs 20,000, your 10,000 extra iPhones leave you down 5,000 net. On the other hand, if you already had a good battery, then you're up 20,000 iPhones or so in equity. Also, since everyone has so many iPhones, their net utility drops and they become worth less than their materials, so everyone would have to scrap their iPhones and liquidate them at the value of the recycled metals.

It can be, but there are lots of reasons to believe it will not be. Knowledge work was the ladder between lower and upper classes. If that goes away, it doesn't really matter if electricians make 50% more.

I guess I don’t believe knowledge work will completely go away.

It's not really a matter of some great shift. Millennials are the most educated generation by a wide margin, yet their wealth by middle age is trailing prior generations'. The ladder is being pulled up inch by inch, and I don't see AI doing anything other than speeding up that process at the moment.

>Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.

This sounds like it is written from the perspective of someone who sees their own prosperity increase dramatically so that they end up on the prosperous side of the worsening inequality gap. The fact that those on the other side of the gap see marginal gains in prosperity makes them feel that it all worked out okay for everyone.

I think this is greed typical of the current players in the AI/tech economy. You all saw others getting abundant wealth by landing high-paying jobs with tech companies, and you want not only to do the same but to one-up your peers. It's really a shame that so much tech-bro identity revolves around personal wealth, with zero accountability for the tools you are building to set yourselves in control of the lives of those you have chosen either to leave behind or to wield as tools for further wealth creation through alternate-income SaaS subscription streams or other bullshit scams.

There really is not much difference between tech-bros, prosperity gospel grifters or other religious nuts whose only goal is to be more wealthy today than yesterday. It's created a generation of greedy, selfish narcissists who feel that in order to succeed in their industry, they need to be high-functioning autists so they take the path of self-diagnosis and become, as a group, resistant to peer review since anyone who would challenge their bullshit is doing the same thing and unlikely to want too much light shed on their own shady shit. It is funny to me that many of these tech-bros have no problem admitting their drug experimentation since they need to maintain an aura of enlightenment amongst their peers.

It's gonna be a really shitty world when the dopeheads run everything. As someone who grew up back in the day when smoking dope was something hidden and paranoia was a survival instinct for those who chose that path I can see lots of problems for society in the pipeline.

I think you inadvertently stepped on the point — Yes, what the fuck do I need 10,000 iPhones for? Also, part of the problem is which resources end up in abundance. What am I going to do with more compute when housing and land are limited resources?

Gary's Economics talks about this, but in many cases inequality _is_ the problem. More billionaires means more people investing in limited resources (housing), driving up prices.

Maybe plebes get more money too, but not enough to spend on the things that matter.

It’s just a proxy for wealth using concrete things.

If you were given 10,000 dollars but your friends were each given 100,000 dollars, would you take the deal?

Land and housing can get costlier while other things get cheaper, making you overall more prosperous. This is what happened in the USA and most of the world. Would you take this deal?

I wouldn't be able to hang out with them as much (they'd go do a lot of higher-cost things that I couldn't afford anymore).

I'd have a shittier apartment (they'd drive up the price of the nicer ones, if we're talking about a significant sized group; if it's truly just immediate friends, then instead it's just "they'd all move further away to a nicer area").

So I'd have some more toys but would have a big loss in quality of my social life. Pass.

(If you promised me that those cracks wouldn't happen, sure, it would be great for them. But in practice, having seen this before, it's not really realistic to hold the social fabric together when economic inequality increases rapidly and dramatically.)

No you would have the same house or better. That’s part of the condition.

More to the point, what does research into notions of fairness among primates tell us about the risks of a vast number of participants deciding to take this deal?

You have to tell us the answer so we can resolve your nickname "simianwords" with regard to Poe's Law.

> If you were given 10,000 dollars but your friends were also 100,000 dollars as well, would you take the deal?

This boldly assumes the 10k actually reaches me. Meanwhile the 100k payouts endlessly land as expected.

US source: me + kids + a decade of hunger-level poverty. No medical coverage for decades. Homeless retirement still on the table.

I don't know how nobody has mentioned this before:

The guy with 100k will end up rewriting the rules so that in the next round, he gets 105k and you get 5k.

And people like you will say "well, I'm still better off"

In future rounds, you will try to say "oh, I can't lose 5k for you to get 115k" and when you try to vote, you won't be able to vote, because the guy who has been making 23x what you make has spent his money on making sure it's rigged.

You're missing the point. It's not about jealousy; it's basic economics: supply and demand. No, I would not take the deal if it raised demand for something central to my happiness (housing), driving the price up and making something previously affordable unaffordable.

I would not trade my house for a better iPhone with higher quality YouTube videos, and slightly more fashionable athleisure.

I don't care how many yachts Elon Musk has, I care how many governments.

What if you could buy the same house as before, buy the same iPhone as before and still have more money remaining? But your house cost way way more proportionally.

If you want to claim that that's a realistic outcome you should look at how people lived in the 50s or 80s vs today, now that we've driven up income inequality by dramatically lowering top-end tax rates and reduced barriers to rich people buying up more and more real property.

What we got is actually: you can't buy the same house as before. You can buy an iPhone that didn't exist then, but your boss can use it to request you do more work after-hours whenever they want. You have less money remaining. You have less free time remaining.

Do you have a source for your claim? I have a source that supports what I said: look at the disposable income data from the BLS.

If you’re asking me if I’m an idiot who doesn’t understand basic economics / capitalism, the answer is no. If you’re asking me if I think that in the real world there are negative externalities of inequality in and of itself that makes it more complicated than “everyone gets more but billionaires get more more” than the answer is yes.

Just being born in the US already makes you a top 10% and very likely top 5-1% in terms of global wealth. The top 1% you're harping about is very likely yourself.

> Just being born in the US already makes you a top 10%

Our family learned how long-term hunger (via poverty) is worse in the US because there was no social support network we could tap into (for resource sharing).

Families not in crisis don't need a network. Families in crisis have insufficient resources to launch one. They are widely scattered and their days are consumed with trying to scrape up rent (then transpo, then utilities, then food - in that order).

And so many people in the US are already miserable before yet another round of "become more efficient and productive for essentially the same pay or less as before!!"

So maybe income equality + disposable material goods is not a good path towards people being happier and better off.

It's our job to build a system that will work well for ourselves. If there's a point where incentivizing a few to hoard even more resources to themselves starts to break down in terms of overall quality of life, we have a responsibility to each other to change the system.

Look at how many miserable-ass unhappy toxic asshole billionaires there are. We'll be helping their own mental health too.

It is not really obvious to me that happiness should be part of the social contract.

Happiness is very slippery even in your own life. It seems absurd to me that you should care about my happiness.

So much of happiness is the change from the previous state to the present. I am happy right now because 2026 has started off great for me while 2025 was a bad year.

I would imagine there was never a happier American society than in the years after WW2.

I imagine some of the happiest human societies were the ones in the years after the Black Plague. No one today, though, gains happiness from the absence of the Black Plague.

To believe a society can be built around happiness seems completely delusional to me.

So what? If that's the case, they clearly mean the 0.0001% or whatever number, which is way worse.

>Investing 1 or 2% of global GDP to increase wealth gap 50% more

What’s your definition of wealth gap?

Is it how you feel when you see the name of a billionaire?

It's easy to access statistics about wealth and income inequality. It is worse than it has ever been, and continuing to get worse.

https://www.pewresearch.org/social-trends/2020/01/09/trends-...

Yes the very fact that billionaires exist mean our species has failed.

I do not believe that there is a legitimate billionaire on the planet, in that they haven't engaged in stock manipulation, lobbying, insider trading, corrupt deals, monopolistic practices, dark patterns, corporate tax dodging, personal tax dodging.

You could for example say that the latter are technically legal and therefore okay, but it's my belief that they're "technically legal/loopholes" because we have reached a point where the rich are so powerful that they bend the laws to their own ends.

Our species is a bit of a disappointment. People would rather focus on trivial tribal issues than on anything that impacts the majority of the members of our species. We are well and truly animals.

It’s implied you mean that the ROI will be positive. Spending 1-2% of global GDP with negative ROI could be disastrous.

I think this is where most of the disagreement is. We don’t all agree on the expected ROI of that investment, especially when taking into account the opportunity cost.

They still gotta figure out how their consumers will get the cash to consume. Toss all the developers, and a largish cohort of well-paid people heads towards the dole.

Yeah, I don't think this gets enough attention. It still requires a technical person to use these things effectively. Building coherent systems that solve a business problem is an iterative process. I have a hard time seeing how an LLM could climb that mountain on its own.

I don't think there's a way to solve this issue: one-shotted apps will increasingly look more convincing, in the same way that image generation looks more convincing. But when you peel back the curtain, the output isn't quite correct enough to deploy to production. You could try brute-force vibe iterating until it's exactly what you wanted, but that rarely works for anything that isn't a CRUD app.

Ask any of the image generators to build you a sprite sheet for a 2D character with multiple animation frames. I have never gotten one to do this successfully in one prompt. Sometimes the background will be the checkerboard PNG transparency layer, except the checkers aren't all one color (#000000, #ffffff); instead it's a million variations of off-white and off-black. The legs in walking frames are almost never correct, etc.

And even if they get close, as soon as you try to iterate on the first output, you enter a game of whack-a-mole. Okay, we fixed the background, but now the legs don't look right; let's fix those. Okay, great, the legs are fixed, but now the faces are different in every frame; let's fix those. Oh no, fixing the faces broke the legs again. Etc.

We are in a weird place where companies are shedding the engineers that know how to use these things. And some of those engineers will become solo-devs. As a solo-dev, funds won't be infinite. So it doesn't seem likely that they can jack up the prices on the consumer plans. But if companies keep firing developers, then who will actually steer the agents on the enterprise plans?

> It still requires a technical person to use these things effectively.

I feel like few people critically think about how technical skill gets acquired in the age of LLMs. Statements like this kind of ignore that those who are the most productive already have experience & technical expertise. It's almost like there is a belief that technical people just grow on trees or that every LLM response somehow imparts knowledge when you use these things.

I can vibe code things that would take me a large time investment to learn and build. But I don't know how or why anything works. If I get asked to review it to ensure it's accurate, it would take me a considerable amount of time where it would otherwise just be easier for me to actually learn the thing. Feels like those most adamant about being more productive in the age of AI/LLMs don't consider any of the side effects of its use.

That's not something that will affect the next quarter, so for US companies it might as well be something that happens in Narnia.

> But when you peel back the curtain, that output isn't quite correct enough to deploy to production

What if we change current production environments to fit that black box and make it run somehow with 99% availability and good security?

Especially when it comes to integration with the rest of the business processes and people around these "single apps" :-)

Like before - debt!

This prevents consumers from slacking off and enjoying life; instead they have to continue to work, work, work. They get to consume a little and work much more (after all, they also have to pay interest, and across the consumer credit the masses take on, that adds up to a lot).

In this scenario, it does not even matter that many are unable to pay off all that debt. As long as the amount of work that is extracted from them significantly exceeds the amount of consumption allowed to them all is fine.

The chains that bind used to be metal, but we progressed and became a civilized society. Now it's the financial system and the laws. “The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.” (Anatole France)

Why do we need people to consume when we have the government?

Serious question. As in, we built the last 100 years on "the american consumer", the idea that it would be the people buying everything. There is no reason that needs to or necessarily will continue-- don't get me wrong, I kind of hope it does, but my hopes don't always predict what actually happens.

What if the next 100 is the government buying everything, and the vast bulk of the people are effectively serfs, who HAVE to stay in line or they go to debt prison or tax prison where they become slaves? (Yes, the US has a fairly large population of prison laborers who are forced to work for 15-50 cents/hour; the lucky ones can earn as much as $1.50/hour: https://www.prisonpolicy.org/blog/2017/04/10/wages/)

Where will the government get the money to buy anything if the billionaires and their megacorps have it all and spend sufficient amounts to keep the government from taxing them? We have a K-shaped economy where the capital class is extracting all of the value from the working class, who are headed toward subsistence levels of income, while the lower class dies in the ditch.

At some point rich people stop caring about money and only care about power.

It's a fun thought, but you know what we call those people? Poor. The people who light their own money on fire today are ceding power. The two are the same.

At the end of the day a medieval lord was poor, but he lived a better life than the peasants.

> At the end of the day a medieval lord was poor, but he lived a better life than the peasants.

As measured in knowledge utilized during basic living: The lives of lords were much less complex than that of modern poor people.

1. Some people can afford to light a lot of their money on fire and still remain rich.

2. The trick is to burn other people’s money. Which is a lot more akin to what is going on here. Then, at least in the US, if you’re too big to fail, the fed will just give you more cash effectively diminishing everyone else’s buying power.

Regarding 2: it's as simple as not letting it be your money that's set on fire. Every fiscally responsible individual is making sure they have low exposure to the Mag 7.

I know that all investments have risk, but this is one risky gamble.

US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.

There is no shortage of money to build housing. There is an abundance of regulatory burdens in places that are desirable to live in.

It's not due to a lack of money that housing in SF is extremely expensive.

SF is not the only place where housing is expensive. There are plenty of cities where they could build more housing and they don't because it isn't profitable or because they don't have the workers to build more, not because the government is telling them they can't.

It is expensive in those other places for similar reasons as SF -- the government either tells them they can't (through zoning), or makes it very expensive (through regulation, like IZ / "affordable" housing), or limits profitability (rent control), or some combination of the above. All of these reduce the supply of new housing.

Generally the cities where housing is expensive are exactly the ones where the government is telling people they can't build (or making it very expensive to get approval). Do you have a specific example of a city such as you claim?

Which cities, for example?

> US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.

I am now 100% convinced that the US has the power to build those things, but it will not, because that would mean the lives of ordinary people are elevated even more, and this is not what brutal capitalism wants.

If it can make the top 1% richer in a 10-year span vs. doing good for everyone over 20 years, it will go with the former.

What $700 billion can't do is cure cancers, Parkinson's, etc. We know because we've tried, and that's barely a sliver of what it has cost so far, for middling results.

Whereas $700 billion in AI might actually do that.

Your name is well earned! "Can't cure cancers" is impressively counterfactual [0], as 5-year survival after a cancer diagnosis is up across almost all categories. Despite every cancer being a unique species trying to kill you, we're getting better and better at dealing with them.

[0]https://www.cancer.org/research/acs-research-news/people-are...

Treating cancer is not the same as curing it. Currently, no doctor would ever tell you you are "cured", just that you are in remission.

Yes, we're getting better at treating cancers, but still if a person gets cancer, chances are good the thing they'll die of is cancer. Middling results.

Because we're not good at curing cancers, we're just good at making people survive better for longer until the cancer gets them. 5 year survival is a lousy metric but it's the best we can manage and measure.

I'm perfectly happy investing roughly 98% of my savings into the thing that has a solid shot at curing cancers, autoimmune and neurodegenerative diseases. I don't understand why all billionaires aren't doing this.

How will AI cure neurodegenerative diseases and cancer?

[deleted]

If we knew that we probably wouldn’t need AI to tell us.

But realistically: perhaps by noticing patterns we’ve failed to notice and by generating likely molecules or pathways to treatment that we hadn’t explored.

We don’t really know what causes most diseases anyway. Why does the Shingles vaccine seem to defend against dementia? Why does picking your nose a lot seem to increase risk of Alzheimer’s?

That’s the point of building something smarter than us: it can get to places we can’t get on our own, at least much faster than we could without it.

I don’t think that lack of intelligence is the bottleneck. It might be in some places, but categorically, across the board, our bottlenecks are much more pragmatic and mundane.

Consider another devastating disease: tuberculosis. It’s largely eradicated in the 1st world but is still a major cause of death basically everywhere else. We know how to treat it, lack of knowledge isn’t the bottleneck. I’d say effectively we do not have a cure for TB because we have not made that cure accessible to enough humans.

That’s a weird way to frame it. It’s like saying we don’t know how to fly because everyone doesn’t own a personal plane.

We have treatments (cures) for TB: antibiotics. Even XDR-TB.

What we don’t have is a cure for most types of cancer.

Flying is a bad example because airlines are a thing and make flying relatively accessible.

I get your point, but I don’t think it really matters. If a cure for most (or all) cancers is known but it’s not accessible to most people then it is effectively nonexistent. E.g it will be like TB.

> We have treatments (cures) for TB

TB is still one of the top 10 causes of death globally.

Maybe it should give you pause then, that not everyone else is investing 98% of their savings?

It gives me pause that most people drive cars or are willing to sit in one for more than 20 minutes a week.

But people accept the status quo and are afraid to take a moment’s look into the face of their own impending injury, senescence and death: that’s how our brains are wired to survive and it used to make sense evolutionarily until about 5 minutes ago.

> I don't understand why all billionaires aren't doing this.

I know, shocking isn’t it?

Ah, yes: "well, we can't cure cancer or autoimmune and neurodegenerative diseases, but I'm willing to invest basically all my money into a thing that's...trained on the things we know how to do already, and isn't actually very good at doing any of them."

...Meanwhile, we are developing techniques to yes, cure some kinds of cancer, as in every time they check back it's completely gone, without harming healthy tissue.

We are developing "anti-vaccines" for autoimmune diseases, that can teach our bodies to stop attacking themselves.

We are learning where some of the origins of the neurodegenerative diseases are, in ways that make treating them much more feasible.

So you're 100% wrong about the things we can't do, and your confidence in what "AI" can do is ludicrously unfounded.

Every doctor and researcher in the world is trained on things we already know how to do.

I’m not claiming we haven’t made a dent. I’m claiming I’m in roughly as much danger from these things right now as any human ever has been: middling results.

If we can speed up the cures by even 1%, that’s cumulatively billions of hours of human life saved by the time we’re done.
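The “billions of hours” claim checks out as back-of-envelope arithmetic. Here is a minimal sketch, with every input number assumed purely for illustration (none of them come from this thread):

```python
# Back-of-envelope: hypothetical inputs, chosen only to illustrate the scale.
deaths_per_year = 10_000_000       # assumed annual deaths from a major disease class
years_lost_per_death = 10          # assumed average life-years lost per death
hours_per_year = 365 * 24          # 8760 hours in a year

timeline_years = 50                # assumed time until a cure arrives without any speedup
speedup = 0.01                     # the 1% faster discovery from the comment
years_earlier = timeline_years * speedup  # cure arrives 0.5 years sooner

# Deaths averted during the half-year head start, converted to hours of life.
hours_saved = deaths_per_year * years_lost_per_death * hours_per_year * years_earlier
print(f"{hours_saved:.2e} hours saved")  # ~4.38e11, i.e. hundreds of billions
```

Even with these modest assumed inputs, a 1% speedup lands in the hundreds of billions of hours, so the claim is at least arithmetically plausible.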

But what they can do, that AI can't, is try new things in measured, effective, and ethical ways.

And that hypothetical "billions of hours of human life saved" has to be measured against the actual damage being done right now.

Real damage to the economy, environment, politics, social cohesion, and people's lives now

vs

Maybe, someday, we improve the speed of finding cures for diseases? In an unknown way, at an unknown time, for an unknown cost, and by an unknown amount.

Who knows, maybe they'll give everyone a pony while they're at it! It seems just as likely as what you're proposing.

[flagged]

There's one additional question we could have here, which is "is AI here to stay and is it net-positive, or does it have significant negative externalities?"

> What we really want to ask ourselves is whether our economy is set up to mostly get things right, or it is wastefully searching.

We've so far found two ways in recent memory that our economy massively fails when it comes to externalities.

Global Warming continues to get worse, and we cannot globally coordinate to stop it when the markets keep saying "no, produce more oil, make more CO2, it makes _our_ stock go up until the planet eventually dies, but our current stock value is more important than the nebulous entire planet's CO2".

Ads and addiction to gambling games, tiktok, etc also are a negative externality where the company doing the advertising or making the gambling game gains profit, but at the expense of effectively robbing money from those with worse impulse control and gambling problems.

Even if the market votes that AI will successfully extract enough money to be "here to stay", I think that doesn't necessarily mean the market is getting things right nor that it necessarily increases productivity.

Gambling doesn't increase productivity, but the market around Kalshi and sports betting sure indicates it's on the rise lately.

AI could be here to stay and "chase a career as an electrician helping build datacenters" could also be a mistake. The construction level could plateau or decline without a bubble popping.

That's why it can't just be a market signal "go become an electrician" when the feedback loop is so slow. It's a social/governmental issue. If you make careers require expensive up-front investment largely shouldered by the individuals, you not only will be slow to react but you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.

> you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.

I think this is where we're headed, very quickly, and I'm worried about it from a social stability perspective (as well as personal financial security of course). There's probably not a single white-collar job that I'd feel comfortable spending 4+ years training for right now (even assuming I don't have to pay or take out debt for the training). Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.

Lots and lots of people who did or will do "all the right things," with no benefit earned from it. Even if hypothetically there is something new you can reskill into every five years, how is that sustainable? If you're young and without children, maybe it is possible. Certainly doesn't sound fun, and I say this as someone who joined tech in part because of how fast-paced it was.

> Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.

I'd like to see real examples of this, beyond trivial ones like low-quality copywriting (i.e. the "slop" before there was slop) that just turns into copyediting. Current AIs are a huge force multiplier for most white-collar skills, including software development.

I suspect a lot of this is due to large amounts of liquidity sloshing around looking for returns. We are still dealing with the consequences of ZIRP (Zero Interest Rate Policy) and QE (Quantitative Easing), where money meant to support the economy through the Great Financial Crisis and Covid was largely funneled into the top, causing the 'everything bubble'. The rich got (a lot) richer and now have to find something to do with that wealth. The immeasurable returns promised by LLMs (in return for biblical amounts of investment) fit that bill very well.

[deleted]

Your comment doesn't say anything

[deleted]

If

"The real question is whether the boom is, economically, a mistake."

The answer to this is two-part:

1. Have we seen an increase in capability over the last couple of years? The answer here is clearly yes.

2. Do we think that this increase will continue? This is unknown. It seems so, but we don't know and these firms are clearly betting that it will.

1a. Do we think that with existing capability that there is tremendous latent demand? If so the buildout is still rational if progress stops.

> People will take courses in those things and try to get a piece of the winnings.

The problem is boom-bust cycles. Electricians will always be in demand but it takes about 3 years to properly train even a "normal" residential electrician - add easily 2-3 years on top to work on the really nasty stuff aka 50 kV and above.

No matter what, the growth of AI is too rapid and cannot be sustained. Even if the supposed benefits of AI all come true - the level of growth cannot be upheld because everything else suffers.

> it takes about 3 years to properly train even a "normal" residential electrician

To pull ordinary wire of predefined dimensions through exposed conduit? No way it takes more than a few weeks to learn.

I'm talking about proper German training, not the kind of shit that leads to what Cy Porter (the home inspector legend) exposes on Youtube.

Shoddy wiring can hold up for a looong time in homes because, outside of electric car chargers and baking ovens, nothing draws high current for long periods, and as long as no device develops a ground fault, even the lack of a GFCI isn't noticeable. But a data center? Even smaller ones routinely rack up megawatts of power here, large hyperscaler deployments hundreds of megawatts. Sustained, not peak. That puts a lot of stress on everything involved: air conditioning, power, communications.

And for that to hold up, your neighbor Joe who does all kinds of trades as long as he's getting paid in cash won't cut it.

> What we really want to ask ourselves is whether our economy is set up to mostly get things right, or it is wastefully searching.

I can’t speak to the economy as a whole, but the tech economy has a long history of bubbles and scams. Some huge successes, too—but gets it wrong more often than it gets it right.

> If AI is here to stay, as a thing that permanently increases productivity,

Thing is, I am still waiting to see where it increases productivity aside from some extremely small niches like speech to text and summarizing some small text very fast.

Serious question, but have you not used it to implement anything at your job? Admittedly I was very skeptical, but last sprint, in 2 days, I got 12 pull requests up for review by running 8 agents on my computer in parallel and about 10 more on cloud VMs. The PRs are all double-reviewed, QA'd, and merged. The ones that don't have PRs yet are larger refactors, one ~40k LOC and the other ~30k LOC, and I just need actual time to go through every line myself and self-test appropriately; otherwise even more would have been finished. These are all items tied to money in our backlog. It would have taken me about 5 times as long to close those items out without this tooling. I also wouldn't have had as much time to produce and verify as many unit tests as I did. Is this not increased productivity?

[deleted]

> I am still waiting to see where it increases productivity...

If you are a software engineer and you are not using AI to help with software development, then you are missing out. Like many other technologies, using AI agents for software dev work takes time to learn and master. You are not likely to get good results if you try it half-heartedly as a skeptic.

And no, nobody can teach you these skills in a comment on an online forum. This requires trial and error on your part. If well-known devs like Linus Torvalds are saying there is value here and you are not seeing it, then the issue is not with the tool.

Are you a doctor or a farmer?

If you are a software engineer, you are missing out on a lot, literally a lot!

What is he missing? Do you have anything quantitative other than an AI marketing blog or an anecdote?