It's hard to comprehend the scale of these investments. Compared to notable industrial projects, they're almost unbelievable.

Every week in 2026, Google will spend the cost of a Burj Khalifa; Amazon, a Wembley Stadium.

Facebook will spend a Channel Tunnel every month.

I have been having this conversation more and more with friends. As a research topic, modern AI is a miracle, and I absolutely love learning about it. As an economic endeavor, it just feels insane. How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

I have to admit I'm flip-flopping on the topic, back and forth from skeptic to scared enthusiast.

I just had an LLM recreate a decent approximation of the file system browser from the movie Hackers (similar to the SGI one from Jurassic Park) in about 10 minutes. At work I've had it do useful features and bug fixes daily for a solid week.

Something happened around New Year's 2026. The clients, the skills, the MCPs, the tools, and the models reached some new level of usefulness. Or maybe I've just been lucky for a week.

If it can reliably do things like what I saw last week, then every tool, widget, utility, and library currently making money for a single dev or small team of devs is about to get eaten. Maybe even applications like Jira, Slack, or even Salesforce or SAP can be made in-house by even small companies. "Make me a basic CRM".

Just a few months ago I found it mostly frustrating to use LLMs, and I thought the whole thing was little more than a slight improvement over googling info for myself. But the past week has been mind-blowing.

Is it the beginning of the Star Trek ship computer? If so, it's as big as the smartphone, the internet, or even the invention of the microchip. And then the investments make sense in a way.

The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

My team of 6 people has been building software to compete with an already established product from a major software corporation. I'm not saying we'll succeed, or that we'll be better, or that we'll cover every corner case they've learned over the past 30 years. But 6 senior devs are getting stuff done at an insane pace. And if we can _attempt_ to do this, which would have been unthinkable 2 years ago, I can only wonder what will happen next.

> My team of 6 people has been building software to compete with an already established product from a major software corporation.

How long until the devs at that major corporation start using an LLM? Do you think your smaller team can still keep up with their huge team?

If the goal is simply to undercut the incumbent with roughly the same product, then it doesn't really matter if the incumbent starts using LLMs too; their cost structure, margin expectations, etc. are already relatively set.

Yeah, I'm curious how much the moat of big software companies will shrink over the next few years. How long before I can ask a chatbot to build me a Windows-like OS from scratch (complete with an office suite) and it can do a reasonable job?

And what happens then? Will we stop using each other's code?

I agree with you, and I share the experience. Something changed recently for me as well: I finally found the mode of working that actually gets value out of these things. I find it refreshing that I don't have to write boilerplate myself or think about the exact syntax of the framework I use. I get to think about the part that adds value.

I've had the same experience: we rejected a SAP offering with the idea of building the same thing in-house.

But... aside from the obvious fact that building a thing is easier than using and maintaining the thing, the question arose whether we even need what SAP offered, or whether we could get agents to do it.

In your example, do you actually need that basic CRM, or could you get agents to do the job without any additional software?

I don't know what this means for our jobs. I do know that, if making software becomes so trivial for everyone, companies will have to find another way to differentiate and compete. And hopefully that's where knowledge workers come in again.

Exactly. I hear this "wow, finally I can just let Claude work on a ticket while I get coffee!" stuff, and it makes me wonder why none of these people feel threatened in any way.

And if you can be so productive, then where exactly do we need this surplus productivity in software right now, when we're no longer in the "digital transformation" phase?

I don't feel threatened because no matter how tools, platforms and languages improved, no matter how much faster I could produce and distribute working applications, there has never been a shortage of higher level problems to solve.

Now if the only thing I was doing was writing code to a specification written by someone else, then I would be scared, but in my quarter century career that has never been the case. Even at my first job as a junior web developer before graduating college, there was always a conversation with stakeholders and I always had input on what was being built. I get that not every programmer had that experience, but to me that's always been the majority of the value that software developers bring, the code itself is just an implementation detail.

I can't say that I won't miss hand-crafting all the code, there certainly was something meditative about it, but I'm sure some of the original ENIAC programmers felt the same way about plugging in cables to make circuits. The world of tech moves fast, and nostalgia doesn't pay the bills.

> there has never been a shortage of higher level problems to solve.

True, but whether all those problems are SEEN as worth chasing, business-wise, is another matter. The short term is what matters most for individuals currently in the field, and in the short term fewer devs are needed, which leads to a drop in salaries and higher competition. You will have a job, but if you explore the job market you will find it much harder to get the job you want at the salary you want without facing huge competition. At the same time, your current employer might be less likely to give you raises, because they know your bargaining power has decreased due to the job market conditions.

Maybe in 40 years' time new problems will change the job market dynamics, but you will likely be near retirement by then.

Smart devs know this is the beginning of the end of high-paying dev work. Once the LLMs get really good, most dev work will go to the lowest bidder. Just like factory work did 30 years ago.

Then what's the smart dev's plan: sit at the vibe-coding casino until the bossman calls you into the office?

Make as much money as you can while you still can before the bottom falls out. Or go work for one of the AI companies on AI. Always better to sell picks and shovels than dig for gold. Eventually the gold runs out where you are.

Exactly, it will be a CodeUber: we just pick a task from the app and deliver the results ))

I thought AI would have automated that part already; I expect to just drive an actual Uber.

Lots of dreamers here, yet Vanguard reports 4x growth in jobs and wages for the 100 jobs most exposed to AI.

Bit naive to think that positive pattern will hold for the next ten years or so, or whatever time is left between now and your retirement. And arguably, the later that positive pattern breaks, the worse for you, because retraining as an older person has its own challenges.

Oh please. SAP doesn't exist only because writing software isn't free or cheap.

Yeah, I'm having a similar experience. I've been wanting a standard test suite for JMAP email servers, so we can make sure all JMAP server implementations follow the (somewhat complex) spec in a consistent manner. I spent a single day prompting Claude Code on Friday and walked away with about 9,000 lines of code containing 300 unit tests for JMAP servers, plus a web interface showing the results. It would have taken me at least a week or two to make something similar by hand.

There are some quality issues - I think some of the tests are slightly wrong. We went back and forth on some ambiguities Claude found in the spec, and on how we should actually interpret what the JMAP spec is asking. But after just a day, it's nearly there. And it's already very useful to see where existing implementations diverge in their output, even if the tests sometimes don't correctly identify which implementation is wrong. Some of the test failures are 100% correct - it found real bugs in production implementations.
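
For flavor, each test in the suite is conceptually something like this - a hypothetical sketch, not the actual generated code. The URL and credentials are placeholders; the capability URNs come straight from RFC 8620/8621:

    # Hypothetical sketch of one such conformance test (not the actual
    # generated code). BASE_URL and the credentials are placeholders;
    # the capability URNs come from RFC 8620/8621.
    import requests

    BASE_URL = "https://jmap.example.com"  # server under test (assumed)

    def test_session_advertises_mail_capability():
        # RFC 8620: the Session object is reachable via /.well-known/jmap
        resp = requests.get(f"{BASE_URL}/.well-known/jmap",
                            auth=("user", "password"))
        assert resp.status_code == 200
        session = resp.json()
        # The core capability is mandatory; RFC 8621 adds mail for email servers.
        assert "urn:ietf:params:jmap:core" in session["capabilities"]
        assert "urn:ietf:params:jmap:mail" in session["capabilities"]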

Using an AI to do weeks of work in a single day is the biggest change in what software development looks like that I've seen in my 30+ year career. I don't know why I would hire a junior developer to write code any more. (But I would hire someone smart enough to wrangle the AI.) I just don't know how long "AI prompter" will remain a valuable skill. The AIs are getting much better at operating independently. It won't be long before us humans aren't needed to babysit them.

> The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

I'm not a professional programmer, but I am the I.T. department for my wife's small office. I used ChatGPT recently (as a search engine) to help create a web interface for some files on our intranet. I'm sure no one in the office has the time or skills to vibe code this in a reasonable amount of time. So I'm confident that my "job" is secure :)

> I'm sure no one in the office has the time or skills to vibe code this.

The thing you're describing can be vibe coded by anyone. It's not that teachers or nurses are going to start vibe coding tomorrow; the risk comes from other programmers outworking you to show off to the boss. Or companies pitting devs against each other, or mistakenly assuming they need very few programmers, or PMs suddenly starting to vibe code when their own jobs are threatened.

Not many. Money is not a perfect abstraction. The raw materials used to produce $100B worth of Nvidia chips will not yield you many hospitals. An AI researcher with a $100M signing bonus from Meta ain't gonna lay you much brick.

It's not about consuming or repurposing the raw materials used for chips. peterlk said:

> How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

It's about using the money to build things that we actually need and that have more long-term utility. No one expects someone with a $100M signing bonus at Meta to lay bricks, but that $100M could buy a lot of bricks and pay a lot of bricklayers to build hospitals.

I think it's a mistake to believe that this money would exist if it were to be spent on those things. The existence of money is largely derived from society-scale intention, excitement, or urgency. These hospitals, machine shops, etc. could not manifest the same amount of money unless packaged as an exciting society-scale project by a charismatic and credible character. But AI, as an aggregate, has this pull, and there are a few clear investment channels into which to pour this money. The money didn't need to exist yesterday; it can be created by pulling a loan from (ultimately) the Fed.

Seems like the main issue is that taxes in America are far too low.

I mean, you're just talking about spending money. Google isn't trying to build data centers for fun. These massive outlays are only there because the folks making them think they will make much more money than they spend.

> How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build

“We?”

This isn’t “our” money.

If you buy shares, you get a voice.

FWIW the models aren't thrown away. The weights are used to preinit the next foundation model training run. It helps to reuse weights rather than randomize them even if the model has a somewhat different architecture.
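
Mechanically, the reuse looks something like the sketch below - PyTorch-flavored and hypothetical (build_new_model and the checkpoint path are stand-ins, not any lab's actual pipeline): copy every tensor whose name and shape still match, and let the rest keep their random init.

    # Hypothetical warm-start sketch (PyTorch): copy every tensor whose
    # name and shape still match the old checkpoint; anything new in the
    # architecture keeps its random initialization.
    import torch

    old_state = torch.load("old_foundation_model.pt", map_location="cpu")
    new_model = build_new_model()  # assumed constructor for the new architecture

    new_state = new_model.state_dict()
    reused = {name: t for name, t in old_state.items()
              if name in new_state and new_state[name].shape == t.shape}
    new_state.update(reused)
    new_model.load_state_dict(new_state)
    print(f"reused {len(reused)} of {len(new_state)} tensors from the old run")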

As for the rest: the constraint on hospital capacity (at least in some countries; not sure about the USA) isn't money for capex, it's doctors' unions restricting training slots.

There is a certain logic to it though. If the scaling approaches DO get us to AGI, that's basically going to change everything, forever. And if you assume this is the case, then "our side" has to get there before our geopolitical adversaries do. Because in the long run the expected "hit" from a hostile nation developing AGI and using it to bully "our side" probably really dwarfs the "hit" we take from not developing the infrastructure you mentioned.

Any serious LLM user will tell you that there's no way to get from LLM to AGI.

These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them.

Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want, and provide it with clear examples, it won't be able to do it.

This is also why there have been zero-to-very-few new scientific discoveries made by LLMs.

Most humans aren't making new scientific discoveries either, are they? Does that mean they don't have AGI?

Intelligence is mostly about pattern recognition. All those model weights represent patterns, compressed and encoded. If you can find a similar pattern in a new place, perhaps you can make a new discovery.

One problem is that the patterns are static. Sooner or later, someone is going to figure out a way to give LLMs "real" memory. I'm not talking about keeping a long-term context, extending it with markdown files, RAG, etc. like we do today for an individual user, but about updating the underlying model weights incrementally, basically resulting in a learning, collective memory.
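
The naive version of that idea is mechanically trivial - something like this speculative sketch, where model is an assumed PyTorch-style language model, and where the genuinely hard part (avoiding catastrophic forgetting) isn't addressed at all:

    # Speculative sketch of weight-level memory: fold each interaction into
    # the model with one tiny gradient step instead of stuffing it into the
    # context window. "model" is an assumed PyTorch-style language model;
    # nothing here prevents catastrophic forgetting, which is the hard part.
    import torch
    import torch.nn.functional as F

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)  # very small steps

    def consolidate(tokens):  # tokens: 1-D LongTensor for one interaction
        inputs = tokens[:-1].unsqueeze(0)   # (1, seq)
        targets = tokens[1:].unsqueeze(0)   # next-token targets
        logits = model(inputs)              # (1, seq, vocab)
        loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                               targets.view(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()  # the weights now permanently encode this interaction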

Can most people venture outside their training data?

Are you seriously comparing chips running AI models and human brains now???

Last time I checked, the chips are not rewiring themselves the way the brain does, nor does the software rewrite itself, or the model recalibrate itself - nothing that could be called "learning", which is normal daily work for a human brain.

Also, the models are not models of the world, but of our text communication only.

Human brains start by building a model of the physical world, from age zero. Much later, on top of that foundation, more abstract ideas emerge, including language. Text, even later. And all of it on a deep layer of a physical world model.

The LLM has none of that! It has zero depth behind the words it learned. It's like a human learning some strange symbols and the rules governing their appearance. The human will be able to reproduce valid chains of symbols following the learned rules, but they will never have any understanding of those symbols. In the human case, somebody would have to connect those symbols to their world model by explaining the "meaning" in terms they can already use. For the LLM that is not possible, since it doesn't have such a model to begin with.

How anyone can even entertain the idea of "AGI" based on uncomprehending symbol manipulation, where every symbol has zero depth of a physical world model, only connections to other symbols, is beyond me TBH.

In some ways, no: to learn something you have to LEARN it, and then it's in your training data. But humans can do that continuously, sometimes randomly, and without being prompted.

If you're a scientist -- and in many cases if you're an engineer, or a philosopher, or even perhaps a theologian -- your job is quite literally to add to humanity's training data.

I'd add that fiction is much more complicated. LLMs can clearly write original fiction, even if they are, as yet, not very good at it. There's an idea (often attributed to John Gardner or Leo Tolstoy) that all stories boil down to one of two scenarios:

> "A stranger comes to town."

> "A person goes on a journey."

Christopher Booker wrote that there are seven: https://en.wikipedia.org/wiki/The_Seven_Basic_Plots

So I'd tentatively expect tomorrow's LLMs to write good fiction along those well-trodden paths. I'm less sanguine about their applications in scientific invention and in producing original music.

Yes, they can.

Ever heard of creativity?

I mean yeah, but that's why there are far more research avenues these days than just pure LLMs, for instance world models. The thinking is that if LLMs can achieve near-human performance in the language domain then we must be very close to achieving human performance in the "general" domain - that's the main thesis of the current AI financial bubble (see articles like AI 2027). And if that is the case, you still want as much compute as possible, both to accelerate research and to achieve greater performance on other architectures that benefit from scaling.

How does scaling compute not go hand in hand with scaling energy generation? To me, scaling one and not the other puts a different set of constraints on overall growth. And the energy industry works at a different pace than these hyperscalers scaling compute.

The other thing here is that we know the human brain learns from far fewer samples than LLMs in their current form. If there is any kind of learning breakthrough, the amount of compute used for learning could explode overnight.

Here's hoping you are Chinese, then.

Why?

Well, I tried to specifically frame it in a neutral way, to outline the thinking that pretty much all the major nations / companies currently have on this topic.

Remember the good old days of complaining about Bitcoin taking the energy output of a whole town.

It has never _not_ been time to build all the power plants we can environmentally afford.

More power enables a higher quality of living and a more advanced civilization. It will be put to use doing something useful, or at the very worst it'll make existing useful things less expensive, opening them up to more people who want them.

> It has never _not_ been time to build all the power plants we can environmentally afford.

The US's challenge is that new energy sources wait 3-5 years before they can connect to the grid.

refs: https://kagi.com/search?q=how+long+are+new+energy+sources+wa...

I'm a simple man, I just want these companies to pay taxes where they make money.

> I'm a simple man, I just want these companies to pay taxes where they make money.

The folks who bankroll elections work tirelessly to ensure this doesn't happen.

Haters will say Sora wasn't worth it.

Incredible how quickly that moment passed. Four months on, it's barely clinging to the App Store top 100, below killer apps such as Gossip Harbor®: Merge & Story.

OK, I'll bite. Was it worth it? What have people who haven't used it missed?

[deleted]

What is Sora?

One of many AI video generators that are filling video social media with rage-bait garbage.

If you see a video of a cat running away from a police traffic stop, it was probably made with Sora.

Inflation adjusted?

Seems to be.

Facebook's investment is to be $135bn [1]. The Channel Tunnel cost, "in 1985 prices, £4.65 billion" [2], which is £15.34bn in December 2025 money [3], or $20.5bn at the current exchange rate.
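
For anyone who wants to redo the arithmetic, a throwaway script using only the figures above (the inflation multiplier and exchange rate are the ones implied by [1]-[3], not independently sourced):

    # Back-of-the-envelope check using only the figures quoted above; the
    # inflation multiplier and exchange rate are implied, not sourced.
    tunnel_1985_gbp = 4.65e9        # Channel Tunnel, 1985 prices [2]
    inflation = 15.34 / 4.65        # 1985 -> Dec 2025 multiplier, ~3.3x [3]
    gbp_to_usd = 20.5 / 15.34       # implied rate, ~1.34 USD/GBP
    meta_2026_usd = 135e9           # Facebook's planned spend [1]

    tunnel_today_usd = tunnel_1985_gbp * inflation * gbp_to_usd
    print(f"one tunnel today: ${tunnel_today_usd / 1e9:.1f}bn")               # ~$20.5bn
    print(f"tunnels per year of capex: {meta_2026_usd / tunnel_today_usd:.1f}")  # ~6.6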

That is staggering.

(I didn't check the other figures.)

[1] https://www.bbc.com/news/articles/cn8jkyk78gno

[2] https://en.wikipedia.org/wiki/Channel_Tunnel

[3] https://www.bankofengland.co.uk/monetary-policy/inflation/in...

It's incredibly sad and depressing. We could be building green energy, parks, public transit, education, healthcare.

Not really your point, but I think the skills needed to create those things are much slower to develop than the capacity to produce chips and data centres.

So they couldn't really build one of these projects weekly, since the cost of construction materials, design engineers, and construction workers would inflate rapidly.

Worth keeping in mind when people say "we could have built 52 hospitals instead!" or similar. Yes, but not really... the other constraints would quickly reveal themselves.

So now you understand: you can't compare things just by how much they cost.
