>The Schneider Electric report estimates that all generative AI queries consume 15 TWh in 2025 and will use 347 TWh by 2030; that leaves 332 TWh of energy—and compute power—that will need to come online to support AI growth.
+332 TWh is like... +1% of US energy consumption, or +8% of US electricity. If the AI bubble bursts ~2030, that's functionally what the US will be left with (assuming the new power infra actually gets built) mid/long term, since compute depreciates in 1-5 years. For reference, the dotcom burst left the US with a fuckload of fiber layouts that last 30/40/50+ years. We're still using capex from the railroad bubble 100 years ago. I feel like people are failing to grasp how big of an F the US will eat if AI bursts relative to past bubbles. I mean, it's better than tulip mania, but obsolete AI chips are also closer to tulips than to fiber or rail in terms of stranded depreciated assets.
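Rough numbers, for anyone who wants to sanity-check (the US baseline figures below are my own ballpark assumptions, not from the Schneider report):

    # Back-of-envelope check of the "+1% of energy / +8% of electricity" claim.
    # Baselines are rough assumptions, not from the report:
    US_ELECTRICITY_TWH = 4100      # approx. annual US electricity consumption
    US_PRIMARY_ENERGY_TWH = 27000  # approx. annual US primary energy use

    delta_twh = 347 - 15                    # projected AI growth, 2025 -> 2030
    avg_power_gw = delta_twh * 1000 / 8760  # spread over a year: TWh -> avg GW

    print(f"delta: {delta_twh} TWh/yr (~{avg_power_gw:.0f} GW average draw)")
    print(f"share of US electricity:    {delta_twh / US_ELECTRICITY_TWH:.1%}")
    print(f"share of US primary energy: {delta_twh / US_PRIMARY_ENERGY_TWH:.1%}")

That comes out to ~38 GW of average draw, ~8.1% of electricity and ~1.2% of primary energy, so the +8% / +1% shorthand checks out.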
Honestly the capacity of the excess fiber put in the ground back then has little to do with fiber capacity now. The same fiber that could carry 100 Mbps back then can take new transceivers that push a terabit.
Current network capacity runs off that excess fiber. Networking gets upgraded at the nodes, and the dumb pipe (the excess fiber) gets more efficient, the same way we upgrade switching on 100+ year old rail to improve freight efficiency. Much of the $$$$$ / capex "wasted" in the dotcom boom went to civil engineering: digging trenches to build out "agnostic" fiber conduits with multi-decade lifespans that can be repurposed for general use.
The bulk of the AI capex buildout is going into specialized hardware and data centers with bespoke power / cooling / networking profiles. If current LLM approaches turn out to be a dead end, the entire data center is potentially a stranded asset unless future applications can specifically take advantage of it. And there's a good chance that _if_ LLMs crash, it will be because something is inherently wrong with the current approach, i.e. the compute cost doesn't make commercial sense, in which case reusing the stranded data centers might not make economic sense either.
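To put the "closer to tulips than fiber" point from upthread in numbers, here's a toy straight-line depreciation sketch (the lifespans are rough assumptions I picked for illustration, not sourced figures):

    # Toy straight-line depreciation: fraction of original capex value left
    # N years after a hypothetical ~2030 bust. Lifespans are illustrative guesses.
    ASSETS = {"AI accelerators": 4, "dotcom fiber": 40, "railroad": 100}

    def value_left(lifespan_years, years_after_bust):
        return max(0.0, 1.0 - years_after_bust / lifespan_years)

    for name, life in ASSETS.items():
        print(f"{name:15s} (~{life:>3}y life): "
              f"{value_left(life, 5):.0%} of value left 5y after the bust")

Five years out, the accelerators are at 0% while the fiber is still near 88% and the rail near 95%, which is the whole stranded-asset asymmetry in one number.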