What will be the advantage of having a bunch of obsolete hardware? All I see is more e-waste.

>What will be the advantage of having a bunch of obsolete hardware? All I see is more e-waste.

The energy build-out and the data centers themselves are not wasted. You can swap A100 GPUs for B200 GPUs in the same datacenter. A100s will be about five years old when Blackwell ships, which is roughly how long datacenter chips are expected to last.

I do, however, think the industry will move to newer hardware faster to squeeze out as much efficiency as possible, given the energy bottleneck. I therefore expect huge demand for TSMC's N2 node. In fact, TSMC themselves have said designs for N2 far outnumber those for N3 at the same stage of the node's life. This is most likely driven by AI companies chasing efficiency because electricity is the scarce resource.

I don't think the parent was referring to hardware alone. The 'rails' in this context also include the AI algorithms and the underlying software. New research and development could lead to breakthroughs that let us use significantly less hardware than we currently do. Just as what survived the dot-com crash wasn't solely physical infrastructure but also protocols like HTTP, I believe the AI boom will leave behind advancements beyond hardware. There may be short-term excess, but the long-term benefits, particularly on the software side, could be immense.

That's my big worry. The Internet was built with the idea of being a commons. LLMs are very much built with a trade-secret mentality, from data acquisition to the algorithms. I don't think such techniques will proliferate for commercial use as easily when the bubble bursts.

Well, at least they're paving the way to more efficient hardware. GPUs are way, way more energy-efficient than CPUs, and parallel architectures are the only remaining way to scale compute.

But yes, a lot of energy wasted in the growing phase.

GPUs are different than CPUs.

They’re way more efficient at matmuls, but start throwing branching logic at them and they slow down a lot.

Literally some fraction of the lanes will no-op while the others execute a branch, since all lanes in a warp run in lockstep.
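A minimal CUDA sketch of the effect (the kernel and names are made up purely for illustration):

    #include <cuda_runtime.h>

    // Toy kernel: even and odd lanes of a single 32-wide warp take
    // different branches. The warp executes both paths one after the
    // other, masking off the inactive lanes, so on each path half the
    // lanes are effectively no-oping.
    __global__ void divergent(float *out) {
        int i = threadIdx.x;
        if (i % 2 == 0) {
            out[i] = i * 2.0f;  // odd lanes masked off here
        } else {
            out[i] = i * 3.0f;  // even lanes masked off here
        }
    }

    int main() {
        float *d_out;
        cudaMalloc(&d_out, 32 * sizeof(float));
        divergent<<<1, 32>>>(d_out);  // one block of exactly one warp
        cudaDeviceSynchronize();
        cudaFree(d_out);
        return 0;
    }

(Volta and newer have more flexible independent thread scheduling, but a divergent warp still pays roughly the cost of executing both paths.)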

>But yes, a lot of energy wasted in the growing phase.

Why exactly is energy wasted during this phase?

Are you expecting hardware to become obsolete much faster? Obsolescence mostly tracks TSMC's node cadence, which is still 2-3 years, so AI hardware will remain bound to that cadence.

In the case of dark fiber, the hardware was fine; wavelength-division multiplexing was invented, which increased capacity by 100x in some cases and crashed demand for new fiber.

I think OP is suggesting AI algorithms and training methods will improve, yielding enormous performance gains on existing hardware and causing a similar surplus of infrastructure and crash in demand.

How much of current venture spending is going into reusable R&D that can be carried forward the way the physical infrastructure in those examples was?

Considering that models have been getting more powerful for the same number of parameters -- all of it.

That... is not relevant. The question is what percentage of R&D spend gets "encoded" into something that can survive the dissolution of its holding company, and how much a transfer to a new owner depreciates it.

I'd be shocked if more than like 20% of the VC money going into it would come out the other side during such an event.

Even if the hardware becomes outdated quickly, I'm not sure it'll become worthless that fast. There's also the data center infrastructure and the new electricity generation built to power it. Another thing that might survive a crash and carry forward is all the code written to support valuable use cases.

All those future spare GPUs will make video game streaming dirt cheap.

Even poor people can enjoy 8k gaming on a phone soon.

Most data center GPUs don't have game-rendering hardware in them (no RT cores, no display outputs).

Do you expect better hardware to suddenly start appearing on the market, fully formed from the brow of Zeus?

Maybe the bust will be so rough that TSMC will go out of business, and then these graphics cards will not go obsolete for quite a while.

Intel and Samsung might make a handful of better chips or whatever, but neither of their business models really involves being TSMC. So if the bubble pop took out TSMC, there wouldn't be a new TSMC for a while.