Meanwhile, generative video doesn't even need adoption by large companies: demand alone would already require 4-5x as many GPUs and as much compute as exists today, since nobody is offering 4K video at 60fps yet

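For rough scale, here's a minimal back-of-envelope sketch with assumed resolutions and frame rates (not measured figures, and video-generation cost doesn't scale exactly linearly with pixel count): going from a typical 1080p/24fps clip, which is closer to what services actually serve today, to 4K/60fps is already about a 10x jump in raw pixel throughput, before any growth in users or clip counts.

```python
# Illustrative assumptions only: compare raw pixel throughput of a
# hypothetical 4K/60fps output against an assumed 1080p/24fps clip.

def pixel_rate(width: int, height: int, fps: int) -> int:
    """Pixels generated per second of video."""
    return width * height * fps

current = pixel_rate(1920, 1080, 24)   # assumed common output today
target = pixel_rate(3840, 2160, 60)    # 4K at 60fps

print(f"current: {current:,} px/s")         # ~49.8M px/s
print(f"target:  {target:,} px/s")          # ~497.7M px/s
print(f"ratio:   {target / current:.1f}x")  # ~10x in raw pixels alone
```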
That’s one use case alone

So although this may be indicative of how much text inference you'll need, or of what you'll hear about on the job, it doesn't say much about the actual AI or semiconductor sector yet