Well, at least they are paving the way to more efficient hardware: GPUs are way, way more energy-efficient than CPUs, and parallel architectures are the only remaining way to scale compute.
But yes, a lot of energy is wasted in the growth phase.
GPUs are different from CPUs.
They’re way more efficient at matmuls, but start throwing branching logic at them and they slow down a lot.
Literally a fraction of their cores will no-op while others execute a branch, since the lanes within a warp run in lockstep.
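For illustration, here's a minimal CUDA sketch of that effect (kernel name and data layout are hypothetical, not from any real codebase). When lanes of the same 32-wide warp take different sides of an if/else, the hardware serializes them: it runs one path with the other lanes masked to no-ops, then the other path.

    // Hypothetical kernel: even and odd lanes of the same warp take
    // different branches, so the warp executes both bodies serially,
    // masking off the lanes that didn't take each one.
    __global__ void divergent(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        if (i % 2 == 0)
            out[i] = in[i] * 2.0f;   // odd lanes sit idle here...
        else
            out[i] = in[i] + 1.0f;   // ...and even lanes sit idle here
        // Each branch body runs with ~half the warp active, so the
        // branchy region gets roughly 50% of peak throughput.
    }

The usual workaround is to arrange the data so whole warps take the same path (e.g. partition even and odd indices into separate ranges), which keeps every lane active.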
>But yes, a lot of energy is wasted in the growth phase.
Why exactly is energy wasted during this phase?
Are you expecting hardware to become obsolete much faster? That depends mainly on TSMC's node cadence, which is still 2-3 years, so AI hardware will remain bound to the same cycle.