There is "local AI" which is running on consumer grade hardware and "local AI" which still needs a datacenter (DeepSeek 4, GLM 4.7, etc). If you woke up tomorrow and could only use the latter you are about 6 months behind the frontier, if you have to rely on the former you are 2 or 3 years behind.

All these tricks like quantization and speculative decoding can also be used by the leading AI labs, which means that at the end of the day they will simply have more effective compute than you. So far, more compute has translated into better performance.
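For concreteness, weight quantization is conceptually simple; here is a minimal sketch of symmetric per-tensor int8 quantization in plain NumPy (illustrative only, not any lab's actual pipeline):

    import numpy as np

    def quantize_int8(w):
        # Symmetric per-tensor quantization: map [-max|w|, +max|w|] to [-127, 127].
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) * scale

    w = np.random.randn(1024, 1024).astype(np.float32)  # stand-in weight matrix
    q, scale = quantize_int8(w)
    err = np.abs(dequantize(q, scale) - w).mean()
    print(f"fp32: {w.nbytes >> 20} MiB -> int8: {q.nbytes >> 20} MiB, "
          f"mean abs error {err:.5f}")

Real methods (GPTQ, AWQ, etc.) are more sophisticated, but the compression ratio is the same order, and it applies equally to the labs' own serving stacks.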

Nothing released so far inherently "needs" a datacenter; it's just a matter of how much speed you require. Slow, high-latency inference will be a natural way to run "datacenter" models locally.
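To put rough numbers on "slow": if the weights don't fit in RAM and have to stream past the processor once per token, per-token latency is roughly model bytes divided by bandwidth. A sketch under assumed figures (1.5T parameters at 4-bit, typical NVMe and DDR5 bandwidths, dense worst case; MoE models touch only a fraction of their weights per token):

    # Back-of-envelope per-token latency when streaming weights each forward pass.
    params = 1.5e12              # hypothetical 1.5T-parameter model
    model_bytes = params * 0.5   # 4-bit weights = 0.5 bytes per parameter

    for name, bandwidth in [("NVMe SSD", 7e9), ("dual-channel DDR5", 80e9)]:
        s_per_tok = model_bytes / bandwidth
        print(f"{name}: ~{s_per_tok:.0f} s/token ({1 / s_per_tok:.3f} tok/s)")

Roughly two minutes per token off SSD and ~10 seconds per token out of RAM: possible, but a long way from interactive.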

Yes, it does. You will not be able to run models like DeepSeek v4 (>1.5 trillion parameters) on a regular workstation any time soon, unless by "slow" you mean "unusable". And those are the models that are ~6 months behind Opus 4.7.
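The capacity math alone (taking the >1.5T figure above at face value) makes the point before speed even enters into it:

    # Fit check: weight memory alone (no KV cache) for a hypothetical
    # 1.5T-parameter model at common precisions.
    params = 1.5e12
    for bits in (16, 8, 4):
        print(f"{bits:>2}-bit: {params * bits / 8 / 1e9:,.0f} GB of weights")
    # 16-bit: 3,000 GB; 8-bit: 1,500 GB; 4-bit: 750 GB.
    # A high-end workstation tops out around 128-512 GB of RAM.

Even aggressively quantized, the weights don't come close to fitting in workstation memory.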
