LeCun has been pushing world models and joint-embedding predictive architectures (JEPA) for years now as an alternative to the generative-pretraining paradigm. The core bet, that you need learned abstract representations of physical dynamics rather than just next-token prediction, is compelling, but $1B is a lot of capital to stake on an architecture that hasn't yet demonstrated clear advantages over scaling what already works. The interesting question is whether this funding lets them finally show JEPA-style approaches outperforming autoregressive models on tasks requiring genuine physical reasoning, or whether the money just gets absorbed into the same GPU-scaling game everyone else is playing.
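For readers who haven't followed the JEPA papers, the architectural contrast is easy to miss in the abstract. Below is a minimal sketch of the idea in PyTorch, assuming an I-JEPA-style setup with an EMA target encoder; all module and function names (`Encoder`, `jepa_loss`, `ema_update`) are illustrative, not taken from any released Meta codebase. The key difference from autoregressive training is that the loss lives in learned embedding space rather than pixel or token space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy encoder; stands in for the ViT-style backbones used in I-JEPA."""
    def __init__(self, dim_in=784, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, x):
        return self.net(x)

context_enc = Encoder()
target_enc = Encoder()  # tracks context_enc via EMA (see ema_update below)
predictor = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))

def jepa_loss(x_context, x_target):
    """Predict the target's embedding from the context's embedding.

    There is no decoder back to pixels/tokens: the model is free to
    discard unpredictable low-level detail, which is the claimed
    advantage for modeling physical dynamics.
    """
    z_ctx = context_enc(x_context)
    with torch.no_grad():          # stop-gradient on the target branch
        z_tgt = target_enc(x_target)
    z_pred = predictor(z_ctx)
    return F.mse_loss(z_pred, z_tgt)

@torch.no_grad()
def ema_update(tau=0.996):
    """Target encoder follows an exponential moving average of the
    context encoder's weights, the standard trick for avoiding
    representational collapse in joint-embedding methods."""
    for p_t, p_c in zip(target_enc.parameters(), context_enc.parameters()):
        p_t.mul_(tau).add_(p_c, alpha=1 - tau)

# Hypothetical usage: context and target are two views of the same scene
# (in I-JEPA, visible patches vs. masked patches of one image).
x = torch.randn(8, 784)
loss = jepa_loss(x, x + 0.1 * torch.randn_like(x))
loss.backward()
ema_update()
```

The design choice worth noticing is the stop-gradient plus EMA pairing: without it, both encoders can collapse to a constant embedding that trivially minimizes the loss. An autoregressive model, by contrast, is forced to assign probability to every next token, so it has no such collapse problem but also no license to ignore irrelevant detail.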