LeCun has had every advantage imaginable — and the scoreboard remains empty.

He joined Facebook (now Meta) in December 2013. That's over 12 years of access to one of the largest AI labs in the world, near-unlimited compute, and some of the best researchers money can buy.

He introduced I-JEPA in 2023, nearly 3 years ago. It was supposed to represent a fundamental shift in how machines learn — moving beyond generative models toward a deeper, more structured world understanding.

And yet: I-JEPA hasn't decisively beaten existing models on any major benchmark. No Meta product uses JEPA as a core approach. The research community hasn't adopted it — the field keeps pushing on LLMs and diffusion models. There's been no "GPT moment" for JEPA, no single result that made its value obvious to everyone.

So the question becomes simple: how many years, how many resources, and how many failed proof-of-concepts does it take before we're allowed to judge whether an idea actually works?

First, believe it or not, 3 years is not that long in research terms. It's also not a given that LeCun got the resources he needed to pursue this work at Meta. Zuck wanted another Llama.

Second, AMI Labs just secured a billion dollars in funding, and while that's a lot of money, it's a fraction of the yearly compensation Meta is reportedly paying Wang. Big tech companies are throwing tens of billions at doing the same thing, just at a bigger scale. Why not try something else once in a while?