> But this is not an applied AI company.

There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.

It could be a management issue, though, and I sincerely hope we'll see more competition, but from what I quoted above, it doesn't seem like it.

Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposed would work.

"and we didn't see anything" is not justified at all.

Meta absolutely has (or at least had) a world-class industry AI lab and has published a ton of great work and open-source models (granted, their LLM open-source stuff failed to keep up with Chinese models in 2024/2025; their other open-source work on things like segmentation doesn't get enough credit, though). Yann's main role was Chief AI Scientist, not any sort of product role, and as far as I can tell he did a great job building up and leading a research group within Meta.

He deserves a lot of credit for pushing Meta to be very open about publishing research and open-sourcing models trained on large-scale data.

Just as one example, Meta (together with NYU) just published "Beyond Language Modeling: An Exploration of Multimodal Pretraining" (https://arxiv.org/pdf/2603.03276), which has a ton of insights backed by large-scale experiments.

Yann did seem to end up with a bit of an inflated ego, but I still consider him a great research lead. Context: I did a PhD focused on AI, and Meta's group had a similar pedigree to Google AI/DeepMind as a place to do an internship or to go after graduation.

For instance, under Yann's direction Meta FAIR produced the ESM protein sequence model, which is less hyped than AlphaFold but has been incredibly influential. They achieved great performance without using multiple sequence alignments as an input/inductive bias. This is incredibly important for large classes of proteins where multiple sequence alignments are pretty much noise.

I wasn't criticising his scientific contribution at all; that's why I started my comment by praising what he did.

Creating a startup has to be about a product. When you raise 1B, investors are expecting returns, not papers.

> Creating a startup has to be about a product. When you raise 1B, investors are expecting returns, not papers.

Speaking of returns: Apple absolutely fucked Meta ads with its privacy controls, which trashed ad performance, revenue, and the share price. Meta turned things around using AI, with Yann as the lead researcher. Are you willing to give him credit for that? Revenue is now greater than it was before Apple's data lockdown.

How much of Meta's increased revenue is attributable to AI? I think Meta "turned things around" by bypassing privacy controls [1].

[1] https://9to5mac.com/2025/08/21/meta-allegedly-bypassed-apple...

> I think Meta "turned things around" by bypassing privacy controls

Why would Apple be complicit in this for years?

Apple has allowed Facebook, TikTok etc. to track users across devices AND device resets via the iCloud Keychain API.

When you log into FB with any account on any device, then install FB on a new device, or even after you erase the device, they know it's you even before you log in, because the info is tied to your Apple iCloud account.

And there's no way for users to see or delete what data other companies have stored and linked to their Apple ID via that API.

It's been like this for at least 5 years and nobody seems to care.

Is there a write up of this somewhere? Curious to read more...

None that I've found. You can test it right now yourself: install FB, log in, delete FB, reinstall FB. Your previous login info will be there.

That would be fine if users could SEE what has been stored and DELETE it WITHOUT going through the app and trusting it to show them everything honestly.

What's even worse is that it silently persists across DEVICE reinstalls.

Erase and reset your iPhone/iPad. Sign into the same iCloud account. Reinstall FB. Your login info will still be there.

Buy a new iPhone/iPad. Sign into the same iCloud account. Reinstall FB. Your login info will still be there.

And nope, no one seems to care.

>> but he had access to many more resources in Meta, and we didn't see anything

> I wasn't criticising his scientific contribution at all, that's why I started my comment by appraising what he did.

You were criticising his output at Facebook, though. But he was in the research group at Facebook, not a product group, so it seems like we did actually see lots of things?

They are not expecting returns at $1B+, just for someone to pay more than they paid six months ago.

They're expecting what you promised them when they handed over the money. That is "more money" for most investors but that isn't the sole universal human objective. Money has to serve an instrumental purpose and if one of your purposes is something that can't currently be achieved, simply getting more money won't help. You need to give that money to some venture that might actually be able to achieve it. I have no doubt there are at least a few very rich people out there who just have sci-fi nerd dreams and want to see someone go to Mars, go to Jupiter, discover alien life, rebuild dinosaurs, or create a truly autonomous entirely new form of artificial life just to see if they can. If it makes money, great. If it doesn't, what else was I going to do? Die with $60 billion in the bank instead of $40 billion?

> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.

That's true for 99% of scientists, but dismissing their opinion because they haven't done world-shattering, ground-breaking research is probably not the way to go.

> I sincerely hope we'll see more competition

I really hope we don't; science isn't a market.

> Understanding the world through videos

The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.

Do not keep bad results in context. You have to purge them to prevent them from affecting the next output. LLMs are deceptively capable, but they don't respond like a person. You can't count on implicit context. You can't count on parts of the implicit context having more weight than others.
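A minimal sketch of what "purging" can look like in practice, assuming the common chat-API convention of role/content message dicts; the `bad` flag is a hypothetical annotation you would set yourself after judging each reply, not part of any real API:

```python
def purge_failed_turns(history):
    """Drop user/assistant exchanges whose reply was flagged bad,
    so the failed attempt cannot bias the model's next completion."""
    kept = []
    i = 0
    while i < len(history):
        msg = history[i]
        # A failed exchange is a user turn followed by an assistant
        # reply marked bad; skip both messages together.
        if (
            msg["role"] == "user"
            and i + 1 < len(history)
            and history[i + 1]["role"] == "assistant"
            and history[i + 1].get("bad", False)
        ):
            i += 2
            continue
        kept.append(msg)
        i += 1
    return kept


history = [
    {"role": "user", "content": "Summarize this article."},
    {"role": "assistant", "content": "Wrong summary.", "bad": True},
    {"role": "user", "content": "No, focus on the methods section."},
    {"role": "assistant", "content": "Better summary."},
]

# Only the corrected exchange survives; send `clean` (not `history`)
# as the context for the next request.
clean = purge_failed_turns(history)
```

Dropping the whole failed pair (rather than just the bad reply) is a design choice: keeping your original request alongside a deleted answer can still anchor the model on the framing that produced the failure.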

Most folks get paid a lot more in a corporate job than tinkering at home; using the 'follow the money' logic, it would make sense that they would produce their most inspired work as 9-5 full-stack engineers.

But passion and freedom to explore are often more important than resources.

> It could be a management issue, though

Or, maybe it's just hard?

That's such a terrible take.

For a hot minute Meta had a top-3 LLM and open-sourced the whole thing, even with LeCun's reservations about the technology.

At the same time Meta spat out huge breakthroughs in:

- 3D model generation

- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.

- A whole new class of world modeling techniques (JEPAs)

- SAM (Segment anything)

> - Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.

If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.

People make stupid acquisitions all of the time.

Wang fits the profile of a possible successor CEO for Meta: young, hit it big early, hit the AI boom early straight out of college. Obviously not woke (just look at his public statements).

Unfortunately the dude knows very little about AI or ML research. He's just another wealthy grifter.

At this point decision-making at Meta is based on Zuckerberg's vibes, and I suspect the emperor has no clothes.

This is absolutely an applied AI company; the only question is whether the applied AI will be subordinated to the research.

In an interview, Yann mentioned that one reason he left Meta was that they were very focused on LLMs and he no longer believed LLMs were the path forward to reaching AGI.

> we didn't see anything.

Is this a troll? Even if we just ignore Llama, Meta invented and released so much foundational research and open-source code. I would say that the computer vision field would be years behind if Meta hadn't published core research like DETR or MAE.

You should ignore Llama because by his own admission,

>My only contribution was to push for Llama 2 to be open sourced.

He founded the team that worked on fastText, Llama, and other similarly impactful projects.

Did he work on those vision models?

The Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of the Llama models.

Llama wasn’t Yann LeCun’s work and he was openly critical of LLMs, so it’s not very relevant in this context.

Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.

He founded FAIR and the team in Paris that ultimately worked on the early Llama versions.

FAIR was founded in 2013 and Llama's first release was in 2023. Musk co-founded OpenAI in 2015, but no reasonable person credits ChatGPT in 2022 to him.

> My only contribution was to push for Llama 2 to be open sourced.

Quite a big contribution in practice.

Sure, but I don't think that's relevant in a startup with $1B of VC money either. Meta can afford to (attempt to) commoditize their complement.

He was suffocated by the corporate aspects of Meta, I suspect.

I can't reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today's standards, were almost ridiculously small training budgets, from Transformers to dropout and so on.

So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D

It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of all these advancements and how long they took:

- LeCun introduced backprop for deep learning back in 1989
- Hinton published contrastive divergence in 2002
- AlexNet was 2012
- word2vec was 2013
- seq2seq was 2014
- "Attention Is All You Need" was 2017
- GPT-2 (the "unicorn" demo) was 2019
- InstructGPT was 2022

This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to be done. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. That sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.

If his ideas had real substance, we would have seen substantial results by now. He introduced I-JEPA in 2023, so almost three years ago at this point.

If he still hasn’t produced anything truly meaningful after all these years at Meta, when is that supposed to happen? Yann LeCun has been at Facebook/Meta since December 2013.

Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.

> If his ideas had real substance, we would have seen substantial results by now

This is naive. It's like saying that if backprop had any real substance, it would have had results within 10 years of its publication in 1989.

> Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.

Again: those resources are important, but one resource being ignored is time. Try baking a turkey at 300°F for 4 hours versus at 900°F for 1 hour and see how edible each one is.

Your take is brutal but spot on