I think it's fascinating that his impossible benchmark got defeated, but because the Keras guy doesn't like LLMs, it is possible to mishear algorithmic distaste as saying people shipping this are "lazy" and "hype maxing."
Francois never said he dislikes LLMs. In fact, he said he expected them to be part of the solution to ARC.
I don’t know where this persistent myth comes from, but it has to go.
> part of the solution to ARC. I don’t know where this persistent myth comes from,
Part of, explicitly; not *the* solution, quite explicitly. The TL;DR is "LLMs can't do it alone; program synthesis leveraging LLMs is my bet". Not "maybe not LLMs, but they'll certainly help us get there!", quite the opposite. Hence TFA, the "intellectually lazy" quote we are explicitly discussing, and everything Chollet has said on the subject. [^1]
[^1]: "LLMs won’t lead to AGI - $1,000,000 Prize to find true solution" - https://www.dwarkesh.com/p/francois-chollet - 1.5 hours with the gent
ARC-AGI-1, the one that "got defeated", was published even before the first mainstream LLMs and still stood the test of time.
> "got defeated"
1.5 hours with Chollet on "LLMs won’t lead to AGI - $1,000,000 Prize to find true solution": https://www.dwarkesh.com/p/francois-chollet

Published June 2024, and by December, well... we can all agree there's an ARC AGI 2 now.