> In 2019, Chollet created the Abstraction and Reasoning Corpus for Artificial General Intelligence, or ARC-AGI—an exam designed to show the gulf between AI models’ memorized answers and the “fluid intelligence” that people have

There are a number of skills we demand from an intelligence as signals of its presence.

Mind you: some of them have been achieved - like the ability to resolve pronouns in Winograd-schema sentences (the pair Hinton likes to cite: "the trophy will not fit in the case: it's too big" vs "the trophy will not fit in the case: it's too small").
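What makes the schema a real test is that it comes as a minimal pair: flipping one word flips the pronoun's referent, so a system cannot pass by surface statistics on either sentence alone. A minimal sketch (the data structure and the `naive_resolver` baseline are hypothetical illustrations, not any published harness):

```python
# A Winograd schema is a minimal pair: changing one word ("big" -> "small")
# changes which noun the pronoun "it" refers to.
WINOGRAD_PAIR = [
    # (sentence, correct referent of "it")
    ("The trophy will not fit in the case: it's too big.", "trophy"),
    ("The trophy will not fit in the case: it's too small.", "case"),
]

def naive_resolver(sentence: str) -> str:
    """Deliberately shallow baseline: always pick the first noun mentioned.

    Any referent-blind heuristic like this gets exactly one of the two
    sentences right, which is why the *pair*, not either sentence alone,
    is the actual test.
    """
    return "trophy"

correct = sum(naive_resolver(s) == ref for s, ref in WINOGRAD_PAIR)
print(f"{correct}/{len(WINOGRAD_PAIR)}")  # chance level for the pair
```

The design point: scoring is per-pair, so a model must track what "too big" and "too small" each imply about trophies and cases, not just word co-occurrence.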

Others we meet occasionally, outside any systematic research into such requirements: one example is the detective game described at https://news.ycombinator.com/item?id=43284420 - a simple game of logic that any intelligence should be able to solve (...and yet, again, some rebutted that humans too would fail it, etc.).

It remains important, though, that these working modules are not siloed (solving specific tasks and lying unused otherwise): they must be intellectual keys, adapted for use in the most general cases where they can be helpful. That generality is essential to intelligence. So even the ability to solve "revealing" tasks is not enough - the way in which the ability works is crucial.