I find it odd that the post above is downvoted to grey; it feels like some latent war of viewpoints is going on, as under some other AI posts. (Although these misvotes are usually corrected once the US wakes up.)

The point above is valid. I'd like to deconstruct the concept of intelligence even further. What humans are able to do is a somewhat arbitrary collection of skills that a physical and social organism needs. The highly valued intelligence around math etc. is a corner case of those abilities.

There's no reason to think that human mathematical intelligence is structurally unique, an isolated, well-defined skill. Artificial systems are likely to be able to do much more: maybe not exactly the same peak ability, but adjacent ones, many of which will be superhuman and will augment what humans do. This will likely include "new math" in some sense too.

What everybody is looking for is imagination and invention. Current AI systems can give a best-guess statistical answer from the dataset they've been fed. It is always compression.
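(To make "compression" concrete: a next-token model trained to minimize cross-entropy is, in effect, minimizing the number of bits needed to encode the data it was fed. A rough sketch, with a made-up probability table standing in for a real model:)

    import math

    # Toy "model": probabilities it assigns to the next token given a context.
    # In a real LLM these come from the network; here they're made up.
    model = {
        ("the", "cat"): {"sat": 0.6, "ran": 0.3, "confetti": 0.1},
        ("cat", "sat"): {"on": 0.9, "under": 0.1},
    }

    def bits_to_encode(tokens, model):
        # Code length of the sequence under the model (Shannon/arithmetic coding).
        # Lower cross-entropy on the data = shorter code = better compression.
        total = 0.0
        for i in range(2, len(tokens)):
            context = (tokens[i - 2], tokens[i - 1])
            p = model[context][tokens[i]]
            total += -math.log2(p)  # cost of encoding this token, in bits
        return total

    print(bits_to_encode(["the", "cat", "sat", "on"], model))  # ~0.89 bits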

The problem, which most people intuitively understand, is that this compression is not enough. There is something more going on, because people can come up with novel ideas/solutions and, what's more important, they can judge and figure out whether the solution will work. So even if the core of the idea is “compressed” or “mixed” from past knowledge, there is some other process going on that leads to the important part: invention and progress.

That is why people hate the term AI: it is just a partial capability of “intelligence”, or it might even be a complete illusion of intelligence that is nowhere close to what people would expect.

> Current AI systems can give a best-guess statistical answer from the dataset they've been fed.

What about reinforcement learning? RL models don't train on an existing dataset; they try their own solutions and learn from feedback.
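(A minimal sketch of that loop, using a toy multi-armed bandit rather than any particular system: the learner proposes actions, gets a reward signal, and updates from that feedback alone, with no pre-existing dataset.)

    import random

    # Toy bandit: three "solutions" with unknown payoffs; the learner never sees
    # labelled examples, only the reward for whatever it tries.
    true_payoffs = [0.2, 0.5, 0.8]
    estimates = [0.0, 0.0, 0.0]
    counts = [0, 0, 0]

    for _ in range(10_000):
        # Explore sometimes, otherwise pick the action currently believed best.
        if random.random() < 0.1:
            action = random.randrange(3)
        else:
            action = max(range(3), key=lambda a: estimates[a])

        # Environment feedback: a noisy reward, not a training example.
        reward = 1.0 if random.random() < true_payoffs[action] else 0.0

        # Incremental update of the value estimate from that feedback.
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]

    print(estimates)  # converges toward roughly [0.2, 0.5, 0.8]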

RL models can definitely "invent" new things. Here's an example where they design novel molecules that bind with a protein: https://academic.oup.com/bioinformatics/article/39/4/btad157...

Finding variations in a constrained haystack with measurable, well-defined results is what machine learning has always been good at. Tracing the most efficient Trackmania route is impressive, and the resulting route might be original in the sense that a human would never come up with it. But is it actually novel in a creative, critical way? Isn't it simply computational brute force? How big would that force have to be in the physical, or any less constrained, world?

> Current AI systems can give a best-guess statistical answer from the dataset they've been fed.

Counterpoint: ChatGPT came up with the new idiom "The confetti has left the cannon"