> Solving hard math problems requires understanding the structure of complex mathematical reasoning. No animal is known to be capable of that.

Except that it doesn't. Maybe some math problems do -- or maybe all of them do, when the text isn't in the training set -- but it turns out that most problems can be solved by a machine that randomly regurgitates text from all the math problems ever written down.

One of the ways that this debate ends in a boring cul-de-sac is that people leap to conclusions about the meaning of the challenges they're using to define intelligence. "The problem has only been solved by humans before," they exclaim, "therefore the solution of the problem by a machine is a demonstration of human intelligence!"

We know from first principles what a transformer architecture is doing. If the problem can be solved within the constraints of that simple architecture, then by definition the problem is insufficient to define the capability limits of a more complex system. It's very tempting to conclude instead that the system is demonstrating mysterious voodoo emergent behavior, but that's a bit like concluding that the magician really did saw the girl in half.
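
For concreteness, here's a minimal sketch of the scaled dot-product attention at the core of a transformer layer -- NumPy, a single head, toy random weights, with shapes and names chosen purely for illustration. A real model stacks many such layers with learned projections, MLPs, and normalization, but the underlying operation is no more mysterious than this:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the chosen axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    """One head of causal self-attention: each position takes a weighted
    average of earlier positions' values, weighted by query/key similarity."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # (seq, d_head) each
    scores = q @ k.T / np.sqrt(k.shape[-1])     # (seq, seq) similarities
    future = np.triu(np.ones_like(scores, dtype=bool), 1)
    scores = np.where(future, -np.inf, scores)  # causal mask: no peeking ahead
    return softmax(scores) @ v                  # weighted sum of values

# Toy usage: 5 token embeddings of width 8, one attention head of width 4.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)      # (5, 4)
```

At inference time, everything a decoder-only transformer does is built out of operations like this: matrix multiplies, simple pointwise nonlinearities, and weighted averages over earlier tokens, followed by a softmax over the vocabulary.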