There's a whole industry of "illusions" humans fall for: optical illusions, wordplay (including large parts of comedy), the Penn & Teller kind, etc. Yet no one claims these are indicators that humans lack some critical capability.

The surface of "illusions" for LLMs is very different from our own, and it's very jagged: change a few words in the above prompt and you get very different results. Note that human illusions are very jagged too, especially in the optical and auditory domains.

There's no good reason to think "our human illusions" are fine but "their AI illusions" make them useless. It's all about how we organize the workflows around these limitations.

> There's no good reason to think "our human illusions" are fine but "their AI illusions" make them useless.

I was about to argue that human illusions are fine because humans learn from their mistakes after being corrected.

But then I remembered what online discussions of the Monty Hall problem look like...
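
(For anyone tempted to relitigate it: a quick Monte Carlo sketch, purely illustrative, makes the 2/3 answer hard to argue with:)

```python
import random

def monty_hall(trials=100_000):
    """Simulate the Monty Hall game; return win rates for staying vs. switching."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # Host opens a door that is neither the contestant's pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining closed door
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

print(monty_hall())  # roughly (0.333, 0.667)
```

Switching wins about 2/3 of the time, staying about 1/3, and yet the comment threads never end.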

Exactly! I now feel bad for not thinking of that example, thank you.