Another idea I had with this concept is to make an LLM-proof captcha. Maybe humans can detect the characters in the 'motion' itself, which could be unique to us?

- The captcha would be generated like this in a headless browser and recorded as a video, which is then served to the user.

- We can make the background move in random directions too, to prevent bots from simply detecting which pixels change and drawing an outline.

- I also tried having the text itself move (bouncing like the DVD logo). Somehow that makes it even more readable.
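The scheme in the bullets above (text bouncing DVD-logo style over a background that also drifts in a random direction) can be sketched without a browser. This is a minimal stand-in, not the actual generator: the glyph mask is a hypothetical placeholder for rendered captcha text, and the background is just scrolled noise.

```python
import numpy as np

def make_frames(glyph, n_frames=30, canvas=(80, 160), seed=0):
    """Sketch of the animated-captcha idea: a text mask bounces
    DVD-logo style while a noise background scrolls in a random
    direction, so no pixel set stays still between frames.

    glyph: 2-D boolean array standing in for rendered captcha text
    (hypothetical; a real generator would rasterize actual glyphs).
    Returns a list of uint8 grayscale frames."""
    rng = np.random.default_rng(seed)
    h, w = canvas
    gh, gw = glyph.shape
    bg = rng.integers(0, 128, size=canvas, dtype=np.uint8)  # noise backdrop
    x, y = 0, 0
    dx, dy = 2, 1                          # DVD-logo bounce velocity
    bx, by = rng.choice([-1, 1], size=2)   # random background drift direction
    frames = []
    for t in range(n_frames):
        # scroll the background a little further each frame
        frame = np.roll(bg, (t * by, t * bx), axis=(0, 1)).copy()
        # paint the text mask at its current position
        frame[y:y + gh, x:x + gw][glyph] = 255
        frames.append(frame)
        # bounce off the canvas edges
        if not 0 <= x + dx <= w - gw:
            dx = -dx
        if not 0 <= y + dy <= h - gh:
            dy = -dy
        x += dx
        y += dy
    return frames
```

A real version would render these frames in the headless browser and encode them as video before serving.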

I definitely know nothing about how LLMs interpret video, or optics, so please let me know if this is dumb.

As if captchas aren't painful enough for visually impaired users...

[deleted]

I don't think we need more capable people thinking of silly captchas.

Why? Stopping LLM crawlers is a real need now. We've seen more capable people working for undergrad dropouts.

Take N screenshots, XOR them pairwise, OR the results, then perform normal OCR.
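The XOR-then-OR attack described above can be sketched in a few lines. This is a rough illustration, not a tested break: it substitutes a thresholded absolute difference for a true bitwise XOR on binarized frames, and the function name is made up.

```python
import numpy as np

def extract_motion_mask(frames, threshold=30):
    """Hypothetical sketch of the attack: XOR consecutive screenshot
    pairs (approximated here as a thresholded pixel difference), then
    OR all the per-pair change masks into one mask of every pixel
    that ever moved. The mask can then be fed to ordinary OCR.

    frames: list of 2-D uint8 grayscale screenshots of the captcha.
    Returns a 2-D boolean mask of changed pixels."""
    mask = np.zeros_like(frames[0], dtype=bool)
    for a, b in zip(frames, frames[1:]):
        # pairwise "XOR": pixels that differ noticeably between frames
        diff = np.abs(a.astype(np.int16) - b.astype(np.int16)) > threshold
        mask |= diff  # OR the pairwise results together
    return mask
```

Note that if the background also moves (as proposed upthread), this mask lights up background pixels too, which is exactly what that countermeasure is meant to exploit.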

Yes, but this is prohibitively expensive for a large bot network to employ.

Wasn't that the whole point of Anubis?