Do you think an LLM would be able to generate a solution to a novel problem just like that?
That doesn't match my (albeit limited) experience with these things. They are pretty good at a lot of tasks, but those tasks generally fall squarely in the realm of "already done" things.
Anti-crawler tarpits and related concepts have existed for decades already; LLM training data is only the latest and most popular of web-scraping goals.
Claude will happily provide a laundry list of ways to mitigate the impact of tarpits on your crawler, and politeness / respecting robots.txt is only one of them.
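For the curious, here's a minimal sketch of what a few of those mitigations look like in practice: cap pages per host, reject suspiciously deep or long URLs, dedupe content by hash, and check robots.txt. The thresholds and the bot name are illustrative assumptions, not anyone's production config.

```python
# Sketch of common tarpit mitigations for a crawler. Thresholds are
# assumptions chosen for illustration, not tuned values.
import hashlib
import urllib.robotparser
from collections import defaultdict
from urllib.parse import urlparse

MAX_PAGES_PER_HOST = 500   # cap total fetches per host
MAX_PATH_DEPTH = 12        # deep paths often signal generated link mazes
MAX_URL_LENGTH = 2000      # absurdly long URLs are a classic tarpit tell

pages_per_host = defaultdict(int)
seen_content_hashes = set()
robots_cache = {}

def allowed_by_robots(url: str) -> bool:
    """Politeness check: one mitigation among several."""
    host = urlparse(url).netloc
    if host not in robots_cache:
        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(f"https://{host}/robots.txt")
        try:
            rp.read()
        except OSError:
            pass  # robots.txt unreachable: default-allow in this sketch
        robots_cache[host] = rp
    return robots_cache[host].can_fetch("examplebot", url)  # hypothetical UA

def should_fetch(url: str) -> bool:
    """Filter the frontier before spending a request."""
    parsed = urlparse(url)
    if len(url) > MAX_URL_LENGTH:
        return False
    if parsed.path.count("/") > MAX_PATH_DEPTH:
        return False
    if pages_per_host[parsed.netloc] >= MAX_PAGES_PER_HOST:
        return False
    return allowed_by_robots(url)

def record_page(url: str, body: bytes) -> bool:
    """Return False if the page looks like tarpit output."""
    digest = hashlib.sha256(body).hexdigest()
    if digest in seen_content_hashes:
        return False  # same content under a new URL: infinite-maze pattern
    seen_content_hashes.add(digest)
    pages_per_host[urlparse(url).netloc] += 1
    return True
```

None of this is novel; it's the same defensive crawling hygiene people have been writing since search-engine spiders first hit calendar pages that paginate forever.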