The legal implications would be different from those of scraping publicly available content.

Is there a case that actually says this? Why would a fair use determination depend on that? For that matter, how would anyone even show that a given AI model was trained on content fetched by a recursive crawler rather than on the same articles downloaded by hand and added to the training data?

There was a similar case where a web scraper was bypassing prevention mechanisms on LinkedIn:

https://en.wikipedia.org/wiki/HiQ_Labs_v._LinkedIn

That case seems to imply the opposite — the Ninth Circuit held that scraping publicly accessible data likely doesn't violate the CFAA?