> the law allowed the company to train A.I. technologies using the books because this transformed them into something new.

Unless, of course, the transformation malfunctions and you get the good old verbatim source back; similar lawsuits have compiled many examples of exactly that.

Notably, this wasn't among the allegations leveled against Anthropic, because the public-facing Claude service was wrapped in software that filtered out infringing outputs. From the relevant opinion finding Anthropic's use of the books to be fair use:

> When each LLM was put into a public-facing version of Claude, it was complemented by other software that filtered user inputs to the LLM and filtered outputs from the LLM back to the user. As a result, Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service.

(from *Bartz v. Anthropic* in the Northern District of California)
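
The opinion only says that such filtering software existed, not how it works. Purely as an illustration of what output-side filtering can look like, here is a minimal sketch of a guard that blocks completions sharing long verbatim word runs with a protected corpus, assuming a simple n-gram overlap check. The class, function names, and the `model.generate` call are all hypothetical, not Anthropic's implementation.

```python
import re


def ngram_set(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Build the set of word n-grams for a text, normalized to lowercase."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


class VerbatimFilter:
    """Blocks outputs that reproduce long verbatim runs from protected texts."""

    def __init__(self, protected_texts: list[str], n: int = 8, max_matches: int = 0):
        self.n = n
        self.max_matches = max_matches
        self.protected_ngrams: set[tuple[str, ...]] = set()
        for text in protected_texts:
            self.protected_ngrams |= ngram_set(text, n)

    def allows(self, output: str) -> bool:
        """True if the output shares at most max_matches n-grams with the corpus."""
        overlap = ngram_set(output, self.n) & self.protected_ngrams
        return len(overlap) <= self.max_matches


# Hypothetical usage: wrap the model call and withhold blocked completions.
# guard = VerbatimFilter(["full text of book 1 ...", "full text of book 2 ..."])
# completion = model.generate(prompt)  # stand-in for whatever generates text
# response = completion if guard.allows(completion) else "[output withheld]"
```

A real deployment would have to be far more robust (paraphrase-tolerant matching, input-side checks, scale), but the structural point is the same one the court credits: the filter sits between the model and the user, so infringing copies are not what the service delivers.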