Ironically, if you actually know what you’re doing with an LLM, getting a separate process to check that the quotations are accurate isn’t even that hard. Not 100% foolproof, because LLM, but way better than the current process of asking ChatGPT to write something for you and then never reading it before publication.
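
A minimal sketch of what I mean, in Python, using only the stdlib. It assumes the source is a public web page and that quotes in the draft are wrapped in double quotes; names like `verify_quotes` are mine for illustration, not any real tool:

```python
import re
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible page text, skipping script/style contents."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def normalize(s):
    # Collapse whitespace and normalize curly quotes so cosmetic
    # differences don't register as fabrications.
    s = s.replace("\u201c", '"').replace("\u201d", '"')
    s = s.replace("\u2018", "'").replace("\u2019", "'")
    return re.sub(r"\s+", " ", s).strip().lower()

def verify_quotes(draft, source_url):
    """Return the quoted spans in `draft` that do NOT appear
    verbatim in the page at `source_url`."""
    with urllib.request.urlopen(source_url) as resp:
        html_doc = resp.read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html_doc)
    source_text = normalize(" ".join(parser.chunks))
    # Pull out anything in double quotes; require 15+ chars so
    # one-word scare quotes don't trigger false alarms.
    quotes = re.findall(r'[\u201c"]([^\u201d"]{15,})[\u201d"]', draft)
    return [q for q in quotes if normalize(q) not in source_text]
```

Anything the check flags goes back to a human before publication. Note this is an exact-match check, so it flags paraphrases too; that's a feature for quotes, which are supposed to be verbatim.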

The wrinkle in this case is that the author blocked AI bots from their site (it doesn't seem to be a mere robots.txt exclusion, from what I can tell), so if any such bot tried to verify the quotes, it may not have been able to read the page and made them up instead.

This is what the author actually speculated may have happened with Ars. Clearly something was lacking in the editorial process, though, if such quotes weren't verified by a human either way.