"He (the author) did not answer our questions asking if he used an LLM to generate text for the book. However, he told us, “reliably determining whether content (or an issue) is AI generated remains a challenge, as even human-written text can appear ‘AI-like.’ This challenge is only expected to grow, as LLMs … continue to advance in fluency and sophistication.”

Lol, that answer sounds suspiciously like it was LLM-generated as well...

It's true that "AI detection algorithms" are not particularly reliable.

It's also true that if your work contains fake CITATIONS, such algorithms aren't necessary to know the work is trash: either it was written by AI or you knowingly faked your research, and it doesn't really matter which.
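
Worth noting: fabricated citations are usually checkable mechanically, no AI detector needed. Below is a minimal sketch of the kind of spot-check a reader could run, assuming the references carry DOIs, using the public Crossref REST API. The DOIs listed are placeholders, and a missing Crossref record isn't proof of fabrication by itself (books and some publishers aren't indexed), but a reference list full of DOIs that resolve to nothing is a strong signal.

    # Minimal sketch: check whether DOIs from a reference list resolve to
    # real records via the public Crossref REST API (api.crossref.org).
    # The DOIs below are placeholders for illustration only.
    import requests

    SUSPECT_DOIS = [
        "10.1000/example.doi",  # placeholder; substitute DOIs from the book's references
    ]

    def doi_exists(doi: str) -> bool:
        """Return True if Crossref has a record for this DOI (HTTP 200)."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    for doi in SUSPECT_DOIS:
        verdict = "found in Crossref" if doi_exists(doi) else "no Crossref record (possibly fabricated)"
        print(f"{doi}: {verdict}")

Citations without DOIs take a bit more work (searching titles against Crossref, Google Scholar, or a library catalogue), but the point stands: verifying that cited works actually exist is far more reliable than any "AI detection algorithm."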