There is far more convincing research showing the contrary:

> Our experiments show that annotators who frequently use LLMs for writing tasks excel at detecting AI-generated text, even without any specialized training or feedback. In fact, the majority vote among five such “expert” annotators misclassifies only 1 of 300 articles, significantly outperforming most commercial and open-source detectors we evaluated even in the presence of evasion tactics like paraphrasing and humanization.

https://arxiv.org/html/2501.15654v2

Great find! I've submitted this preprint as a standalone item: https://news.ycombinator.com/item?id=47678270