Agreed. Nothing about this post really stood out as AI. It didn't raise a single flag for me.
I think calling something AI generated is just a lazy way of dismissing stuff nowadays.
This paragraph under ‘Trust Laundering’ is what hit my AI-writing trigger threshold:
> This is the circular citation pattern, and it’s one of the most under discussed attacks on the “retrieval augmented generation” trust model. It doesn’t require compromising Wikipedia’s infrastructure with l33t hacker skills. It doesn’t require social engineering an editor. You just simply write the source yourself, cite yourself on Wikipedia, and let the trust flow downstream. Easy peasy!
“It doesn’t X. It doesn’t Y. You just Z. Conclusion”
Once I saw that, some other elements stood out too.
There’s a set of bullet points under ‘The Approach’ where each bullet starts with a bolded phrase: “one domain”, “one press release”, “one Wikipedia edit”, followed by a bolded sentence: “The whole thing took maybe about twenty minutes”.
The emphasis here on irrelevant quantifiable optimizations (who cares that it only needs one of each of three things, or that it took under twenty minutes?), combined with unnecessary faux-profundity, is a strong AI tell.
Add to that the fact that the writer says in the article that he used AI generation to produce the content for the poisoning site, and the suggestion that he also used it to write up the blog post is hardly implausible.
If this truly set off zero flags for you then you're probably just not very attuned to LLM writing style. I've noticed that most people are not.
I posted a bunch of specifics in a reply to the GP since I was quite annoyed with being accused of "a lazy way of dismissing stuff". It's nothing of the sort. I am a very good reader and I have read a lot of LLM writing and a lot of human writing.