For sure an AI write-up.
Certainly AI editorialised. I wonder if this is because English isn’t their first language, and they are compensating for confidence. I’ve worked with a lot of folks from the Philippines, and the Tagalog/English mix sometimes leads to confidence challenges.
Recommend everyone take this test: https://www.nytimes.com/interactive/2026/03/09/business/ai-w...
You might be surprised…or you might not. I’ve found it’s a good barometer for whether you actually don’t like AI writing or you just don’t like bad AI writing.
1. This test has really zero to do with what we're talking about. Stylized fiction is a completely separate domain from non-fiction writing of personal anecdotes. There's effectively zero relation between them.
2. Picked the human 5 out of 5. Since it's pointless to take as a judge of preference due to 1), I took it as a test of "spot the AI", and clearly it was obvious to me in every instance.
3. Of course we just "don't like bad AI writing". "Good AI writing" would be unnoticeable. This is incredibly rare in the domain we're talking about.
Small, pithy quotes vs dozens of paragraphs are rather different things.
It does not surprise me in the least that a machine can produce excellent small quotes. Markov chains have been producing some fantastic stuff for decades, for example, and they're about as complicated as an abacus. https://thedoomthatcametopuppet.tumblr.com/
It seems I chose AI 5 times out of 5. I'm not a native speaker, so I might have preferred a slightly more straightforward text.
On one hand, I think this suffers a lot from selection bias: these are short AI snippets specifically chosen by humans for their quality, and they don't necessarily reflect the average experience of AI text. On the other hand, AI-generated text does not preclude human editing.
Spoilers:
Question 1 had such different styles. I preferred the style the AI was using, but that was purely a stylistic preference.
Question 3 was a toss-up. They both felt fine, and funny enough they both had a "not just X, it's Y" pattern.
Those were the only two where I clicked the AI version - for the other three, it was obvious which was AI.
A few paragraphs isn't writing, it's a snippet. The shorter something is, the better AI will be at mimicking it, because underlying flaws are less likely to be made apparent.
Music is another great example of this. I enjoy techno/trance type stuff, but YouTube is becoming borderline unusable for this genre due to AI slop. You'd think AI would do a good job of producing tracks here since this genre is certainly somewhat formulaic. And about 2 minutes into a lengthy track I'd probably do relatively mediocrely at determining whether it was human or AI, but by about 10 minutes into a track it's often painfully obvious. I run this experiment regularly as I find myself having to skip the AI slop which YouTube seems obsessed with recommending anyhow.
Ironically, AI is probably providing a boon to human DJs here, because actively seeking them out is one of the only ways to escape YouTube's sloparithm.
I preferred the AI 4 out of 5 times. That's a little confronting. And judging by the amount of cope in the comments section, others found it the same. I guess it is a small test, but I think it successfully makes its point.
All of the fragments read like bad slop.
I successfully chose the least democratically awful slop if that's an indication of anything.
I got 4/5 human. On #3 I chose AI; it was very close.
I noticed something: humans will use words precisely and loosely at the same time. AI will seem like it’s precise, but a lot of the wording it uses can be cut or replaced by something else without losing much meaning.
Two human editors. I'm one of them and I absolutely do not use AI tools when I edit.
If you're going off the use of emdashes and endashes, I've been using them for over 25 years.
You couldn't tell the difference between a LinkedIn writer and an AI; they are both comparably generic.
At this point, I assume most LinkedIn users use AI to assist in generating posts anyway, so the distinction kinda becomes pointless. Nobody likes reading AI generated posts, and nobody ever really liked reading LinkedIn posts either.
I suppose you saw the em dash in the first line and drew that conclusion.
No. But I admit I stopped after these:
> actually “owning” a language
> I found my answer in the one thing I had loved for over a decade
> Following is a detailed, step-by-step breakdown of how I did just that
I read the whole thing, but I was questioning whether this was heavily AI-assisted or just very linkedin-coded. For me the biggest AI indicators were "From “arcane” to professional", "The results: From the playmat to the professional world" and your "actually owning a language" example. I can't imagine anyone writing those sentences, even long-time linkedin users.
I love this new future where every post has comments about whether AI was involved or not!
I really think that the HN guidelines need updating, so that we're directed to consider those comments the same way we do accusations of astroturfing:
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
It "degrades discussion" in exactly the same fashion.
Except "usually mistaken" doesn't apply here, since it's often true.
The position of "Degrades discussion" in that sentence implies that the accuracy of the claim has no bearing on that particular impact.
That's just in the short term. After a few years, people will be complaining that these posts sound like they were written by meat.
You mean the meat communicates?
The meat is alive, actually. Sentient meat, if you can believe it.