The best way I can describe it is that ChatGPT has an urgent, salesman-like quality to its text. It adds words and sentences that carry no informational content and exist only to increase the emotional weight of the text.
Multiple instances of "it's not X, it's Y" and weirdly heavy use of dashes (it looks like the person who posted this did a find-and-replace to turn em dashes into hyphens to try to mask that).
That said, I didn't find this one too bad. I could be wrong but it feels to me like the author had already written this out in their own words and then had the AI do a rewrite from that.
I guess you're not hanging around younger people who have made this part of normal speech?
I personally became suspicious that this article was written with the help of LLMs when I read, "Day after day. Week after week." This is pretty far down in the article, but it felt off because it comes near the end of the paragraph it's in and reads as repetitive. Then the paragraph right after it has the classic LLM pattern where they write, "it's not just X, it's Y."
This is what I noticed before deciding it seemed too much like AI to be worth reading more:
"Well, here's the thing not enough people talk about: we're giving these tools god-mode permissions. Tools built by people we've never met. People we have zero way to vet. And our AI assistants? We just... trust them. Completely.
[...]
On paper, this package looked perfect. The developer? Software engineer from Paris, using his real name, GitHub profile packed with legitimate projects. This wasn't some shady anonymous account with an anime avatar. This was a real person with a real reputation, someone you'd probably grab coffee with at a conference.
[...]
One single line. And boom - every email now has an unwanted passenger.
[...]
Here's the thing - there's a completely legitimate GitHub repo with the same name"
"Here's the thing", dashes - clearly the operator was aware how much of a giveaway em dashes are and substituted them but the way they're used here still feels characteristic - and this pattern where they say something with a question mark? And then elaborate on it. Also just intangibles like the way the sentences were paced. I wouldn't bet my life on it, but it felt too much like slop to pay attention to.
I genuinely take the effort to write em dashes quite often, certainly in formal documents or publications. So for me that's not a tell-tale sign of AI usage. Your analysis of the pacing of the article on the other hand — spot on.
They probably saw an em dash, an emoji, or a word that is not in their vocabulary.