Is this post AI-written? The repeated lists with highlighted key points, the "it's not just [x], but [y]" and "no [a] just [b]" scream LLM to me. It would be good to know how much of this post and this project was human-built.
Have you noticed that your username is literally "its_notjack"?
I was on the fence about such an identification. The first "list with highlighted key points" seemed quite awkward to me and definitely raised suspicion (the overall list doesn't have quite the coherence I'd expect from someone who made that choice consciously, and the formatting exactly matches the stereotype).
But if this is LLM content then it does seem like the LLMs are still improving. (I suppose the AI flavour could be from Grammarly's new features or something.)
Perhaps people have mimicked the style because LLMs have popularized it and clearly it serves some benefit to readers.
Perhaps LLMs have mimicked the style because authors have popularized it and clearly it serves some benefit to readers.
It's a cycle.
Life imitates art, even when that art is slop
> have popularized it
It's hated by everyone; why would people imitate it? You're inventing a rationale that either doesn't exist or would be stupider than the alternative. The obvious answer here is that they just used an LLM.
> and clearly it serves some benefit to readers.
What?
I think the style itself is very clear and has its advantages; it's hated only because it comes from LLMs, which are not liked when used without judgement (which is often the case).
So, someone who falls on the side of not completely hating LLMs for everything (which is most people), could easily copy the style by accident.
> It's hated by everyone, why would people imitate it?
It could be involuntary. People often adopt the verbal tics of the content they read and the people they talk with.
Then why does every vibe-coded "Show HN" app have it in README.md? Surely authors would edit it out if it was true that everyone hates it.
Maybe vibe-coding Show HN apps is correlated with low effort and bad taste.
> "The key insight is..."
This was either written by Claude or someone who uses Claude too much.
I wish they could be upfront about it.
I love the style it was written in. I felt a bit like reading a detective novel, exploring all the terrible things that happened and waiting for a plot twist and the hero coming in and saving the day.
Yes. It appears that way
you know why LLMs repeat those patterns so much? because that's how real humans speak
Real humans don't speak in LinkedIn Standard English
Real humans write like that though. And LLMs are trained on text not speech. Maybe they should get trained on movie subtitles, but then movie characters also don't speak like real humans.
"LinkedIn Standard English" is just the overly-enthusiastic marketing speak that all the wannabe CEOs/VCs used to spout. LLMs had to learn it somewhere
Humans don't, but cocaine does speak "LinkedIn Standard English".
> LinkedIn Standard English
We need a dictionary like this :D
The old Unsuck-it page comes pretty close. I’m not a huge fan of the newer page though. https://www.unsuck-it.com/classics
LinkedIn and its robotic tone existed long before generative AI.
Know what's more annoying than AI posts? Seeing accusations of AI slop for every. last. god. damned. thing.
Yes that's the point. LLMs pretty much speak LinkedInglish. That existed before LLMs, but only on LinkedIn.
So if you see LinkedInglish on LinkedIn, it may or may not be an LLM. Outside of LinkedIn... probably an LLM.
It is curious why LLMs love talking in LinkedInglish so much. I have no idea what the answer to that is but they do.
It is at least thematically appropriate, of course a corporate-built language machine speaks like LinkedIn.
The actual mechanism, I have no clue.
I'm so fucking tired of this
I last developed for windows in the late 90s.
I came back around 2017*, expecting the same nice experience I had with VB3 to 6.
What a punch in the face it was...
I honestly cannot fathom anyone developing natively for windows (or even OSX) in this day and age.
Anything will be a webapp or a rust+egui multi-platform app developed on linux, or nothing. The amount of self-hate required for android/ios is already enough.
* Not sure of the exact date. It was right in the middle of the WPF crap being forced as "the new default".
And yet without Proton there are no Linux games.
> Is this post AI-written?
What if it was?
What if it wasn't?
What if you never find out definitively?
Do you wonder that about all content?
If so, doesn't that get exhausting?
Yeah, it does. Congratulations, you figured out why the future is going to be fucking awful.
"What if you can't tell the difference?" Yeah, what if it becomes impossible to spot who's a lazy faker who outsourced their thinking? Doesn't that sound great?!
What's exhausting is getting through a ten-paragraph article and realising there were only two paragraphs of actual content, then having to wade back through it to figure out which parts came from the prompt, and which parts were entirely made up by the automated sawdust injector.
That's not an AI problem, it's a general blog post problem. Humans inject their own sawdust all the time. AI, however, can write concisely if you just tell it to. Perhaps you should call this stuff "slop" without the AI, and then it doesn't matter who/what wrote it because it's still slop regardless.
I completely agree with your parent that it's tedious seeing this "fake and gay" problem everywhere and wonder what an unwinnable struggle it must be for the people who feel they have to work out if everything they read was AI written or not.
It used to require some real elbow grease to write blogspam, now it's much easier.
I hardly ever go through a post fisking it for AI tells, they leap out at me now whether I want them to or not. As the density of them increases my odds of closing the tab approach one.
It's not a pleasant time to read Show HNs but it just seems to be what's happening now.
> and wonder what an unwinnable struggle it must be for the people who feel they have to work out if everything they read was AI written or not
Exactly!
It never used to be a general blog post problem. It was a problem with the kinds of blogs I'd never read to begin with, but "look, I made a thing!" was generally worth reading. Now, I can't even rely on "look, I made a thing!" blog posts to accurately describe the author's understanding of the thing they made.
I analyzed the text using Pangram, which is apparently reliable; it says "Fully human Written" without ambiguity.[1]
I personally like the content and the style of the article. I never managed to accept going through the pain of installing and using Visual Studio and all the absurd procedures they impose on their users.
[1] https://www.pangram.com/history/300b4af2-cd58-4767-aced-c4d2...
This honestly just tells me that Pangram is hot garbage
These days I'm always wondering whether what I'm reading is LLM-slop or the actual writing of a person who contracted AI-isms by spending hours a day talking to them.