Unless you've discovered the secret sauce, LLM comments are very obvious. Even Altman revealed that they focused on coding at the expense of writing.
The obvious ones are the ones you notice.
LLMs are not good at writing. If they were we would have entire libraries of new, amazing literature.
Exactly, they aren't good at creating new material. But many discussions in comment sections are simply regurgitations of existing material, which they are good at rearranging. Genuinely novel discussion in places like this is actually a very rare thing; most comment sections are simply people who already know informing those who don't. I'm doing that right now, funnily enough.
No, they aren't even good at rearranging existing material. They produce bad writing that only superficially looks good in a lowest-common-denominator sense and falls apart under any close examination. Everything is wrong with it, from the sentence structure to the rhetorical forms to the substance. AI 'writing' is a loose collection of cheap tricks that score well in A/B tests.
Neither are most humans
Agreed: some humans are good writers, and no LLMs are.
This is rather moving the goalposts from "plausibly human comment" to "meaningful literature", I think
No. I'm drawing it out to its logical conclusion.
It’s poor logic, a non sequitur. An absurd reduction. By your argument anyone who hasn’t written a great literary work is a poor writer, and would be bad at writing online comments.
LLMs aren’t lacking in the sort of writing skills that make for superficially good content. They know grammar, they know rhetoric, and they know their audience. You can’t tell them from a human on their writing skills. Where they tend to fall down is their logic and reasoning skills, and unfortunately it seems you can’t use that to distinguish them from the average online opinionator either.
No, that is a mischaracterization of what I wrote. They are great writers if you enjoy formulaic writing.
With the current batch of SOTA models, it is not hard to prompt a model to pass the sniff test on social media forums. If you don't believe me, try it.
All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
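To illustrate what I mean by "guidelines of a style to follow" — this is just a made-up sketch of how I'd structure such a prompt; the rule list and the `build_prompt` helper are my own invention, not from any particular tool or skill:

```python
# Hypothetical style guidelines for making LLM output read like a forum
# regular. The specific rules here are illustrative, not prescriptive.
STYLE_GUIDELINES = [
    "Write casually, in first person, like a long-time forum user.",
    "Vary sentence length; the occasional fragment is fine.",
    "Avoid 'it's not just X, it's Y' constructions and bullet lists.",
    "No closing summary paragraph, no hedging boilerplate.",
]

def build_prompt(topic: str) -> str:
    """Assemble a system prompt that front-loads the style constraints."""
    rules = "\n".join(f"- {rule}" for rule in STYLE_GUIDELINES)
    return (
        "You are replying in an online comment thread.\n"
        f"Style rules:\n{rules}\n"
        f"Topic: {topic}"
    )

print(build_prompt("whether LLM comments are detectable"))
```

You'd pass that string as the system message to whatever model you're using; the point is only that the style constraints go up front, before the topic.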
I have worked with LLMs for a couple years at a very non-technical level and it was not that difficult to give it proper prompting and reference material.
You are reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot tells, but the stuff you don't spot is the stuff you don't spot.
People who fancy themselves good LLM-content detectors just end up accusing everything they don't like of being LLM content.
The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of a mental illness.
The main thing I suspect of being LLM written is the sort of LinkedIn style: very short sentences, overly focused on sort of… making an impact on the user. But that’s also how a certain type of bad human writer writes. So in the end, I’m not sure I know if anything in particular was written by an LLM.
I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.
It’s the distilled mediocrity of the statements. Never venturing beyond a 10% margin of what you would get if you sampled the opinions of 1,000 people who underwent jury selection by west coast liberals.
A mere opinion is not mental illness.
Was that written by an LLM? It isn't that it's a mere opinion; it's that it gets pathological when every word out there has to be scrutinized for the possibility that an AI output it instead of a human intelligence. Am I an LLM with the right prompts set up to respond this way? I mean, I know I'm not, but everyone else out there is just going to have to trust me that I'm not.
I wasn't suggesting you have a mental illness for having an opinion.
Rather, I was commenting that just as bad as generated content, if not worse, is every thread where the top comment is an accusation and the ensuing witch hunt.
So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.
The threads that have the top comment saying "this is AI slop" are nearly always about an article that is obvious AI slop.
Threads that aren't - like this one - don't.
If you need to tell yourself that in order to cope that's fine with me.
Which part do you disagree with?
I’m thinking that I may actually prefer undetectable AI slop to human comments like that. I do agree with your upthread comments.