Author here! This post was human written, LLM proofread, and edited a couple times as folks pointed out broken links and minor errors when it was posted to r/rust a few days ago. As someone mentioned lower in the thread, there's a form of what is sometimes called Bay Area Standard that both very online humans and LLMs have absorbed. I find it FASCINATING that we're in an era where we have to prove our humanity, and the downstream behaviours of things like killing em-dash use in response are interesting to watch in real time. I've made the same mistake, so it's honestly difficult to tell!
tbh I'm not getting GPT-voice from this
I'm not either. If this was GPT-voice, I'd be happy. It's concise, technical, with good emphasis but no drama or AI tropes.
It's there in places ("The honest answer is...") but I think most of this is human written. They probably started with an AI draft I'd guess.
So tired of this sort of comment. LLMs are trained using (primarily, generally) online material. It sounds like online humans, in aggregate, plus or minus a bit of policy on the part of the model builders.
> So tired of this sort of comment.
Email the mods about it rather than replying, subject “Accusation of AI in FP comment” or whatever. It’s a guidelines violation to make the accusation in a comment rather than to them by email, and they have tools to deal with it!
Nobody is making an accusation of an AI comment - people are pointing out that the article is at least partially AI generated, which does not go against any HN guidelines, and neither does complaining about those comments.
> It sounds like online humans, in aggregate
That's exactly the problem. It sounds like one aggregate person. It's quite unpleasant to read the same turns of phrase again and again and again, especially when it means that the author copped out of writing it themselves.
In fairness I think in this case they mostly did write it themselves.
They write like the worst possible person. It's terrible and obnoxious, there is no reason to put up with it.
Except nobody writes like the aggregate, hence why it's so jarring.
The closest actually human style to LLM writing is obnoxious marketing speak. So that also sucks.
So many people who are not great writers lean on LLMs to write, but aren't good enough to see how bad it is. They should be criticised for this. Either use them and be good enough to make it read as human, or just don't use them. No free lunch.
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.