So tired of this sort of comment. LLMs are trained using (primarily, generally) online material. It sounds like online humans, in aggregate, plus or minus a bit of policy on the part of the model builders.
> So tired of this sort of comment.
Email the mods about it rather than replying, subject “Accusation of AI in FP comment” or whatever. It’s a guidelines violation to make the accusation in a comment rather than to them by email, and they have tools to deal with it!
Nobody is accusing a comment of being AI-written; people are pointing out that the article is at least partially AI generated, which does not go against any HN guidelines, and neither does complaining about it.
> It sounds like online humans, in aggregate
That's exactly the problem. It sounds like one aggregate person. It's quite unpleasant to read the same turns of phrase again and again and again, especially when it means that the author copped out of writing it themselves.
In fairness I think in this case they mostly did write it themselves.
They write like the worst possible person. It's terrible and obnoxious, there is no reason to put up with it.
Except nobody actually writes like the aggregate, which is why it's so jarring.
The closest actually human style to LLM writing is obnoxious marketing speak. So that also sucks.
So many people who are not great writers lean on LLMs to write, but aren't good enough writers themselves to see how bad the output is. They should be criticised for this. Either use LLMs and be skilled enough to make the result read as human, or just don't use them. No free lunch.