What benefit could one possibly get by farming karma on a site like Hacker News? It's not like you can gather followers or something. I'm always mystified by folks who do this. Would love to understand the motivation.
Having multiple high karma accounts is useful in astroturfing, as moderators are (rightfully) more lenient on established community members than new accounts.
The same thing is widespread on reddit, usually for pushing specific products/projects/organizations into the limelight. Landing on the frontpage of reddit/HN drives a huge amount of traffic, so obviously "optimizers" learned this and started priming accounts for future vote rings and whatnot, but they need to mix in real-looking content between the pushes so the accounts don't get banned.
ChatGPT
Thanks for pointing this out. I went through their comments and they all read like this :-( while the substance is very obviously AI-generated.
someone should write an LLM detector bot that just leaves this comment on all AI slop
what?
I believe they are saying that the commenter looks a lot like they're karma farming with an LLM; the account leaves a lot of comments like this one.
Yes, it is. Sort of.
I’m running an experiment.
A few days ago I flagged a piece someone else had written with AI. It has a specific cadence and some typical patterns. But many people seemed to buy it before I commented. I was surprised.
Today I pushed the boundary further, and clearly found where that boundary is.
Check my comment history.
I started out just saying “rephrase this so it sounds tighter” and moved recently towards just jotting rough notes and saying “make an HN comment out of this” and then editing.
I’ve been using GPT-5. I was going to see how Claude Sonnet 4 performs at coming across as human-written / whether it trips anyone's spidey senses.
(This was all by hand.)
I personally find this completely unacceptable. No one comes here to have a discussion with an AI. Please don’t do this.