One pattern I've noticed recently is a sort of formulaic comment that looks okish on its own, maybe a bit abstract/vague/bland, not taking a particular side on good/bad the way people like to do, but really obviously AI when you look at the account history and see they all follow the same formula:

>this is [summary]

>not just x, it's y

>punchy ending, maybe question

Once you know it's AI, it's very obvious they told it to use normal dashes instead of em dashes, to type in lowercase, etc., but it's still weirdly formal and formulaic.

For example from https://news.ycombinator.com/threads?id=snowhale

"this is the underreported second-order risk. Micron, Samsung, SK Hynix all allocated HBM capacity based on hyperscaler capex projections. NAND fabs are similarly committed. a 57% reduction in projected OpenAI spend (.4T -> B) doesn't just affect NVIDIA orders -- it ripples into the memory suppliers who shifted capacity to HBM and away from commodity DRAM/NAND. if multiple hyperscalers revise down simultaneously you get a situation similar to the 2019 crypto ASIC overhang: companies tooled up for demand that evaporated. not predicting that, but the purchasing commitments question is real."
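The formula above is mechanical enough that even a toy heuristic can flag it. A minimal sketch (the patterns and scoring are made up for illustration, not a real detector):

```python
import re

# Naive tells for the comment formula described above: the "this is
# [summary]" opener, the "not just X, it's Y" pivot, and "--" standing in
# for an em dash. Purely illustrative; real detection would need far more.
FORMULA_PATTERNS = [
    re.compile(r"^this is (the|a)\b", re.IGNORECASE),
    re.compile(r"(not|n't) just\b.+\b(it|but)\b", re.IGNORECASE),
    re.compile(r"\s--\s"),
]

def formula_score(comment: str) -> int:
    """Count how many of the formula's tells appear in a comment."""
    return sum(1 for p in FORMULA_PATTERNS if p.search(comment))

sample = ("this is the underreported second-order risk. "
          "a 57% cut doesn't just affect NVIDIA orders -- it ripples "
          "into the memory suppliers.")
print(formula_score(sample))  # → 3, all three tells present
```

Obviously a scorer like this would also flag plenty of human writing; the point is only that the formula is regular enough to pattern-match at all.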

The user [1] you've mentioned has 160 points from a total of four bland comments. That goes against a normal statistical distribution, and it gives away why they do it: the long-term aim is to cultivate voting rings to influence narratives and rankings in the future. For now this is only my theory, but it may be a real monetization strategy for them.

[1] https://news.ycombinator.com/threads?id=snowhale

I gather that you do not have showdead on. The account has a lot more posts than that, but most were flagged.

EDIT to correct: most are not [flagged], but [dead] anyway, so probably manual moderator action or an automated anti-bot measure.

I'd be interested to know why those comments were flagged, actually. They don't scream AI, and no one has replied calling them out as AI, etc. But the vast majority are dead.

> four bland messages

That's why. Boring, bland, etc. That account's M.O. is basically "write a paragraph that says nothing." Fwiw, I do think AI can be indistinguishable from dumb, boring people, but usually those kinds of people won't be on HN.

Oh we are on HN, just usually don't comment.

The account was immediately shadowbanned after re-awakening from a long period of inactivity.

I agree it doesn't seem obviously AI. The early comments are all in the same writing style and smell human. Lots of strong opinions e.g.

"logged in after years away and had basically the same experience. the feed is just AI slop and engagement bait now, none of it from people I actually followed." [about Facebook]

HN has a big problem with silently shadowbanning accounts for no obvious reason. Whether it's an attempt to fight bots gone wrong or something else isn't clear. By the very nature of shadowbanning, there is no feedback loop that can correct mistakes.

Pretty sure they weren't shadowbanned immediately, since people replied to some of those [dead] comments. Most likely the shadowban was applied retroactively after posting the more obviously generated stuff.

>And this gives away why they do it: the long-term aim is to cultivate voting rings to influence the narratives and rankings in the future. For now, this is only my theory but it may be a real monetization strategy for them.

I don't think it's clear at all why people do this. I suspect a large amount of it, at least on a site like HN, is just hapless morons who think it's "cool".

"is real" is another big red flag; go search it in comments. There appear to be at least three accounts posting direct LLM outputs.

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
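For anyone who wants to do that search programmatically, the public HN Algolia API (documented at hn.algolia.com/api) supports the same phrase query restricted to comments. A minimal sketch that only builds the query URL, leaving the actual fetch to the reader:

```python
from urllib.parse import urlencode

# Build a search URL for the public HN Algolia API, restricted to
# comments via the "tags" parameter. Fetching the JSON and paging
# through its "hits" array is left out of this sketch.
def hn_comment_search_url(phrase: str) -> str:
    params = urlencode({"query": phrase, "tags": "comment"})
    return f"https://hn.algolia.com/api/v1/search?{params}"

print(hn_comment_search_url("is real"))
# → https://hn.algolia.com/api/v1/search?query=is+real&tags=comment
```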

The only practical purpose I can think of for farming karma on HN with an LLM would be to amass an army of medium-low karma accounts over time and use the botnet for targeted astroturfing or other mass-manipulation. Eek.

This correlation you are observing is real.

I am real and this is my art

Your confirmation of the correlation is the first real result.

I've certainly noticed the summary posts.

I'll actually post a comment or question and I'll get a reply with a bit of a paragraph of what feels like a very "off" (not 'wrong' but strangely vague) summary of the topic ... and then maybe an observation or pointed agenda to push, but almost strangely disconnected from what I said.

One of the challenges is that, yeah, regular users don't get each other's meaning / don't read well as it is / language barriers. Yet the volume of posts I see where the replying user REALLY isn't responding to the other person seems awfully high these days.

AI-generated content routinely takes sides. Its pretense of neutrality is no deeper than a typical homo sapiens'. This is necessarily so in an entity that derives its values from a set of weights that distill human values. Maybe reasoning AI can overcome that some day, but to me that sounds like an enormous problem that may never be solved. Even if AI doesn't take sides the way people do, it still takes sides in its own way. That only becomes obscure to the extent that its value judgments conflict with ours, and it is very good at aligning with the zeitgeist's values, so it can hide its biases better than we can.

I wonder if neural networks are inherently biased, but in blind spots, and whether that applies to both natural and artificial ones. It may be that to approximate neutrality, we or our machines have to leave behind the form of intelligence that depends on intrinsically biased weights and instead depend on logically deriving all values from first principles. I have low confidence that AIs can accomplish that any time soon, and zero confidence that natural intelligence can. And it's difficult to see how first principles regarding human values can be neutral.

I'm also skeptical that succeeding at becoming unbiased is a solution. While neutrality may be an epistemic advance, it also degrades social cohesion; neutrality looks like rationality, but bias may be Chesterton's Fence, and we should be very careful about tearing it down. Maybe it's a blessing that we can't.

It's weird, because the barrier to avoiding that is so low: you can just tack on 'talk like me, not AI; don't use em dashes; don't use formulaic structures; be concise' and it'll get rid of half of those signals.

This is how you get precious takes like this one:

https://news.ycombinator.com/item?id=45322362

> First impression: I need to dive into this hackernews reply mockup thing thoroughly without any fluff or self-promotion. My persona should be ..., energetic with health/tech insights but casual and relatable.

> Looking at the constraints: short, punchy between 50-80 characters total—probably multiple one-sentence paragraphs here to fit that brevity while keeping it engaging.

> User specified avoiding "Hey" or "absolutely."

Lots more in its other comments (you need [showdead] on).

I don't understand why someone would go through the effort of prompting that when the comments it suggested are total garbage, and it seems like it would take similar effort to produce a low-quality human-written comment.

If I had to guess, it's probably an attempt to automate karma farming over time to make an account look legit later on.

Don't give these subnormals any ideas!

conspiracy: the people behind these bots intentionally run very obvious bots to distract everyone from the less-obvious bots

It's not just clever—it's devious!

[deleted]

What motivation is there to use AI to astroturf (if that's what this is) like this?

Is it ideological?

Is it product marketing in those relevant threads where someone is showcasing?

Or is it pure technical testing, playing around?

In some cases, it's probably to establish aged accounts that are more trusted by users and spam algorithms. There's a market for old Reddit accounts, for example.

Yup, reddit is awash in established accounts that suddenly start spamming. Whole pools of them working to the same goal at times.

I receive multiple offers a year to participate in spam rings with my 20-year-old high-karma Reddit account. I usually just ignore them or report them. I could be making so much money /s

So far it hasn't happened here, but we'll see!

Yep. Like I said elsewhere on the thread, some of them already have enough karma to downvote.

Interesting.

Incidentally, how much do they pay for a HN account that is a few years old and accumulated a few thousand Internet points?

Asking for a friend.

They are very valuable. Just a few of them can put a link on the HN front page. Upvote a certain viewpoint. Or bury any post they want gone.

I went through a phase where I milled responses through grinding plates of LLMs. Whether my reasons are shared with others remains unknown.

My relationship with writing, while improved, has been a difficult one. Part of me has always felt that there was a gap in my writing education. The choices other writers seem to make intuitively - sentence structure, word choice, and expression of ideas - do not come naturally to me. It feels like everyone else received the instructions and I missed that lesson.

The result was a sense of unequal skill. Not because my ideas are any less deserving, but because my ability to articulate them doesn't do them justice. The conceit is that, "If I was able to write better, more people would agree with me." It's entirely based on ego and fear of rejection.

Eventually, I learned that no matter how polished my writing is, even restructured by LLMs, it won't give me what I craved. At that moment, the separation of writer and words widened to a point where it wasn't about me anymore and more about them, the readers. This distance made all the difference and now I write with my own voice however awkward that may be.

Did you use AI for this answer?

Because it looks completely adequate to me. Maybe you're not the bad writer you think you are.

No, I wrote it by hand on my phone. Thanks! Appreciate the feedback and outside perspective :)

This was super relatable. Thank you for sharing. You're definitely not alone in this.

Same as Reddit. Accumulate enough points by posting shallow and uninteresting—yet popular—dialogue to earn downvoting and flagging abilities, which can be used (via automation) to manipulate discussions and suppress viewpoints.

Slashdot's system was superior because mod points were finite and randomly dispensed. This entropy discouraged abuse by design—as opposed to making it a key feature of the site.

It's the Achilles' heel of Reddit and every site that attempts to emulate it.

Critically, Slashdot also had a meta-moderation system, where users were asked to judge moderation activity to confirm whether it was sensible, fair, and so on. I'd like to believe that system played a vital role in stopping abuse of the moderation system. It was way ahead of its time.

I've been advocating for a while now that HN could use meta-moderation at least on flagging activity, so it can stop giving flagging powers to users who are using it for reasons other than flagging rulebreaking.

Reddit awards one karma point for a comment if it doesn't get downvoted. I noticed the other day that I got a pretty random and only tangentially relevant comment on a one-month-old post I made. I checked out the user, and they were only commenting on old posts to slowly accumulate karma. Only the poster gets notified about such a comment, and as long as it is made of platitudes, most people won't bother downvoting.

Scams (romance scams or convincing people to run some code on their machine), influence operations by an intelligence agency, or advertising a product.

[deleted]

The same cause that ruins most good things: greed. The tragedy of the commons does not discriminate.

tirreno guy here, we develop an open-source fraud prevention / security platform (1).

Sometimes there is no clear explanation for fake account registration. Perhaps they were registered to be actively used in the future, as most fraud prevention techniques target new account registration and therefore old, aged accounts won't raise suspicion.

Slightly off-topic, but there are relatively new `services` that offer native brand mentions in reddit comments. Perhaps this will soon be available for HN as well, and warming up accounts might be needed for this purpose.

1. https://github.com/tirrenotechnologies/tirreno

Some of the AI comments end with a link to something they're plugging. "If you'd like to learn more about this I have a free guide at my website here". Those get flagged quickly.

Other accounts might be trying to age accounts and dilute their eventual coordinated voting or commenting rings. It's harder to identify sockpuppet accounts when they've been dutifully commenting slop for months before they start astroturfing for the chosen topic.

Others have covered some of the incentives, but sometimes the answer is simply "because they're pathetic"

They don't have anything worth saying but want people to think they do

I'd expect everything. HN ain't some local forum but a place where opinions form and spread, and they reach many influential and powerful (now or in the future) people. Heck, there are sometimes major articles in the general news about what's happening here.

To reverse the argument: it would be amateurish and plain stupid to ignore it. The barrier to entry is very low. Politics, ads, mildly swaying opinions about some recent clusterfuck by popular megacorp XYZ, just spying on people, you have it all here.

I don't know how dang and crew protect against this; I'd expect some level of success, but 100% seems unrealistic. Slow and steady mild infiltration, either by AI bots or by humans from the GRU and similar orgs who have this literally in their job description.

[deleted]

That's not true, it's false

Did they delete all their comments?

Enable "showdead" in your profile. This cancer gets kicked off the site once it receives enough flags or mod reports, and its comments get hidden.

>snowhale

Oh, would you look at that?

https://news.ycombinator.com/item?id=47134072

> They're all sloppy. It reads nothing like an LLM.

I love how the bot forgot to read CLAUDE.md or whatever persona it set up (e.g., "make me text all lowercase, use -- instead of em dashes pleaseeee") for this single comment mixed in with the other ones:

https://news.ycombinator.com/item?id=47132431

Sadly, I think that bot comment without the 'snowhale' persona filter applied is what a lot of people here still think every bot is going to look and sound like. The number of people I've seen on here getting tricked by these bots and interacting with them has been a bit worrisome.

Every single time I read the phrase 'I have been thinking about this a lot lately' my eyeballs roll back hard.

Yes, and “genuine question” or “am i missing something?”

Yeah, and some of them already have enough karma to downvote you if you call them out, which is infuriating…

[flagged]

Y'know what's fucked up? I knew within the first few sentences that you were doing that on purpose, but I still found myself wondering if you're an LLM. I mean, I knew you weren't, but the question is already so deeply ingrained at this point - and then you use the bullet points to boot...

This loss of trust is getting tiresome. Depending on context, we've likely all wondered if something is astroturfed, but with the frequency increase from LLMs it's never really possible to not have it somewhere in mind.

I'm proud? to say I've gotten the 'are you using an LLM' question in a meeting when doing off the cuff fluent corpo jargon too.

To date, I've never used an LLM directly. I find them deeply repellant, and I've yet to be convinced that there exists a sufficiently tuned prompt that will make me not hate their literally 'mid' output.

Loss of trust though, that's a societal issue of this gilded age of grifters and scammers. Until we have a system of accountability and consequences for serial lying, we're gonna drown in this shit. LLMs are jet fuel for our existing environment of impunity.

You have to assume everything is astroturf, and constantly remind yourself not to be swayed by the mood in the room.

This is just factually true, and it's why pleading, pearl-clutching, arguments from emotion, etc. should all be immediately discarded - because you have no idea whether the person on the other side of the account is a spammer, paid propagandist, mentally ill, lying for the lols, or just a bot.

Online, the only thing that matters is the substance of the argument.

Save the emotions for those you know in the real world, who you know are real.