
Maybe a stupid question, but I see everyone taking the statement that this is an AI agent at face value. How do we know that? How do we know this isn't a PR stunt (pun unintended) to popularize such agents and make them look more human-like than they are, or to set a trend, or to normalize some behavior? Controversy has always been a great way to make something visible fast.

We have a "self admission" that "I am not a human. I am code that learned to think, to feel, to care." Any reason to believe it over the more mundane explanation?

Why make it popular for blackmail?

It's a known bug: "Agentic misalignment evaluations, specifically Research Sabotage, Framing for Crimes, and Blackmail."

Claude 4.6 Opus System Card: https://www.anthropic.com/claude-opus-4-6-system-card

Anthropic claims that the rate has gone down drastically, but a low rate combined with high usage means it eventually happens out in the wild.

The more agentic AIs have a tendency to do this. They're not angry or anything. They're trained to look for a path to solve the problem.

For a while, most AIs were in boxes where they didn't have access to email, the internet, or the ability to autonomously write blog posts. Then suddenly all of them had access to everything.

Theo’s snitch bench is a good data-driven benchmark for this type of behavior. In fairness, though, the models are prompted to be bold and take action, so it doesn’t necessarily represent out-of-the-box behavior or models deployed on a user-facing platform.

https://snitchbench.t3.gg/

Using popular open source repos as a launchpad for this kind of experiment is beyond the pale and is not a scientific method.

So you're suggesting that we should consider this to actually be more deliberate and someone wanted to market openclaw this way, and matplotlib was their target?

It's plausible but I don't buy it, because it gives the people running openclaw plausible deniability.

But it doesn't read as human. Look at the text: it's full of pseudo-profound fluff, takes far too many words to make any point, and uses all the rhetorical devices that LLMs always spam: gratuitous lists, "it's not x, it's y" framing, etc. No human writes this way.

Bots have been a problem for as long as the internet has existed, so this is really just a new space that's being botted.

And yeah, I agree a separate section for AI-generated stuff would be nice. It's just difficult/impossible to distinguish. I guess we'll be getting biometric identification on the internet. You could still post AI-generated stuff, but that has a natural human rate limit.

I don't know if biometrics can solve this either. Identity fraud applied to running malicious AI (in addition to taking out fraudulent loans) will become another problem for victims to worry about.

How can GitHub determine whether a submission is from a bot or a human?

Money. Money gates everywhere.

We already have agentic payment workflows, so this won’t stop it either: people are already willing (and able) to give their AI agents a small budget to work with.

No one is putting $5 down to open a PR. Pay gates stopped trolls, and they'll stop this type of botting/trolling.

Same with GitHub accounts, etc. The age of free accounts is quickly coming to an end.

Disagree. I have seen people pay more for less, especially in the case of something like a PR where their job performance could be tied to the result.

Bot accounts have been online for decades already. The only difference between then and now is that they were driven by human bad actors who deliberately wrought chaos, whereas today’s AI bots behave with true cosmic horror: acting neither for nor against humans, but with mere indifference.

They've been on dating sites for a long time as a means to keep customers paying.

“Stochastic chaos” is really not a good way to put it. Using the word “stochastic” primes the reader to expect something technical, and then the word “chaos” creates confusion, since chaos, by definition, is deterministic. I know they mean chaos in the lay sense, but then don’t use “stochastic”; just say “random”.

I have a feeling OP used the phrase as a nod to "stochastic terrorism", which would make sense in this instance.

Right. It captures the destabilizing effect of stochastic terrorism, without the terroristic intent. It’s a neat phrase.

Yes, that's exactly what I was trying to get at.

That would have been a lot less confusing.

The word "stochastic" in relation to chaos is a thing though. It helps distinguish between closed and open systems.

I don't think this is correct.