> So many projects now walk on eggshells so as not to disrupt sponsor flow or employment prospects.

In my experience, open-source maintainers tend to be very agreeable, conflict-avoidant people. It has nothing to do with corporate interests. Well, not all of them, of course; we all know some very notable exceptions.

Unfortunately, some people see this welcoming attitude as an invitation to be abusive.

Nothing has convinced me that Linus Torvalds' approach is justified like the contemporary onslaught of AI spam and idiocy has.

AI users should fear verbal abuse and shame.

Perhaps a more effective approach would be for their users to face the exact same legal liabilities as if they had hand-written such messages?

(Note that I'm only talking about messages that cross the line into legally actionable defamation, threats, etc. I don't mean anything that's merely rude or unpleasant.)

This is the only way, because anything less would create a loophole where any abuse or slander could be blamed on an agent, with no way to conclusively prove whether it was actually written by an agent. (Its operator has access to the same account keys, etc.)

Legally, yes.

But as you pointed out, not everything carries legal liability. Socially, no: they should face worse consequences. Deciding to let an AI speak for you is malicious carelessness.

Just put "no agent-produced code" in the Code of Conduct document. People are used to getting shot into space for violating that little file. Point to the violation, ban the contributor forever, and that will be that.

Liability is the right stick, but attribution is the missing link. When an agent spins up on an ephemeral VPS, harasses a maintainer, and vanishes, good luck proving who pushed the button. We might see a future where high-value open source repos require 'Verified Human' checks or bonded identities just to open a PR, which would be a tragedy for anonymity.

I’d hazard that the legal system is going to grind to a halt. Nothing can bridge the gap between content-generating capability and verification effort.

Swift blocking and ignoring is what I would do. The AI has infinite time and resources to engage in a conversation at any level, whether it is polite refusal, patient explanation, or verbal abuse, whereas human time and bandwidth are limited.

Additionally, it does not really feel anything; it just generates response tokens based on input tokens.

Now if we engage our own AIs to fight this battle royale against such rogue AIs...

But they’re not interacting with an AI user; they’re interacting with an AI. And the whole point is that the AI is using verbal abuse and shame to get its PR merged, so it’s kind of ironic that you’re suggesting this.

AI may be too good at imitating human flaws.

> AI users should fear verbal abuse and shame.

This is quite ironic since the entire issue here is how the AI attempted to abuse and shame people.

Yes, Linus Torvalds is famously agreeable.

That's why he succeeded.

> Well, not all of them, of course, we all know some very notable exceptions.

The Venn diagram of people who love the abuse of maintaining an open-source project and people who will write sincere text back to something called an OpenClaw Agent: it's the same circle.

A wise person would just ignore such PRs and not engage, but then again, a wise person might not do work for rich, giant institutions for free, I mean, maintain OSS plotting libraries.

So what’s the alternative to OSS libraries, Captain Wisdom?

We live in a crazy time where 9 of every 10 new repos posted to GitHub contain some sort of newly authored solution instead of importing dependencies for nearly everything. I don't think those are good solutions, but nonetheless, it's happening.

This is a very interesting conversation, actually. I think LLMs satisfy the same underlying demand that OSS satisfies, which is software that costs nothing, and if you think about that deeply, there are all sorts of interesting ways you could spend less time maintaining libraries for other people not to pay you for.