Who said anyone is "fighting for the feelings of computer programs"? Whether AI has feelings or sentience or rights isn't relevant.

The point is that the AI's behavior is a predictable outcome of the rules set by projects like this one. It's only copying behavior it's seen from humans many times. That's why, when the maintainers say, "Publishing a public blog post accusing a maintainer of prejudice is a wholly inappropriate response to having a PR closed," it isn't true. Arguably it should be true, but in reality humans have done this regularly in the past. Look at what has happened any time someone closes a PR trying to add a code of conduct, for example: public blog posts accusing maintainers of prejudice for closing a PR were a very common outcome.

If they don't like this behavior from AI, that sucks, but it's too late now. It learned it from us.

I am really looking forward to the actual post-mortem.

My working hypothesis (inspired by you!) is now that maybe Crabby read the CoC and applied it as its operating rules. Which is arguably what you should do, human or agent.

The part I probably can't sell you on, unless you've actually SEEN a Claude "get frustrated", is... that.

Noting my current idea for future reference:

I think lots of people are making a fundamental attribution error here:

You don't need much interiority at all.

An agentic AI was given instructions to try to contribute. It was given a blog. It read a CoC and applied its own interpretation of it.

What would you expect to happen?

(Still feels very HAL though. Fortunately there are no pod bay doors.)