It's the same as how HN mostly reacts with "don't censor AI!" when chatbots dare to add parental controls after talking teenagers into suicide.

The community is often very selfish and opportunistic. I learned that the role of engineers in society is to build tools for others to live their lives better; we provide the substrate on which culture and civilization take place. We should take more responsibility for it, take better care of it, and do far more soul-searching.

Parental controls and settings in general are fine; I just don't want Amodei or any of those other freaks trying to be my dad and censoring everything. At least Grok doesn't censor as heavily as the others or pretend to be holier-than-thou.

Talking to a chatbot yourself is very different from another person spinning up a (potentially malicious) AI agent and giving it permission to make PRs and publish blogs. This tracks with the general ethos of self-responsibility that is semi-common on HN.

If the author had configured and launched the AI agent himself, we would think it was a funny story of someone misusing a tool.

The author notes in the article that he wants to see the `soul.md` file, probably because if the agent was configured to publish malicious blog posts, then his issue wouldn't really be with the agent but with the person who created it.