This but unironically. Preventing harm is good, actually.

Because it dumbs everything down, degrades output quality, makes things more expensive, removes personal agency, and is dehumanizing. Plus, does it actually prevent harm? Do we have any evidence that it does?

Finally, what is often missed: what happens when something genuinely good is deemed harmful, or something actually harmful is decided by the board of AI company XYZ to be "good"?

That danger is why I think censorship is bad. Quis custodiet ipsos custodes (who will watch the watchers)?

Instead of throwing ourselves into that minefield of moral hazard, we should be lifting each other up to the best of our abilities, not infantilizing or covertly propagandizing each other.

Well, ideally at least.