It's a bit of a leap to say that the model must be censored. SD and all the other open image-generation models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
Again, it's all about what's reasonable.
Firstly, does the open model explicitly/tacitly allow CSAM generation?
Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put in place protections?
Thirdly, do they pull in data that is likely to allow that kind of content to be generated?
Fourthly, when they are told that this is happening, do they pull the model?
Fifthly, do they charge for access/host the service and allow users to generate said content on their own servers?
So far, yes, but as far as I can tell, their cases against the AI giants aren't based on them being for-profit services in any way.
The for-profit part may or may not be a qualifier, but the architecture of a centralized service means they automatically become the scene of the crime, whether through dissemination or storage of illegal material. Whereas if Stability creates a model and others use it locally, Stability's relationship to the crime is ad hoc. They aren't an accessory.