The differentiating factors are scale (deepfakes can now be produced automatically) and endorsement. It is significant that these images aren't being posted by random users; they are appearing under the company's own @grok handle. That makes them speech by X itself, which is why it's X that's getting raided.

There is no content like that on Bluesky or Mastodon. Show the evidence.

> There is no content like that on [...] Mastodon.

How can you say that nobody is posting CSAM on a massive decentralized social network with thousands of servers?

https://bsky.social/about/blog/01-17-2025-moderation-2024

"In 2024, Bluesky submitted 1,154 reports for confirmed CSAM to the National Centre for Missing and Exploited Children (NCMEC). Reports consist of the account details, along with manually reviewed media by one of our specialized child safety moderators. Each report can involve many pieces of media, though most reports involve under five pieces of media."

If the content weren't there, there would be no reports.

But that is the difference: they actually do something about it.