It seems people have a rather short memory when it comes to Twitter. When it was still run by Jack Dorsey, CP was abundant on Twitter and there was little effort to crack down on it. After Musk bought the platform, he and Dorsey had a public argument in which Dorsey disputed the scale of the problem and denied that the old Twitter had known about it and shown indifference. But Musk actually took tangible steps to clean it up, and many accounts were banned. It's curious that the morally righteous HN crowd never directed anywhere near the same level of outrage at Mr. Dorsey back then as it does in this thread.

Didn't X unban users like Dom Lucre who posted CSAM because of their political affiliation?

Didn't Reddit have the same problem until they got negative publicity and were basically forced to clean it up? What is with these big tech companies and CP?

Not exactly. Reddit always took down CSAM (how effectively I don't know, but I've been using the site consistently since 2011 and I've never come across it).

What Reddit did get a lot of negative publicity for were subreddits focused on sharing non-explicit photos of minors, but with loads of sexually charged comments. The images themselves, nobody would really object to in isolation, but the discussions surrounding them were all lewd. So not CSAM, but still creepy and something Reddit rightly decided it didn't want on the site.

Reddit was forced to clean it up when they started eyeballing an IPO.

Having an issue with users uploading CSAM (a problem for every platform) is very different from giving them a tool to quickly and easily generate CSAM, with apparently little-to-no effort to prevent this from happening.

If the tool generates it automatically or unprompted, then yes. But if it's the users asking for it, then I'm not sure there is a big difference.

Well, it's worth noting that with the nonconsensual porn (child and otherwise) it was generating, X would often rapidly punish the user who posted the prompt but leave the Grok-generated content up. It wasn't an issue of not having control; it was an issue of how the control was used.

> But Musk actually did take tangible steps to clean it up and many accounts were banned.

Mmkay.

https://en.wikipedia.org/wiki/Twitter_under_Elon_Musk#Child_...

"As of June 2023, an investigation by the Stanford Internet Observatory at Stanford University reported "a lapse in basic enforcement" against child porn by Twitter within "recent months". The number of staff on Twitter's trust and safety teams were reduced, for example, leaving one full-time staffer to handle all child sexual abuse material in the Asia-Pacific region in November 2022."

"In 2024, the company unsuccessfully attempted to avoid the imposition of fines in Australia regarding the government's inquiries about child safety enforcement; X Corp reportedly said they had no obligation to respond to the inquiries since they were addressed to "Twitter Inc", which X Corp argued had "ceased to exist"."

When did Jack Dorsey unban personal friends of his that had gotten banned for posting CSAM?

I meant to reply to you with this: https://news.ycombinator.com/item?id=46886801