Good one.
No platform ever should allow CSAM content.
And the fact that they didn't even care and didn't want to spend money on implementing guardrails or moderation is deeply concerning.
This has, imho, nothing to do with model censorship, but everything to do with allowing that kind of content on a platform.
It seems people have a rather short memory when it comes to twitter. When it was still run by Jack Dorsey, CP was abundant on twitter and there was little effort to tamp down on it. After Musk bought the platform, he and Dorsey had a public argument in which Dorsey denied the scale of the problem or that old twitter was aware of it and had shown indifference. But Musk actually did take tangible steps to clean it up and many accounts were banned. It's curious that there wasn't nearly the same level of outrage from the morally righteous HN crowd towards Mr. Dorsey back then as there is in this thread.
Didn't X unban users like Dom Lucre who posted CSAM because of their political affiliation?
Having an issue with users uploading CSAM (a problem for every platform) is very different from giving them a tool to quickly and easily generate CSAM, with apparently little-to-no effort to prevent this from happening.
If the tool generates it automatically or spuriously, then yes. But if it is the users asking it to, then I'm not sure there is a big difference.
Well, it's worth noting that with the nonconsensual porn, child and otherwise, that it was generating, X would often rapidly punish the user who posted the prompt but leave the grok-generated content up. It wasn't an issue of not having control, it was an issue of how that control was used.
Didn't Reddit have the same problem until they got negative publicity and were basically forced to clean it up? What is with these big tech companies and CP?
Not exactly. Reddit always took down CSAM (how effectively I don't know, but I've been using the site consistently since 2011 and I've never come across it).
What Reddit did get a lot of negative publicity for were subreddits focused on sharing non-explicit photos of minors, but with loads of sexually charged comments. The images themselves, nobody would really object to in isolation, but the discussions surrounding the images were all lewd. So not CSAM, but still creepy and something Reddit rightly decided it didn't want on the site.
Reddit was forced to clean it up when they started eyeballing an IPO.
[dead]
> But Musk actually did take tangible steps to clean it up and many accounts were banned.
Mmkay.
https://en.wikipedia.org/wiki/Twitter_under_Elon_Musk#Child_...
"As of June 2023, an investigation by the Stanford Internet Observatory at Stanford University reported "a lapse in basic enforcement" against child porn by Twitter within "recent months". The number of staff on Twitter's trust and safety teams were reduced, for example, leaving one full-time staffer to handle all child sexual abuse material in the Asia-Pacific region in November 2022."
"In 2024, the company unsuccessfully attempted to avoid the imposition of fines in Australia regarding the government's inquiries about child safety enforcement; X Corp reportedly said they had no obligation to respond to the inquiries since they were addressed to "Twitter Inc", which X Corp argued had "ceased to exist"."
When did Jack Dorsey unban personal friends of his that had gotten banned for posting CSAM?
I meant to reply to you with this: https://news.ycombinator.com/item?id=46886801
My natural reaction here is, I think, like most others': yes, Grok / X are bad and shouldn't be able to generate CSAM content / deepfakes.
But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user. I've seen a few arguments here that mostly come down to:
1. It's Grok the LLM generating the content, not the user.
2. The distribution. That this isn't just on the user's computer but instead posted on X.
For 1. it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.
For 2. if Grok instead generated the content for download would the liability go away? What if Grok generated the content to be downloaded only and then the user uploaded manually to X? If in this case Grok isn't liable then why does the automatic posting (from the user's instructions) make it different? If it is, then it's not about the distribution anymore.
There are some comparisons to Photoshop: that if I created a deepfake with Photoshop, I'm liable, not Adobe. If Photoshop had an "upload to X" button, and I created CSAM using Photoshop and hit the button to upload to X directly, is Adobe now liable?
What am I missing?
> But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user.
This seems to rest on false assumptions that: (1) legal liability is exclusive, and (2) investigation of X is not important both to X’s liability and to pursuing the users, to the extent that they would also be subject to liability.
X/xAI may be liable for any or all of the following reasons:
* xAI generated virtual child pornography with the likenesses of actual children, which is generally illegal, even if that service was procured by a third party.
* X and xAI distributed virtual child pornography with the likenesses of actual children, which is generally illegal, irrespective of who generated and supplied them.
* To the extent that liability for either of the first two bullet points would be eliminated or mitigated by an absence of knowledge of the prohibited content and by prompt action once the actor became aware, X often punished users for the prompts producing the virtual child pornography without taking prompt action to remove the xAI-generated virtual child pornography resulting from the prompt, demonstrating knowledge and intent.
* When the epidemic of grok-generated nonconsensual pornography, including child pornography, drew attention, X and xAI responded by attempting to monetize the capability by limiting the tool to paid X subscribers only, showing an attempt to commercially profit from it, which is, again, generally illegal.
> For 1. it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.
LLMs are completely different to programming languages or even Photoshop.
You can't type a sentence and within 10 seconds get images of CSAM with Photoshop. LLMs are also built on trained material, unlike the traditional tools in Photoshop. There has been plenty of CSAM found in the training data sets, but shock-horror, apparently not enough information to know "where it came from". There's a non-zero chance that the CSAM Grok is vomiting out is based on "real" CSAM of people being abused.
> What am I missing?
A deep hatred of Elon Musk
> But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user.
Because Grok and X aren't even doing the most basic filtering they could do to pretend to filter out CSAM.
Filtering on the platform or of Grok's output, though? If the filtering / flagging on X is insufficient, then that is a separate issue independent of Grok. If it's about filtering Grok's output, then while that is irresponsible in my view, I don't see why it's different from, say, Photoshop not filtering its output.
Agreed. For anyone curious, here's the UK report from the National Society for the Prevention of Cruelty to Children (NSPCC) from 2023-2024.
https://www.bbc.com/news/articles/cze3p1j710ko
Reports on sextortion, self-generated indecent images, and grooming via social media/messaging apps:
Snapchat 54%
Instagram 11%
Facebook 7%
WhatsApp 6-9%
X 1-2%
Are those numbers in the article somewhere? From what I read it says that out of 7,062 cases, the platform was known for only 1,824. Then it says Snapchat accounts for 48% (not 54%). I don't see any other percentages.
What are the percentages?
Edited to add clarification.
The meaning of the percentages is still unclear.
The lack of guardrails wasn’t a carelessness issue - Grok has many restrictions and Elon regularly manipulates the answers it gives to suit his political preferences - but rather one of several decisions to offer largely unrestricted AI adult content generation as a unique selling point. See also, e.g. the lack of real age verification on Ani’s NSFW capabilities.
I disagree. Prosecute people that use the tools, not the tool makers if AI generated content is breaking the law.
A provider should have no responsibility for how the tools are used. It is on the users. This is a can of worms that should stay closed, because we all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.
I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
Agreed. Let's try to be less divisive. Everyone has got a fair point.
Yes, AI chatbots have to do everything in their power to avoid users easily generating such content.
AND
Yes, people that do so (even if done so on your self-hosted model) have to be punished.
I believe it is OK that Grok is being investigated because the point is to figure out whether this was intentional or not.
Just my opinion.
>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.
The rest of the cases you list are harms to the people using the tools/products. They are not harms that people using the tools inflict on third parties.
We are literally arguing about 3D printer control two topics downstream. 3D printers in theory can be used for CSAM too. So we should totally ban them - right? So can pencils, paper, lasers, and drawing tablets.
That is not the argument. No one is arguing about banning open source LLMs that could potentially create problematic content on huggingface, but X provides not only an AI model, but a platform and distribution as well, so that is inherently different
> No one is arguing about banning open source LLMs that could potentially create problematic content on huggingface,
If LLMs should have guardrails, why should open source ones be exempt? What about people hosting models on Hugging Face? What if you use a model both distributed by and hosted by Hugging Face?
No it is not. X is a dumb pipe. You have humans on both ends. Arrest them, summarily execute them, whatever. You go after X because it is a choke point and easy.
First you argue about the model, now the platform. Two different things.
If a platform encourages and doesn’t moderate at all, yes we should go after the platform.
Imagine a newspaper publishing content like that, and saying they are not responsible for their journalists
> X is dumb pipe.
X also actively distributes and profits off of CSAM. Why shouldn't the law apply to distribution centers?
There's a slippery slope version of your argument where your ISP is responsible for censoring content that your government does not like.
I mean, I thought that was basically already the law in the UK.
I can see practical differences between X/twitter doing moderation and the full ISP censorship, but I cannot see any differences in principle...
We don't consider warehouses & stores to be a "slippery slope" away from toll roads, so no, I really don't see any good-faith slippery slope argument that equates enforcing the law against X with government censorship of ISPs.
I mean even just calling it censorship is already trying to shove a particular bias into the picture. Is it government censorship that you aren't allowed to shout "fire!" in a crowded theater? Yes. Is that also a useful feature of a functional society? Also yes. Was that a "slippery slope"? Nope. Turns out people can handle that nuance just fine.
X is most definitely not a dumb pipe; besides the sender and receiver, you also have humans choosing what content (whether directly or indirectly) is promoted for wide dissemination, relatively suppressed, or outright blocked.
If you have a recommendation algorithm you are not a dumb pipe.
3D printers don't synthesize content for you though. If they could generate 3D models of CSAM from thin air and then print them, I'm sure they'd be investigated too if they were sold with no guardrails in place.
You are literally trolling. No one is banning AI entirely. However, AI shouldn't spit out adult content. Let's not enable people to harm others easily with little to no effort.
If you had argued that it’s impossible to track what is made on local models, and we can no longer maintain hashes of known CP, it would have been a fair statement of current reality.
——-
You’ve said that whatever is behind door number 1 is unacceptable.
Behind door number 2, “holding tool users responsible”, is tracking every item generated via AI, and being able to hold those users responsible.
If you don’t like door number 2, we have door number 3 - which is letting things be.
For any member of society, opening door 3 is straight out because the status quo is worse than reality before AI.
If you reject door 1 though, you are left with tech monitoring. Which will be challenged because of its invasive nature.
Holding Platforms responsible is about the only option that works, at least until platforms tell people they can’t do it.
Behind door number 4: whenever you find a crime, start an investigation and get a warrant. You would only need a couple of cases to have a chilling enough effect.
You won't find much agreement with your opinion amongst most people. No matter how many "this should and this shouldn't" statements are written down by a single individual, that's not how morals work.
But how would we bring down our political boogieman Elon Musk if we take that approach?
Everything I read from X's competitors in the media tells me to hate X, and hate Elon.
If we prosecute people not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?
Corporations are also people.
(note that this isn't a raid on Musk personally! It's a raid on X corp for the actions of X corp and posts made under the @grok account by X corp)
People defending allowing CSAM content was definitely not on my bingo card for 2026.
All free speech discussions lead here sadly.
Fucked up times we live in
How? X is hostile to any party attempting to bring justice to its users that are breaking the law. This is a last recourse, after X and its owner stated plainly that they don't see anything wrong with generating CSAM or pornographic images of non-consenting people, and that they won't do anything about it.
A court order, the IPs of the users, sue the users. It is not X's job to bring justice.
X will not provide this information to the French justice system. What then? It's also insane that you believe the company that built a "commit crime" button bears no responsibility whatsoever in this debacle.
It is illegal in the USA too, so the French authorities would have no problem getting assistance from the American ones.
Elon Musk spent a lot of money getting his pony elected you think he isn't going to ride it?
You really believe that? You think the Trump administration will force Musk's X to give the French State data about its users so CSAM abusers can be prosecuted there? This is delusional, to say the least. And let's not even touch on the subject of Trump and Musk both being actual pedophiles themselves.
[flagged]
Enforcement of anti-CSAM law has been a significant thing for a long time. It's in no way "only now". Even the "free speech" platforms banned it because they knew they would get raided otherwise. There are long standing tools for dealing with it, such as a database of known hashes of material. There's even a little box you can tick in Cloudflare to automatically check outgoing material from your own site against that database - because this is a strict liability offence, and you are liable if other people upload it to you where it can be re-downloaded.
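To make the known-hash-database idea concrete, here is a rough, illustrative sketch (Python; the `known_hashes.txt` file and the function names are hypothetical). Real scanners such as PhotoDNA use perceptual hashes that survive resizing and re-encoding rather than the exact SHA-256 matching shown here, so treat this as a toy outline of the workflow, not the actual mechanism:

```python
import hashlib
from pathlib import Path

# Hypothetical local blocklist: one lowercase hex digest per line.
# Real deployments query an industry hash-sharing service and use
# perceptual hashes, since an exact digest is defeated by any re-encode.
BLOCKLIST_PATH = Path("known_hashes.txt")

def load_blocklist(path: Path) -> set[str]:
    """Read the blocklist file into a set for O(1) lookups."""
    return {line.strip().lower() for line in path.read_text().splitlines() if line.strip()}

def sha256_of(file_path: Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with file_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def should_block(upload: Path, blocklist: set[str]) -> bool:
    """Return True if the uploaded file's digest matches a known hash."""
    return sha256_of(upload) in blocklist

if __name__ == "__main__":
    blocklist = load_blocklist(BLOCKLIST_PATH)
    print(should_block(Path("upload.jpg"), blocklist))
```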
What's new is that X automated the production of obscene or sexualised images by providing grok. This was also done in a way that confronted everyone; it's very different from a black market, this is basically a harassment tool for use against women and girls.
> What's new is that X automated the production of obscene or sexualised images by providing grok.
Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.
So let me make a suggestion: maybe France or the EU should ban its citizens from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc.?
Hit them hard at the money level... it wouldn't be more authoritarian than something like ChatControl or restricting access to VPNs.
And actually all the mechanisms are already in place to implement something like that.
> Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.
I don't get what's difficult to understand or believe here. Grok causes a big issue in practice right now, a larger issue than photoshop, and it should be easy for X to regulate it themselves like the competition does but they don't, so the state intervenes.
> maybe France or the EU should ban its citizens from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc.?
You're basically asking "why do a surgical strike when you can do carpet bombing"? A surgical strike is used to target the actual problem. With carpet bombing you mostly cause collateral damage.
> it should be easy for X to regulate it themselves like the competition does but they don't
Yes they do regulate it. But then people find exploits just like the competition.
I don't think that's a candid description of how X handled this.
I don’t think saying other people aren’t candid is polite or advances the conversation.
Just calling out the false equivalence (Grok self-regulation: dragging their feet and doing the absolute minimum too late after deflecting all blame on the users, while the competition proactively tries to harden the models against such use)
Grok has always proactively had limits on adult content from the day it was first released publicly. There is an equivalence; you're stating that it's false, but I haven't seen any reason to think that. I'm calling out the hypocrisy.
> maybe France or the EU should ban its citizens from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc.?
I'm sorry, but I don't understand any of the arguments above.
If we presume a dark control motivation then having shares in the entities you want to control is the best form of control there is.
[flagged]
The different factors are scale (now "deepfakes" can be automatically produced), and endorsement. It is significant that all these images aren't being posted by random users, they are appearing under the company's @grok handle. Therefore they are speech by X, so it's X that's getting raided.
There is no content like that on Bluesky or Mastodon. Show the evidence.
> There is no content like that on [...] Mastodon.
How can you say that nobody is posting CSAM on a massive decentralized social network with thousands of servers?
https://bsky.social/about/blog/01-17-2025-moderation-2024
"In 2024, Bluesky submitted 1,154 reports for confirmed CSAM to the National Centre for Missing and Exploited Children (NCMEC). Reports consist of the account details, along with manually reviewed media by one of our specialized child safety moderators. Each report can involve many pieces of media, though most reports involve under five pieces of media."
If it wasn't there, there would be no reports.
But that is the difference, they actually do something against it.
https://blog.x.com/en_us/topics/company/2023/an-update-on-ou...
[flagged]
There are multiple valid reasons to fight realistic computer-generated CSAM content.
Uncontrolled proliferation of AI-CSAM makes detection of "genuine" data much harder, prosecution of perpetrators more difficult, and specifically in many of the grok cases it harms young victims that were used as templates for the material.
Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
> Uncontrolled proliferation of AI-CSAM makes detection of "genuine" data much harder
I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, that is considered harmful? As such the 5th amendment would fall under the same category and so would encryption. Making law enforcement have to work harder to find evidence of a crime cannot be criminalized unless you can come up with a reason why the actions themselves deserve to be criminalized.
> specifically in many of the grok cases it harms young victims that were used as templates for the material.
What is the criteria for this? If something is suitably transformed such that the original model for it is not discernable or identifiable, how can it harm them?
Do not take these as an argument against the idea you are arguing for, but as rebuttals against arguments that are not convincing, or if they were, would be terrible if applied generally.
If there is a glut of legal, AI generated CSAM material then this provides a lot of deniability for criminal creators/spreaders that cause genuine harm, and reduces "vigilance" of prosecutors, too ("it's probably just AI generated anyway...").
You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
> What is the criteria for this?
My criteria would be victims suffering personally from the generated material.
The "no harm" argument only really applies if victims and their social bubble never find out about the material (but that did happen, sometimes intentionally, in many cases).
You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.
> If there is a glut of legal, AI generated CSAM material then this provides a lot of deniability for criminal creators/spreaders that cause genuine harm, and reduces "vigilance" of prosecutors, too ("it's probably just AI generated anyway...").
> You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
I don't know about that. Would "I didn't know it was real" really count as a legal defense?
> I don't know about that. Would "I didn't know it was real" really count as a legal defense?
Absolutely-- prosecution would presumably need to at least show that you could have known the material was "genuine".
This could be a huge legal boon for prosecuted "direct customers" and co-perpetrators that can only be linked via shared material.
I'm really not convinced. This sounds very idealistic to me. The "justice" system is way more brutal in real life.
> You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
But that reason is highly problematic. Laws should be able to stand on their own for their reasons. Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.
> You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.
I thought you were saying that the kids who were in the dataset that the model was trained on would be harmed. I agree with what I assume you meant based on your reply, which is people who had their likeness altered are harmed.
> Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.
I don't understand how that's the same reasoning at all... Encryption serves one's individual privacy and preserves it against malicious actors. I'd guess that's a fundamental right in most jurisdictions, globally.
We're talking CSAM here and shifting its creation into the virtual world through some GenAI prompts. Just because that content has been created artificially, doesn't make its storage and distribution any more legal.
It isn't some reductionist "this makes enforcement of other laws harder", but it's rather that the illegal distribution of artificially generated content acts as fraudulent obstruction in the prosecution of authentic, highly illegal, content - content with malicious actors and physically affected victims.
> Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.
Yes. I almost completely agree with your outlook, but I think that many of our laws trade such individual freedoms for better society-wide outcomes, and those are often good tradeoffs.
Just consider gun legislation, driving licenses, KYC laws in finance, etc: Should the state have any business interfering there? I'd argue in isolation (ideally) not; but all those lead to huge gains for society, making it much less likely to be murdered by intoxicated drivers (or machine-gunners) and limit fraud, crime and corruption.
So even if laws look kinda bad from a purely theoretical-ethics point of view it's still important to look at the actual effects that they have before dismissing them as unjust in my view.
Laws against money laundering come to mind. It's illegal for you to send money from your legal business to my personal account and for me to send it from my personal account to your other legal business, not because the net result is illegal, but because me being in the middle makes it harder for "law enforcement" to trace the transaction.
> How about "R-Rated" violence like we see in popular movies?
Movie ratings are a good example of a system for restricting who sees unacceptable content, yes.
More to the point, now that most productions are using intimacy coordinators, there's a degree of certainty around the consent of R-rated images.
There's basically no consent with what Grok is doing.
> There's basically no consent with what Grok is doing.
Wait how do you get consent from people that don't exist?
AI fake nudes were made from very real and very alive people
This part of the thread wasn't about that.
> I remember when CSAM meant actual children not computer graphics.
The "oh its photoshop" defence was an early one, which required the law to change in the uk to be "depictions" of children, so that people who talk about ebephiles don't have an out for creating/distributing illegal content.
There still needs to be sexual abuse depicted, no? Just naked kids should not be an issue, right?
If I found a folder with a hundred images of naked kids on your PC, I would report you to authorities, regardless of what pose kids are depicted in. So I guess the answer is no.
In US law it seems the definition of CSAM does not include naked minors that do not show sexually explicit conduct: https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
Naked kid pictures intended for sexual gratification are illegal in most countries
Hard to know the intent of a picture in most cases. E.g. there used to be a magazine for teens when I grew up showing a picture of a naked adolescent of each sex in every edition (Bravo Dr Sommer). The intent was to educate teens and to make them feel less ashamed. I bet there were people who used these for sexual gratification. Should that have been a reason to ban them? I don't think so.
Educational nudity where the subject consented (possible for teenagers over 16 in Germany) and the publication complied with the law is not in the same category as CSAM or non-consensual sexual imagery. In the former, misuse by a minority doesn't automatically make the publication illegal. In the latter, the harm is intrinsic: a child cannot legally consent, and non-consensual sexual images are a direct rights violation.
Do you think the judge is stupid? The naked kid pics aren't printed in an anatomical textbook. Because they're AI hallucinations, I doubt they're even anatomically correct.
[flagged]
A generated picture of a family member in a bikini is an issue?
I don't see it...
Generated by a stranger, then posted online on X for the whole world to see. Are you really OK with that, even if the subject was your 10-year-old daughter?
We have privacy laws forbidding anyone from sharing most pictures of people without their consent, independent of what they are wearing in the picture. This is not new. That stranger should be investigated. I don't see why it needs to be CSAM for us to be upset about it.
Yes, and these laws are getting blatantly broken, while neither X nor the US Government are willing to do anything about it. Which is why French authorities have decided to take the matter in their own hands.
[flagged]
[flagged]
There is a difference between running around in a bikini and people creating sexy pictures of yourself without consent.
You do understand that?
I do understand that, but in this thread there was no mention of anything sexy so far. I am just pedantic because I think it is important when it comes to criminal accusations. Sexuality and nakedness are two different things.
Then check what the discussion is about, this is not about some funny ai pics
It started with CSAM but then derailed into people in bikinis ¯\_(ツ)_/¯
We already have laws against distributing CSAM. Let's jail the X CEO then
I think you are trolling here.
First you defend CSAM; then, when we were talking about sexualised images of people without their consent, you were trying to diminish this as "just bikini" pics.
[flagged]
Exactly! This should not be ok
[flagged]
What the hell?
As a father, I believe there shouldn't be any CSAM content anywhere.
And consider that it has already been shown that these models apparently had CSAM in their training data.
Also, what about the nudes of actual people? That is an invasion of privacy.
I am shocked that we are even discussing this.
Is this post low-key advocating for anime csam in the name of freedom?
[flagged]
They are not doing anything, and that is the problem. X just doesn't care at all.
But they are happy to censor for countries like Turkey.
Hypocrisy level 1 million
[flagged]
[flagged]