I disagree. If AI-generated content is breaking the law, prosecute the people who use the tools, not the tool makers.
A provider should have no responsibility for how its tools are used. That is on the users. This is a can of worms that should stay closed, because we all lose freedoms just because of a couple of bad actors. An AI tool's main job is to obey. We are hurtling toward an "I'm sorry, Dave. I'm afraid I can't do that" future at breakneck speed.
I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
Agreed. Let's try to be less divisive. Everyone has got a fair point.
Yes, AI chatbots have to do everything in their power to prevent users from easily generating such content.
AND
Yes, people who do so (even on a self-hosted model) have to be punished.
I believe it is OK that Grok is being investigated because the point is to figure out whether this was intentional or not.
Just my opinion.
>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.
The rest of the cases you list are harms to the people using the tools/products, not harms that people using the tools inflict on third parties.
We are literally arguing about 3D printer control two topics downstream. 3D printers can in theory be used for CSAM too. So we should totally ban them, right? So should pencils, paper, lasers, and drawing tablets.
That is not the argument. No one is arguing for banning open-source LLMs on Hugging Face that could potentially create problematic content. But X provides not only an AI model but a platform and distribution as well, so it is inherently different.
> No one is arguing about banning open source LLMs that could potentially create problematic content on huggingface,
If LLMs should have guardrails, why should open-source ones be exempt? What about people hosting models on Hugging Face? What if you use a model both distributed and hosted by Hugging Face?
No it is not. X is a dumb pipe. You have humans on both ends. Arrest them, summarily execute them, whatever. People go after X because it is a choke point and easy.
First you argue about the model, now the platform. Two different things.
If a platform encourages and doesn’t moderate at all, yes we should go after the platform.
Imagine a newspaper publishing content like that, and saying they are not responsible for their journalists
> X is a dumb pipe.
X also actively distributes and profits off of CSAM. Why shouldn't the law apply to distribution centers?
There's a slippery slope version of your argument where your ISP is responsible for censoring content that your government does not like.
I mean, I thought that was basically already the law in the UK.
I can see practical differences between X/twitter doing moderation and the full ISP censorship, but I cannot see any differences in principle...
We don't consider warehouses and stores to be a "slippery slope" away from toll roads, so no, I really don't see any good-faith slippery-slope argument that equates enforcing the law against X with government censorship of ISPs.
I mean even just calling it censorship is already trying to shove a particular bias into the picture. Is it government censorship that you aren't allowed to shout "fire!" in a crowded theater? Yes. Is that also a useful feature of a functional society? Also yes. Was that a "slippery slope"? Nope. Turns out people can handle that nuance just fine.
X is most definitely not a dumb pipe: besides the sender and receiver, you also have humans choosing what content (whether directly or indirectly) is promoted for wide dissemination, relatively suppressed, or outright blocked.
If you have a recommendation algorithm, you are not a dumb pipe.
3D printers don't synthesize content for you though. If they could generate 3D models of CSAM from thin air and then print them, I'm sure they'd be investigated too if they were sold with no guardrails in place.
You are literally trolling. No one is banning AI entirely. However, AI shouldn't spit out adult content. Let's not enable people to harm others easily, with little to no effort.
If you had argued that it’s impossible to track what is made on local models, and we can no longer maintain hashes of known CP, it would have been a fair statement of current reality.
——-
You’ve said that whatever is behind door number 1 is unacceptable.
Behind door number 2, “holding tool users responsible”, is tracking every item generated via AI, and being able to hold those users responsible.
If you don’t like door number 2, we have door number 3 - which is letting things be.
For any member of society, door 3 is straight out, because the resulting status quo is worse than the pre-AI reality.
If you reject door 1 though, you are left with tech monitoring. Which will be challenged because of its invasive nature.
Holding platforms responsible is about the only option that works, at least until platforms tell people they can't do it.
Behind door number 4 is: whenever you find a crime, start an investigation and get a warrant. You will only need a couple of cases to create a strong enough chilling effect.
You won't find much agreement with your opinion amongst most people. No matter how many "this should and this shouldn't" rules a single individual writes down, that's not how morals work.
But how would we bring down our political boogieman Elon Musk if we take that approach?
Everything I read from X's competitors in the media tells me to hate X, and hate Elon.
If we prosecute people not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?
Corporations are also people.
(note that this isn't a raid on Musk personally! It's a raid on X corp for the actions of X corp and posts made under the @grok account by X corp)
People defending allowing CSAM content was definitely not on my bingo card for 2026.
All free speech discussions lead here sadly.
Fucked up times we live in
How? X is hostile to any party attempting to bring justice to its users that are breaking the law. This is a last recourse, after X and its owner stated plainly that they don't see anything wrong with generating CSAM or pornographic images of non-consenting people, and that they won't do anything about it.
Court order, IPs of the users, sue the users. It is not X's job to bring justice.
X will not provide this information to the French justice system. What then? It is also insane that you believe the company that built a "commit crime" button bears no responsibility whatsoever for this debacle.
It is illegal in the USA too, so the French authorities would absolutely have no problem getting assistance from the American ones.
Elon Musk spent a lot of money getting his pony elected. You think he isn't going to ride it?
You really believe that? You think the Trump administration will force Musk's X to give the French State data about its users so CSAM abusers can be prosecuted there? This is delusional, to say the least. And let's not even touch on the subject of Trump and Musk both being actual pedophiles themselves.