I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
Agreed. Let's try to be less divisive. Everyone has got a fair point.
Yes, AI chatbots have to do everything in their power to prevent users from easily generating such content.
AND
Yes, people who do so (even on a self-hosted model) have to be punished.
I believe it is OK that Grok is being investigated because the point is to figure out whether this was intentional or not.
Just my opinion.
>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.
The rest of the cases you list are harms to the people using the tools/products, not harms that users of the tools inflict on third parties.
We are literally arguing about 3D printer control two topics downstream. 3D printers can, in theory, be used for CSAM too. So we should totally ban them - right? So can pencils, paper, lasers, and drawing tablets.
That is not the argument. No one is arguing for banning open source LLMs on Hugging Face that could potentially create problematic content, but X provides not only an AI model but also a platform and distribution, so that is inherently different.
> No one is arguing for banning open source LLMs on Hugging Face that could potentially create problematic content,
If LLMs should have guardrails, why should open source ones be exempt? What about people hosting models on Hugging Face? What if you use a model both distributed and hosted by Hugging Face?
No, it is not. X is a dumb pipe. You have humans on both ends. Arrest them, summarily execute them, whatever. You go after X because it is a choke point and an easy target.
First you argue about the model, now the platform. Two different things.
If a platform encourages this content and doesn't moderate at all, then yes, we should go after the platform.
Imagine a newspaper publishing content like that and saying they are not responsible for their journalists.
> X is dumb pipe.
X also actively distributes and profits off of CSAM. Why shouldn't the law apply to distribution centers?
There's a slippery slope version of your argument where your ISP is responsible for censoring content that your government does not like.
I mean, I thought that was basically already the law in the UK.
I can see practical differences between X/Twitter doing moderation and full ISP censorship, but I cannot see any difference in principle...
We don't consider warehouses & stores to be a "slippery slope" away from toll roads, so no, I really don't see any good faith slippery slope argument that makes enforcing the law against X the same as government censorship of ISPs.
I mean, even just calling it censorship is already trying to shove a particular bias into the picture. Is it government censorship that you aren't allowed to shout "fire!" in a crowded theater? Yes. Is that also a useful feature of a functional society? Also yes. Was that a "slippery slope"? Nope. Turns out people can handle that nuance just fine.
X is most definitely not a dumb pipe. Besides the sender and receiver, you also have humans choosing what content (whether directly or indirectly) is promoted for wide dissemination, relatively suppressed, or outright blocked.
If you have a recommendation algorithm, you are not a dumb pipe.
3D printers don't synthesize content for you though. If they could generate 3D models of CSAM from thin air and then print them, I'm sure they'd be investigated too if they were sold with no guardrails in place.
You are literally trolling. No one is banning AI entirely. However, AI shouldn't spit out adult content. Let's not enable people to harm others with little to no effort.