My natural reaction here is, I think, the same as most others': yes, Grok / X bad, they shouldn't be able to generate CSAM content / deepfakes.

But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user. I've seen a few arguments here that mostly come down to:

1. It's Grok, the LLM, generating the content, not the user.

2. The distribution: this isn't just on the user's computer but is instead posted on X.

For 1., it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.

For 2., if Grok instead generated the content for download, would the liability go away? What if Grok generated the content for download only and the user then uploaded it to X manually? If Grok isn't liable in that case, then why does the automatic posting (from the user's instructions) make it different? If Grok is liable in that case too, then it's not really about the distribution anymore.

There are some comparisons to Photoshop: if I created a deepfake with Photoshop, I'm liable, not Adobe. If Photoshop had an "upload to X" button, and I created CSAM using Photoshop and hit the button to upload it to X directly, is Adobe now liable?

What am I missing?

> But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user.

This seems to rest on the false assumptions that: (1) legal liability is exclusive, and (2) investigation of X is not important both to X’s liability and to pursuing the users, to the extent that they would also be subject to liability.

X/xAI may be liable for any or all of the following reasons:

* xAI generated virtual child pornography with the likenesses of actual children, which is generally illegal, even if that service was procured by a third party.

* X and xAI distributed virtual child pornography with the likenesses of actual children, which is generally illegal, irrespective of who generated and supplied it.

* To the extent that liability for either of the first two bullet points would be eliminated or mitigated by a lack of knowledge of the prohibited content and by prompt action once the actor became aware of it, X often punished users for the prompts producing the virtual child pornography without taking prompt action to remove the xAI-generated virtual child pornography resulting from those prompts, demonstrating knowledge and intent.

* When the epidemic of Grok-generated nonconsensual pornography, including child pornography, drew attention, X and xAI responded by attempting to monetize the capability, limiting the tool to paid X subscribers only, showing an attempt to profit commercially from it, which is, again, generally illegal.

> What am I missing?

A deep hatred of Elon Musk

> But I am having trouble justifying in a consistent manner why Grok / X should be liable here instead of the user.

Because Grok and X aren't even doing the most basic filtering they could do to pretend to filter out CSAM.

Filtering on the platform, or filtering Grok's output, though? If the filtering / flagging on X is insufficient, then that is a separate issue independent of Grok. If you mean filtering Grok's output, then while not doing so is irresponsible in my view, I don't see why that's different from, say, Photoshop not filtering its output.

> For 1., it seems to break down if we look more broadly at how LLMs are used, e.g. as a coding agent. We're basically starting to treat LLMs as a higher-level framework now. We don't hold vendors of programming languages or frameworks responsible if someone uses them to create CSAM. Yes, the LLM generated the content, but the user still provided the instructions to do so.

LLMs are completely different to programming languages or even Photoshop.

You can't type a sentence and get CSAM images within 10 seconds with Photoshop. LLMs are also built on training material, unlike the traditional tools in Photoshop. Plenty of CSAM has been found in the training data sets, but, shock horror, there's apparently not enough information to know "where it came from". There's a non-zero chance that the CSAM Grok is vomiting out is based on "real" CSAM of people being abused.