There are multiple valid reasons to fight realistic computer-generated CSAM content.
Uncontrolled proliferation of AI-CSAM makes detection of "genuine" data much harder and prosecution of perpetrators more difficult, and specifically in many of the Grok cases it harms young victims who were used as templates for the material.
Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
> Uncontrolled proliferation of AI-CSAM makes detection of "genuine" data much harder
I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, that is considered harmful? By that logic the Fifth Amendment would fall into the same category, and so would encryption. Making law enforcement work harder to find evidence of a crime can't itself be criminalized unless you can come up with a reason why the actions themselves deserve to be criminalized.
> specifically in many of the Grok cases it harms young victims who were used as templates for the material.
What is the criterion for this? If something is suitably transformed such that the original model for it is not discernible or identifiable, how can it harm them?
Do not take these as arguments against the idea you are arguing for, but as rebuttals of arguments that are not convincing or that, if they were, would be terrible if applied generally.
If there is a glut of legal, AI generated CSAM material then this provides a lot of deniability for criminal creators/spreaders that cause genuine harm, and reduces "vigilance" of prosecutors, too ("it's probably just AI generated anyway...").
You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
> What is the criterion for this?
My criterion would be victims suffering personally from the generated material.
The "no harm" argument only really applies if victims and their social bubble never find out about the material (but that did happen, sometimes intentionally, in many cases).
You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.
> If there is a glut of legal, AI generated CSAM material then this provides a lot of deniability for criminal creators/spreaders that cause genuine harm, and reduces "vigilance" of prosecutors, too ("it's probably just AI generated anyway...").
> You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
I don't know about that. Would "I didn't know it was real" really count as a legal defense?
> I don't know about that. Would "I didn't know it was real" really count as a legal defense?
Absolutely-- prosecution would presumably need to at least show that you could have known the material was "genuine".
This could be a huge legal boon for prosecuted "direct customers" and co-perpetrators that can only be linked via shared material.
I'm really not convinced. This sounds very idealistic to me. The "justice" system is way more brutal in real life.
> You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
But that reason is highly problematic. Laws should be able to stand on their own for their reasons. Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.
> You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.
I thought you were saying that the kids who were in the dataset that the model was trained on would be harmed. I agree with what I assume you meant based on your reply, which is people who had their likeness altered are harmed.
> Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.
I don't understand how that's the same reasoning at all... Encryption serves one's individual privacy and preserves it against malicious actors. I'd guess that's a fundamental right in most jurisdictions, globally.
We're talking CSAM here, and shifting its creation into the virtual world through some GenAI prompts. Just because that content has been created artificially doesn't make its storage and distribution any more legal.
It isn't some reductionist "this makes enforcement of other laws harder"; rather, the illegal distribution of artificially generated content acts as fraudulent obstruction in the prosecution of authentic, highly illegal content - content with malicious actors and physically affected victims.
> Saying 'this makes enforcement of other laws harder' does not do that. You could use the same reasoning against encryption.
Yes. I almost completely agree with your outlook, but I think that many of our laws trade such individual freedoms for better society-wide outcomes, and those are often good tradeoffs.
Just consider gun legislation, driving licenses, KYC laws in finance, etc.: Should the state have any business interfering there? I'd argue in isolation (ideally) not; but all of those lead to huge gains for society, making it much less likely that you'll be murdered by intoxicated drivers (or machine-gunners) and limiting fraud, crime and corruption.
So even if laws look kinda bad from a purely theoretical-ethics point of view, it's still important, in my view, to look at the actual effects that they have before dismissing them as unjust.
Laws against money laundering come to mind. It's illegal for you to send money from your legal business to my personal account and for me to send it from my personal account to your other legal business, not because the net result is illegal, but because me being in the middle makes it harder for "law enforcement" to trace the transaction.
> How about "R-Rated" violence like we see in popular movies?
Movie ratings are a good example of a system for restricting who sees unacceptable content, yes.
More to the point, now that most productions are using intimacy coordinators, there's a degree of certainty around the consent of R-rated images.
There's basically no consent with what Grok is doing.
> There's basically no consent with what Grok is doing.
Wait, how do you get consent from people who don't exist?
AI fake nudes were made from very real and very alive people
This part of the thread wasn't about that.
> I remember when CSAM meant actual children not computer graphics.
The "oh its photoshop" defence was an early one, which required the law to change in the uk to be "depictions" of children, so that people who talk about ebephiles don't have an out for creating/distributing illegal content.
There still needs to be sexual abuse depicted, no? Just naked kids should not be an issue, right?
If I found a folder with a hundred images of naked kids on your PC, I would report you to authorities, regardless of what pose kids are depicted in. So I guess the answer is no.
In US law it seems the definition of CSAM does not include images of naked minors that do not show sexually explicit conduct: https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
Naked kid pictures intended for sexual gratification are illegal in most countries
Hard to know the intent of a picture in most cases. E.g. there used to be a magazine for teens when I grew up showing a picture of a naked adolescent of each sex in every edition (Bravo Dr Sommer). The intent was to educate teens and to make them feel less ashamed. I bet there were people who used these for sexual gratification. Should that have been a reason to ban them? I don't think so.
Educational nudity where the subject consented (which is possible for teenagers over 16 in Germany, and where the publication complied with the law) is not in the same category as CSAM or non-consensual sexual imagery. In the former, misuse by a minority doesn't automatically make the publication illegal. In the latter, the harm is intrinsic: a child cannot legally consent, and non-consensual sexual images are a direct rights violation.
Do you think the judge is stupid? The naked kid pics aren't printed in an anatomical textbook. Because they're AI hallucinations, I doubt they're even anatomically correct.
[flagged]
A generated picture of a family member in a bikini is an issue?
I don't see it...
Generated by a stranger, then posted online on X for the whole world to see. Are you really OK with that, even if the subject was your 10-year-old daughter?
We have privacy laws forbidding anyone from sharing most pictures of people without their consent, independent of what they are wearing in the picture. This is not new. That stranger should be investigated. I don't see why it needs to be CSAM for us to be upset about it.
Yes, and these laws are getting blatantly broken, while neither X nor the US Government are willing to do anything about it. Which is why French authorities have decided to take the matter into their own hands.
[flagged]
[flagged]
There is a difference between running around in a bikini and people creating sexy pictures of yourself without consent.
You do understand that?
I do understand that, but in this thread there was no mention of anything sexy so far. I am just being pedantic because I think it is important when it comes to criminal accusations. Sexuality and nakedness are two different things.
Then check what the discussion is about; this is not about some funny AI pics.
It started with CSAM but then derailed into people in bikinis ¯\_(ツ)_/¯
We already have laws against distributing CSAM. Let's jail the X CEO then
I think you are trolling here.
First you defend CSAM; then, when we were talking about sexualised images of people without their consent, you were trying to diminish them as „just bikini" pics.
[flagged]
Exactly! This should not be ok
[flagged]
What the hell?
As a father, I believe there shouldn't be any CSAM content anywhere.
And consider that it has already been shown that these models apparently had CSAM content in their training data.
Also, what about the nudes of actual people? That is an invasion of privacy.
I am shocked that we are even discussing this.
Is this post low-key advocating for anime CSAM in the name of freedom?