Actually I deny that. What proof is there that peddlers of child pornography are using chat to distribute CSAM? What proof is there that they won’t simply move to another undocumented form of distribution once this is implemented, leaving the criminals unmonitored but every law-abiding citizen monitored?
> What proof is there that peddlers of child pornography are using chat to distribute CSAM?
Are you kidding me? How do you think predators get in contact with children? Over social media that children use, obviously.
And of course many criminals use chat. Most have no clue about encryption, like the vast majority of humans.
> How do you think predators get in contact with children?
I thought it was pretty common knowledge that the vast, vast majority of the perpetrators of these offenses are either family members or people known and trusted by the family: friends of an older sibling, friends of parents, stepparents, teachers, priests, etc. The bogeyman of the evil stranger paedo lurking on social media, cold-calling random kids, is an edge case.
Have you really never seen those groups of teenagers who lure a predator into meeting a kid somewhere (they do that over mainstream social media) and then beat and humiliate said predator?
I thought it was in fashion. Happens where I live.
“Distribute” not “contact”. Unless you want to scan all chat messages for potential signs of adults engaging in grooming of children too? Talk about a slippery slope, you’re basically making my point.
> Unless you want to scan all chat messages for potential signs of adults engaging in grooming of children too?
Well, the point is to scan all messages, period.
And then to detect those that come from predators, not from ordinary adults. How often do parents convince their children to send... revealing pictures? Or to meet them somewhere? How often do parents introduce themselves to their children in messages?
You can't seriously believe that a conversation between parents and children always looks like a conversation between a predator and children, can you?
Sure, but who's reading the conversation to determine whether it "looks suspicious"? A regex? A neural network? Who decides the algorithm, and can you really believe they won't ever change it to serve other, more nefarious purposes like suppressing dissent?
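To make the false-positive worry concrete: here is a toy sketch (not any real system's logic, just a hypothetical keyword filter of the kind a naive scanner might use) showing how perfectly innocent parent-child messages trip the same patterns a grooming detector would look for:

```python
import re

# Hypothetical "grooming" keyword patterns -- purely illustrative,
# not taken from any actual scanning proposal.
SUSPICIOUS = re.compile(
    r"\b(meet me|send (me )?a (photo|picture)|don't tell)\b",
    re.IGNORECASE,
)

# Two entirely innocent messages from a parent to their child.
messages = [
    "Don't tell your mom, I got her a birthday gift. Meet me after school.",
    "Send me a picture of the broken bike so I can order the part.",
]

for msg in messages:
    if SUSPICIOUS.search(msg):
        print("FLAGGED:", msg)
```

Both messages get flagged. A neural classifier would be less crude than a regex, but the underlying problem is the same: the model, its training data, and its thresholds are all decided by whoever operates the system.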
> Who decides the algorithm, and can you really believe they won't ever change it to serve other, more nefarious purposes like suppressing dissent?
YES. That's the problem. Whoever controls it has that power. We don't want that. That's the argument against ChatControl: "imagine that those who get in power are against you, and imagine what they can do if they abuse this tool".
But saying "a law enforcement officer may see a false positive between a parent and their child, and I find this insufferable" won't convince many people, IMHO.
Dude, you're basically arguing that we should bring the equivalent of the App Store review process to people's chat history. You know that automated and human reviews are an absolute nightmare for people to navigate, that errors are made constantly, and that people complain about it loudly. And the plan here is to escalate the stakes from whether or not your app gets published to whether or not you stay out of jail.
Seems like a terrible idea.
Dude, I am not arguing that at all, you should read before you answer.
I am saying this:
> You can't deny that if you can read all communications, then it's easier to detect CSAM than if you can't even see it.
I am against ChatControl, but people who say "it shouldn't exist because it is useless because it cannot ever help detect anything illegal" are wrong.
It doesn't help the cause to flood people with invalid arguments.