Most arguments I see against ChatControl sound like bullshit to me. How do we expect to convince anyone to go against ChatControl with those?
I feel uneasy about ChatControl; I don't want my devices to run proprietary, opaque algorithms on all my data. And it feels like it fundamentally has to be opaque: nobody can publish an open source list of illegal material together with its hashes (precisely because it is illegal). That is why I don't want ChatControl: given what it means, I would want someone to formally prove that it cannot be abused. The classic example: what happens if someone in power decides to use this system to track their opponents?
But most comments and most articles talk about anything but that, with honestly weird, unsupported claims:
> It's the end of encryption
How so? What appears on my screen is not encrypted and will never be encrypted, because I need to read it. We all decrypt our messages to read them, and we all write them unencrypted before we send them.
> It won't fight CSAM
Who are you kidding? Of course it will. It will not solve the problem entirely, but it will be pretty damn effective at detecting CSAM when CSAM is present in the data being scanned.
> With ChatControl, every message gets automatically checked, assuming everyone is guilty until proven innocent and effectively reversing the presumption of innocence.
When you board a plane, you're searched. When you enter a concert hall, you're searched. Nobody would say "you should let me board the plane with whatever I put in my bag, because I'm presumed innocent".
> While your messages still get encrypted during transmission, the system defeats the purpose of end-to-end encryption by examining your content before it gets encrypted.
Before it gets encrypted, it is not encrypted. So the system is not breaking the encryption. If (and that's a big if) this system were open source, such that anyone could check what code it runs and prove that it is not being abused, then it would be perfectly fine. The problem is that we cannot know what the system does. But that's a different point (and one of the only valid arguments against ChatControl).
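To make this concrete, here is a rough sketch of where the scanning sits in the pipeline. Everything here is hypothetical (the blocklist, `scan()`, `report_to_authority()`) and the `e2e_encrypt()` is a toy stand-in; real proposals would use perceptual hashes or ML classifiers and real end-to-end encryption. The point is only the ordering: the check runs on the plaintext, and the encryption step itself is left untouched.

```python
# Minimal sketch, assuming a hypothetical hash-based scanner.
import hashlib
import secrets

# Opaque blocklist pushed to the device. In reality it could not be
# published (and might not even be a list of plain hashes).
BLOCKLIST = {hashlib.sha256(b"stand-in for flagged content").hexdigest()}

def scan(plaintext: bytes) -> bool:
    """Check the *plaintext* against the opaque blocklist."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKLIST

def report_to_authority(plaintext: bytes) -> None:
    """Stand-in for whatever reporting channel the regulation would mandate."""
    print("match found; this is where a report would be sent")

def e2e_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Toy stand-in for the real E2E encryption step (e.g. the Signal protocol)."""
    return bytes(b ^ k for b, k in zip(plaintext, key))

def send(plaintext: bytes) -> bytes:
    if scan(plaintext):                 # scanning happens on the plaintext...
        report_to_authority(plaintext)  # ...before encryption is even involved
    key = secrets.token_bytes(len(plaintext))
    return e2e_encrypt(plaintext, key)  # the encryption itself is untouched

ciphertext = send(b"hello")  # scanned, then encrypted as usual
```

The encryption primitive is intact; what changes is that your device inspects the plaintext first, against a list you cannot see.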
> Proton point out this approach might be worse than encryption backdoors. Backdoors give authorities access to communications you share with others. This system examines everything on your device, whether you share it or not.
How is it worse? Backdoors give access to communications; this system (on paper) does not. This system is better, unless we admit that we can't easily audit what it is actually doing. Which, again, is the one valid argument against ChatControl.
> The regulation also pushes for mandatory age verification systems. No viable, privacy-respecting age verification technology currently exists. These systems would eliminate online anonymity, requiring users to prove their identity to access digital services.
This is plain wrong. There are ways to do age verification anonymously, period: anonymous credentials and zero-knowledge proofs let someone prove "I am over 18" without revealing who they are.
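As a deliberately simplified illustration (hypothetical names, using the third-party `cryptography` package): a trusted issuer checks your age once and signs a statement bound to a random token, containing no identity, and a website verifies that signature without learning who you are. Real schemes go further with blind signatures or zero-knowledge proofs, so that even the issuer cannot link the token back to the person it was issued to.

```python
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. A trusted issuer checks your age once (e.g. against an ID card),
#    then signs a statement that contains no identity, only a random token.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

token = secrets.token_bytes(16)          # random, carries no identity
statement = b"age>=18:" + token
signature = issuer_key.sign(statement)

# 2. The website sees only the statement and the signature: it learns
#    "this person is over 18" and nothing else.
def verify_age(statement: bytes, signature: bytes) -> bool:
    if not statement.startswith(b"age>=18:"):
        return False
    try:
        issuer_pub.verify(signature, statement)
        return True
    except InvalidSignature:
        return False

assert verify_age(statement, signature)
```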
> Police resources would be overwhelmed investigating innocent families sharing vacation photos while real crimes go uninvestigated.
How to say you don't know how the police work without saying you don't know how the police work? Anyway, that's the police's problem.
> Google’s algorithms flagged this legitimate medical consultation as potential abuse, permanently closed his account and refused all appeals.
The problem there is Google permanently closing the account and refusing all appeals, not the flagging itself.
> The letter emphasizes that client-side scanning cannot distinguish between legal and illegal content without fundamentally breaking encryption and creating vulnerabilities that malicious actors can exploit.
Then explain how. How exactly does it fundamentally break encryption and create vulnerabilities? Stop using bad arguments. If you have actual reasons to go against ChatControl, talk about those. You won't win with bullshit, invalid arguments.
> ChatControl catches only amateur criminals who directly attach problematic content to messages.
Yep, that's an argument in favour of ChatControl: it does catch some criminals. How many criminals are professionals? Do you want to make it legal to be an amateur criminal?
Don't get me wrong: I am against ChatControl. Because of one argument I believe to be valid: we fundamentally cannot know what the algorithm doing the scanning is doing, so those who control it could abuse it. Of all the discussions I have seen against ChatControl, I haven't seen another valid argument. But this one is enough.
Stop saying bullshit, start using the valid arguments. And maybe politicians will hear them.
Thanks for your feedback. You’ve raised some interesting points; I’ll take them into account and try to update some of my arguments.
I didn't expect to be read by the author of the article!
Just to be clear: thanks for talking about ChatControl and for bringing visibility to the topic! And I am on your side!
> Don't get me wrong: I am against ChatControl. Because of one argument I believe to be valid: we fundamentally cannot know what the algorithm doing the scanning is doing, so those who control it could abuse it. Of all the discussions I have seen against ChatControl, I haven't seen another valid argument. But this one is enough.
It is not enough to know what the algorithm is doing. It also needs to be possible (for the average user as well) to stop it from doing reprehensible things. If a client-side scanning algorithm is actually searching for, e.g., political content, it is possible to detect that via reverse engineering, but merely knowing it won't solve the problem; it will instead lead to self-censorship.