> I’m sure it’s not even deliberately dismantling privacy.
But it is not dismantling privacy at all. ChatControl would run client-side and only report what is deemed illegal. Almost all communications are legal, and almost none of those legal communications would be reported to anyone. They would stay private.
The problem I see is that the "client-side scanner" has to be opaque to some extent: it is fundamentally impossible to publish an open source list of illegal material without sharing the illegal material itself. That means whoever controls the list can abuse it, e.g. by making the scanner report political opponents.
This is a real risk, and the reason I am against ChatControl.
But it isn't dismantling privacy per se.
EDIT: I find it amazing how much I can be downvoted for saying that I am against ChatControl, but that argument X or Y against it is invalid. Do we want an echo chamber to complain about the principle, or do we want to talk about what is actually wrong with ChatControl?
It's nice to say "those politicians are morons who don't understand how it works", but one should be careful to understand it oneself.
It's a mechanism where governments give lists of un-auditable hashes to chat operators and force those operators to send back the content of any message that matches.
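To make that mechanism concrete, here is a deliberately simplified sketch. Real proposals use perceptual hashes rather than a cryptographic hash like SHA-256, and the names `BLOCKLIST` and `scan_outgoing` are made up for illustration, but the core property is the same: the client only ever sees opaque digests, so nobody outside the list's maintainer can audit what would trigger a report.

```python
import hashlib

# Hypothetical opaque list as distributed to the client: bare digests,
# with no way to tell what content each one corresponds to.
BLOCKLIST = {
    # This could be anything: CSAM, a leaked document, a protest flyer...
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def scan_outgoing(message: bytes) -> bool:
    """Return True if the message matches the opaque list,
    i.e. would be reported to the authorities."""
    digest = hashlib.sha256(message).hexdigest()
    return digest in BLOCKLIST

# The scanner, not the user, decides what gets reported:
print(scan_outgoing(b"foo"))           # matches the digest above -> True
print(scan_outgoing(b"hello, world"))  # no match -> False
```

Note that auditing the *code* of such a scanner tells you nothing about the *list*: the digest in `BLOCKLIST` above happens to be SHA-256 of the string "foo", but there is no way to discover that by inspection.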
You can't for a second imagine how that could possibly go wrong?
The hashes are "only for what's deemed illegal" because just trust me bro. There won't be any false positives because just trust me bro. Even if you believe the governments are fully trustworthy and don't mind false positives causing your own personal images or messages to be passed around law enforcement, systems like this have eventually been compromised by other parties (CALEA, for example, was exploited by China and other adversaries for years). Even if you fully trust the government, we can't actually be sure the un-auditable list of hashes comes only from them, or whether adversaries have compromised the system to add their own hashes. And we don't know whether the mechanism that sends our private messages (triggered by a real match, a false positive, or an adversarially added hash) delivers them only to authorised parties, or whether somebody else has managed to add themselves in (as has happened with "lawful intercept" systems in the past).
So even when claiming it's only for combating the most heinous crimes, the system is too dangerous.
> You can't for a second imagine how that could possibly go wrong?
I can, and that is why I am against ChatControl. But many comments here say things like "it breaks encryption", and that's not actually what it does.
The debate should focus on what ChatControl cannot solve, not on details it could solve. If you spend your day complaining about something that can be solved, politicians will (rightfully) say "I hear you, don't worry we will solve that".
> There won't be any false positives because just trust me bro.
"There will be false positives, but they won't have any impact on you." You find it invasive? 99% of people don't care; they already give all their data to private companies like TooBigTech.
> whether adversaries have compromised the system to add their own hashes.
So what? Legal material gets reported to the authorities, they see it's legal, and they realise that the list has been tampered with.
> And we don't know that the mechanism that sends our private messages
"Don't worry, we will make the code open source, you'll be able to audit it!"
> The hashes are "only for what's deemed illegal" because just trust me bro.
YES. That's my problem with it. It can be abused because fundamentally we cannot audit those hashes. We don't want to create a system that gives that power to whoever controls it.