A few decades ago, few communications were tracked. When everything is tracked (as it is now), the only way to have privacy is with encryption.
Snowden said otherwise, more than a decade ago.
Which part are you disputing?
The fact that ChatControl is killing democracies.
It's a tool that could be abused, but I wouldn't say that it is enough to kill a democracy all by itself.
To make a silly analogy: A stone in go has 4 liberties. Take away all four and the stone dies.
Chat control takes away one liberty from democracy.
Ask any half-decent go player what happens to that stone if we just ignore the attack on it.
If law enforcement suspects that you possess CSAM, they will check your devices. And if they have convincing enough grounds, the way they gain access to your devices may involve your physical person.
You don't have the liberty to avoid that, today. By design.
My point being that if one could prove that the ChatControl detection is only running locally and that it is only reporting what's acceptable to report, then it wouldn't hurt your freedom (except for your freedom to do illegal stuff, but that's the whole point).
The problem is that it is not possible to prove this. Fundamentally. We need to talk about that. Not throw some "it will kill democracy because you should trust me when I say it".
But chat control will have "master keys" to all communication.
That key will leak eventually, it's too juicy a target.
You describe chat control as if it's just an AI CSAM scanner that runs locally on your phone, like what Apple did recently-ish.
Chat control is so much more than that, and so much worse
> But chat control will have "master keys" to all communication.
That's not my understanding. My understanding is that ChatControl will run client-side scanning and report what is deemed illegal.
This is not a master key to all communications.
Last time I checked, the preferred method was to simplify known CSAM material enough that you can hash the result, then repeat that on the client end and hope nothing else produces the same hash.
Which makes it even less of a problem than what people say. I see that as an argument in favour of ChatControl. If it really "just" compares hashes locally, then the claims that it breaks encryption are even more wrong.
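For what it's worth, the "simplify then hash" idea above is a perceptual hash. Here's a toy sketch of one variant (average hashing) in Python; the actual systems deployed (e.g. PhotoDNA, Apple's NeuralHash) are proprietary and far more sophisticated, so treat the function names and the 8x8 input as illustrative assumptions only:

```python
def average_hash(pixels):
    # pixels: 64 grayscale values, i.e. an image already
    # "simplified" by downscaling it to 8x8.
    # Each pixel becomes one bit: 1 if at or above the mean, else 0.
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(a, b):
    # Number of differing bits; small distance = "same" image.
    return bin(a ^ b).count("1")

# Two slightly different encodings of the same picture hash
# identically; an unrelated pattern lands far away.
img = [10] * 32 + [200] * 32          # half dark, half bright
img_noisy = [12] * 32 + [198] * 32    # same picture, re-encoded
other = [10, 200] * 32                # unrelated pattern

print(hamming_distance(average_hash(img), average_hash(img_noisy)))  # 0
print(hamming_distance(average_hash(img), average_hash(other)))      # 32
```

The point of hashing the *simplified* image rather than the raw file is exactly this robustness: re-compression or minor edits don't change the hash, whereas a cryptographic hash of the file bytes would match nothing after a single-pixel change.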
But how is this supposed to protect children if say an abuser takes pictures or videos of the victim?
I don't see this as a valid argument. You can't say "I can find a situation where ChatControl does not help, therefore ChatControl is always useless".
On the contrary, it is an argument in favour of extending ChatControl to using machine learning for detecting such cases.
The problem, again, is that we don't want to have an opaque system that can be extended to surveilling new things, because it's very difficult to audit and make sure it is not abused.