>The authorities really think every predator will just give up and stop abusing just because of that? What a joke.
Yes, the framing is disingenuous, but so is yours. You're seriously suggesting that any policy that doesn't 100% eliminate a problem is a joke?
If the cost of the proposal is "let's throw democracy under the bus" as it is in this case, it better be damn close to 100% effective to be worth it!
I have a hard time imagining this will be more than 10% effective.
This proposal is a joke
It's going to be 0% effective. It won't take long for criminals to use their own encrypted communication systems, and only law-abiding citizens will be monitored.
And then you'll get into a scenario where the government punishes you for wrong-speak, like when people had their bank accounts frozen for donating to the trucker protests. Or they will turn off your access to social media the way the Biden administration did during the pandemic.
A few decades ago, all communications were unencrypted. Would you say that democracies did not exist then?
This is completely untrue! Important communications have been enciphered since language was created, I'd wager, whether that cipher is code words ("grog" means "attack that person in 10 seconds!") or a book cipher, e.g. the first letter of a bible verse, then the second letter of the next verse, and so on. Humans have been encrypting communication since communication was possible.
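To make the mechanics concrete, here's a toy Python sketch of that book cipher. The key text and message are made up, and this is just an illustration of the scheme, not a serious cryptosystem:

```python
# Toy sketch of the book cipher described above: the hidden message is
# spelled out by letter 1 of verse 1, letter 2 of verse 2, and so on.
# Key text and message are made up for illustration.

def decode(verses: list[str]) -> str:
    """Letter i of verse i spells the message (verses assumed long enough)."""
    return "".join(v[i] for i, v in enumerate(verses))

def encode(message: str, key_text: list[str]) -> list[str]:
    """For each position i, pick some verse whose i-th letter matches."""
    return [next(v for v in key_text if len(v) > i and v[i] == ch)
            for i, ch in enumerate(message)]

key = ["attack at dawn", "strike now", "messenger down"]
ciphertext = encode("set", key)
# ciphertext: ["strike now", "messenger down", "attack at dawn"]
assert decode(ciphertext) == "set"
```

The ciphertext is just a sequence of innocuous-looking verses; without knowing the convention, an interceptor has nothing to brute-force.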
It is only recently that it became possible to dragnet communications en masse, then store and analyze them. The past decades have brought new threats to privacy and democracy through state-scale breaking of encryption.
> Humans have been encrypting communication since communication was possible.
Were most people encrypting their handwritten letters? Were most people encrypting their messages before sending them by SMS or with WhatsApp? Really?
No, because there was an expectation of privacy. That expectation is no longer there.
Privacy from who? Law enforcement has been leveraging that forever.
But ChatControl won't prevent the encryption for anyone who is not the receiver of the reports. And the receiver is the equivalent of "law enforcement", right?
The scalability of spying has exploded. Back before electronic comms, the government had no way to spy on communications at scale and sieve out opponents. Now they do, with encryption the only thing standing in the way.
>Privacy from who? Law enforcement has been leveraging that forever.
Not without legal proceedings. The population would have been absolutely outraged if the government just decided to read all of their mail one random day in the 90s.
There's a reason the whole idea used to be dismissed as a conspiracy theory: the population literally didn't believe something like that could happen.
I think that there is a big difference, for the population, between "somebody is reading and keeping a copy of all your mail" and "Some algorithm looks for illegal material locally on your phone. If you don't have illegal material, it won't do anything".
Nobody would want to carry a microphone recording them 24/7 and storing everything on a server, but everybody is fine with TooBigTech simply promising that they don't store the data.
We have to accept that people are fine with the idea. The problem (both with the connected mic and ChatControl, btw) is that it can be abused. That's the problem. Again: we have to convince people that it is at risk of being abused. Not that they should be outraged. They just are not.
>TooBigTech simply promising that they don't store the data.
Instead they notify you that you gave them perpetual license to reuse your data.
Not most but some.
At least where I'm from, there are pretty strong laws against reading other people's snail mail. To this day, any law enforcement officer who tries to open people's snail mail will be laughed out of the courtroom, and quite possibly out of their job too!
Today nobody uses snail mail. This proposal is the equivalent of proposing to read everyone's private letters back in the day.
Technical details are technical details
A few decades ago, few communications were tracked. When everything is tracked (as it is now), the only way to have privacy is with encryption.
Snowden said otherwise, more than a decade ago.
Which part are you disputing?
The fact that ChatControl is killing democracies.
It's a tool that could be abused, but I wouldn't say that it is enough to kill a democracy all by itself.
To make a silly analogy: a lone stone in go has 4 liberties. Take away all four and the stone dies.
Chat control takes away one liberty from democracy.
Ask any half-decent go player what happens to that stone if we just ignore the attack on it.
If they suspect that you own CSAM, law enforcement will check your devices. Actually, if they have convincing arguments, the way they get access to your devices may extend to your physical person.
You don't have the liberty to avoid that, today. By design.
My point being that if one could prove that the ChatControl detection is only running locally and that it is only reporting what's acceptable to report, then it wouldn't hurt your freedom (except for your freedom to do illegal stuff, but that's the whole point).
The problem is that it is not possible to prove this. Fundamentally. We need to talk about that. Not throw some "it will kill democracy because you should trust me when I say it".
But chat control will have "master keys" to all communication.
That key will leak eventually, it's too juicy a target.
You describe chat control as if it's just an AI CSAM scanner that runs locally on your phone, like what Apple did recently-ish.
Chat control is so much more than that, and so much worse.
> But chat control will have "master keys" to all communication.
That's not my understanding. My understanding is that ChatControl will run client-side scanning and report what is deemed illegal.
This is not a master key to all communications.
Last time I checked, the preferred method was to simplify known CSAM material enough that you can hash the result, then repeat the process on the client end and hope nothing else has the same hash.
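Roughly, the flow could look like the sketch below. Average-hash is a generic stand-in for whatever perceptual hash a real system would use (the actual algorithms are proprietary), and `KNOWN_HASHES` is a made-up placeholder:

```python
# Toy sketch of the "simplify, then hash" matching described above.
# Average-hash stands in for the real (proprietary) perceptual hash;
# KNOWN_HASHES is a made-up placeholder set.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """'Simplify' the image: shrink, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

KNOWN_HASHES = {0x8F3C00000000A5E1}  # placeholder "known material" hashes

def should_report(path: str, threshold: int = 5) -> bool:
    """Flag if the local hash is within a few bits of a known hash."""
    return any(hamming(average_hash(path), k) <= threshold
               for k in KNOWN_HASHES)
```

The "hope nothing else has the same hash" part is that threshold: near-duplicates of known material match, but so can an unlucky innocent image.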
Which makes it even less of a problem than what people say. I see that as an argument in favour of ChatControl: if it really "just" compares hashes locally, then the claim that it breaks encryption is even more wrong.
But how is this supposed to protect children if say an abuser takes pictures or videos of the victim?
I don't see this as a valid argument. You can't say "I can find a situation where ChatControl does not help, therefore ChatControl is always useless".
On the contrary, it is an argument in favour of extending ChatControl to using machine learning for detecting such cases.
The problem, again, is that we don't want to have an opaque system that can be extended to surveilling new things, because it's very difficult to audit and make sure it is not abused.
Encryption is not the only privacy assurance that exists in democracies. For example, the government is (or at least was[1]) not allowed to open your mail. You could send CSAM Polaroids back and forth and nothing would happen.
Chat Control amounts to routine, warrantless interception of private communication. Something you see in states like the USSR.
[1] https://www.westernstandard.news/news/liberals-push-bill-to-...
Well, what is "the problem"? Is it children being abused, or is it the distribution of CSAM?
And if you say both - how would you rate the relative severity of the two problems? Specifically, if you had to pick between preventing the rape of a child, and preventing N acts of CSAM distribution, how big would N have to be to make it worth choosing the latter?
I don't think they care what N is, they are just scapegoating a vile group they know will have no defenders, and they can use it to silence the critics by associating them with that group.
Bingo.
Today it's the pedophiles and the 15-17-philes (that made-up group of adolescents, who are also tried as adults when convenient).
Tomorrow, it's the adult sex workers.
Then it's the fringe groups whose topics are on the outs with the majority.
Then they come for you, and nobody is able to speak up because they banned protests.
... To paraphrase Martin Niemöller.
> Well, what is "the problem"? Is it children being abused, or is it the distribution of CSAM?
It seems obvious that it is entirely the former and not at all the latter. In other words, N is positive infinity. Am I missing something?
I only care about kids being hurt. And I think this view is close to consensus.
Ask anyone you know who has been sexually assaulted or raped what they think of the idea of pictures or recordings of that being both kept by the perpetrator and widely disseminated. I think you'll find very few who'd say that's totally fine. But given that there can be no CSAM without child abuse, the direct physical abuse is clearly the primary problem.
What do you think it would be for you?
What's worse for you? Being raped as a child. Or, having people sexually gratify themselves looking at images of you being abused; using those images to groom other children, or to trade and encourage the rape of other children?
You might as well ask someone which eye they prefer to have gouged out with a blunt screw.
Let's do both: try to stop child sexual abuse and try to stop images of abused children being used by abusers.
> You're seriously suggesting that any policy that doesn't 100% eliminate a problem is a joke?
I think a more charitable reading is that any policy that doesn't 100% _target_ a problem is a joke. This policy doesn't have a plausible way that it will protect children from being victimized, so I think it's reasonable to remove the "think of the children" cloak it's wearing and assess it on the merits of whether encryption is beneficial for the social discourse of a society.
> This policy doesn't have a plausible way that it will protect children from being victimized
Of course it does. "It will detect and report messages from predators to children, preventing the child from getting to the point where they send revealing pictures or meet the predator in person." Done.
Well, maybe the word "plausible" is doing too much work in my statement.
Most abuse happens from people known to the child, and of that portion, most are family members. It seems like in-person comms offer sufficient opportunity to route around this limitation.
Moreover, even the communications that do happen online can still easily happen through encrypted media; presumably the perpetrators will simply move to other ways of communicating. And kids, at least kids over 10 or so, don't seem like a demographic particularly likely to follow this law anyhow.
There's another nuance worth considering: by and large, parents _want_ their kids to have access to encrypted communications. I'll happily assist my kiddo in maintaining good opsec - that's much more important to me than some silly and uninformed policy decision being made far away by people I've never met.
https://web.archive.org/web/20210522003136/https://blog.nucy...
So, the kids are still going to be where the encrypted comms are. I still think it's reasonable to say that the protections offered to kids by criminalizing encryption are implausible.
> Most abuse happens from people known to the child
Sure, but it means that at least some abuse happens from people unknown to the child. If ChatControl doesn't cause any problems but helps prevent those abuses, then it's worth it. The question is: what are the problems caused by ChatControl?
Saying "only a minority of children get abused this way, so it's not worth it" won't go far, IMO. It's not a valid argument against ChatControl in itself.
> presumably the perpetrators will simply move to other ways of communicating.
The perpetrators have to contact kids over apps that the kids use. Like Snapchat or TikTok. It's not like the kids will routinely install a weird app to talk to weird people...
> parents _want_ their kids to have access to encrypted communications.
But ChatControl doesn't remove the encryption! It scans everything locally, before it gets encrypted and sent.
> by criminalizing encryption
It's not criminalizing encryption: it's forcing a local scan on your device. Just like there are already scans happening on clouds for non-E2EE data.
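To illustrate what "scan locally, then encrypt" means, here is a minimal sketch. `scan_for_known_material()` and the report hook are hypothetical stand-ins, and Fernet from the real `cryptography` package stands in for an actual E2EE protocol:

```python
# Minimal sketch of "scan locally, then encrypt": the plaintext is
# inspected on-device *before* encryption, so the wire traffic stays
# encrypted end to end. scan_for_known_material() and the report hook
# are hypothetical stand-ins.
from cryptography.fernet import Fernet

def scan_for_known_material(plaintext: bytes) -> bool:
    # Hypothetical local check, e.g. the hash comparison sketched earlier.
    return False

def send_message(plaintext: bytes, key: bytes, send, report) -> None:
    if scan_for_known_material(plaintext):
        report(plaintext)                 # only this path exposes plaintext
    send(Fernet(key).encrypt(plaintext))  # only ciphertext goes on the wire

key = Fernet.generate_key()
send_message(b"hello", key, send=print, report=print)
```

Note that the encryption itself is untouched; the entire question is who controls what the scan looks for and where the reports go.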
Don't get me wrong: I am against ChatControl. For me the problem is that I see a potential for abuse with the "list" (whether it's a list or a sum of weights) of illegal material. This list cannot be made public (because it's highly illegal material), so it's hard to audit. So whoever has control over it can abuse it, e.g. to find political opponents. That's my problem with ChatControl.
Wow, the only prosaic take in this entire dumpster-fire of a thread.
Can you point me to the doc that says all this?
Admittedly I haven't yet bothered looking for the official ruling, since those are massive and media sites were just politicising it.
I don't have a doc to point to, it's just my understanding of ChatControl :-).