This has nothing to do with CSAM, and arguing that point is deliberate: it distracts people, and the politicians get to say "xp84 supports child pornography!"
It has everything to do with censorship and complete control over people's ability to communicate. Politicians hate free speech and want to control their citizens completely, including their thoughts. This is true evil.
But politicians are - in general - neither evil, nor do they have any real incentive to "control citizens' thoughts". It doesn't make sense. They can be gullible. Non-technical. Owned by lobbyists. Under pressure to deliver on the apparent problem of the day (CSAM, terror, whatever). But I don't think there is a general crusade against privacy. That's why I think it's so infuriating: I'm sure it's not even deliberately dismantling privacy. They're doing it blindly.
This is pushed by parties that have a good track record of preserving integrity. That’s why it’s so surprising.
If they are "just doing their job" why are they asking for an exemption that would apply only to them? No, they firmly believe that safety should be gained at the cost of privacy.
I could imagine that war orders may be interpreted as "illegal" and therefore reported. Which may not be desirable?
So it's ok if the database containing my nudes leaks, but not if it contains state secrets? I feel really protected!
State secrets are to governments as private keys are to software engineers, except it's much slower to change meatspace things like (to make up a fictional example) the gaps your military found in their CIWS naval defence system, which, if leaked, mean your enemies now know how to exploit them to wipe out your navy.
Not saying that I agree, just saying that I can imagine it's not done in bad faith.
It's totally done in bad faith, corruption is a real thing in EU politics.
It may cause corruption, because despite lawmakers' attempts to carve out security*- and governance-critical communications, it's almost impossible for this tech to fail to open doors to blackmailers.
But existing corruption is neither necessary nor sufficient for what we see here. Wrong axis.
The EU is (mostly, and relatively speaking) un-corrupt as governments go; more corrupt places (and also authoritarian places) will write fantastic laws that they just straight-up ignore.
* Which won't work anyway: consider that the US military had to issue statements and bans because Fitbit was revealing too much about military bases.
> But politicians are - in general - neither evil, nor do they have any real incentive to "control citizens' thoughts".
Coming from an authoritarian state, I find this line of reasoning completely alien. By definition, those in power want more power. The more control over the people you have, the more power you get. Ergo, you always want more control.
It's easy to overlook this if you've spent your entire life in a democratic country, as democracies have power dynamics that obscure this goal, making it less of a priority for politicians. For instance, attempting to seize too much power can backfire, giving political opponents leverage against you. However, the closer a system drifts toward autocracy and the fewer constraints on power there are, the more achievable this goal becomes and the more likely politicians are to pursue it.
Oh, and also politics selects for psychopaths who are known for their desire for control.
> By definition, those in power want more power.
This is not what 'by definition' means.
> I’m sure it’s not even deliberately dismantling privacy. They’re doing it blindly.
That is a variant of Hanlon's razor that is often brought up on questions like this. How do certain actors turn so reliably to a course of action that is so damaging, yet to any expert or even rational mind seems stupid? That can't be what they want?!
I do not think that this reasoning holds.
Hannah Arendt, when writing about totalitarianism, came to the conclusion that there is a kind of complicity between evil and thoughtlessness. (I am still trying to find her exact words on this.)
> I’m sure it’s not even deliberately dismantling privacy.
But it is not even dismantling privacy. ChatControl would run client-side and only report what's deemed illegal. Almost all communications are legal, and almost all of the legal communications wouldn't be reported to anyone at all. They would stay private.
The problem I see is that the "client-side scanner" has to be opaque to some extent: it's fundamentally impossible to have an open source list of illegal material without sharing the illegal material itself. Meaning that whoever controls that list can abuse it. E.g. by making the scanner report political opponents.
This is a real risk, and the reason I am against ChatControl.
But it isn't dismantling privacy per se.
EDIT: I find it amazing how much I can be downvoted for saying that I am against ChatControl, but that argument X or Y against it is invalid. Do we want an echo chamber to complain about the principle, or do we want to talk about what is actually wrong with ChatControl?
It's nice to say "those politicians are morons who don't understand how it works", but one should be careful to understand it themselves.
It's a mechanism where governments hand chat operators lists of un-auditable hashes and force them to send over the content of any message that matches.
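To make that concrete, here is a minimal sketch of the mechanism (in Python, with invented names; real proposals lean on perceptual rather than exact cryptographic hashes, which only makes the list harder to audit):

```python
import hashlib

# Hypothetical sketch of the mechanism described above; every name here
# is invented for illustration. The operator receives bare hashes and
# cannot tell what content they correspond to.
GOVERNMENT_HASH_LIST = {
    # SHA-256 of b"test", standing in for thousands of opaque entries.
    # Nothing in the list itself reveals whether an entry matches CSAM,
    # a leaked document, or a meme popular with a political movement.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def must_report(message: bytes) -> bool:
    """Return True if the message has to be forwarded to the authorities."""
    return hashlib.sha256(message).hexdigest() in GOVERNMENT_HASH_LIST

if must_report(b"test"):
    print("plaintext forwarded to the authorities")
```

The matching logic is auditable; the list is not. An injected hash for perfectly legal content triggers the exact same report, and nothing on the client or operator side can detect that.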
You can't for a second imagine how that could possibly go wrong?
The hashes are "only for what's deemed illegal" because just trust me bro. There won't be any false-positives because just trust me bro. Even if you do believe the governments are fully trustworthy and don't care about false positives of your own personal images or messages being passed around law enforcement, most systems like this have eventually been compromised by other parties (like CALEA, which was used by China and other adversaries for years). Even if you fully trust the government, we can't actually be sure the un-auditable list of hashes comes only from them, or whether adversaries have compromised the system to add their own hashes. And we don't know that the mechanism that sends our private messages (from a real match, a false-positive, or an adversarially added hash) sends them only to authorised parties, or whether somebody else has managed to add themselves in (as has happened with "lawful intercept" systems in the past).
So even when claiming it's only for combating the most heinous crimes, the system is too dangerous.
> You can't for a second imagine how that could possibly go wrong?
I can, and that is why I am against ChatControl. But many many comments here say stuff like "it breaks encryption", and that's not actually what it does.
The debate should focus on what ChatControl cannot solve, not on details it could solve. If you spend your day complaining about something that can be solved, politicians will (rightfully) say "I hear you, don't worry we will solve that".
> There won't be any false-positives because just trust me bro.
"There will be false-positives, but they won't have any impact on you". You find it invasive? 99% of people don't care, they already give all their data to private companies like TooBigTech.
> whether adversaries have compromised the system to add their own hashes.
So what? Legal material gets reported to the authorities, they see it's legal, and they realise that the list has been tampered with.
> And we don't know that the mechanism that sends our private messages
"Don't worry, we will make the code open source, you'll be able to audit it!"
> The hashes are "only for what's deemed illegal" because just trust me bro.
YES. That's my problem with it. It can be abused because fundamentally we cannot audit those hashes. We don't want to create a system that gives that power to whoever controls it.
[flagged]
Actually I deny that. What proof is there that peddlers of child pornography are using chat to distribute CSAM? What proof is there that they won’t simply move to another undocumented form of distribution once this is implemented, leaving the criminals unmonitored but every law-abiding citizen monitored?
> What proof is there that peddlers of child pornography are using chat to distribute CSAM?
Are you kidding me? How do you think predators get in contact with children? Over social media that children use, obviously.
And of course many criminals use chat. Most have no clue about encryption, like the vast majority of humans.
> How do you think predators get in contact with children?
I thought it was pretty common knowledge that the vast, vast majority of the perpetrators of these offenses are either family members, or known and trusted people to the family, such as the friends of an older sibling, friends of parents, stepparents, teachers, priests, etc. The bogeyman of the evil stranger paedo lurking on social media cold-calling random kids is an edge case.
Have you really never seen those groups of teenagers who lure a predator into meeting a kid somewhere (they do that over mainstream social media) and then beat and humiliate said predator?
I thought it was in fashion. Happens where I live.
“Distribute” not “contact”. Unless you want to scan all chat messages for potential signs of adults engaging in grooming of children too? Talk about a slippery slope, you’re basically making my point.
> Unless you want to scan all chat messages for potential signs of adults engaging in grooming of children too?
Well, the point is to scan all messages, period.
And then to detect those that come from predators, not adults. How often do parents convince their children to send... revealing pictures? Or to meet them somewhere? How often do parents introduce themselves to their children in messages?
You can't seriously believe that a conversation between parents and children always looks like a conversation between a predator and children, can you?
Sure, but who's reading the conversation to determine whether it "looks suspicious"? A regex? A neural network? Who decides the algorithm, and can you really believe they won't ever change it to serve other, more nefarious purposes like suppressing dissent?
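For illustration, here's a toy sketch of what such a scanner could look like (the patterns are entirely invented; no real proposal specifies these):

```python
import re

# Invented "grooming detector" patterns, purely for illustration.
SUSPICIOUS_PATTERNS = [
    re.compile(r"\bsend (me )?(a )?(photo|picture|pic)\b", re.IGNORECASE),
    re.compile(r"\bdon'?t tell (your )?(mum|mom|dad|parents)\b", re.IGNORECASE),
    re.compile(r"\bmeet me\b", re.IGNORECASE),
]

def looks_suspicious(message: str) -> bool:
    return any(p.search(message) for p in SUSPICIOUS_PATTERNS)

# A parent trips the same rules a predator would:
print(looks_suspicious("Send me a picture of you and grandma!"))  # True
print(looks_suspicious("Meet me at the school gate at 4"))        # True
```

Swapping the regexes for a neural network changes the error rate, not the structure: whoever controls the patterns or the model controls what gets flagged.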
> Who decides the algorithm, and can you really believe they won't ever change it to serve other, more nefarious purposes like suppressing dissent?
YES. That's the problem. Whoever controls it has that power. We don't want that. That's the argument against ChatControl: "imagine that those who get in power are against you, and imagine what they can do if they abuse this tool".
But saying that "a law enforcement officer may see a false-positive between a parent and their child and I find this insufferable" won't convince many people, IMHO.
Dude, you're basically arguing that we should bring the equivalent of the App Store review process to people's chat history. You know that automated and human reviews are an absolute nightmare for people to navigate; errors are made constantly and people complain about it loudly. And the plan here is to escalate the stakes from whether or not your app gets published to whether or not you stay out of jail.
Seems like a terrible idea.
Dude, I am not arguing that at all, you should read before you answer.
I am saying this:
> You can't deny that if you can read all communications, then it's easier to detect CSAM than if you can't even see it.
I am against ChatControl, but people who say "it shouldn't exist because it is useless because it cannot ever help detect anything illegal" are wrong.
It doesn't help the cause to flood people with invalid arguments.
Criminals are reactive. If you add a CCTV where drug dealing happens, sellers and buyers will go to another place. In the end, nothing changes.
Not all of them are. Actually CCTVs catch some of them. Tapping their phones as well.
Sounds like a non-sequitur to me. Yes, this is how it works - in every aspect of life we try to regulate as a society (through laws and enforcement). Criminal activities are prohibited, restricted and monitored. Criminals move on, but the law is also adaptive and soon catches up. Following your argument leads to anarchy, and currently Western society mostly shares the belief that we're better off with democracy.
No, the logical conclusion is that, on the contrary, liberal democracies end up as authoritarian legalist regimes, as they progressively need more and more laws and enforcement to catch the criminals who evade them.
You can see this clearly with the constant inflation of AML laws in the EU, which become more and more restrictive and invasive each year, without any clear effect.
The US does not seem keen on adding regulations; would you say they are less likely to end up authoritarian?
There are multiple ways to end up an authoritarian state. You can add many regulations, then have someone come to power and use those regulations to repress society (a good example would be Germany in 1937), or have a weakening of the institutions protecting citizens' rights, which is something the US is experiencing.
It started before Trump; I think a turning point was the Patriot Act, but the Democrats didn't overturn it and picked their enemies, too[0].
[0] https://en.m.wikipedia.org/wiki/Operation_Choke_Point
> if you can read all communications
But you can't. You can legally require messengers to implement client-side scanning before the encryption happens, or add some backdoor keys and hope they don't leak.
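Concretely, the message path would look something like this (a hypothetical sketch; every function is an invented stand-in, not any real API):

```python
# Hypothetical client-side-scanning message path. Note that the
# end-to-end encryption itself is untouched: the scan reads the
# plaintext *before* it is encrypted.

def client_side_scan(plaintext: bytes) -> bool:
    """Stand-in for the mandated on-device scanner."""
    return b"match" in plaintext  # placeholder rule, for illustration

def report_to_authority(plaintext: bytes) -> None:
    print("plaintext leaves the device:", plaintext)

def e2e_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Stand-in for real end-to-end encryption, which stays fully
    # intact: the scan already happened before this point.
    return bytes(b ^ k for b, k in zip(plaintext, key * len(plaintext)))

def send_message(plaintext: bytes, recipient_key: bytes) -> None:
    if client_side_scan(plaintext):     # runs on plaintext, pre-crypto
        report_to_authority(plaintext)
    print("delivered:", e2e_encrypt(plaintext, recipient_key).hex())

send_message(b"nothing to match here... or is there", b"k")
```

The crypto is never broken; the plaintext is simply read before it.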
Since cryptography is public knowledge, anybody can always create messengers that simply don't implement that requirement. If you sexually abuse children, the hurdle of illegally running a non-backdoored messenger seems pretty low to me.
That's like fighting illegal street racing by implementing electronically GPS-enforced speed limits for everybody. You won't catch the people it is meant to catch. And surveilling your entire population minus the criminals has dire consequences for a free society.
> But you can't.
Be careful what you wish for. We could enforce client-side scanning on the OS. Everything that appears on the screen is scanned and reported.
> If you sexually abuse children, the hurdle of illegally running a non-backdoored messenger seems pretty low to me.
How do you contact children on mainstream messengers if you can't use mainstream messengers?
Not to mention that most people consuming CSAM are not technically savvy. It is known that such material goes through mainstream messengers.
> And surveilling your entire population minus the criminals has dire consequences for a free society.
Again: I am against ChatControl. We fundamentally cannot know what is running in this client side scanner, and that makes it a risk.
But saying that it won't ever catch a single criminal is not a valid argument. You won't convince politicians to vote against ChatControl with such claims.
> Not to mention that most people consuming CSAM are not technically savvy. It is known that such material goes through mainstream messengers.
The reason is that it works. They're not stupid - they can use Signal.
The reality is that the privacy options not only exist, they're really good - often better and easier to use than the mainstream stuff.
They will just pivot to other tools.
> How do you contact children on mainstream messengers if you can't use mainstream messengers?
The scanning just doesn't include contacting children - it includes CSAM. Talking to kids isn't CSAM. You're talking about something else altogether, and something which is purely hypothetical.
> The scanning just doesn't include contacting children - it includes CSAM.
My understanding is that they are not only talking about having a list of hashes (of illegal images), but also having some kind of machine learning. And they are not only talking about scanning images, but text, too.
I don't know what you expect them to report when scanning conversations with machine learning?
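For what it's worth, the image-hash part alone is fuzzier than "a list of hashes" suggests: exact cryptographic hashes would be trivial to evade (change one pixel), so such systems rely on perceptual hashes that match *similar* inputs by design. A toy average-hash sketch, which is not any real deployed algorithm (PhotoDNA, NeuralHash, etc.):

```python
# Toy "average hash" over an 8x8 grayscale thumbnail, to show why
# perceptual matching is fuzzy by design: similar inputs produce
# identical or near-identical hashes, which is also where false
# positives come from.

def average_hash(pixels: list[list[int]]) -> int:
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p > mean)  # 1 bit per pixel vs. the mean
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
# A slightly brightened copy (think: recompressed or re-encoded):
near_copy = [[min(p + 10, 255) for p in row] for row in img]

dist = hamming(average_hash(img), average_hash(near_copy))
print(dist, "bits differ")  # small distance -> treated as a match
```

Fuzzy matching is what makes re-encoded copies detectable, and it is also where false positives live; layer an ML text classifier on top and what gets reported becomes entirely a function of thresholds and training data.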
> Be careful what you wish for. We could enforce client-side scanning on the OS. Everything that appears on the screen is scanned and reported.
Nope. Criminals can still just build their own devices with their own operating systems. We have existing OSes without spying; people keep them as ISOs on their hard drives. You can't stop criminals from installing an old LineageOS build.
Legally, you can't stop anybody from using encrypted channels if they are motivated, unless you go out and erase the knowledge.
Implementing filters that go over all communications is undemocratic. Any future authoritarian government can use the same filter to immediately target their opposition, not just in theory but in practice. We have designed our democracies with division of powers for the simple reason that we have learned through history that giving any single entity that kind of power leads to tyranny. That means whenever we give the government new superpowers, we are changing a carefully established balance. What does the citizen get for that trade? Nothing. If your power only works as long as it is in the hands of the good ones, and becomes dystopian once it gets into the hands of the bad guys, maybe that power shouldn't exist.
Since we obviously want to prevent children from being sexually abused, the best way to start, according to most child-protection organizations, is at the root. That means educating kids early on in child-suitable ways, and investing in prevention and psychological help for potential pedophiles. If children are able to signal to adults or other persons of trust that such a thing happened, you don't need mass surveillance.
But my guess is that CSAM is just the symbolic reason, in reality this is meant to do more. It would be perfect to target small movements as they are emerging for example.
> Nope. Criminals can still just build their own devices
Haha, sure. You overestimate many of them. Not everyone breaking the law is a professional criminal. Especially when it comes to CSAM.
> Legally, you can't stop anybody from using encrypted channels if they are motivated, unless you go out and erase the knowledge.
ChatControl doesn't claim to do this: it aims to cover mainstream social media, whether encrypted or not.
> It would be perfect to target small movements as they are emerging for example.
Yes, this is my problem with ChatControl: it's a powerful tool of surveillance that would be very dangerous if abused. We don't want to create that.
But "it's useless because criminals will compile their own kernel anyway" is an invalid argument. It doesn't help the cause.