From the article, the current flavor of "threat" this is being positioned to fight is CSAM.
Does anyone believe that predators commit those heinous offenses because of the availability of encrypted channels to distribute those products of their crimes? I sure don't. The materials exist because of predators' access to children, which these surveillance measures won't solve.
Best case scenario (and this is wildly optimistic) the offenders won't be able to find any 'safe' channels to distribute their materials to each other. The authorities really think every predator will just give up and stop abusing just because of that? What a joke.
More likely of course, those criminals will just use decentralized tools that can't be suppressed or monitored, even as simple as plain old GPG and email. Therefore nothing of value will be gained from removing all privacy from all communication.
This has nothing to do with CSAM, and arguing that point is deliberate: it distracts people, and it lets politicians say “xp84 supports child pornography!”
It has everything to do with censorship and complete control over people’s ability to communicate. Politicians hate free speech and they want to control their citizens completely including their thoughts. This is true evil.
But politicians are - in general - neither evil, nor do they have any real incentive to “control citizens’ thoughts”. It doesn’t make sense. They can be gullible. Non-technical. Owned by lobbyists. Under pressure to deliver on the apparent problem of the day (CSAM, terror, whatever). But I don’t think there is a general crusade against privacy. That’s why I think it’s so infuriating: I’m sure it’s not even deliberately dismantling privacy. They’re doing it blindly.
This is pushed by parties that have a good track record of preserving integrity. That’s why it’s so surprising.
If they are "just doing their job" why are they asking for an exemption that would apply only to them? No, they firmly believe that safety should be gained at the cost of privacy.
I could imagine that war orders may be interpreted as "illegal" and therefore reported. Which may not be desirable?
So it's ok if the database containing my nudes leaks, but not if it contains state secrets? I feel really protected!
State secrets are to governments as private keys are to software engineers, except it's much slower to change meatspace things like (to make up a fictional example) the gaps your military found in their CIWS naval defence system, which, if leaked, means your enemies now know how to exploit them in order to wipe out your navy.
Not saying that I agree, just saying that I can imagine it's not done in bad faith.
It's totally done in bad faith, corruption is a real thing in EU politics.
It may cause corruption, because despite lawmakers' attempts to carve out security*- and governance-critical communications, it's almost impossible for this tech to fail to open doors to blackmailers.
But existing corruption is neither necessary nor sufficient for what we see here. Wrong axis.
The EU is (mostly, and relatively speaking) uncorrupt as governments go; more corrupt places (and also authoritarian places) will write fantastic laws that they just straight-up ignore.
* Which won't work anyway: consider that the US military had to issue statements and bans because Fitbit was revealing too much about military bases.
> But politicians are - in general - neither evil, nor do they have any real incentive to “control citizens’ thoughts”.
As someone coming from an authoritarian state, this is such an alien line of reasoning to me. By definition, those in power want more power. The more control over the people you have, the more power you get. Ergo, you always want more control.
It's easy to overlook this if you've spent your entire life in a democratic country, as democracies have power dynamics that obscure this goal, making it less of a priority for politicians. For instance, attempting to seize too much power can backfire, giving political opponents leverage against you. However, the closer a system drifts toward autocracy and the fewer constraints on power there are, the more achievable this goal becomes and the more likely politicians are to pursue it.
Oh, and also politics selects for psychopaths who are known for their desire for control.
> By definition, those in power want more power.
This is not what 'by definition' means.
> I’m sure it’s not even deliberately dismantling privacy. They’re doing it blindly.
That is a variant of Hanlon's razor, often brought up on questions like this. How do certain actors reliably turn to a course of action that is so damaging - one that, to any expert or even rational mind, seems stupid? That can't be what they want?!
I do not think that this reasoning holds.
Hannah Arendt, when writing about totalitarianism, came to the conclusion that there is a kind of complicity between evil and thoughtlessness. (I am still trying to find her exact words on this.)
> I’m sure it’s not even deliberately dismantling privacy.
But it is not even dismantling privacy. ChatControl would run client-side and only report what's deemed illegal. Almost all communications are legal, and almost all of the legal communications wouldn't be reported to anyone at all. They would stay private.
The problem I see is that the "client-side scanner" has to be opaque to some extent: it's fundamentally impossible to have an open source list of illegal material without sharing the illegal material itself. Meaning that whoever controls that list can abuse it. E.g. by making the scanner report political opponents.
This is a real risk, and the reason I am against ChatControl.
But it isn't dismantling privacy per se.
EDIT: I find it amazing how much I can be downvoted for saying that I am against ChatControl, but that argument X or Y against it is invalid. Do we want an echo chamber to complain about the principle, or do we want to talk about what is actually wrong with ChatControl?
It's nice to say "those politicians are morons who don't understand how it works", but one should be careful to understand it themselves.
It's a mechanism where the Governments give lists of un-auditable hashes to chat operators and force them to send the content of messages to them when they match.
You can't for a second imagine how that could possibly go wrong?
The hashes are "only for what's deemed illegal" because just trust me bro. There won't be any false-positives because just trust me bro. Even if you do believe the Governments are fully trustworthy and don't care about false positives of your own personal images or messages being passed around law enforcement, systems like this have usually eventually been compromised by other parties too (like CALEA, which was used by China and other adversaries for years). Even if you fully trust the Government, we can't actually be sure the un-auditable list of hashes is only from them, or whether adversaries have compromised the system to add their own hashes. And we don't know that the private messages the mechanism sends (from a real match, a false positive, or an adversarially added hash) are only going to authorised parties, or whether somebody else has managed to add themselves in (as has happened with "lawful intercept" systems in the past).
So even when claiming it's only for combating the most heinous crimes, the system is too dangerous.
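To make that concrete, here is a minimal toy sketch of the flow being described (my own illustration with made-up names; I use SHA-256 where real proposals would use perceptual hashes or ML classifiers):

```python
import hashlib

# Stand-ins so the sketch runs; a real client would call its E2EE stack here.
def encrypt_e2ee(plaintext: bytes) -> bytes:
    return b"<ciphertext>"

def transmit(ciphertext: bytes) -> None:
    print("sent:", ciphertext)

def report_to_authority(plaintext: bytes) -> None:
    print("hash match -> plaintext forwarded to reporting endpoint")

# Opaque entries pushed to the client. Nothing on the device distinguishes
# a genuine CSAM hash from one slipped in to flag, say, a protest flyer.
OPAQUE_HASH_LIST = {hashlib.sha256(b"known illegal file").hexdigest()}

def send_message(plaintext: bytes) -> None:
    if hashlib.sha256(plaintext).hexdigest() in OPAQUE_HASH_LIST:
        # The report fires BEFORE encryption: the E2EE math stays intact
        # while its practical guarantee is bypassed.
        report_to_authority(plaintext)
    transmit(encrypt_e2ee(plaintext))

send_message(b"known illegal file")  # reported, then sent encrypted
send_message(b"hello")               # sent encrypted, no report
```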
> You can't for a second imagine how that could possibly go wrong?
I can, and that is why I am against ChatControl. But many many comments here say stuff like "it breaks encryption", and that's not actually what it does.
The debate should focus on what ChatControl cannot solve, not on details it could solve. If you spend your day complaining about something that can be solved, politicians will (rightfully) say "I hear you, don't worry we will solve that".
> There won't be any false-positives because just trust me bro.
"There will be false-positives, but they won't have any impact on you". You find it invasive? 99% of people don't care, they already give all their data to private companies like TooBigTech.
> whether adversaries have compromised the system to add their own hashes.
So what? Legal material gets reported to the authorities, they see it's legal, and they realise that the list has been tampered with.
> And we don't know that the private messages the mechanism sends
"Don't worry, we will make the code open source, you'll be able to audit it!"
> The hashes are "only for what's deemed illegal" because just trust me bro.
YES. That's my problem with it. It can be abused because fundamentally we cannot audit those hashes. We don't want to create a system that gives that power to whoever controls it.
[flagged]
Actually I deny that. What proof is there that peddlers of child pornography are using chat to distribute CSAM? What proof is there that they won’t simply move to another undocumented form of distribution once this is implemented, leaving the criminals unmonitored but every law-abiding citizen monitored?
> What proof is there that peddlers of child pornography are using chat to distribute CSAM?
Are you kidding me? How do you think predators get in contact with children? Over social media that children use, obviously.
And of course many criminals use chat. Most have no clue about encryption, like the vast majority of humans.
> How do you think predators get in contact with children?
I thought it was pretty common knowledge that the vast, vast majority of the perpetrators of these offenses are either family members, or known and trusted people to the family, such as the friends of an older sibling, friends of parents, stepparents, teachers, priests, etc. The bogeyman of the evil stranger paedo lurking on social media cold-calling random kids is an edge case.
Have you really never seen those groups of teenagers who lure a predator to meet a kid somewhere (they do that over mainstream social media) and then beat and humiliate said predator?
I thought it was in fashion. Happens where I live.
“Distribute” not “contact”. Unless you want to scan all chat messages for potential signs of adults engaging in grooming of children too? Talk about a slippery slope, you’re basically making my point.
> Unless you want to scan all chat messages for potential signs of adults engaging in grooming of children too?
Well, the point is to scan all messages, period.
And then to detect those that come from predators, not from parents. How often do parents convince their children to send... revealing pictures? Or to meet them somewhere? How often do parents introduce themselves to their children in messages?
You can't seriously believe that a conversation between parents and children always looks like a conversation between a predator and children, can you?
Sure, but who's reading the conversation to determine whether it "looks suspicious"? A regex? A neural network? Who decides the algorithm, and can you really believe they won't ever change it to serve other more nefarious purposes like suppressing dissent?
> Who decides the algorithm, and can you really believe they won't ever change it to serve other more nefarious purposes like suppressing dissent?
YES. That's the problem. Whoever controls it has that power. We don't want that. That's the argument against ChatControl: "imagine that those who get in power are against you, and imagine what they can do if they abuse this tool".
But saying that "a law enforcement officer may see a false-positive between a parent and their child and I find this insufferable" won't convince many people, IMHO.
Dude, you're basically arguing that we should bring the equivalent of App Store review process to people's chat history. You know that automated and human reviews are an absolute nightmare for people to navigate and errors are made constantly and people complain about it loudly. And the plan here is to escalate that not to just whether or not your app gets published, but whether or not you can remain out of jail.
Seems like a terrible idea.
Dude, I am not arguing that at all, you should read before you answer.
I am saying this:
> You can't deny that if you can read all communications, then it's easier to detect CSAM than if you can't even see it.
I am against ChatControl, but people who say "it shouldn't exist because it is useless because it cannot ever help detect anything illegal" are wrong.
It doesn't help the cause to flood people with invalid arguments.
Criminals are reactive. If you add a CCTV where drug dealing happens, sellers and buyers will go to another place. In the end, nothing changes.
Not all of them are. Actually CCTVs catch some of them. Tapping their phones as well.
Sounds like a non-sequitur to me. Yes, this is how it works - in every aspect of life we try to regulate as a society (through laws and enforcement). Criminal activities are prohibited, restricted and monitored. Criminals move on, but the law is also adaptive and soon catches on. Following your argument leads to anarchy and currently western society mostly shares the belief that we're better off with democracy.
No, the logical conclusion is that, on the contrary, liberal democracies end up as authoritarian legalist regimes, as they progressively need more and more laws and enforcement to catch criminals who evade them.
You can see this clearly with the constant inflation of AML laws in the EU, which become more and more restrictive and invasive each year, without any clear effect.
The US does not seem to be into adding regulations; would you say they are less likely to end up authoritarian?
There are multiple ways to end up an authoritarian State. You can add many regulations, then have someone come to power and use those regulations to repress society (a good example would be Germany in 1937), or have a weakening of the institutions protecting citizens' rights, which is something the US is experiencing.
It started before Trump; I think a turning point was the Patriot Act, but Democrats didn't overturn it and picked their enemies, too[0].
[0] https://en.m.wikipedia.org/wiki/Operation_Choke_Point
> if you can read all communications
But you can't. You can legally require messengers to implement client-side scanning before the encryption happens, or add some backdoor keys and hope they don't leak.
Since the cryptography is public knowledge, anybody can always just create messengers that don't implement that requirement. If you sexually abuse children, the hurdle of illegally running a non-backdoored messenger seems pretty low to me.
That's like fighting illegal street races by implementing electronically-GPS-enforced speed limits for everybody. You won't catch the people it is meant to catch. Meanwhile, surveilling your entire population minus the criminals has dire consequences for a free society.
> But you can't.
Be careful what you wish for. We could enforce client-side scanning on the OS. Everything that appears on the screen is scanned and reported.
> If you sexually abuse children the hurdle to illegally running a non-backdoored messenger seems pretty low to me.
How do you contact children on mainstream messengers if you can't use mainstream messengers?
Not to mention that most people consuming CSAM are not technically savvy. It is known that such material goes through mainstream messengers.
> Meanwhile, surveilling your entire population minus the criminals has dire consequences for a free society.
Again: I am against ChatControl. We fundamentally cannot know what is running in this client side scanner, and that makes it a risk.
But saying that it won't ever catch a single criminal is not a valid argument. You won't convince politicians to vote against ChatControl with such claims.
> Not to mention that most people consuming CSAM are not technically savvy. It is known that such material goes through mainstream messengers.
The reason is that it works. They're not stupid - they can use Signal.
The reality is that the privacy options not only exist, they're really good - often better and easier to use than the mainstream stuff.
They will just pivot to other tools.
> How do you contact children on mainstream messengers if you can't use mainstream messengers?
The scanning just doesn't include contacting children - it includes CSAM. Talking to kids isn't CSAM. You're talking about something else altogether, and something which is purely hypothetical.
> The scanning just doesn't include contacting children - it includes CSAM.
My understanding is that they are not only talking about having a list of hashes (of illegal images), but also having some kind of machine learning. And they are not only talking about scanning images, but text, too.
What do you expect them to report when scanning conversations with machine learning, if not grooming?
> Be careful what you wish for. We could enforce client-side scanning on the OS. Everything that appears on the screen is scanned and reported.
Nope. Criminals can still just build their own devices with their own operating systems. We have existing OSes without spying; people have them as ISOs on their hard drives. You can't stop criminals from installing an old LineageOS build.
Legally you can't stop anybody from using encrypted channels if they are motivated, unless you go out and erase the knowledge.
Implementing filters that go on all communications is undemocratic. Any future authoritarian government can use the same filter to immediately target their opposition, not just in theory but in practice. We have designed our democracies with division of powers for the simple reason that we have learned through history that giving any single kind of entity that kind of power leads to tyranny. That means whenever we give the government new superpowers, we are changing a carefully constructed balance. What does the citizen get for that trade? Nothing. If your power only works as long as it is in the hands of the good ones and becomes dystopian once it gets into the hands of the bad guys, maybe that power shouldn't exist.
Since we obviously want to prevent children from being sexually abused, the best way to start, according to most child-protection organizations, is at the root. That means educating kids early on in child-suitable ways and investing in prevention and psychological help for potential pedophiles. If children have the possibility to signal that such a thing happened to adults or other persons of trust, you don't need mass surveillance.
But my guess is that CSAM is just the symbolic reason, in reality this is meant to do more. It would be perfect to target small movements as they are emerging for example.
> Nope. Criminals can still just build their own devices
Haha, sure. You overestimate many of them. Not everyone breaking the law is a professional criminal. Especially when it comes to CSAM.
> Legally you can't stop anybody from using encrypted channels if they are motivated, unless you go out and erase the knowledge.
ChatControl doesn't claim to do this: it aims to control mainstream social media, whether encrypted or not.
> It would be perfect to target small movements as they are emerging for example.
Yes, this is my problem with ChatControl: it's a powerful tool of surveillance that would be very dangerous if abused. We don't want to create that.
But "it's useless because criminals will compile their own kernel anyway" is an invalid argument. It doesn't help the cause.
Disclaimer: I am against ChatControl.
> Does anyone believe that predators commit those heinous offenses because of the availability of encrypted channels to distribute those products of their crimes?
Who says that? I don't think they say that.
> The authorities really think every predator will just give up and stop abusing just because of that?
Nope, they think they will be able to arrest more predators.
> More likely of course, those criminals will just use [...]
You'd be surprised how many criminals are technically illiterate and just use whatever is the default.
>just use whatever is the default
gmail? google drive?
They better ban password protected zip files too!
They will when they can.
The thing that is crazy to me is that they choose to go after Signal of all things. Certainly there would be higher priority targets than a messaging app that has no social networking features to speak of, if child predators were really the target here.
This is nonsense. Anyone who has the smallest clue would use Signal for anything sensitive. Of course people would use Signal to talk about illegal stuff.
I am against ChatControl. But I am amazed by all the bullshit arguments that people find to criticise ChatControl.
If you have more control, obviously it's easier to track criminals. That's not the question at all. The question is: what is the cost to society? A few decades ago, all communications were unencrypted and people were fine. Why would it be different now? That's the question you need to answer.
You're all assuming that predators who are already deliberately using encrypted apps to share CSAM won't just move to something else where there is encryption – which will always be possible unless the EU finds a way to ban maths or reverts back to the pre-digital age.
This might catch the odd moron sharing stuff on Facebook or on their phone, but I doubt it will stop the average offender who is already going out of their way to use encrypted apps/services.
But okay, great, at least you catch the morons I guess, but at what cost? Here in the UK it's pretty common to be arrested for tweets as it is. There's no doubt in my mind this will be used to catch individuals committing speech crimes who are currently getting away with it because they share their opinions behind closed doors.
> but I doubt it will stop the average offender
I strongly believe it will catch the average offender. The average human doesn't have a clue about cryptography.
It won't catch all of them, of course. My point is that it is invalid to say that it won't catch anyone.
> but at what cost?
EXACTLY! The problem is that whoever controls the list of illegal material can abuse it. We fundamentally cannot audit the list because the material on this list is highly illegal. There is a risk of abuse.
"It won't catch all of them, of course. My point is that it is invalid to say that it won't catch anyone."
Sure, but wouldn't they quickly learn once people are getting caught?
No, they wouldn't. People were getting caught before encrypted apps. People are still getting caught on unencrypted apps today, even if it's easy to install an encrypted app.
And predators who get in contact with kids have to do it over social media that the kids use. Those ones would be affected by ChatControl.
It was unencrypted and “it was fine” because it was technically nearly impossible to store and process all communications. Now, one small server cluster can analyse all communication channels in a country in real time. The only thing stopping it is the encryption.
Ok, but with ChatControl, you still send your messages encrypted. They are scanned on your device.
So all communications aren't stored outside of your device, right?
All falsely flagged communication is. And there will be lots and lots of it, even if it is just a tiny fraction of the total number of messages sent, since the number of messages sent between people is so big. This is the classic problem with statistical methods looking for rare things in large populations, which is also why we don't screen everyone for all illnesses all the time - the false positives would do too much harm.
You also will not know if your message is flagged, so if you are ever in doubt about how your message will be categorized, you will have to assume that it will be flagged and sent to storage for an unknown amount of time
If you care about the tiny fraction of the total number being stored by the government, frankly you should care a lot more about all the data being stored by TooBigTech.
Feels a bit hypocritical to accept one and not the other.
Really, I think that the problem with ChatControl is that it is a weapon for surveillance. Not because of the false positive, but because whoever controls it can decide what gets reported. Depending on how a government evolves, that could be very dangerous. And we have examples of governments evolving like this in history.
> If you care about the tiny fraction of the total number being stored by the government, frankly you should care a lot more about all the data being stored by TooBigTech.
And what makes you think we don't?
It's much, much easier to stop new incursions into our privacy than to claw back privacy we've already lost. And it's much, much easier to stop the government from violating our privacy than to stop megacorporations accountable to no one for anything other than profit from doing so.
I think seeing hypocrisy here is being extremely uncharitable.
I guess what I am trying to say is that the population doesn't care. If you want to convince politicians, you have to convince the population.
IMHO there are valid arguments against ChatControl that are not "you see what you allow TooBigTech to do to you? Well with ChatControl you would allow much less to the government. Isn't that terrible?"
A strong argument against ChatControl, IMO, is that it builds a powerful tool of surveillance. Not because "someone fairly random will see false positives", but because someone in power (e.g. a president) could abuse it to maintain their power (e.g. by targeting political opponents).
Where did I write that I thought that was OK? I am writing this on a de-Googled phone, I have hosted my own email for over two decades and I avoid big tech like the plague. Please stop with the whataboutism.
I share your other concern, but I think it's related to the one I mentioned. Suddenly false positives turn into true positives, but for things that were totally unrelated to the initially stated goals of Chat Control.
> If you have more control, obviously it's easier to track criminals.
So why are criminals not being tracked? Seems like there's a shit ton of cameras everywhere but stores are still locking everything valuable behind glass. The benefits of this stuff never seem to materialize in practice.
Sorry, but you won't convince anyone with that argument.
That's my point: there are good arguments against ChatControl. Better focus on them.
> A few decades ago, all communications were unencrypted and people were fine.
A few decades ago, the user base of whatever was available was about 99% smaller than it is now. And governments were so technically illiterate that, with the tech they had, they could not read even those unencrypted messages.
Snowden was more than a decade ago. The NSA was recording everything.
"A few decades ago" implies the 1990s or early 2000s. In 1990 he was 7 years old; in the early 2000s, 17 years old.
a few
a small number of units or individuals
So ChatControl means that e.g. Signal would be obligated to automatically scan pictures and messages sent for CSAM. This is beyond encryption. And if they were to actually do that, it would mean it's nonsensical for people spreading this material to use it, as they would immediately be caught, so they would just use other tools.
But people are talking about both - the ridiculousness of the premise that this would help combat this and additionally of course the cost of privacy.
It's beyond encryption. Teenagers sending each other pictures could get flagged by AI etc. Any of your messages and images having potential to get falsely positively flagged.
So what? If predators cannot talk to children over Snapchat, that's a win, wouldn't you say?
The only valid argument I see against ChatControl is that fundamentally, you cannot know what it is reporting. It's not as if there could be an open-source list of illegal material together with the hashes, right?
If you cannot audit what is being reported (with whatever means necessary to make sure it is doing what it should be doing), then whoever controls it could abuse it.
That's the problem. That's the reason not to implement it. But it's completely overwhelmed by the flood of invalid arguments.
> The only valid argument I see against ChatControl is that fundamentally, you cannot know what it is reporting. It's not as if there could be an open-source list of illegal material together with the hashes, right?
By definition, they must state what is actually illegal, lest there be hidden laws with hidden punishments.
And those lists of 'illegal' need to be publicly disclosed, so we are aware.
At least in the USA, a naked picture of someone who is 17y364d old is 'child porn', but that extra day makes it 'barely legal'. And yet, most USA jurisdictions say that 16-year-olds can have sex. Just that pictures are EVIL even if you take them yourself.
Again however, I tend to agree more with Stallman that CSAM or child porn possession should either be legal or have a mens rea requirement attached, rather than being a strict-liability possession offence. It's proof of a crime, and shouldn't in and of itself be a crime.
But because a picture is a crime, we get these horrific laws.
> By definition, they must state what is actually illegal, lest there be hidden laws with hidden punishments.
I don't need to murder you in order to say that murdering you is illegal, do I?
Of course they don't have to publish CSAM material in order to say that this is illegal CSAM material. If you could go get CSAM material at your local library, nobody would be talking about scanning it with ChatControl...
Then from a picture, tell me what the exact age of the nude person is.
Again, we need to know the exact year and day. 17y364d is illegal but 18y is legal.
Not sure how this answers my comment above.
It points out the flaw in comparing an action (killing someone without cause) with passively receiving a picture.
That picture does NOT have enough information to determine if it's legal or not. And even as little as one day can be the difference between legal and not.
And of course, there is also the hypocrisy of sex being legal at 16 while pictures are 'child porn', when the subjects are demonstrably NOT children.
Well, there are MOST DEFINITELY images that are, UNAMBIGUOUSLY, VERY illegal. It is not hypocritical to say that detecting such images on a device means that someone should look into it, because something went VERY WRONG there.
So yes, MANY pictures already exist that do have more than enough information to determine that they are VERY ILLEGAL.
If you can't apprehend that, I don't know what to tell you.
I think that a world where underage children can't access TikTok and Snapchat is an acceptable cost to keep our right to privacy.
> The only valid argument
Really? The only one?
Really, yes. I am against ChatControl myself, and I am genuinely struggling to find credible messages against it.
All communications were unencrypted because encrypting them would have incurred unduly burdensome processing. Nowadays computers can encrypt and decrypt on the fly for virtually free.
Sure. Still people considered themselves free and living in democracies. Why wouldn't it be the case today?
We also didn't have AI models that politicians believed could detect bad behavior on a mass scale. Implementing Stasi-level mass surveillance would have been very expensive back then, even with full access to all communication. Now the proposal is to make a model try to categorize your messages and flag you.
I believe that politicians believe that AI models can do this well without negative consequences. But I also think they forget that a model with 99.99% specificity applied to ten million messages will still falsely label 1000 as harmful.
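The back-of-the-envelope version, with prevalence and sensitivity as purely illustrative assumptions on my part:

```python
messages    = 10_000_000  # messages scanned
specificity = 0.9999      # 99.99%: fraction of innocent messages passed
prevalence  = 1e-6        # assumed fraction of messages that are illegal
sensitivity = 0.99        # assumed fraction of illegal messages caught

false_positives = messages * (1 - prevalence) * (1 - specificity)
true_positives  = messages * prevalence * sensitivity

print(round(false_positives))  # ~1000 innocent messages flagged
print(round(true_positives))   # ~10 genuine hits
# Precision: of everything flagged, only about 1% is actually illegal.
print(true_positives / (true_positives + false_positives))
```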
People using online communication systems were a niche, not the norm, and most people didn't have the tools and knowledge to access someone else's digital communication.
It is not the case anymore.
Most illegal things go on Telegram for some reason. I guess Signal doesn't have the required features.
It only proves that people doing illegal stuff mostly have no clue about security...
Founder in jail.
Anyone using a mobile device for CSAM is in prison by now.
Predators use mainstream social media to enter in contact with children.
Most victims of child abuse know their aggressor because it is part of their social circle: dad, mother, uncle, brother, sport coach or a friend of the parents/sibling.
Most, not all of them.
Or are you saying that we should not care about the others?
It's impossible to stop all crime without an all-encompassing surveillance state. At some point you have to set boundaries for what the state is allowed to do in the name of safety.
Agreed. But then you have to explain how you set your boundary. You can't just say that the right boundary is yours.
ChatControl will protect some kids. If your argument is "yeah, shit happens", you won't convince anyone. If your argument is "yes, but at a great cost for society", then you need to explain what the cost is.
Saying "your freedom" is worth exactly as much as "but CSAM" when said from the other side. It's not enough. What is the tangible risk of ChatControl?
>You can't just say that the right boundary is yours.
Constitution says I can. The country is governed by people.
Are you saying that scanning every single communication is the only way?
What I am saying is that scanning every single communication most definitely helps preventing some CSAM.
Is it worth it? Well that's the question, and I am against ChatControl so I think it is not worth it. But if you try to convince a politician that scanning every single communication cannot ever help law enforcement, then good luck. Because you are wrong.
Why not use valid arguments against ChatControl instead?
The same EU that let loads of rapists into Europe is telling us that they care about the people. https://en.wikipedia.org/wiki/Vienna_swimming_pool_rape https://portal.research.lu.se/en/activities/nearly-two-third...
More likely it's ironically so that the people cannot oppose such policies as they won't be able to organize when all comms are tapped.
Absolutely, evidence of abuse is secondary to the actual abuse.
Plus, the fact that you could use AI/LLMs/etc. to generate nefarious content that is hard to tell from the real thing tells you the abuse isn't even what they are interested in.
> Best case scenario (and this is wildly optimistic) the offenders won't be able to find any 'safe' channels to distribute their materials to each other.
The theory is based on the documented fact that most crime is poorly thought through with terrible operational security. 41% is straight up opportunistic, spur of the moment, zero planning.
It won't stop technologically savvy predators who plan things carefully; but statistically that is probably only a few percent of predators; so yes, it's probably pretty darn effective. There is no shortage of laws that are less effective that you probably don't want repealed - like how 40% of murderers and 75% of rapists get away with it. Sleep well tonight.
Exactly. Econ 101: why do consumption taxes work at all? By increasing the amount of pain associated with purchasing a particular indulgent product, you decrease the consumption of that product on the margin. When you increase the price of cigarettes by 20%, cigarette smoking in a society decreases. But for the most addicted, no consumption tax will probably act as a deterrent.
Some individuals will find a way to distribute and consume child pornography no matter the cost. But other addicted individuals will stop consuming if doing so becomes too laborious, because they are consuming or distributing on the margin. I.e., imagine the individual who doesn't want to be consuming it, who knows they shouldn't - this type of deterrent may be the breaking point that gets them to stop altogether. And if you reduce the amount of consumption or production by any measure, you decrease a hell of a lot of suffering.
But anyway, the goal of this legislation is not to drive the level of distribution to 0. The goal of policymakers could be seen charitably as an attempt to curtail consumption, because any reduction in consumption is a good thing.
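To put the cigarette-tax example above in numbers (the elasticity here is an illustrative assumption, not a measured figure):

```python
# Price elasticity of demand: % change in quantity per % change in price.
elasticity     = -0.4   # assumed; inelastic, as is typical for addictive goods
price_increase = 0.20   # the 20% price rise from the example above

consumption_change = elasticity * price_increase
print(f"{consumption_change:.0%}")  # -8%: consumption falls on the margin
# Users with elasticity near 0 (the most addicted) barely respond, while
# marginal users drop out entirely - the deterrence argument in miniature.
```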
Exactly my point, but also, to add to it:
Let's say you're actually texting in a group. Even if you use perfect operational security, odds are terrible that all members of your group will perfectly uphold the same level of security every time they share their content.
One is going to slip up. He's going to get arrested. And he's going to turn the whole group in to reduce his sentence. Everyone else meanwhile has their operational security become proof of intent, proof of deliberation, proof of trying to evade authorities. They thought they were clever with the encrypted ZIP files, but the judge and jury are going to be merciless. I don't think most authorities have a problem with that.
Wait. Are you calling child pornography an “indulgent product”?
Was referring to tobacco, alcohol, soft drinks etc
That's not a bug, that's a feature. They'll say that current surveillance tools are insufficient, and demand more.
Best case scenario would be that lots of children are saved from abuse because the magic software somehow discovers it. I kind of doubt it, though.
No, you don’t get it. Hosting or possessing CSAM has criminal penalties even if no children were involved. For example AI generated imagery.
In fact, even if zero children are ever trafficked or abused going forward, and pedophiles only use old photos of children from 30 years ago, merely having these images is still an issue.
Conversely, the vast majority of sexual abuse of minors doesn’t involve images and goes unreported. "Considerable evidence exists to show that at least 20% of American women and 5% to 10% of American men experienced some form of sexual abuse as children" (Finkelhor, 1994). "Most sexual abuse is committed by men (90%) and by persons known to the child (70% to 90%), with family members constituting one-third to one-half of the perpetrators against girls and 10% to 20% of the perpetrators against boys" (Finkelhor, 1994).
In short - if they wanted to reduce child abuse, scanning everyone’s communications for CSAM would not be the most straightforward way to go about it.
"No, you don’t get it."
Did you get my last sentence?
"In short - if they wanted to reduce child abuse, scanning everyone’s communications for CSAM would not be the most straightforward way to go about it."
What would be the most straightforward way? Install a camera in every home?
Yes, abuse is usually more to be found inside families. And the solution is kind of complicated, involving social workers, phone numbers victims can call, safe houses for mothers with children to flee into, police officers with sensitivity training who care, teachers who are not burned out so they can actually pay attention to troubled kids ...
How do they know it's unreported if it's unreported? They mean unreported to police but reported in scientific self-report surveys?
> if they wanted to reduce child abuse, scanning everyone’s communications for CSAM would not be the most straightforward way to go about it.
* First, this is not what politicians do. What they want is to look like they are fighting it.
* Second, what is your more straightforward way to fight CSAM? Asking for a backdoor is pretty straightforward, I find. I would rather say that fighting CSAM is more difficult than that.
>The authorities really think every predator will just give up and stop abusing just because of that? What a joke.
Yes, the framing is disingenuous, but so is yours. You're seriously suggesting that any policy that doesn't 100% eliminate a problem is a joke?
If the cost of the proposal is "let's throw democracy under the bus" as it is in this case, it better be damn close to 100% effective to be worth it!
I have a hard time imagining this will be more than 10% effective.
This proposal is a joke
It's going to be 0% effective. It won't take long for criminals to use their own encrypted communication systems, and only law-abiding citizens will be monitored.
And then you'll get into a scenario that the government will punish you for wrong-speak, like when people had their bank accounts frozen for donating to the trucker protests. Or they will turn off your access to social media the way the Biden Administration did during the Pandemic.
A few decades ago, all communications were unencrypted. Would you say that democracies did not exist then?
This is completely untrue! Important communications have been enciphered since language was created, I’d wager, whether that cipher is specific terms ("grog" means "attack that person in 10 seconds!") or a book cipher, i.e. the first letter of a Bible verse, then the second letter of the next verse, etc. Humans have been encrypting communication since communication was possible.
It has only recently become possible to dragnet many communications en masse, store them, and analyze them. The past decades have brought new threats to privacy and democracy through breaking encryption at the state scale.
> Humans have been encrypting communication since communication was possible.
Were most people encrypting their handwritten letters? Were most people encrypting their messages before sending them by SMS or with WhatsApp? Really?
No, because there was an expectation of privacy. That expectation is no longer there.
Privacy from who? Law enforcement has been leveraging that forever.
But ChatControl won't prevent the encryption for anyone who is not the receiver of the reports. And the receiver is the equivalent of "law enforcement", right?
The scalability of spying has exploded. Back before electronic comms, the government had no way to spy on communications and sift out opposers - now they do, with encryption the only thing standing in the way.
>Privacy from who? Law enforcement has been leveraging that forever.
Not without legal proceedings. The population would have been absolutely outraged if the government just decided to read all of their mail one random day in the 90s.
There's a reason the whole idea was supposed to be a conspiracy theory, the population literally didn't believe something like that could happen.
I think that there is a big difference, for the population, between "somebody is reading and keeping a copy of all your mail" and "Some algorithm looks for illegal material locally on your phone. If you don't have illegal material, it won't do anything".
Nobody would want to carry a microphone recording them 24/7 and storing everything on a server, but everybody is fine with TooBigTech simply promising that they don't store the data.
We have to accept that people are fine with the idea. The problem (both with the connected mic and ChatControl, btw) is that it can be abused. That's the problem. Again: we have to convince people that it is at risk of being abused. Not that they should be outraged. They just are not.
>TooBigTech simply promising that they don't store the data.
Instead they notify you that you gave them perpetual license to reuse your data.
Not most but some.
At least where I'm from, there are pretty strong laws against reading the snail mail of others. To this day, any law enforcement that tries to open people's snail mail will be laughed out of the courtroom, and quite possibly out of their jobs too!
Today nobody uses snail mail. This proposal is the equivalent of proposing to read everyone's private letters back in the day.
Technical details are technical details
A few decades ago, few communications were tracked. When everything is tracked (as it is now), the only way to have privacy is with encryption.
Snowden said otherwise, more than a decade ago.
Which part are you disputing?
The fact that ChatControl is killing democracies.
It's a tool that could be abused, but I wouldn't say that it is enough to kill a democracy all by itself.
To make a silly analogy: A stone in go has 4 liberties. Take away all four and the stone dies.
Chat control takes away one liberty from democracy.
Ask any half decent go player what will happen to that stone if we just ignore the attack upon it?
If they suspect that you own CSAM, law enforcement will check your devices. Actually, if they have convincing arguments, the way they get access to your devices may touch your physical integrity.
You don't have the liberty to avoid that, today. By design.
My point being that if one could prove that the ChatControl detection is only running locally and that it is only reporting what's acceptable to report, then it wouldn't hurt your freedom (except for your freedom to do illegal stuff, but that's the whole point).
The problem is that it is not possible to prove this. Fundamentally. We need to talk about that. Not throw some "it will kill democracy because you should trust me when I say it".
But chat control will have "master keys" to all communication.
That key will leak eventually, it's too juicy a target.
You describe chat control as if it's just an AI CSAM scanner that runs locally on your phone, like what Apple did recently-ish.
Chat control is so much more than that, and so much worse
> But chat control will have "master keys" to all communication.
That's not my understanding. My understanding is that ChatControl will run client-side scanning and report what is deemed illegal.
This is not a master key to all communications.
Last time I checked, the preferred method was to simplify known CSAM material enough that you can hash the result, then repeat that on the client end and hope nothing else has the same hash.
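That's essentially the classic "average hash" trick. A toy sketch of the idea (my own illustration; production systems like Microsoft's PhotoDNA are proprietary and more robust):

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    # "Simplify": shrink to 8x8 grayscale, then threshold each pixel on the
    # mean. Near-duplicates (resized, recompressed) yield similar bit strings.
    img = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# The client declares a match when the distance to a known-bad hash is
# small, e.g. hamming(average_hash("incoming.jpg"), known_bad) <= 5.
```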
Which makes it even less of a problem than what people say. I see that as an argument in favour of ChatControl. If it really "just" compares hashes locally, then the claim that it breaks encryption is even more wrong.
But how is this supposed to protect children if say an abuser takes pictures or videos of the victim?
I don't see this as a valid argument. You can't say "I can find a situation where ChatControl does not help, therefore ChatControl is always useless".
On the contrary, it is an argument in favour of extending ChatControl to using machine learning for detecting such cases.
The problem, again, is that we don't want to have an opaque system that can be extended to surveilling new things, because it's very difficult to audit and make sure it is not abused.
Encryption is not the only privacy assurance that exists in democracies. For example, the government is (or at least was[1]) not allowed to open your mail. You could send CSAM Polaroids back and forth and nothing would happen.
Chat Control amounts to routine, warrantless interception of private communication. Something you see in states like the USSR.
[1] https://www.westernstandard.news/news/liberals-push-bill-to-...
Well, what is "the problem"? Is it children being abused, or is it the distribution of CSAM?
And if you say both - how would you rate the relative severity of the two problems? Specifically, if you had to pick between preventing the rape of a child, and preventing N acts of CSAM distribution, how big would N have to be to make it worth choosing the latter?
I don't think they care what N is, they are just scapegoating a vile group they know will have no defenders, and they can use it to silence the critics by associating them with that group.
Bingo.
Today it's the pedophiles and the 15-17-philes (that fake category of adolescents, who are also tried as adults when convenient).
Tomorrow, its the adult sex workers.
Then its the fringe group's topics that is on the outs with the majority.
Then they come for you, and nobody is able to speak up because they banned protests.
... To paraphrase Martin Niemoller.
> Well, what is "the problem"? Is it children being abused, or is it the distribution of CSAM?
It seems obvious that it is entirely the former and not at all the latter. In other words, N is positive infinity. Am I missing something?
I only care about kids being hurt. And I think this view is close to consensus.
Ask anyone you know who has been sexually assaulted or raped what they think of the idea of pictures or recordings of that being both kept by the perpetrator and widely disseminated. I think you'll find very few who'd say that's totally fine. But given that there can be no CSAM without child abuse, the direct physical abuse is clearly the primary problem.
What do you think it would be for you?
What's worse for you? Being raped as a child. Or, having people sexually gratify themselves looking at images of you being abused; using those images to groom other children, or to trade and encourage the rape of other children?
You might as well ask someone which eye they prefer to have gouged out with a blunt screw.
Let's do both: try to stop child sexual abuse and try to stop images of abused children being used by abusers.
> You're seriously suggesting that any policy that doesn't 100% eliminate a problem is a joke?
I think a more charitable reading is that any policy that doesn't 100% _target_ a problem is a joke. This policy doesn't have a plausible way that it will protect children from being victimized, so I think it's reasonable to remove the "think of the children" cloak it's wearing and assess it on the merits of whether encryption is beneficial for the social discourse of a society.
> This policy doesn't have a plausible way that it will protect children from being victimized
Of course it does. "It will detect and report messages from predators to children, therefore preventing the child from getting to the point where they send revealing pictures or meet the predator in person". Done.
Well, maybe the word "plausible" is doing too much work in my statement.
Most abuse happens from people known to the child, and of that portion, most are family members. It seems like there is sufficient opportunity in in-person comms to route around this limitation.
Moreover, even the communications that do happen online can still easily happen through encrypted media; presumably the perpetrators will simply move to other ways of communicating. And kids, at least kids over 10 or so, don't seem like a demographic particularly likely to follow this law anyhow.
There's another nuance worth considering: by and large, parents _want_ their kids to have access to encrypted communications. I'll happily assist my kiddo in maintaining good opsec - that's much more important to me than some silly and uninformed policy decision being made far away by people I've never met.
https://web.archive.org/web/20210522003136/https://blog.nucy...
So, the kids are still going to be where the encrypted comms are. I still think it's reasonable to say that the protections offered to kids by criminalizing encryption are implausible.
> Most abuse happens from people known to the child
Sure, but it means that at least some happen from people unknown to the child. If ChatControl doesn't cause any problem but helps preventing those abuses, then it's worth it. The question is: what are the problems caused by ChatControl?
Saying "only a minority of children get abused this way, so it's not worth it" won't go far, IMO. It's not a valid argument against ChatControl in itself.
> presumably the perpetrators will simply move to other ways of communicating.
The perpetrators have to contact kids over apps that the kids use. Like Snapchat or TikTok. It's not like the kids will routinely install a weird app to talk to weird people...
> parents _want_ their kids to have access to encrypted communications.
But ChatControl doesn't remove the encryption! It scans everything locally, before it gets encrypted and sent.
> by criminalizing encryption
It's not criminalizing encryption: it's forcing a local scan on your device. Just like there are already scans happening on clouds for non-E2EE data.
Don't get me wrong: I am against ChatControl. For me the problem is that I see a potential for abuse with the "list" (whether it's a list or a sum of weights) of illegal material. This list cannot be made public (because it's highly illegal material), so it's hard to audit. So whoever has control over it can abuse it, e.g. to find political opponents. That's my problem with ChatControl.
Wow, the only prosaic take in this entire dumpster-fire of a thread.
Can you point me to the doc that says all this?
Admittedly I didn't yet bother looking for the official ruling since those are massive, and media sites were just politicising it.
I don't have a doc to point to, it's just my understanding of ChatControl :-).
Is text-only CSAM even a thing?
It is! https://en.m.wikipedia.org/wiki/ASCII_porn
The calculus is simple:
- People who go into government want power over others, and they'd therefore prefer more power than less at every junction
- The population put up very little resistance in general - e.g. as we saw with COVID, they'll freely let the government put them under house arrest and destroy their businesses, to protect them against a bug only slightly deadlier than the flu
- However they do need some pretext - with none at all, impinging on people's civil liberties would cause confusion, and the people might even ponder grumbling about it, which would be unnecessarily messy for everyone involved
- Hence they simply need to lay out some half-assed pretext, just to make it easy for people to internally justify their further loss of rights, so that they don't have to have any uncomfortable thoughts like "maybe I should disagree with this?"
- "Think of the children" is a nice easy one. It's basically a farcical meme at this point, but that doesn't affect its effectiveness, for the reasons described. You could use this for literally anything, no matter how extreme (I'm sure for instance the Nazis used it as one reason for exterminating Jews - to protect "Aryan" children & their future etc)