The thing that is crazy to me is that they choose to go after Signal of all things. Certainly there would be higher priority targets than a messaging app that has no social networking features to speak of, if child predators were really the target here.

This is nonsense. Anyone who has the smallest clue would use Signal for anything sensitive. Of course people would use Signal to talk about illegal stuff.

I am against ChatControl. But I am amazed by all the bullshit arguments that people find to criticise ChatControl.

If you have more control, obviously it's easier to track criminals. That's not the question at all. The question is: what is the cost to society? A few decades ago, all communications were unencrypted and people were fine. Why would it be different now? That's the question you need to answer.

You're all assuming that predators who are already deliberately using encrypted apps to share CSAM won't just move to something else that is encrypted – which will always be possible unless the EU finds a way to ban maths or reverts to the pre-digital age.

This might catch the odd moron sharing stuff on Facebook or on their phone, but I doubt it will stop the average offender who is already going out of their way to use encrypted apps/services.

But okay great, at least you catch the morons I guess, but at what cost? Here in the UK it's pretty common to be arrested for tweets as it is. There's no doubt in my mind this will be used to catch individuals committing speech crimes who are currently getting away with it because they share their opinions behind closed doors.

> but I doubt it will stop the average offender

I strongly believe it will catch the average offender. The average human doesn't have a clue about cryptography.

It won't catch all of them, of course. My point is that it is invalid to say that it won't catch anyone.

> but at what cost?

EXACTLY! The problem is that whoever controls the list of illegal material can abuse it. We fundamentally cannot audit the list because the material on this list is highly illegal. There is a risk of abuse.

"It won't catch all of them, of course. My point is that it is invalid to say that it won't catch anyone."

Sure, but wouldn't they quickly learn once people are getting caught?

No, they wouldn't. People were getting caught before encrypted apps. People are still getting caught on unencrypted apps today, even if it's easy to install an encrypted app.

And predators who get in contact with kids have to do it over social media that the kids use. Those ones would be affected by ChatControl.

It was unencrypted and “it was fine” because it was technically nearly impossible to store and process all communications. Now, one small server cluster can analyse all communication channels in a country in real time. The only thing stopping it is the encryption.

Ok, but with ChatControl, you still send your messages encrypted. They are scanned on your device.

So all communications aren't stored outside of your device, right?

All falsely flagged communication is. And there will be lots and lots of it, even if it is just a tiny fraction of the total number of messages sent, because the number of messages sent between people is so big. This is the classic problem with statistical methods looking for rare things in large populations, which is also why we don't screen everyone for every illness all the time: the false positives would do too much harm.
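The screening analogy can be made concrete with Bayes' rule. A quick sketch in Python, with made-up numbers (the prevalence, sensitivity, and false-positive rate are all assumptions chosen purely for illustration, not real figures for any scanner):

```python
# Hypothetical numbers, chosen only to illustrate the base-rate problem;
# real prevalence and error rates for any scanner are unknown.
prevalence = 1e-6           # fraction of messages that are actually illegal
sensitivity = 0.99          # chance a genuinely bad message gets flagged
false_positive_rate = 1e-4  # chance a harmless message gets flagged anyway

true_flags = sensitivity * prevalence
false_flags = false_positive_rate * (1 - prevalence)

# Positive predictive value: probability a flagged message is actually illegal.
ppv = true_flags / (true_flags + false_flags)
print(f"Share of flags that are real: {ppv:.1%}")  # about 1%
```

Even with a scanner that is wrong only once per ten thousand messages, roughly 99 out of 100 flags would point at innocent people, simply because illegal messages are so rare relative to the total volume.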

You also will not know if your message is flagged, so if you are ever in doubt about how your message will be categorized, you will have to assume that it will be flagged and sent to storage for an unknown amount of time.

If you care about the tiny fraction of the total number being stored by the government, frankly you should care a lot more about all the data being stored by TooBigTech.

Feels a bit hypocritical to accept one and not the other.

Really, I think that the problem with ChatControl is that it is a weapon for surveillance. Not because of the false positives, but because whoever controls it can decide what gets reported. Depending on how a government evolves, that could be very dangerous. And history gives us examples of governments evolving like this.

> If you care about the tiny fraction of the total number being stored by the government, frankly you should care a lot more about all the data being stored by TooBigTech.

And what makes you think we don't?

It's much, much easier to stop new incursions into our privacy than to claw back privacy we've already lost. And it's much, much easier to stop the government from violating our privacy than to stop megacorporations accountable to no one for anything other than profit from doing so.

I think seeing hypocrisy here is being extremely uncharitable.

I guess what I am trying to say is that the population doesn't care. If you want to convince politicians, you have to convince the population.

IMHO there are valid arguments against ChatControl that are not "you see what you allow TooBigTech to do to you? Well with ChatControl you would allow much less to the government. Isn't that terrible?"

A strong argument against ChatControl, IMO, is that it builds a powerful tool of surveillance. Not because "someone fairly random will see false positives", but because someone in power (e.g. a president) could abuse it to maintain their power (e.g. by targeting political opponents).

Where did I write that I thought that was OK? I am writing this on a de-Googled phone, I have hosted my own email for over two decades and I avoid big tech like the plague. Please stop with the whataboutism.

I share your other concern, but I think it's related to the one I mentioned. Suddenly false positives turn into true positives, but for things that were totally unrelated to the initially stated goals of Chat Control.

> If you have more control, obviously it's easier to track criminals.

So why are criminals not being tracked? Seems like there's a shit ton of cameras everywhere but stores are still locking everything valuable behind glass. The benefits of this stuff never seem to materialize in practice.

Sorry, but you won't convince anyone with that argument.

That's my point: there are good arguments against ChatControl. Better focus on them.

> A few decades ago, all communications were unencrypted and people were fine.

A few decades ago, the user base of whatever communication tools were available was about 99% smaller than now. And governments were so technically illiterate that, with the tech they had, they couldn't even read those unencrypted messages.

Snowden was more than a decade ago. The NSA was recording everything.

A few decades ago implies the 1990s or early 2000s. In 1990 he was 7 years old; in the early 2000s, 17.

a few

a small number of units or individuals

So ChatControl means that e.g. Signal would be obligated to automatically scan pictures and messages for CSAM. This goes beyond encryption. And if they were to actually do that, it would be nonsensical for people spreading this material to use it, as they would immediately be caught, so they would just use other tools.

But people are talking about both - the ridiculousness of the premise that this would help combat this and additionally of course the cost of privacy.

It goes beyond encryption. Teenagers sending each other pictures could get flagged by AI, etc. Any of your messages or images could be falsely flagged.

So what? If predators cannot talk to children over Snapchat, that's a win, wouldn't you say?

The only valid argument I see against ChatControl is that fundamentally, you cannot know what it is reporting. It's not like there's an open-source list of the illegal material together with its hashes, right?

If you cannot audit what is being reported (with whatever means necessary to make sure it is doing what it should be doing), then whoever controls it could abuse it.

That's the problem. That's the reason not to implement it. But it's completely overwhelmed by the flood of invalid arguments.
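To make the auditing problem concrete: hash-list scanning boils down to checking each payload against a set of opaque digests. Real deployments use perceptual hashes (so near-duplicates still match) rather than exact cryptographic hashes, but a minimal exact-hash sketch is enough to show why the list reveals nothing auditable. The single blocklist entry below is just the SHA-256 digest of the string "foo", used as a stand-in:

```python
import hashlib

# Hypothetical blocklist: opaque digests distributed by some authority.
# Nothing about a digest reveals what content it matches, which is
# exactly why such a list cannot be publicly audited.
blocklist = {
    # SHA-256 of b"foo", standing in for a "known illegal content" digest
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def is_flagged(data: bytes) -> bool:
    """Report whether the payload's digest appears on the blocklist."""
    return hashlib.sha256(data).hexdigest() in blocklist

print(is_flagged(b"foo"))       # True: matches the listed digest
print(is_flagged(b"harmless"))  # False
```

From the client's side, every entry is an opaque 256-bit value: there is no way to tell whether a digest corresponds to CSAM, a leaked document, or a political meme. Whoever maintains the list decides what gets flagged.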

> The only valid argument I see against ChatControl is that fundamentally, you cannot know what it is reporting. It's not like there's an open-source list of the illegal material together with its hashes, right?

By definition, they must state what is actually illegal, lest there be hidden laws with hidden punishments.

And those lists of 'illegal' material need to be publicly disclosed, so we are aware.

At least in the USA, a naked picture of someone who is 17y364d old is 'child porn', but that extra day makes it 'barely legal'. And yet most US jurisdictions say that 16-year-olds can have sex; just the pictures are EVIL, even if you take them yourself.

Again, however, I tend to agree more with Stallman that CSAM or child porn possession should either be legal or require mens rea, rather than being a strict-liability offence. It's proof of a crime, and shouldn't in and of itself be a crime.

But because a picture is a crime, we get these horrific laws.

> By definition, they must state what is actually illegal, lest I be hidden laws with hidden punishments.

I don't need to murder you in order to say that murdering you is illegal, do I?

Of course they don't have to publish CSAM material in order to say that this is illegal CSAM material. If you could go get CSAM material at your local library, nobody would be talking about scanning it with ChatControl...

Then from a picture, tell me what the exact age of the nude person is.

Again, we need to know the exact year and day. 17y364d is illegal but 18y is legal.

Not sure how this answers my comment above.

It points out the flaw in comparing an action (killing someone without cause) with passively receiving a picture.

That picture does NOT have enough information to determine whether it's legal or not. And even as little as one day can be the difference between legal and not.

And of course, there is also the hypocrisy of sex being legal at 16, while pictures are 'child porn' of people who are demonstrably NOT children.

Well, there are MOST DEFINITELY images that are, UNAMBIGUOUSLY, VERY illegal. It is not hypocritical to say that detecting such images on a device means that someone should look into it, because something went VERY WRONG there.

So yes, MANY pictures already exist that do have more than enough information to determine that they are VERY ILLEGAL.

If you can't grasp that, I don't know what to tell you.

I think that a world where underage children can't access TikTok and Snapchat is an acceptable cost to keep our right to privacy.

> The only valid argument

Really? The only one?

Really, yes. I am against ChatControl myself, and I am genuinely struggling to find credible messages against it.

All communications were unencrypted because encrypting them would have incurred unduly burdensome processing. Nowadays computers can encrypt and decrypt on the fly for virtually free.

Sure. Still people considered themselves free and living in democracies. Why wouldn't it be the case today?

We also didn't have AI models that politicians believed could detect bad behavior on a mass scale. Implementing Stasi level mass surveillance would be very expensive back then, even if there was full access to all communication. Now the proposal is to make a model try to categorize your messages and flag you.

I believe that politicians believe that AI models can do this well without negative consequences. But I also think they forget that a model with 99.99% specificity applied to ten million messages will still falsely label 1000 as harmful.

People using online communication systems were a niche, not the norm, and most people didn't have the tools and knowledge to access someone else's digital communications.

It is not the case anymore.

Most illegal things go on Telegram for some reason. I guess Signal doesn't have the required features.

It only proves that people doing illegal stuff mostly have no clue about security...

Founder in jail.

Anyone using a mobile device for CSAM is in prison by now.

Predators use mainstream social media to get in contact with children.

Most victims of child abuse know their abuser, because the abuser is part of their social circle: dad, mother, uncle, brother, sports coach, or a friend of the parents/siblings.

Most, not all of them.

Or are you saying that we should not care about the others?

It's impossible to stop all crime without an all-encompassing surveillance state. At some point you have to set boundaries for what the state is allowed to do in the name of safety.

Agreed. But then you have to explain how you set your boundary. You can't just say that the right boundary is yours.

ChatControl will protect some kids. If your argument is "yeah, shit happens", you won't convince anyone. If your argument is "yes, but at a great cost for society", then you need to explain what the cost is.

Saying "your freedom" is worth exactly as much as "but CSAM" when said from the other side. It's not enough. What is the tangible risk of ChatControl?

>You can't just say that the right boundary is yours.

The Constitution says I can. The country is governed by its people.

Are you saying that scanning every single communication is the only way?

What I am saying is that scanning every single communication most definitely helps prevent some CSAM.

Is it worth it? Well that's the question, and I am against ChatControl so I think it is not worth it. But if you try to convince a politician that scanning every single communication cannot ever help law enforcement, then good luck. Because you are wrong.

Why not use the valid arguments against ChatControl instead?