" And you want to intercede on an arbitrary method you just made up,"

No, they literally identified a plausibly sensible policy flag, not some arbitrary action.

These flags are used in literally every system imaginable.

That they don't conform to some hard criteria, whether your criteria or some working or ideological group's criteria, is a bit beside the point.

Every system has these for good reason.

We have laws and regulations for all sorts of things to help people - including children and parents - in a complex society.

"The state has no business listening in on private citizen's communication."

They absolutely do, depending on circumstances. While Facebook is not a place for state monitoring, it's definitely in the public interest if they flag something that is 'very bad' by some reasonable criteria, so that the state can then act if necessary. They do so within the boundaries of the law, subject to judicial oversight.

Facebook is a popular social network, a place where they want people to feel eminently safe. It's a Starbucks lounge without coffee, not a 'personal hyper-protected zone'.

Other places, such as Signal, Telegram, etc., can have different levels of privacy (i.e. E2E encryption) given their different offerings and expectations of privacy.

Facebook more or less wants to offer a relatively safe place where kids can hang out, where they know crazy people are not going to attack their kids. It's a community centre, not a hacker zone.

If we can get past that, then we can move on to basic issues of privacy, advertising, etc., which are damaging to everyone, especially young people, and for which Facebook has perverse incentives.

"The state has no business listening in on private citizen's communication."

They absolutely do, depending on circumstances.

So primary is this concept of privacy that it requires an entire legal framework: evidence of potential wrongdoing, proof that there is no other method to achieve the goal of validating guilt, proof that the crime is severe and that this is not a fishing expedition, approval via a warrant after a judge has examined that evidence, and strict controls around the entire usage of that warrant.

Wikipedia says:

Lawful interception is officially strictly controlled in many countries to safeguard privacy; this is the case in all liberal democracies.

Using this edge case as "depending on circumstances" is clearly not the general case I was referencing. The statement that

"The state has no business listening in on private citizen's communication."

is valid, correct, and accurate. Listing edge cases does not invalidate the rule; it is the exception to the rule, and considering the sheer volume of communication compared to the volume actively tapped by legal means, it is the most edge case of edge cases.

There is no reason I would deem it somehow OK for a mega-corp to do what I would demand the state not do, what our democratic societies have deemed our states should not do.

To highlight that: the phone companies of old would be in seriously hot water should they listen in on communication between customers in any fashion.

A platform is not a parent. It should not police, and should not act as an arm of the state or as an arm of parents, except, as I stipulated, by direct request of the parents, and only to enable the parents to act as guardians. Under no circumstances should that involve the platform scanning anything; instead, the platform could simply give parents direct access to a child's account.

" that it requires an entire legal framework, evidence of potential wrongdoing, proof that there is no other method to achieve the goal of validating guilt"

No it doesn't.

Life is not Reddit, with its lawyers and technicalities.

It's made up of regular people in communities.

If you see some guy creeping on 10-year-olds, you can notify the police, and Facebook will do that as well, for the same reason.

It may not at all need to involve 'state surveillance', and Meta can probably hand over whatever they want to the police in that circumstance.

The police can make a decision as to how to proceed.

A bit like if someone was harassing someone on the street.

Or if an unknown person starts hanging around outside a schoolyard in a way that seems inappropriate.

We don't want to transgress people's rights, but we are also going to look at 'negative signals'.

You've quoted out of context, eliminating:

"The state has no business listening in on private citizen's communication."

So yes, the concept of privacy is so primary that it requires an entire legal framework for the state to listen in.

--

In terms of the rest of your post, even though you quoted out of context, what you're saying is fine. But the people noticing things on the street have nothing to do with those who maintain the roads. You really don't want corporations to have algorithms which mean they have to report trigger words to the police or state.

Instead, as I said, empower the parents. Legal guardians. It's their job to watch.

" You really don't want corporations to have algorithms which mean they have to report trigger words to the police or state"

They already do.

The entire financial system, all of social media, and many organizations past a certain size.

I did not quote out of context; the commenter was misattributing context.

I absolutely did not misattribute my own context, whatever that action means.

And some things are reported, others are not. The point being: yes, E2E isn't reported, for obvious reasons. Loads of stuff isn't reported on social media; in fact, that's the absurd complaint against Meta!

And regardless of what is done now, that doesn't mean we want it. I didn't say it is or isn't done; I said "You really don't" want that. The more encroachment in that realm, the less free a people are.

You 'misattributed' the quote by extending my general notion to some specific notion of privacy.

We 100% absolutely do want 'basic surveillance' on many systems, and it's not even an argument.

It's like saying 'We shouldn't have police, because they are oppressive!' and assuming things would just carry on and not go to pot.

It's a wild assertion.

Formally, the entire financial system is about attribution, fraud, monitoring, and security.

That's probably more than half of its function.

Your money would not be safe if your bank didn't have good controls, or if we did not have good regulations around those functions.

It's why, if you send more than $10K overseas, it gets flagged. We generally want this, though obviously within a regulated context.
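As a toy illustration, this kind of flagging is, at its core, a simple rule check. The threshold and function names below are hypothetical, and real reporting regimes (e.g. US Currency Transaction Reports) are far more involved:

```python
# Toy sketch of threshold-based transfer flagging. The threshold and
# names are hypothetical; real regulatory reporting is far more involved.

REPORT_THRESHOLD_USD = 10_000

def flag_transfer(amount_usd: float, is_international: bool) -> bool:
    """Return True if the transfer should be queued for review."""
    return is_international and amount_usd > REPORT_THRESHOLD_USD
```

The point is not the rule itself but that such checks run routinely, everywhere, on everyone's transactions, and we broadly accept them.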

Less formally, we absolutely, 100% do want the 'Starbucks employees' to have enough common sense to call the police or to flag something if there is some creepshow doing something that may be 'legal' but is obviously not appropriate - within reason.

Starbucks not only has 'policy' around behaviour, but we also have 'common sense' as a society.

It's not even remotely contentious that Starbucks is both private property and can set some 'terms', but that it's also a regular community locale, with social conventions.

Just as Facebook, and many (most) places like it, are 'community hangouts', subject to regular social conventions established by the 'owners'.

They're not 'no-identity-hacker-zones' for folks to publish their freak-ware or whatever, with ultra privacy guarantees.

Conversely, yes, it's just as important that if people want to establish their 'hacker zones', they can do that. That's important. And obviously Facebook has to be subject to some minimal privacy regulations.

But most places will have some degree of social overview (like literally the grocery store would have) and 'that's normal' in any civil society.

It's already pervasive because it's impossible to have basic social function without it.

Read the story about the former Twitter CEO who talks about this kind of thing pre-Elon Musk. 'Moderation' is most of the job and by far the hardest thing. We think of it as 'back-end systems', but it has almost nothing to do with that. It's the 'social' part of the 'social network' that's the key part. Moderation.

With e2e encryption, the signals you have are pretty minimal.

Let's say a 40-year-old man finds a phone on the ground, sees a name stuck on it, googles "name + town", finds the Facebook profile of a 12-year-old girl, and messages: "Hey, I found this phone, do you recognize it? <photo>"

With e2e encryption, you can't easily tell the difference between that and a creep.

This thread is advocating that exactly that case should result in a police visit with the assumption of guilt.

The world is nuanced.

Imagine, for a moment, FB without E2E. Policy can be smart enough to pick up that this communication is not representative or normal. That's part of detection.

Second, a single message to someone on a random phone is not going to flag anything.

Third, there is no assumption of guilt. Not even an arrest is an assumption of guilt.

Finally, those are extraordinary corner cases. They will happen, but they get resolved the moment the guy says 'oh, I found this phone', because that will be 100% clear in that context.
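To make the "a single message won't flag anything" point concrete, here is a minimal hypothetical sketch of a detector that escalates only a sustained, one-sided pattern of contact from a stranger to human review. Every signal name and threshold is invented for illustration; this is not Meta's actual system:

```python
# Hypothetical sketch: escalate only repeated, one-sided contact from
# a stranger for human review; a single message never flags anything.
# All field names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class ContactHistory:
    messages_sent: int      # messages from the unknown adult to the minor
    replies_received: int   # replies from the minor
    mutual_friends: int     # shared connections (a metadata signal)

def should_escalate(h: ContactHistory) -> bool:
    """Flag for human review only on a sustained one-sided pattern."""
    if h.messages_sent <= 1:   # a one-off message is never flagged
        return False
    one_sided = h.replies_received == 0
    stranger = h.mutual_friends == 0
    return one_sided and stranger and h.messages_sent >= 5
```

Under a rule like this, the found-phone scenario (one message, then a reply) sits nowhere near the threshold, while persistent unanswered messaging from a stranger does.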

Obviously, things can go awry. Meta flags something as bad and sends it to the police; they do not follow procedure, or don't apply something correctly, and arrest a guy at his place of work. But in the scenario you described, it's literally not a problem: there are 'common sense checks' through the whole thing. The algo, the human making the notification to the police, the police, the judge if a warrant is required. People are not going to be arrested because they found a phone and texted their niece; if that happens, then we have another set of problems.

We can 100% have our 'friendly community' with Facebook.

Now, with an E2E thing like Signal, well, yes, it could theoretically be a problem. But the likelihood of some rando finding a phone that's not locked and being able to text some other 12-year-old and effectively 'pose' as their 'contact', well, that's a rare scenario.

You build the strawman to destroy it. We are not talking about the state; we are talking about a social network which advertises itself as safe for children, which absolutely has metadata for approximate age and social connections, where one can identify as a minor deserving protections, and which prefers to increase engagement at *any* cost to its users.
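The metadata signals listed here (approximate age, social connections, self-identified minor status) could in principle gate protections without reading any message content at all. A hypothetical sketch, with all field names and the age-gap threshold invented for illustration:

```python
# Hypothetical sketch: gating extra protections purely on account
# metadata, with no access to message content. All field names and
# the age-gap threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class Account:
    approximate_age: int
    is_self_identified_minor: bool

def restrict_unsolicited_contact(sender: Account, recipient: Account,
                                 shared_connections: int) -> bool:
    """Require a prior connection before an unknown adult messages a minor."""
    minor = recipient.is_self_identified_minor or recipient.approximate_age < 16
    large_age_gap = sender.approximate_age - recipient.approximate_age >= 10
    return minor and large_age_gap and shared_connections == 0
```

Nothing here touches E2E encryption or message contents; it is exactly the kind of signal the platform already holds.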