Many will cheer for any case that hurts Meta without reading the details, but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption:
> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
The New York case has explicitly gone after their support of end-to-end encryption as a target: https://www.reuters.com/legal/government/meta-executive-warn...
The correct nuance here is...
* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the moderated account (including assigning other moderation tools of their choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who let their kids online without supervision, and without some setting indicating that the user agent is operated by a child, are effectively choosing to let their children interact with strangers. This tends to work out better in more controlled and limited circumstances where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
I think this is the way. Not control, just make it simpler for parents to manage their children's devices. You don't have to make everyone share their age; you just make it so that parents can, in a simpler way, choose what their children should be able to access. Make it easy to do right; don't add more control. It's kind of like the old anti-piracy copy protections: the pirates always cracked them, and in the end, the one stuck figuring out which word in the manual was required was the user who actually paid for the game. It made things worse for the ones who paid and better for the cracked version. So, make it simple.
Children are not just the responsibility of parents; it is the job of society, through government, to protect and support them.
So no we can't just tell parents to deal with it.
The laws being made to “protect” children seem to always have the side effect of removing security and privacy protections from adults.
There has to be a middle ground.
The tech community has had pretty much free rein over the last few decades, and has always chosen adult convenience over child safety (and mostly profit over both). The "middle ground" probably involves a bigger transfer than this.
There's probably a much better solution than "adults vs children" but very few with our expertise seem seriously interested in solving for safer children, which essentially leads to inexpert solutions gaining popular support.
I won't call myself an expert in this field, and haven't given it much thought, but a couple options just off the top of my head...
1. Limit child accounts to "classic" social network functionality. They get to see things from mutual friends. No algorithmic feeds, kids aren't in the user search, and no way for messages to be sent/received unless both sides have consented.
2. Disable chat for child accounts. How many chat apps do children really need? Each one is another potential vector for issues that parents would need to monitor.
I'm sure there is a monkey paw here, but either option seems better than no end-to-end encryption for anyone, at a time when government surveillance is a bigger issue than ever.
Frankly, I think option 1 would be better for all users, not just children. Go back to classic "social networks". This "social media" experiment has failed.
This is not a widely shared opinion.
Is that definitely not a widely shared opinion?
I feel like I can think of lots of situations where society puts in protections for children rather than leaving it to the parents (age ratings on films and games, YouTube Kids, regulations around advertising to children, the whole concept of school, reduced speed limits around playgrounds, to give a few examples off the cuff).
It's fine as long as the state aligns with you and is generally sane.
History shows that's not necessarily always the case.
My point more was that you said it wasn't a widely shared opinion, but to my mind, it is broadly the status quo. Whether that is good or bad is a separate point.
"it take a village ...."
depends on your culture.
Since when was DJ David Guetta a reactionary?
(Yes, I read your comment history. I was trying to figure out if you were in fact that guy and saw a bunch of conspiracy-ass right-wing misinformation.)
I'm not the guy, obviously.
It's not reactionary to say you don't want the state to interfere too much in your child's education.
Whether you are left or right, it's fine as long as the state aligns with you. But if you open a history book, you will see it has VERY OFTEN happened that states go crazy, ideological, or just plain eugenicist and liberticidal.
It's dangerous to give too much power to the state.
I think getting the age thing correct is key to making parental classification work properly (right now platforms just ask for a birth date, which is lame), e.g.
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct (and AI cannot do this; only community ombudsmen/age verifiers can, in a privacy-first way*), the app stores can easily tell the app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass this and install apps on devices their parents hand to them. There will always be black-market devices with these apps, but existing tech offers ways of keeping those to a minimum.
> only via community ombudsman/age verifiers
Why do you need any third parties whatsoever? Just have the parents do it. They configure a setting in the kid's device which the device uses to determine what content to display. All you need from the app/service is a rating for the content. No third parties should never have to know anything about the user, because the user's device knows that, and the device knows it because the parents do.
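A minimal sketch of what that parent-configured model could look like (all names here are hypothetical, not any real platform's API): the parent sets a maximum content rating locally on the child's device, services only attach a rating to content, and the device itself decides what to display, so no third party ever learns anything about the user.

```python
# Toy model: the device enforces a parent-set rating ceiling locally.
RATING_ORDER = ["everyone", "teen", "mature", "adult"]

class Device:
    def __init__(self, max_rating="adult"):
        # Configured by the parent on the device itself; never transmitted.
        self.max_rating = max_rating

    def may_display(self, content_rating):
        # The service supplies only a rating label for each piece of content.
        return RATING_ORDER.index(content_rating) <= RATING_ORDER.index(self.max_rating)

kids_tablet = Device(max_rating="teen")
print(kids_tablet.may_display("everyone"))  # True
print(kids_tablet.may_display("mature"))    # False
```

The design choice doing the work here is that the comparison runs on the device: the service sees an ordinary account plus a rating lookup, and the parental setting stays local.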
This all depends on fantasy tech and/or totalitarian control of tech.
Who verifies that the person verifying the child's age is actually authorised to do that? Who verifies that verification? And so on up. This needs a chain of trust that can only end up at government. And that chain of trust will then be open to being abused by shitty politicians.
What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
What you propose here is the death of open computing. And I personally believe that we would be much better off as a species if we kept open computing and just taught our kids how to handle social media better.
> What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
This one is easy. You just don't require all devices to do that. The parent isn't required to give the kid a general purpose computer. You don't need to prevent every device from running DOOM, only one device, and then parents who want to impose such restrictions get the kid one of those.
Thanks for the response. Couple of points:
- The line between "general purpose computer" and "not that" is weird. Android is an implementation of Linux, after all. Probably the best example is a Steam Deck. It's just Arch Linux, you can get to a desktop on it no problem, and you get sudo access and can install whatever you like on it. Are you saying that Responsible Parents should not get their kids a Steam Deck?
- And that raises the point of how responsible are we making parents for technical decisions that they do not necessarily have the knowledge to implement? If a child works out how to circumvent the age restriction and look at boobies (or whatever) and an authority finds out, are the parents liable? Are they likely to be prosecuted? Isn't this just adding more burden and bureaucracy to the job of parenting?
> Are you saying that Responsible Parents should not get their kids a Steam Deck?
I'm saying Authoritarian Parents should not get their kids a Steam Deck. If the kid can run arbitrary code then they can get a VPN and access websites hosted in Eastern Europe and then any of this is moot because there is no law you can impose on Facebook to do anything about it.
> If a child works out how to circumvent the age restriction and look at boobies (or whatever) and an authority finds out, are the parents liable?
No, because the parents rather than the "authorities" (who TF is that anyway?) should be the ones in charge of the decision whether the kid can look at boobies to begin with.
I bought my Steam Deck not knowing that it had Desktop Mode. And I'm an experienced software dev. The average parent is not going to know this.
The devices that offer a mode that blocks all unapproved content are presumably going to advertise it. If you buy something that doesn't say it has anything like that, and then it doesn't, that's the expected result. If you buy a device that says it does and then it doesn't, now you have a bone to pick with the OEM.
It’s very hard to control kids’ internet access. Impossible, really. Even if you do it fine at home, once they go to school it’s whatever policies the school has. Most require laptops and provide internet access.
> it’s whatever policies the school has.
so the school takes on that responsibility, as deputized by the parents.
Kids don't get unfettered access to the streets while at school. They can't take their bikes and ride out at will. What makes the internet and devices any different? The devices provided by the school should be lockdown-able, and kids should not be provided their own device unless there's a parental lock (which is enabled during school hours, and is similarly locked down).
The school does not take responsibility. Schools will tell you what your kid does at home is the parent's responsibility, even if it is done on the school device. Parents do not have the ability to configure the content controls on the device itself, so technically sophisticated parents resort to tweaking router settings.
That’s just not how the system works.
Each school brews its own system more or less.
> That’s just not how the system works.
ok then make it work so. i feel it's like this thought flow:
- A causes B, and B is a problem.
- why not do C, which causes not-B?
- ahh, this won't work because A is what we have now, and C is not.
They could actually in the 80s and 90s.
I feel uncomfortable about the idea of controlling children, even my own. Certainly there is a requirement to protect children from others but I feel like putting in guard rails to prevent children from themselves only leads to making things taboo and, as a result, more interesting.
That's objectively not true. You can just use your own DNS server and lock the settings app, either through Android MDM policies or a locker app.
No that doesn’t work - tried it.
Google is basically its own private internet. It caches content so you can access all sorts of terrible stuff just from Google.com (and its related domains).
But if you cut Google you cut Google Classroom - which is required.
And Google Classroom itself has many workarounds.
This isn’t just a Google problem. The centralization of the Internet around a few mixed content domains really kills conventional filtering.
Paradoxically, there are so many centralized domains that even if you can block one, it’s just a game of whack a mole.
Eventually you just block the whole internet - and then what’s the point? Take away the 20 most popular mixed content platforms, messaging, etc, and you’re effectively blocking the whole internet.
The kids can’t contact their friends, watch educational videos, or any other legitimate use.
> Classifying accounts as child accounts (moderated by a parent)
Notice also that even if you do this, you still don't need the service to be able to decrypt the content, only the parent.
This could even be generically useful, e.g. you have a messenger used by business and then the messages can be read by the client company's administrator/manager but not the messaging company's.
I don’t agree we should treat everyone as an adult by default online. We wouldn’t do that in any other circumstances.
And the only reason it is permissible to presumptively treat people as underage until proven otherwise in the physical world is that there isn't a constellation of intermediaries collecting all your habits and preferences when you buy porno magazines or alcohol in person.
Why is the answer people seem to arrive at being "mandatory collection of blackmail material that will ruin careers and relationships" when it comes to the Internet?
Because "somebody has to think of the children." At this point, I am convinced parental instincts are being abused to slowly but surely install more-or-less complete public surveillance. It's a rather obvious approach. You have to appeal to something emotional. And parents, and their apparent unwillingness to take up responsibility for their parenting, are the perfect target. Even childless adults will chime in to toot the "somebody has to think of the children" mantra.
Why do people feel they need to behave in a way that can be blackmailed?
Just behave.
Yeah, how dare someone do or say anything that some random crazy asshole could use to threaten that person's personal or professional life or even put them in danger of physical harm.
To hell with gay kids growing up in very traditional religious areas in much of the world.
That person who made a racist joke on Discord when they were 13 years old? That should be able to ruin them when they're 30!
Someone confiding to a friend over social media DMs that they're in an abusive relationship with someone violent? Well - she shouldn't be surprised when her partner beats her within an inch of her life when he finds out. If only she did what she was told, right?
And let's not forget the cringiest or most sexual thing you've ever said online - make sure that your every utterance in private would pass scrutiny by your employer's HR department!
Seriously...I don't understand people like you. What a small, listless, and unusually safe world you must live in.
You may as well have asked why can't everyone think and act like you as well as live in your particular region of the world with the same friends, family, romantic, and professional opportunities that you've been provided throughout your life.
That doesn't work, unless the system knows everyone's family relationships.
Not guesses. Not is told about and takes on trust. Knows.
There's nothing to stop a kid creating a fake adult account and using it as an adult, perhaps creating their own kid account for "official" use.
Ultimately this is an unsolvable problem without a single source of truth for verified ID and user age.
The only responsible way to do that is to create a global "ID escrow" agency, where ID details are private and aren't available to governments or corporations without a court order, but the agency can provide basic age checks and other privacy services of a limited nature.
Good luck with that idea in this culture.
Meanwhile we have the opposite - real ID is known to governments and corporations, personal habits and beliefs of all kinds can be tracked, there is zero expectation of privacy, and kids still aren't protected.
> Classifying accounts as child accounts
It's ok to drive Dad's truck unless he catches you and tells you no.
Unfair presentation. What they suggested was more akin to, "Assume someone with keys is an adult, and let them start the truck."
Dad should either know his children would never drive the truck without permission, or keep his keys as safe as his wallet (and if he can't trust his kids with keys, you bet his wallet needs protection).
I’m actually okay with not letting under age people use e2e. I’m not okay with blocking everyone. I have 2 kids.
I'm not comfortable with the idea that children's private messages would be exposed to thousands of social media workers and government employees.
I understand the concern but then to make this available for adults you now have to provide proof of age to companies, which opens up another can of privacy worms.
Theoretically we don't actually need proof of age. Websites need to know when the user is attempting to create an account or log in from a child-locked device. Parents need to make sure their kids only have child-locked devices. Vendors need to make sure they don't sell unlocked devices to kids.
> Theoretically we don't actually need proof of age. Websites need to know when the user is attempting to create an account or log in from a child-locked device. Parents need to make sure their kids only have child-locked devices. Vendors need to make sure they don't sell unlocked devices to kids.
Given how current parental controls work, kids are not getting access if their device is under parental control (the default for open web access is off). So Facebook still won't see any child-locked devices, even before this ruling. My guess is that this ruling applies to parents who aren't making sure their kids get access only via child locked devices.
The actual problem is that there are parents (I even remember them growing up) who do not care what their kid is exposed to and won't flinch at anything. I'm sure most here had a "Jeff's mom" who didn't care if you guys were playing Mortal Kombat while blasting Wu-Tang at 9 years old.
So even if 95% of kids have responsible parents locking down access, there will still be this 5% that will continue to drip horror stories that motivate knee-jerk regulation.
Exactly.
Trying to approach it from the direction of websites determining if you are an adult is a privacy nightmare and provides a huge attack surface. (Which is what the government wants--the ability to monitor.) Flipping it over is much, much safer--but fails the real mission of exposing dissent.
(On-device security, the credential of the adult is loaded onto the device but not transmitted anywhere, it can only be obtained locally. The device simply responds as to whether it has a credential loaded. Bad guys are unlikely to want to sell such devices as the phone could be traced back to them.)
And the parents can select a strict child lock, or permitted but copies forwarded to the parent.)
Children do not want child-locked devices, and they will find alternatives.
The issue is not just age verification but also device pinning.
I think the framework here is to have community-driven age verifiers (I recall there is an EU effort for digital wallets which, besides its bad parts, has some of these good parts) which can verify ages for people and link them to (locally, biometrically encrypted) devices for pinning. This would be privacy-preserving. The only downside is a mandate that all devices have built-in hardware biometric encryption, like a finger/face print, so phones can't just be used with these apps installed.
The verification part is a job that could be done by all the teachers and coaches and of course parents. Anyone verifying identities would be cryptographically nominated/revoked by a number of more senior members of the community. A parent always gets the right to say OK for their kid, of course, but so could teachers or legal guardians.
We (legally) need a mandate for smart devices to have local, device-only biometric verification. The law should require these apps to follow device app store protocols.
As with smoking, alcohol, sex, drugs etc
Children who are smart enough to get access to a given vice without getting caught are more likely to be mature enough to be able to cope with that vice.
Sorry what?
Kids with low parental supervision who steal uncle Roy's Marlboros are more likely to be able to cope with tobacco addiction?
Do you have any reasons to think this might be the case? Studies, research, a well thought-out article?
To get reliable access you either need to convince an adult to give you access (which is always game over) or you need to engage in some kind of future planning, which is a similar skill set to the one needed to notice that getting addicted to the cancer thing might be a bad idea. Stealing uncle Roy's Marlboros doesn't work because uncle Roy is generally going to notice that they're going missing and either start securing them better or deduce where they're going and visit some punishment on the kid.
I mean what if Roy doesn't care?
We're just optimising for kids with shitty family at this point.
If Roy doesn't care then you have a kid with an adult who gives them access, which is the scenario where none of this is going to work. Even if you required government IDs with hourly retina scans, it doesn't work if Roy is willing to let the kids hold the device up to his face whenever they want.
Sure, I agree.
I only disagree with the just-so notion that kids who have an Uncle Roy are somehow better able to cope with the consequences. Ability to access something is (IMHO) pretty uncorrelated with the ability to cope with the consequences.
The original claim wasn't that the kids with an Uncle Roy would be better able to cope, it's that the kids are who can devise another way to get past even if they didn't. Then the latter kids make up a larger proportion of the ones who can get past because they have two paths to do it instead of one. And the former ones are the ones we can't reach regardless.
Let's look at that original claim -
"As with smoking, alcohol, sex, drugs etc
Children who are smart enough to get access to a given vice without getting caught are more likely to be mature enough to be able to cope with that vice."
There are at least two problems here. The one I've focused on first that you seem so keen to dispel, is an assumption that there are smart kids overcoming a challenge. 'Roy' is an extreme, but there is a whole spectrum of low-oversight conditions that are likely to lead to kids getting access to alcohol, tobacco, drugs, having sex etc, which are nothing to do with smartness or challenges and are much more to do with shitty parenting and neglect.
Then there's the second problem. Let's focus on tobacco, but I believe it's likely to hold for other drugs: even if we allow that children getting access to tobacco are 'smarter' than those who don't figure it out, and are overcoming various obstacles, that doesn't actually imply that they'll be better able to deal with the consequences. Just like how a high IQ doesn't always mean someone is necessarily good at crossing the road safely or tying their shoelaces.
In fact there's a variety of research about nicotine's effect on developing brains and how the earlier people are exposed the more likely they are to be more addicted for longer. This is the opposite outcome to the original claim, kids who start earlier are in fact demonstrably less likely to be able to 'cope' with the vice.
The whole claim is nonsense.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC3615117/ [1] https://www.tobaccoinaustralia.org.au/chapter-6-addiction/6-...
(edit - I'm not making specific claims about cybersecurity or access to tech here, I just think the analogy is pretty seriously wrong in itself)
> The one I've focused on first that you seem so keen to dispel, is an assumption that there are smart kids overcoming a challenge. 'Roy' is an extreme, but there is a whole spectrum of low-oversight conditions that are likely to lead to kids getting access to alcohol, tobacco, drugs, having sex etc, which are nothing to do with smartness or challenges and are much more to do with shitty parenting.
Let's consider the four combinations of the two variables here. You have dumber and smarter kids, and worse and better parents. The kids with the worse parents will have access to the vice regardless of whether they're dumb or smart, but the kids with the better parents will only have access if they're smart enough to figure out how against parents actively trying to prevent it. Therefore the two of the four quadrants with smarter kids can get access but the dumber kids only can when they have worse parents, implying that two thirds of the quadrants with the ability to do it are the smarter kids.
> even if we allow that children getting access to tobacco are 'smarter' than those who don't figure it out, and are overcoming various obstacles, that doesn't actually imply that they'll be better able to deal with the consequences.
That's assuming the way they deal with it better is by trying the drug and then somehow not getting addicted rather than by choosing not to try the drug to begin with even though they could access it if they wanted to, or otherwise making more measured choices if they do decide to try something, like finding a source more likely to be providing the expected amount of the expected substance instead of who knows how much of who knows what. Or just hesitating a while so their first time comes at an older age.
> implying that two thirds of the quadrants with the ability to do it are the smarter kids.
But only one of those involves overcoming anything.
And unless you have information on the relative sizes of those quadrants, it’s meaningless in terms of the overall picture and being able to confidently assert that access to such contraband allows you to draw any inferences about intelligence whatsoever.
And the rest appears to be some serious mental gymnastics to avoid the point, which I don’t believe for a second was meant to encompass “children who are smart enough to get access to do a thing but don’t actually do the thing because they’re so damn smart”. Nor do I believe that 14 year olds who find a willing drug dealer are more likely to take sensible precautions than their peers, having proven their smarts by finding one!
The whole premise is laughable.
I think we’re going to see how that plays out with gambling.
It seems a bit silly to think security abstinence is the solution.
Well then don't give them money to do so; it's not like phones grow on trees. If you make selling a phone or internet device to a minor under a certain age threshold an illegal act, severely punished by law in the same way alcohol and cigarette sales are, many cases of access are solved. Also, a paid internet subscription doesn't grow on trees either, even though there are free wifi networks.
All imperfect solutions, but they slice the original huge problem into much smaller chunks which are easier to tackle with the next approach.
True, it's never going to be 100%, but at least it's a tractable problem for parents. Enough to change what the culture considers "normal," anyway.
Imperfect solutions are still called "solutions".
Theoretically only
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13.
https://archive.ph/y3pQO
I believe Zuckerberg has a term for people who willingly break online anonymity because someone with a domain name and website asks them to.
Establishments don't record my data or even take down my name. They take a look at the birthdate and wave me forward.
We need a way to do this online.
> Establishments don't record my data or even take down my name.
What are you talking about. Have you really never rented a car before?
Some establishments, as part of their business practice, require identification.
And many don't. Bars, nightclubs, liquor stores, tobacconists, R-rated movies.
We don't see people worried that bars, nightclubs, liquor stores, tobacconists, R-rated movies asking for age verification will slip into requiring names too.
It honestly looks like an emotional panic. People who take slippery slopes seriously aren't to be taken seriously themselves.
Social media is like e-cigarettes in the sense that the shift toward nicotine salts (think Juul) around 2015 resulted in e-cigarettes becoming more dangerous and thus more age-restricted.
It's also like consumer credit cards. Remember that in 1958 Bank of America just mailed out 60,000 unsolicited credit cards to residents of Fresno, CA without application, age verification, or identity check. They just landed in people's mailboxes, including those of minors. Eventually a predatory lending industry developed and we increased the age and ID requirements. My point is that systems can, and do, become more dangerous over time. Not all, but not none.
Algorithmic feeds, online advertising, and attention engineering are the nicotine salts of social media. The product's changed, so should the access.
>We don't see people worried that bars, nightclubs, liquor stores, tobacconists, R-rated movies asking for age verification will slip into requiring names too.
Do we not? Sellers often don't just look at IDs now, they scan them into their system, and naturally, keep and sell your identity info, purchase data, and anything else they have access to.
>Algorithmic feeds, online advertising, and attention engineering are the nicotine salts of social media. The product's changed, so should the access.
This basically makes it clear. The problem is not that children are on social media. The problem is that "social media" has been allowed to become a platform for exploitation and manipulation by their owners. Adults aren't free from this either.
Digital age verification laws I've read also literally specifically ban recording that information, unlike in person. People were arguing with me that companies would decide they need to retain that info for audit purposes when there are no audit requirements and when it's illegal to store it for any reason.
> People who take seriously slippery slopes aren't to be taken seriously themselves
> Eventually a predatory lending industry developed and we increased the age and ID requirements
I have no idea if you're arguing for or against verification. You dismissed the idea that age verification is a slippery slope to more stringent ID requirements. Then provided an example where the exact opposite happened.
I'm not arguing that social media will get worse, I'm arguing that it has gotten worse. A slippery slope argues that something will happen. I'm pointing out that it has happened. Huge difference.
Even more, my point is that rules, regulations, and requirements adapt when these changes become unbearable. That has happened with social media, therefore a change in rules, regulations, and requirements is deserved.
I have kids. I don't want creeps and predators spying on their conversations with friends.
https://web.archive.org/web/20210522003136/https://blog.nucy...
That's true, I didn't consider that
In a way, this is like saying that one trusts total strangers in some random large tech company and total strangers in government agencies to read and/or manipulate conversations that kids have. This also paves the way to disallow E2EE for other classes of people based on arbitrary criteria. I don’t believe this is good for society overall.
The reason we are having this discussion is that the private route only worked up to a point.
Firms have a fiduciary duty to shareholders and profit.
On the other hand, you ultimately decide the rules and goals under which government organizations operate, and they do not have a profit-maximization target.
They aren’t the same tool, and they work for different situations.
The E2EE slippery slope is a different challenge, and for that I have no thoughts
The problem is all these ‘for the children’ arguments contain collateral damage.
And the effectiveness for the stated goal is also often questionable.
Well, the problem is that the “don’t do it” arguments have children as the collateral damage.
We are at a point where we are picking and choosing collateral damage targets.
It does seem like it could potentially be used to enable mass surveillance of the people of the United States.
Alphabet can grep your emails, and Amazon has literal microphones and cameras in most people's houses.
That ship has sailed
Yes, Google analyzes everything you upload to it, and if it finds a violation it will report you to the proper government agencies.
It is actually terrifying. If you write something out of context, or upload an image out of context, you can be in big trouble.
You just need to provide the government with your name and address and the name and address of the counter party every time you send an encrypted message.
If you don't support this you're obviously a pedo nazi terrorist.
Meta is one of the worst offenders here. They are actively lobbying at least the US Congress for laws that require age verification at the hardware/os level.
There is no reason kids should use so-called smart devices, except making certain companies richer. Kids had healthy development without such crap for thousands of years. We don't discuss what percentage of alcohol should be allowed in beer and wine for kids.
The French (watered wine) and British (shandies) do.
Centralized organizations with proprietary software can never offer meaningful end to end encryption because they can just ship an app update to disable or backdoor it at any time.
It is better for them to be forced to turn off the security theater so people that need actual privacy can research alternatives.
well, name an example of a thing that can never change then.
"research alternatives" meaning what exactly? You think open source is somehow not susceptible to the same issue, plus all of the malicious updates?
Security focused FOSS does signed commits, signed reviews, full source bootstrapping, and reproducible builds.
Proprietary software solutions are unable to come close to that level of accountability.
Not all published source code is secure but all secure software has published source code.
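To make the reproducible-builds part concrete: the property just means independent builders produce byte-identical artifacts, so anyone can cross-check digests against a published hash. A toy sketch (hypothetical artifact bytes, purely illustrative):

```python
# Toy sketch of the reproducible-builds check: two independent builds of the
# same signed source tree must yield byte-identical artifacts, so comparing
# cryptographic digests is enough to catch a tampered build pipeline.
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

# Pretend these came from two unrelated build machines (hypothetical bytes).
builder_a_output = b"\x7fELF...deterministic build output..."
builder_b_output = b"\x7fELF...deterministic build output..."

match = digest(builder_a_output) == digest(builder_b_output)
print("builds reproducible:", match)
```

If any builder's digest diverges from the published one, you know either the source or the toolchain was tampered with, which is the accountability proprietary binaries can't offer.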
Rock meet hard place?
Harm to kids is actually happening, and this is always going to be a hot button topic.
E2E is critical for our current ability to communicate online, but will be a lower priority when pitted against child safety.
Fighting the good fight is one thing, fighting for the sake of it, without a plan that addresses the tactical reality is another altogether.
Personally, I think E2E will be defended, but it’s becoming a lightning rod for attention. As if removing encryption will solve the emerging issues.
I suspect providing alternatives to champion, such as privacy preserving ways to verify age, will force a conversation on why E2E needs to go.
This is the core issue.
We know that this isn't really going to reduce harm for children, we know Meta is not seriously going to suffer or change, and we know this is going to be used as a cudgel to beat down privacy and increase surveillance.
Why is it so important that kids have access to the internet anyway that we're willing to sacrifice both our privacy and freedom of speech rights for it when we already know it's damaging their mental health?
We don't need all this privacy invasion if we just didn't give kids a smartphone with a data plan.
School basically requires them to have one. So.
On school computers maybe, but not on their smartphones. If schools actually require kids to have their own computer with internet then that's quite simple to fix by enacting new rules. That's important for access to education as well for kids from low-income families.
> Many will cheer for any case that hurts Meta
Absolutely. Particularly where they've been found to be guilty.
> but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption
Why _social media_ companies are backtracking. I'm extremely nonplussed by this outcome.
> concerns that allowing teens
Yes, because that's what we all had in mind when considering the victims and perpetrators of these crimes.
The lawyers using the finding badly internally doesn't mean the finding was fundamentally unsound, or that it won't ultimately be a positive thing.
As a platform operator, I think end-to-end encryption does no good in free products. It just exposes you to liability that you couldn't have foreseen or mitigated.
It's illegal to hand a minor harmful material, and Meta did exactly that. I support people's rights to make and buy sports cars, but it is illegal to hand the keys to a minor and leave them unsupervised.
If someone sends a child a dick pic by physical mail, is the post company responsible?
This is a good thing for “social” media. If you use any social media app (especially those owned by Meta) you should assume that absolutely everything you do is for full public consumption. Maybe these changes will make everyone stop thinking that anything is private when using “social” media apps.
> (my emphasis) Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
Whatsapp and messenger are still fine, then.
The Clipper chip is coming back.
How is the Clipper chip different from what online platforms claim to have: a curated kids-only section?
In the mid-90s the US government proposed that Clipper be used as the universal encryption standard for secure electronic communications in the civilian realm, all other cryptosystems being presumably forbidden. It was based on the idea of key escrow: that all Clipper keys be held in an archive and law enforcement could recover a copy of the encryption key for any given Clipper chip upon providing legitimate authorization to intercept communications. However, the Skipjack protocol used by the chip was buggy and insecure, and consumer CPUs became powerful enough that military-grade encryption was practical in software, rendering Clipper moot. A series of First Amendment rulings protected the proliferation of such software cryptosystems under the rubric that computer program code was protected speech.
The Meta ruling gives the government an effective stick, First Amendment notwithstanding: if you facilitate communication that the government cannot break into, and someone abuses a child with help from your secure platform, you could be liable for contributing to the abuse of that child. A safe harbor from liability will be provided—by adopting key escrow based encryption (if you support encryption at all). This does not interfere with protected speech about cryptosystems, but it makes using cryptosystems difficult in practice due to the chilling effects.
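For anyone unfamiliar with key escrow, here is a toy Python sketch of the general scheme described above (purely illustrative: the real Clipper design used Skipjack and a LEAF field, not this stand-in stream cipher, and split the escrowed key between two agencies):

```python
# Toy key-escrow sketch (hypothetical, NOT the actual Clipper/Skipjack design).
# Each "chip" gets a key at provisioning time; a copy is deposited in an escrow
# archive, and an authorized party can pull that copy later to decrypt intercepts.
import hashlib
import secrets

ESCROW_ARCHIVE = {}  # chip_id -> escrowed key copy held by the government

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key (toy stream cipher via SHA-256)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def provision_chip(chip_id: str) -> bytes:
    key = secrets.token_bytes(32)
    ESCROW_ARCHIVE[chip_id] = key  # the escrow step: a copy leaves the user's control
    return key

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: same operation both ways

def lawful_intercept(chip_id: str, ciphertext: bytes, authorized: bool) -> bytes:
    """Recover plaintext from the escrowed key -- but only with authorization."""
    if not authorized:
        raise PermissionError("no authorization, no key")
    return decrypt(ESCROW_ARCHIVE[chip_id], ciphertext)

key = provision_chip("chip-001")
ct = encrypt(key, b"meet at noon")
print(lawful_intercept("chip-001", ct, authorized=True))
```

The whole trust model collapses onto whoever guards the archive and decides what counts as "authorized," which is why escrow was so contentious then and would be now.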
I’m so sorry, I confused the Clipper Chip with the V-Chip.
The 1990s were interesting times.
No. Meta is backtracking because the business case for end-to-end encryption is gone. They will willingly give the Trump administration whatever they want, because they are not in the business of fighting authoritarian governments; they are in the virtue-signalling business, for when governments are constrained by the rule of law.
The business case was to be able to say “we don’t know”. That case is gone.
Only accounts that have existed for 14 years plus are eligible for E2E?
So a new service can't offer E2EE for 14 years?
Also, so an aspiring pedo who gets a job at the service can now read the messages of all the underaged kids?
However did we survive all of these years with unencrypted SMS or voice calls?!
[dead]
Is it illegal or is it just illegal on general purpose platforms whose focus isn't extreme security?
We all know Meta can still read E2EE chats (otherwise they wouldn't do it) and they're using E2EE as an excuse to avoid liability for the things their platform encourages. Contrast this with something like Signal where the entire point is to be secure.
> We all know Meta can still read E2EE chats
That can't be true, otherwise in what sense is it E2EE?
In the sense that calling it E2EE gives people a warm fuzzy feeling and makes people send more sensitive information over the platform.
Has anyone actually audited it?
Probably their auditors? Lying about this would be tantamount to (very serious) securities fraud. Not sure what you're basing your allegations on besides "trust me bro".
Why would lying about having E2EE be securities (as in stock market) fraud? Would that make any lie ever told by a corporation equate to stock market fraud?
Yes! As Matt Levine says, “everything is securities fraud”
So if Microsoft tells me upgrading to windows 11 will make my computer better, you think that's securities fraud?
Did I say that?
Yes. You said everything is securities fraud
[dead]
Well, I've seen services describe having "E2EE" where one end is your computer and the other end is their server, so...
The metadata is still unencrypted. That also reveals quite a bit.
I mean you can read it in your app and they're not just stored on your phone. E2E just means in transport from what I understand.
E2EE means end-to-end, where the ends are the participants in the chat. They can read it on your phone, but not on their servers. They need their app to separately transmit the plaintext to their servers to read it.
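A minimal sketch of that distinction, with a toy stream cipher standing in for the real protocol (in practice the shared key would come from a key exchange like X3DH, not be pre-shared): the relay server only ever handles ciphertext, and reading plaintext server-side would require the endpoint app to separately ship it over.

```python
# Toy E2EE relay sketch (illustrative only): the shared key lives on the two
# endpoints, so the server in the middle stores and forwards opaque bytes.
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt/decrypt (symmetric) with a toy SHA-256-based keystream."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Key known only to Alice's and Bob's devices (pre-shared here for simplicity).
shared_key = secrets.token_bytes(32)

# Alice's end encrypts; only ciphertext ever reaches the server.
ciphertext = xor_cipher(shared_key, b"hi bob")
server_sees = ciphertext  # the server can relay this, but has no key to read it

# Bob's end decrypts with the same key.
plaintext_at_bob = xor_cipher(shared_key, server_sees)
print(plaintext_at_bob)
```

The server getting plaintext would mean either the app uploading it out of band or the key leaving the endpoints, and either one would make the "E2EE" label false.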
So literally what I was implying
Which is technically possible.
The first two E's in E2EE stand for end. From one end to the other. So no, Meta can't. Or put another way... if they can read those messages, then it's not E2EE.