Maybe I'm just getting old and cynical, but while I think current social media is bad for children, I'm very suspicious of the current international agreement that it's time to take action, especially with all the ID verification coming from multiple avenues.
Two things can be true, and I am in the same boat. Should the next generation have their brains fried by ad-tech corporations and their algorithms? Absolutely not. Should the overdue off-ramp from this trend be the on-ramp to mass-surveillance and government overreach? Also a firm no.
I really wish this take was more prominent. I really don't buy that mass-surveillance should be required for age verification. Plenty of very smart people have built things far more complicated than a digital age verification scheme that doesn't track every time you use it.
This probably isn't a helpful observation, but I think the sudden push of urgency isn't helping. The internet has existed without any kind of age verification or safety measures for about 30 years. We could have used that time to have a sensible conversation about policy trade offs, but instead we've waited until now to decide that everything has to be rushed through with minimal consideration.
You don't even need to go all high-tech with it: Children, by nature of being children, aren't going out and buying their own smartphones and computers. When Mom and Dad buy the device for their kid, just punch in the kid's age before handing it to them.
That's the flow that California's age verification system uses. Personally, I'm opposed to any age verification beyond the current "pinky promise you're 18" type deals, but California's is the least intrinsically offensive to me.
> When Mom and Dad buy the device for their kid, just punch in the kid's age before handing it to them.
Doing this doesn't accomplish anything in terms of protecting children from the harms of the internet. In fact it feeds your child's age to marketers and child predators.
Every website will get to decide how to handle the age data our devices will now be supplying them. In the case of facebook, it's not as if they had no idea the children endlessly posting selfies and posting "six seven" on their service weren't adults. Facebook was 100% aware that the children using their service were children. They knew what schools those kids went to, who their parents were, which other kids they hung out with. Facebook knew they were children and they took advantage of that fact.
The law California (and other states) passed doesn't define what content has to be blocked for which ages and doesn't give parents any ability to decide what content their children should or shouldn't be allowed to see. It takes control away from parents. As a parent, I might think that my 16 year old should be allowed to look up information on STDs but the websites that collect my child's age could decide they can't and I'll have no say in it.
> The law California (and other states) passed doesn't define what content has to be blocked for which ages
No, but it's a framework that would allow other laws to do so. Because...
> it's not as if they had no idea the children endlessly posting selfies and posting "six seven" on their service weren't adults.
...you can make statements like that which sound like common sense, but it would be incredibly hard to regulate based on "if you know, you know" (or "you should have known"/"you had to have known"). The law has to provide (guarantee) a way for them to know in order to actually require them to take action based on it.
> As a parent, I might think that my 16 year old should be allowed to look up information on STDs but the websites that collect my child's age could decide they can't
This is a different problem. It sounds like you're essentially wanting to guarantee access to certain things, not just for your own 16-year-old, but for everyone else's, too (because if it was just yours, you could look it up for/with them if necessary). It'd be difficult to compel businesses to provide services to audiences they don't want to. But again, that's a separate problem that doesn't necessarily conflict with the rest of the system.
> No, but it's a framework that would allow other laws to do so.
I worry that it's the start of a lot of "other laws" which will limit the ability of children and adults to maintain even pseudo-anonymity online.
> The law has to provide (guarantee) a way for them to know in order to actually require them to take action based on it.
That sounds like an argument for even stronger proof of age than what the law calls for. Online platforms should do what nearly every other publisher does and provide a rating for their content. Netflix doesn't need to know how old I am. They provide a "kids" profile populated with their own curated content if that's the kind of thing I want and for everything else they provide ratings (PG, R, TV-14, etc.) It would be easy enough to push a rating to clients, they could even use HTTP headers for it. If lawmakers really felt the need to interfere in all of our operating systems it could require some means to collect and act on those ratings.
> It'd be difficult to compel businesses to provide services to audiences they don't want to.
This is the norm. It's what every business does apart from those who demand ID for every transaction. It's useful for businesses to give people their opinion or intention for who they're targeting, but it's entirely inappropriate for every website and online service to force their opinion onto others. They aren't qualified to know what's appropriate for a specific child and platforms like facebook have repeatedly demonstrated that they absolutely can't be trusted to put our children's interests above their own.
> Online platforms should do what nearly every other publisher does and provide a rating for their content.
That only happens to "publications" of particular forms where state regulation has mandated it, or where enough noise was made about state regulation mandating it (or simply censoring content) that the industry adopted a rating system as a way to discourage that (and in the latter case, there are always plenty of publishers that don't make use of the industry rating system, either at all or at least for selected publications in the field to which the ratings nominally apply.)
> They provide a "kids" profile populated with their own curated content if that's the kind of thing I want and for everything else they provide ratings
Netflix does not provide ratings for "everything else". Most of what they carry has either MPAA or TV Parental Guidelines ratings, and if it has such ratings they provide them. But they have content which does not have such ratings, which is simply noted as not being rated. (Of course, if a "not rated" header is a valid way to comply with your "you must have ratings in an HTTP header" law, then it is trivial to comply by providing the "not rated" header for every piece of content, but this doesn't actually achieve anything.)
> Online platforms should do what nearly every other publisher does and provide a rating for their content.
That's fine, but it needs an enforcement mechanism, or we're back to where we currently are ("click here if you're 18").
> It would be easy enough to push a rating to clients, they could even use HTTP headers for it. If lawmakers really felt the need to interfere in all of our operating systems it could require some means to collect and act on those ratings.
I completely agree it seems reasonable at a glance to have websites push ratings and have the enforcement done e.g. at the web browser level (with the web browser knowing how to enforce based on the OS's supplied age bracket), rather than making websites read the age bracket and act on it directly. Although it does still run into questions about how you handle websites with content from multiple brackets (like Reddit or X): what's the UX supposed to look like if a child attempts to access adult content on one of those platforms? If the platform can't know what's happening (due to your privacy/safety concerns), then you're limited to the web browser entirely breaking the interaction or somehow redirecting them somewhere else.
> That's fine, but it needs an enforcement mechanism, or we're back to where we currently are ("click here if you're 18").
It'd be dead simple to tell whether a website returned a rating: just pull the HTTP headers, and if it isn't there, warn them first and then fine them (or whatever). You could even have browsers refuse to load pages that didn't include a rating header in the response, and enforcement would take care of itself.
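That check really is only a few lines. A hedged sketch in Python, where "Content-Rating" is a made-up header name (nothing like it is standardized today) and the stdlib email parser stands in for a real HTTP client so the example stays self-contained:

```python
# Sketch of the "just pull the headers" compliance check. The
# "Content-Rating" header is hypothetical; no such standard exists.
from email.parser import Parser

def has_rating(raw_headers: str) -> bool:
    """Return True if the response headers include a rating declaration."""
    headers = Parser().parsestr(raw_headers)
    return headers.get("Content-Rating") is not None

# A regulator (or a browser that refuses to render unrated pages) would
# flag any response where this returns False.
```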
> it does still run into questions about how you handle websites with content from multiple brackets
I think it'd be up to reddit (or mods) to set ratings for each subreddit and moderate accordingly. Pages at /r/MsRachel/ would return a different rating than /r/watchpeopledie.
Same with twitter, I guess. Every user could specify whether their account is intended for children or not. Elmo's twitter account would be shown to everyone, while accounts that don't intend to self-censor wouldn't.
> what's the UX supposed to look like if a child attempts to access adult content on one of those platforms?
Browsers that detect a rating higher than authorized can just throw up an about:blocked page telling kids to talk to their parents for access to the page they wanted, or to click the back button to return to the page they were on.
The platforms would see that a page was requested, and they'd transmit the data to the client along with the rating header. They wouldn't get any signal that the page was blocked. It'd look no different on the server side than it would if the user had clicked a link and then closed their browser/tab/window. If you wanted to be sneaky, you could actually have the browser load the page in the background to avoid platforms guessing between a closed tab and blocked access.
This not only solves the privacy/safety concerns, it also, most importantly, puts parents back in control of what their children can access. Parents would even be able to run software that logs the times/urls of blocked pages, and override a rating based on URL or domain. Parents could block roblox.com, even though it returns a "for kids" header, if they didn't want their 8-year-old playing in an ad-infested online pedo playground, but still allow their mature 10-year-old access to plannedparenthood.org despite its adult rating, without exposing them to everything else adult on the internet.
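The bracket logic the browser would need is tiny. A rough Python illustration; the bracket names and the treat-missing-as-adult rule are my assumptions, not anything standardized:

```python
# Hypothetical age brackets, ordered from least to most restricted.
# The names are for illustration only.
BRACKETS = ["all-ages", "teen", "adult"]

def allowed(page_rating, user_bracket):
    """Decide whether the browser should render a page.

    page_rating comes from the (hypothetical) rating header; None means
    the site sent no rating, which we treat as adult-only so unrated
    pages get blocked for minors rather than waved through.
    """
    if page_rating is None:
        page_rating = "adult"
    return BRACKETS.index(page_rating) <= BRACKETS.index(user_bracket)

# A blocked request could still be fetched in the background so the
# server sees nothing unusual; the browser just shows about:blocked
# instead of rendering the response.
```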
There are countless better alternatives to what facebook wants us all to be subjected to, but facebook couldn't care less about our interests; they're only looking out for themselves, and lawmakers are happy to take their bribes and eager to erode our ability to browse without an ID attached to our every action.
>used that time to have a sensible conversation about policy trade offs,
On HN itself, no way. Too many people here make far too much money on ads to want that. And the rest, who want freedom, seem to want so much of it that it hands huge corporations the freedom to crush them.
>things than a digital age verification that doesn't track every time you use it.
The big companies that pay the politicians don't want that, therefore we won't get that.
> On HN itself, no way. Too many people here make far too much money on ads to want that.
Ya know, this might explain why the warnings seem to fall on deaf ears here.
New favorite person on the internet.
> We could have used that time to have a sensible conversation about policy trade offs [of age verification]…
There is always a conversation, but it is often not the popular one and gets drowned out by whatever everyone is excited about at the moment. You can find it if you seek it out.
Lawrence Lessig’s book “Code” (1999), for example, talks about how a completely unregulated internet is an anomaly, argues that regulation will certainly be necessary, and advocates that it be done in a thoughtful manner.
It's not about whether it "doesn't" track you: the government can always claim that it doesn't. That claim is unlikely to stay true.
It's really either they can't track you or they will track you.
Best time to plant a tree: 30 years ago.
Second best time to plant a tree: now.
A Kindernet would solve many problems. Hardware-gated access, local moderation and control, zero commerce or copyright, whatever you want to do to make the environment uninteresting to bad actors. Frame opposition to the concept as demand for access to your children.
Absolutely: I said something similar recently: https://news.ycombinator.com/item?id=46766649
Exactly. There's a clear alternative in my mind, one I'm sure is objectionable in its own way but I think is the least evil of the three: require providers to label their content and make them liable for it. This allows parents to do the censoring, which is functionally impossible now because no parent can fight the slippery power of multibillion dollar software investments designed to prevent them from having control over what their kids see.
So you're saying these corporations are responsible for verifying the age of their users without verifying the age of their users?
I'm not saying that, nor did I allude to it in any way. I made no assertions as to what the solution should be.
The ideal scenario would be everyone choosing not to engage with these predatory platforms. Going from there, the right question to me is what steps we have to take as a society for that to become even remotely realistic and, subsequently, what role governments can or should have in that.
For starters, I would be in favor of fines that actually hurt the bottom line instead of this "cost of business" bullshit. We have handed these corporations unprecedented access to and control over our lives, to the point that they erode democracy and the social fabric itself. The inevitable abuse of that power when it comes with barely any strings attached needs to be punished in a way that makes it unattractive as a business model at the very least.
Instead of lowering the attack surface by locking out kids, and in turn introducing mass surveillance which at best also lends itself to abuse, the root issues of ruinous greed and lack of accountability need to be addressed. The whole concept that there is no price too high for profits needs to burn. Social media is just one of the more recent manifestations of it.
They’re the oil barons of our day. They frack our data and output psychological/social pollution.
That's because we should be regulating the social media industry rather than regulating social media users.
Unfortunately, social media users don't have billions of dollars to spend on lobbying and related activities around the world.
> That's because we should be regulating the social media industry rather than regulating social media users.
These lawsuits and regulations are against the industry, not the users.
The regulations and lawsuits are driving the pressure to ID check users and remove end-to-end encryption.
The ask is to treat users differently based on age. How can they do that without verifying their users age?
We should be removing the harmful aspects of modern social media, which are harmful for everyone, not just minors, by making them unprofitable or even outright illegal.
Instead we are saying "only adults should use this" which, while technically regulating the industry, places the restriction on users.
We're treating it like tobacco or alcohol (two industries that have similarly spent millions upon millions of dollars in lobbying efforts) but we should be treating it like asbestos.
OK, so what would be in the text of this law making it enforceable and not easily game-able by the social media companies and without severe unintended consequences?
Why are you asking lawmaker questions of people on HN? What kind of answer are you expecting?
Just because I don't know how to write a law that can prevent it doesn't mean that I can't recognize an actual issue when I see it.
Because people like you then go and vote for politicians without actually understanding what they are proposing.
It's all Trump style "believe me I know how to fix it" and you will vote for the person that pushes your buttons regardless of whether they have a plausible solution or not.
So only lawyers should be allowed to vote, otherwise we are subjected to ad hominem attacks?
Lack of an informed populace has gotten us the government we have today in the US. I think we can do better.
It very much seems like you think you could do better. Not being a lawyer does not make someone part of an uninformed populace.
So everyone should go to law school?
You honestly think facebook has no idea that the children using their website are children? The combination of the children's selfies, social network, GPS coordinates, and posts make it very clear. Facebook already knows who the children are and they've been explicitly targeting them accordingly.
You want people to be kicked off the internet because they have a baby face? You think the law should mandate the use of an imperfect facial recognition system?
I think that facebook has been using facial recognition on every photo uploaded to their platform for a very very long time and that they already use that data in part to determine the age of users. Facebook hasn't been kicking anyone off the internet because of that data so far. Instead facebook just targeted the users they decided were children as children.
Forcing the users to verify their age changes nothing. It gives the illusion of "doing something" but it just gives facebook data they already had. What's still needed is regulating social media platforms themselves to place explicit limits on what they can do to hurt their users, including children.
Meta spent $2bn lobbying for this ID verification stuff:
https://news.ycombinator.com/item?id=47361235
> I'm very suspicious of the current international agreement that it's time to take action
Especially since, when you look at the behavior of younger people, they're way more careful about social media than millennials were. My teenage child and their friends keep all of their conversations in a massive but private group chat. Any social media they consume is basically 'read only'. They don't post online; none of them have social media accounts where they post pictures of themselves, etc.
Same with all of my younger gen-z coworkers. If they have socials, they post very selectively and all content is work-friendly.
The people I see who need "protection" are aging millennials who don't really understand how wildly they're exposing themselves and their families. I cringe when I see the amount of personal photos and information shared by the few millennials I know who still need their ego boost from these platforms (and that number itself is much smaller).
Younger people don't share their opinions or anything resembling private photos online any more.
I definitely would not agree with this, and the user metrics of platforms like tiktok and instagram would argue otherwise to your anecdote. Many are showing far more of an alleged window into their lives than ever before; key word being "alleged", as it's always greatly curated in a way that often attempts to make everything look perfect and effortless.
There are absolutely a lot of gen z who avoid social media, but to pretend most are privately hunkered away is to completely ignore today's social media usage.
Given that it's happening simultaneously with the war on E2EE and general purpose computing, their goals are as transparent as it gets. The West is at this point only a decade behind China.
Governments always want censorship and speech control. That never changes. The only difference is that now the general populace has accumulated enough disgruntlement to social media to be used against themselves.
No, the difference is that while governments are still constrained by the rule of law, it’s cheap PR to fight the government on data-access claims; but once they turn authoritarian, fascist industrialists fall over themselves to feed everything into Palantir.
There's no agreement other than maybe that social media is bad for children. To get kids off of there you need to identify who's a kid and who isn't. Same with alcohol and tobacco. Obviously people shouldn't give their ID to Meta and hopefully many will not but those that do, for me, as someone who doesn't use social media, that's a small price to pay to keep kids off. Again, Meta is completely optional, it's a platform to share stupid videos, no one NEEDS to be there.
I’m deeply worried by how uncritical these responses are. Meta is removing end-to-end encryption specifically because these lawsuits are trying to claim end-to-end encryption is a tool for child abuse.
The “think of the children” angle is the perfect angle to pressure companies to make communications readable by the government. And here tech audiences are welcoming it and applauding because they couldn’t read past the headline and they think anything that hurts Zuck is good.
How anyone can see this happening and not draw the connections to Discord and other services also pushing ID checks is beyond me. Believing that this will only apply to services that don’t affect you is short sighted.
A lot of the ID verification stuff is coming FROM those companies
I’ve just been stung by iOS 26.4’s implementation of the age-gate. My only option has been to rollback with a 26.3.1 IPSW.
I unlurked and made a thread last night, but I think it might be hidden due to account age: https://news.ycombinator.com/item?id=47511919
Yep, your post and this comment were hidden. I vouched for them so they're visible now. Good luck!
[dead]
Meta is lobbying to push age verification to the OS level.
I have read the OSINT report from Reddit. The data it has is being interpreted as Meta orchestrating a global lobbying scheme.
However the data is equally if not more supportive of Meta simply taking advantage of global political sentiment to position itself better.
I’ve mentioned this elsewhere, but the HN zeitgeist seems to be resistant to the idea that tech is the “bad guy” today.
I work in trust and safety, and have near front row seats to all the insanity playing out today.
Do you think Meta wouldn't want to be legally mandated to ask for your id? The improvement to ad targeting alone would be enough to pay for any lost users. They would probably want nothing more than to be in the same business as idema and the other online identity/age verification providers are.
Critically think about this for a second before believing some ChatGPT-generated "OSINT" report on reddit. Otherwise, you'll allow corpos to use your mob hatred against you.
I think that report has multiple issues, but it’s currently popular and people are fond of blaming meta.
Even your point - meta is not after mandated IDs, but they see the way public opinion is moving and are using it to their tactical advantage. They are lobbying to push the regulatory burden on app stores and operating systems.
[dead]
because it is a false dilemma
[dead]
[dead]
Tech bros deliberately made digital crack for kids and corporations refuse to moderate online content.
There is no conspiracy the general public is faced with a crisis and they are desperate for a solution.
The teen suicide statistics do not lie.
> The teen suicide statistics do not lie.
Teen suicide rates in the US are lower now than they were in the 1990s.
This doesn’t paint the entire picture. Suicide rates peaked in 1990 and then declined to their lowest point in 2007; from there, the rates started rising again.
Like all metrics, they fluctuate over time. But they've remained pretty stable for decades at around 10 per 100k per year. The recent rise doesn't really coincide with social media adoption. By 2008, >80% of teens were using social media. If social media adoption was driving the increase in suicides, we would have started to see a rise in suicides around the early 2000s, reaching its peak around 2008. But that adoption of social media by teens was coupled with a decrease in suicides. The more recent rise in teen suicides occurred during a period of largely flat teen social media adoption (because nearly 100% of them were already on social media by the end of the 2000s).
This idea of teen suicide painting a clear picture about the impact of social media just isn't borne out by the data. And lastly, people ought to remember that teens have the lowest rate of suicide among any age cohort.
> If social media adoption was driving the increase in suicides, we would have started to see a rise in suicides around the early 2000s, reaching it's peak around 2008.
I think there is a logical fallacy here. Social media has not remained stable since 2008. For one thing, 2008 social media used the chronological timeline. For another, it didn't show "recommended" (or sponsored) content in your feed. There was no TikTok. Facebook was relatively new and MySpace was not even really feed-based as I recall.
Facebook moved away from chronological timelines as default in 2011. YouTube added "recommended" videos tab in 2007.
Right - but these were also not "hard cut" dates. They are a couple simple examples of the evolution of social media that continued (and continues) to occur.
The platforms continue to optimize for engagement (i.e. addiction.)
There is a claim that it's not social media on its own, but social media on smartphones that's responsible for a decline in child/teen mental health.
The world is bigger than the US.
Anyway, you can go on HN and deny there is a problem, but you will lose public opinion and, crucially, the voting booth.
The fine was levied in a US court.
The general public is being told they are faced with a crisis. This has been a problem for at least a decade, yet suddenly it's at the forefront and conveniently ties into ID verification for everyone to use general purpose computing.
I'm sorry, but if you don't think there's a conspiracy, I have a bridge to sell you. It was already unveiled that Meta has spent billions lobbying to promote this legislative change.
> The general public is being told they are faced with a crisis.
> This has been a problem for at least a decade.
I get your point, but anyone who doesn't is asking "Which is it?"
I think everyone can see there are problems. Is there a crisis? I don't think so. Same problems we've always had, but on a computer.
People that know tech, know these laws cross a MAJOR line. Not a little slippery slope thing, this is off a cliff. But I don't think most people, that are already used to having to sign in with an online account on every device they use, even their TV, see it as that big a step. They don't even realize how predatory it is that they are required to sign in. What they need to see is that the sign in requirement was a choice by the vendor. These are LAWS, demanding no one ever be given the choice to not reveal personal information about themselves to use ANY computer. That's the point that needs to be driven home.
You're arguing there's a conspiracy, but even if there is, what is the best action for governments to take given the devastating impact social media has been demonstrated to have on young people especially?
I don’t know what the solution is, but introducing mass surveillance of ALL users on their own devices hurts the general population - do you think it will solve the problem?
Oh hell no!
Its been decades of work to even get social media to court.
No one wants to talk about this or look at the issues when it’s not sexy.
$@&$$ - I’ve been at conferences and had safety teams cry on my shoulder about how THEY don’t get engineering resources even when they ask for them.
Tech platforms suppress so much research and hold so much data hostage that an entire research coalition has formed around independence from tech.
Zuck and tech as a whole pivoted to drop safety investments the moment this government came to power.
And this is for users in frikking America!
The shit that is going down in the rest of the world is a curse. The sheer amount of NCII that exists, with zero recourse for people whose lives are destroyed is insane.
> Zuck and tech as a whole pivoted to drop safety investments the moment this government came to power.
I think the question to ask here is, if both Meta and the current administration don't care about child safety, why is the age verification stuff going so smoothly? Is helping them do this really the right move?
Well it’s not going smoothly. People on HN are talking about it now, but they are really talking about privacy.
For the rest of the world this has been brewing for more than a decade.
Australia was actually the one to tip the first domino. This is just a US state verdict on willful harm by a firm. It's not even about age verification.
For Meta, shifting regulatory burdens to the OS / app stores reduces its own regulatory burden.
For governments, part of it is actually trying to come to grips with an impossible safety imperative and another part of it is happy to gain more control and power.
The power grab needs to be curtailed, and the people actually trying to help kids need better technical solutions.
Really? You still think you're the one looking at it all wrong? It's exactly what you think it is. Stop giving blatant malice the benefit of the doubt, especially the doubt they've directly instilled.