What's important is that we blame the victims instead of the corporations that are abusing people's trust. The victims should have known better than to trust corporations.
Right, both things can be wrong here.
We need to better educate people on the risks of posting private information online.
But that does not absolve these corporations of criticism of how they are handling data and "protecting" people's privacy.
Especially not when those companies are using dark patterns to convince people to share more and more information with them.
If this were 2010 I would agree, but this is the world we live in now. If you post a picture of yourself on a lamp post on a busy city street, you can't be surprised if someone takes it. It's the same on the internet, and everyone knows it by now.
I have negative sympathy for people who still aren't aware that if they aren't paying for something, they are the something to be sold. This has been the case for almost 30 years now with the majority of services on the internet, including this very website right here.
People are literally born into that misunderstanding all the time (because it’s not obvious). It’s an evergreen problem.
So you are basically saying you have no sympathy for young people who happen not to have been taught about this, or to have been guided by someone articulate enough to explain it.
Is it taught in schools yet? If it’s not, then why assume everyone should have a good working understanding of this (actually nuanced) topic?
For example, I encounter people who believe that Google literally sells databases, lists of user data, when the actual situation (that they sell gated access to targeted eyeballs at a given moment, and that this access slowly leaks identifying information) is more nuanced and complicated.
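To make the distinction concrete, here's a minimal Python sketch of the model as I understand it (all names hypothetical, not any real ad API): the platform matches an advertiser's targeting against profiles it keeps in-house and only reports that a matching impression was served, rather than handing over a list of users.

    # Minimal sketch (hypothetical names, not a real ad API): user data
    # stays inside the platform; ads are matched against it internally.
    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        user_id: str           # never leaves the platform
        interests: set[str]

    @dataclass
    class Campaign:
        advertiser: str
        target_interests: set[str]

    def serve_ad(profile: UserProfile, campaigns: list[Campaign]) -> str | None:
        """Pick an ad for this user right now; return what the advertiser sees."""
        for c in campaigns:
            if c.target_interests & profile.interests:
                # The advertiser learns "someone matching my targeting saw
                # my ad," not who it was. Identity leaks slowly and
                # indirectly, e.g. when the user clicks through and
                # self-identifies on the advertiser's own site.
                return f"impression served for {c.advertiser}"
        return None

    viewer = UserProfile("u123", {"hiking", "cameras"})
    print(serve_ad(viewer, [Campaign("CameraCo", {"cameras"})]))

No database changes hands in that model; the leak is in the aggregate of impressions and clickthroughs, not a literal list for sale.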
It is taught in schools that everything you post online is public.
That explains why ISPs sell DNS lookup history, why your utility company sells your habits, and why your TV tracks your viewership. I've paid for all of those, but somehow I'm still the product.
Tbh, even if they are paying for it, they're probably still the product. Unless maybe they're an enterprise customer who can afford to pay orders of magnitude more for relative privacy.
I paid big $$ for my smart TV, yet I still feel like I'm the product :(
Modern companies: We aim to create or use human-like AI.
Those same modern companies: Look, if our users inadvertently upload sensitive or private information then we can't really help them. The heuristics for detecting those kinds of things are just too difficult to implement.
> The victims should have known better than to trust corporations
Literally yes? Is this sarcasm? Are we in 2025 supposed to implicitly trust multi-billion-dollar multinational corporations that have decades' worth of abuses to look back on? As if we couldn't have seen this coming?
It's been part of every social media platform's ToS for many years that they get a license to do whatever they want with what you upload. People have warned others about this for years and nothing happened. Those platforms have already used that data for image classification, identification, and the like. Again, nothing happened. What's different now?
> blame the victims
If you post something publicly, you can't complain that it is public.
But I can complain about what happens to said something. If my blog photo becomes deepfake porn, am I allowed to complain or not? What we have with AI is an entirely novel situation, one worth at least a serious discussion.
FWIW... I really don't think so. If you, say, posted your photo on a bulletin board in your local City Hall, can you prevent it from being defaced? Can you choose who gets to look at it? Maybe they take a picture of it and trace it... do you have any legal ground there? (Genuine question.) And even if so... it's illegal to draw angry eyebrows on every face on a billboard, but people still do it...
IMO, it being posted online to a publicly accessible site is the same. Don't post anything you don't want right-click-saved.
The GDPR right to erasure (Article 17) says I can demand that my personal data be deleted, and I don't see any language limiting that to things _I_ submitted.
No. Don't give the entire world access to your photo. Creating fakes with Photoshop was a thing well before AI.
> If my blog photo becomes deepfake porn
Depends. In most cases this is forbidden by law and you can claim actual damages.
That's helpful if they live in the same country, you can figure out who the 4chan poster was, the police are interested (or you're willing to risk paying a lawyer), you're willing to sink the time into pursuing such action (and, if it's criminal, risk an adversarial LEO interaction), and you're satisfied knowing hundreds of others may be doing the same and won't be deterred. Of course, friends and co-workers are too close to you to post publicly whatever they generate. Thankfully, the Taylor Swift laws in the US have stopped the generation of nonconsensual imagery and video of their namesake (they haven't).
My daughter's school posted pictures of her online without an opt-out, and she's also on Facebook via family members, so it's just kind of... well beyond the point of trying to suppress. Probably best to accept that people can imagine you naked, at any age, doing anything. What's your neighbor doing with the images saved from his Ring camera pointed at the sidewalk? :shrug:
I am not talking about a 4chan poster. I am talking about a company doing it.
Don't have a blog photo in the first place.
> But I can complain about what happens to said something
no.
> but ...
no.
Sure, and if I put out a local lending library box in my front yard, I shouldn't be annoyed by the neighbor who takes every book out of it and throws it in the trash.
Expectations of decorum and respect don't disappear the moment it's technically feasible to be an asshole.
That's a bad analogy. Most people, including me, expect that their "public" data will be used for AI training. Based on the ads everyone gets, most people know perfectly well that anything they post online will be used in AI.
Are you trying to argue that 10 years ago, when I uploaded my resume to LinkedIn, I should have known it'd be used for AI training?
Or that a teenager who signed up for Facebook should have known that the embarrassing things they were posting would train AI and were, as you called it, public?
What about the blog I started 25 years ago and then took down, but which lives on in the GeoCities archive? Was I supposed to know it'd go to an AI overlord corporation back when I was in middle school writing about dragon photos I found on Google?
And we're not even getting into data breaches, or something that was uploaded as private and then sold when the corporation changed its privacy policy decades after it was uploaded.
It's not a bad analogy when you don't extend all the grace to corporations and none to the exploited.
"Corporations".... you gave access to the whole world, including criminals.
> Most people, including me, expect that their "public" data will be used for AI training.
Based on what ordinary people have been saying, I don't think this is true. Or, maybe it's true now that the cat is out of the bag, but I don't think most people expected this before.
Most tech-oriented people did, of course, but we're a small minority. And even amongst our subculture, a lot of people didn't see this abuse coming. I didn't, or I would have removed all of my websites from the public web years earlier than I did.
> Most tech-oriented people did
In fact it's the opposite. People who aren't into tech think Instagram is listening to them 24/7 to decide what feed and ads to show them. There was even a hoax in my area among elderly groups that WhatsApp was using profile photos for illegal activity, and many people removed their photos at one point.
> I didn't, or I would have removed all of my websites from the public web years earlier than I did.
Your comment is public information. In fact, posting anything on HN is a surefire way of handing your content over for AI training.
> People who aren't into tech think Instagram is listening to them 24/7 to decide what feed and ads to show them
True, but that's a world apart from thinking that your data will be used to train genAI.
> In fact, posting anything on HN is a surefire way of handing your content over for AI training.
Indeed so, but HN seems to be a bad habit I just can't kick. However, my comments here are the entirety of what I put up on the open web and I intentionally keep them relatively shallow. I no longer do long-form blogging or make any of my code available on the open web.
But you're right: leaving HN is something I need to do.
No, the average person has no idea what "AI training" even is. Should the average person have an above-average IQ? Yes. Could they? No. Don't be average yourself.
Seriously, when YOU posted something on the Internet 20 years ago, did you expect it to be used by a corporation to train an AI 20 years later?
Data sourcing has been a discussion, at least in AI circles, for much longer than 20 years.
So if you are asking me, I would have to say yes. I cannot speak for the original poster.