Uhh okay, so they do exploit vulnerabilities, they just try to target victims who can be served ads? What a weird distinction.

Most users seem not to care about ad tech/tracking as much as technical users do. Even further, most seem to want to enable more tracking to [protect the children or whatever the reason is] pretty regularly (at least in opinion polls about various legislation). Tor users are not at all like that, and could be harmed in a very different way... so I think it's fair to frame them differently, even if I'd personally say people should want to treat both as similar offenses, because neither should be seen as okay in my eyes.

> Most users seem to not care about ad tech/tracking

I don't think this is true.

Most people don't understand that they're being tracked. The ones that do generally don't understand to what extent.

You tend to get one of two responses: surprise or apathy. When people say "what are you going to do?" they don't mean "I don't care"; they mean "I feel powerless to do anything about it, so I'll convince myself not to care or think about it". Honestly, the interpretation is fairly similar when people say "but my data isn't useful" or "so what, they sell me ads (I use an ad blocker)". Those responses are mental defenses to reduce cognitive overload.

If you don't buy my belief, then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast." The number of people who are going to be okay with that will plummet as soon as you change it from "Meta" to "some guy named Mark". You'll still get nervous jokes of "you're wasting money, I'm boring", but do you think they wouldn't get upset if you actually hired a PI to do that?

The problem is people don't actually understand what's being recorded and what can be done with that information. If they did, they'd be outraged, because we're well beyond what 1984 proposed. In 1984 the government wasn't always watching; the premise was more about a country-wide Panopticon, where the government could be watching at any time. We're well past that. Not only can the government and corporations do that, but they can look up historical records, and some data is always being recorded.

So the reason I don't buy the argument is that 1984 is so well known. If people didn't care, no one would know about that book. The problem is people still think we're headed towards 1984 and don't realize we're 20 years into that world.

> If you don't buy my belief, then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast."

Yes and no, because people will still think that when it's done at scale it's different from some stalker following YOU explicitly rather than following everybody. Also, the mental model is "they just want to sell me something, but I can just ignore it and not buy if I'm not really interested". And going down that second rabbit hole in particular opens up a whole world about consumerism that not many people are comfortable with. At the same time, there are people who are totally against consumerism who should be more informed and care more about tracking and privacy; with those people it's probably easier to have that conversation.

Some good counterpoints. But you're suggesting more people would be okay with the 'PI following them' hypothetical than GP suggests, simply with the knowledge that others are subject to the same degree of surveillance?

I'm not so sure that counterpoint in particular holds. I think to say the "number of people that are going to be okay with that will [still] plummet" is an understatement. I'd go so far as to say no one, at least no rational person, would be okay with a "record [of] who you talk to, when, how long, where you go, what you do, what you say, when you sleep", etc., just because of the scale.

> If you don't buy my belief, then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person.

This is exactly what I was saying - if you look at the polls, people actually tend to support things like the UK's Online Safety Act. Explaining it more does not usually result in a change of that. The difference with a PI is you're asking about them individually instead of everyone - of course they trust themselves, they just want everyone surveilled for that same feeling of confidence.

This is a lot of text to say that people don't recognize digital tracking as a threat, even when it is explained to them. Which is basically exactly what the parent post you replied to said.

People don't care. This is demonstrably true.

My read of the comment is that it's almost never actually fully explained to them, and that they would almost certainly care if they actually understood what was happening. That's my experience. Once you explain that it's more information than a private investigator tailing you all day and stealing your phone could gather, people usually wise up to the fact that they actually don't like it.

In my experience those users express a mix of surprise and irritation when they get ads about something they did minutes or hours before, but they accept that's the way things are.

I joke that I'm a no-app person, because I install very few apps and I use anti-tracking tech on my phone that's hard to even explain or recommend to non-technical friends. I use Firefox with uMatrix and uBlock Origin, plus Blokada. uMatrix is effective but breaks so many sites unless one invests time in playing with the matrix. Blokada breaks many important apps (banking) unless one understands whitelisting.

> Most users seem to not care about ad tech/tracking as much as technical users.

Part of the problem is the misconception that the data being collected is only used to determine which ads to show people. Companies love to frame it that way because ultimately people don't actually care that much about which ads they get shown. The more people get educated on the real-world/offline uses of the data they're handing over, the more they'll start to care about the tracking being done.

This is definitely a point that should be emphasized more in this discussion. Even still, where it ultimately falls flat (currently) is the lack of hard proof to show people that it's truly happening.

Also, the degree to which some are more comfortable with the personal-privacy/'feeling of personal safety' tradeoff notwithstanding, the examples that do get media traction are predictably extremes that the average person doesn't feel apply to them.

Ad tracking data has been used to target ICE raids.

Well presumably they want to make money.

Painting fingerprinting as a vulnerability exploit is your own very biased and very out-of-norm framing.

Instead of trying to convince by assertion, maybe you could try offering an actual objection to the argument raised up-thread?

On what basis do you claim that software developers, who did not establish a means for third parties to get a stable identifier, nevertheless intended that fingerprinting techniques should work?

> Instead of trying to convince by assertion

TBF the idea that any and all fingerprinting falls under the umbrella of exploiting a vulnerability was also presented as an assertion. At least personally I think it's a rather absurd notion.

Certainly you can exploit what I would consider a vulnerability to obtain information useful for fingerprinting. But you can also assemble readily available information, and I don't think that doing so is an exploit, though in most cases it probably qualifies as an unfortunate oversight on the part of the software developer.
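To make the "assembling readily available information" case concrete, here's a minimal sketch in Python. The attribute names and values are made up, standing in for properties a page can read through ordinary, documented browser APIs; the point is that combining them yields a stable identifier without bypassing any technical barrier:

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Combine freely exposed attributes into a stable identifier."""
    # Sort keys so the same attributes always hash the same way.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical values of the kind a page can read without any exploit.
visitor = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",
    "screen": "1920x1080x24",
    "timezone": "America/New_York",
    "language": "en-US",
}
print(fingerprint(visitor))
```

No single attribute identifies anyone; it's the combination that does, which is exactly why calling this an "exploit" is contested.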

For the reader's convenience I also restated the argument in my post, but if you look you can see it was stated much earlier in the thread.

You haven’t made an actual argument. You’ve made a repeated assertion that you feel so religiously about that you simultaneously can’t justify it and get very abrasive when someone asks you to back it up.

There's a pretty big difference between:

1) wanting functionality that isn't provided and working around that

and

2) restoring such functionality in the face of countermeasures

The absence of functionality isn't a clear signal of intent, while countermeasures against said functionality are.

And then there is the distinction between the intent of the software publisher and the intent of the user. There is a big ethical difference between "Mozilla doesn't want advertisers tracking their users" and "those users don't want to be tracked". If these guys want to draw the line at "if there is a signal from the user that they want privacy, we won't track them", I think that's reasonable.

The presence of the "Do Not Track" header was a pretty clear indicator of the intent of the user. Fingerprinting persisted exactly in the face of such countermeasures.
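For reference, honoring that signal takes almost no effort on the tracker's side. A minimal sketch (Python; the helper name is made up) of the check a compliant server would perform before fingerprinting:

```python
def user_opted_out(headers: dict) -> bool:
    """A user agent sending "DNT: 1" is explicitly asking not to be tracked."""
    return headers.get("DNT") == "1"

# A request from a browser with Do Not Track enabled vs. one without it.
print(user_opted_out({"DNT": "1"}))  # True
print(user_opted_out({}))            # False
```

The triviality of the check is part of the argument: the signal wasn't hard to honor, it was simply ignored.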

Even if the intent is clear I don't think the act of reading an available field qualifies as exploiting a vulnerability. IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.

Here's the technical measures that are being worked around: https://blog.mozilla.org/en/firefox/fingerprinting-protectio...

> IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.

Even well-known vulnerabilities like SQL injection don't qualify under this definition?

Sure, my wording isn't perfect; I don't have a watertight definition ready to go. To my mind the spirit of the thing is that (for example) if a site has an HTTP endpoint that accepts arbitrary SQL queries and blindly runs them, then sending your own custom query doesn't qualify as an exploit any more than scraping publicly accessible pages does. Whereas if you have to cleverly craft an SQL query in a way that exploits string escapes in order to work around the restrictions the backend has in place, then that's technically an exploit (although it's an incredibly minor one against a piece of software whose developer has put on a display of utter incompetence).
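To illustrate that second case, here's a small self-contained sketch (Python with an in-memory SQLite table; the table and names are invented) where crafted input works around the quoting that was supposed to restrict the query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

def lookup_unsafe(name: str):
    # Naive string interpolation: the only "restriction" is the quoting
    # and the is_admin filter appended after it.
    query = f"SELECT name FROM users WHERE name = '{name}' AND is_admin = 0"
    return conn.execute(query).fetchall()

# Normal use stays within the intended restriction.
print(lookup_unsafe("alice"))  # [('alice',)]

# Crafted input closes the string early and comments out the filter,
# returning every row, admins included.
print(lookup_unsafe("x' OR 1=1 --"))
```

The first call is just using the interface; the second works around a restriction the developer put in place, which is what makes it an exploit under the definition above.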

The point isn't my precise wording but the underlying concept that making use of freely provided information isn't exploiting anything even if both the user and the developer are unhappy about the end result. Security boundaries are not defined post hoc by regret.

How would you frame it?