I don't understand what you mean. What separates this from other fingerprinting techniques your company monetizes?
No software wants to be fingerprinted. If it did, it would offer an API with a stable identifier. All fingerprinting is exploiting unintended behavior of the target software or hardware.
It makes sense to me; they're likely not trying to actually fingerprint Tor users. Those users will likely ignore ads, have JS disabled, etc. The real audience is people on the web using normal tooling.
They can just flag all Tor users as high risk. They don't strictly need to fingerprint them when it's generally fine for websites to just block signups for Tor users or require further identification via phone number or something.
You want fingerprinting to identify low-risk users so you can skip the inconvenient security checks.

Uhh okay, so they do exploit vulnerabilities, they just try to target victims who can be served ads? What a weird distinction.
Most users seem not to care about ad tech/tracking as much as technical users do. Even further, most seem to want to enable more tracking to [protect the children, or whatever the reason is] pretty regularly (at least in opinion polls about various legislation). Tor users are not at all like that and could be harmed in a very different way... so I think it's fair to frame them differently, even if I'd personally say people should want to treat both as similar offenses, because neither should be seen as okay in my eyes.
> Most users seem to not care about ad tech/tracking
I don't think this is true.
Most people don't understand that they're being tracked. The ones that do generally don't understand to what extent.
You tend to get one of two responses: surprise or apathy. When people say "what are you going to do?" they don't mean "I don't care"; they mean "I feel powerless to do anything about it, so I'll convince myself not to care or think about it". Honestly, the interpretation is fairly similar when people say "but my data isn't useful" or "so what, they sell me ads (I use an ad blocker)". Those responses are mental defenses to reduce cognitive overload.
If you don't buy my belief, then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast." The number of people that are going to be okay with that will plummet as soon as you change it from "Meta" to "some guy named Mark". You'll still get nervous jokes of "you're wasting money, I'm boring", but do you think they wouldn't get upset if you actually hired a PI to do that?
The problem is people don't actually understand what's being recorded and what can be done with that information. If they did, they'd be outraged, because we're well beyond what 1984 proposed. In 1984 the government wasn't always watching; the premise was more about a country-wide Panopticon, where the government could be watching at any time. We're well past that. Not only can the government and corporations do that, but they can look up historical records, and some data is always being recorded.

So the reason I don't buy the argument is that 1984 is so well known. If people didn't care, no one would know about that book. The problem is people still think we're headed towards 1984 and don't realize we're 20 years into that world.
> If you don't buy my belief, then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast."
Yes and no, because people will still think that when it's done at scale it's different from some stalker following YOU explicitly, rather than just following everybody. Also, the mental model is "they just want to sell me something, but I can just ignore it and not buy if I'm not really interested". And going down this second rabbit hole especially opens up a whole world about consumerism that not many people are comfortable with.

At the same time, there are people that are totally against consumerism who should be more informed and care more about tracking and privacy; with those people it's probably easier to have that conversation.
Some good counterpoints. But you're suggesting more people would be okay with the 'PI following them' hypothetical than GP suggests, simply with the knowledge that others are subject to the same degree of surveillance?
I'm not so sure that counterpoint in particular holds. I think to say the "number of people that are going to be okay with that will [still] plummet" is an understatement. I'd go so far as to say no one, at least no rational person, would be okay with a "record [of] who you talk to, when, how long, where you go, what you do, what you say, when you sleep", etc., just because of the scale.
> If you don't buy my belief, then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person.
This is exactly what I was saying - if you look at the polls, people actually tend to support things like the UK's Online Safety Act. Explaining it more does not usually result in a change of that. The difference with a PI is you're asking about them individually instead of everyone - of course they trust themselves, they just want everyone surveilled for that same feeling of confidence.
This is a lot of text to say that people don't recognize digital tracking as a threat, even when it is explained to them. Which is basically exactly what the parent post you replied to said.

People don't care. This is demonstrably true.
My read of the comment is that it's almost never actually fully explained to them, and that they would almost certainly care if they actually understood what was happening. That's my experience. Once you explain that it's more information than a private investigator tailing you all day and stealing your phone could gather, people usually wise up to the fact that they actually don't like it.
In my experience those users express a mix of surprise and irritation when they get ads about something they did minutes or hours before, but they accept that's the way things are.
I joke that I'm a no-app person, because I install very few apps and I use anti-tracking tech on my phone that's hard to explain or recommend to non-technical friends. I use Firefox with uMatrix and uBlock Origin, plus Blockada. uMatrix is effective but breaks so many sites unless one invests time in playing with the matrix. Blockada breaks many important apps (banking) unless one understands whitelisting.
> Most users seem to not care about ad tech/tracking as much as technical users.
Part of the problem is the misconception that the data being collected is only used to determine which ads to show them. Companies love to frame it that way because ultimately people don't actually care that much about which ads they get shown. The more people are educated on the real-world/offline uses of the data they're handing over, the more they'll start to care about the tracking being done.
This is definitely a point that should be emphasized more in this discussion. Even still, where it ultimately falls flat (currently) is the lack of hard proof to show people that it's truly happening.
Also, the degree to which some are more comfortable with the personal privacy/'feeling of personal safety' tradeoff notwithstanding, the examples that do get media traction are predictably extremes that the average person doesn't feel apply to them.

Ad tracking data has been used to target ICE raids.

Well, presumably they want to make money.

Painting fingerprinting as a vulnerability exploit is your own very biased and very out-of-norm framing.
Instead of trying convince-by-assertion, maybe you could try offering an actual objection to the argument raised up-thread?
On what basis do you claim that software developers, who did not establish a means for third parties to get a stable identifier, nevertheless intended that fingerprinting techniques should work?
> Instead of trying convince-by-assertion

TBF, the idea that any and all fingerprinting falls under the umbrella of exploiting a vulnerability was also presented as an assertion. At least personally, I think it's a rather absurd notion.
Certainly you can exploit what I would consider a vulnerability to obtain information useful for fingerprinting. But you can also assemble readily available information, and I don't think that doing so is an exploit, though in most cases it probably qualifies as an unfortunate oversight on the part of the software developer.
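To illustrate "assembling readily available information": a minimal sketch, with invented signal values, of how individually innocuous attributes hash into a stable identifier.

```python
import hashlib
import json

# Illustrative only: none of these freely provided signals identifies a user
# on its own, but hashed together they are often stable and distinctive
# enough to act as an identifier across visits. All values are invented.
signals = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0",
    "language": "en-US",
    "timezone": "Europe/Berlin",
    "screen": "2560x1440",
    "fonts": ["DejaVu Sans", "Liberation Mono"],
}

def fingerprint(sig):
    # Canonical JSON gives a stable string regardless of dict ordering.
    canonical = json.dumps(sig, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint(signals))  # same signals on a later visit -> same id
```

No single field is read in a way its developer would call an exploit; the identifying power only emerges from the combination.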
For the reader's convenience I restated the argument in my post as well, but if you look, you can see it was also stated much earlier in the thread.

You haven't made an actual argument. You've made a repeated assertion that you feel so religiously about that you simultaneously can't justify it and get very abrasive when someone asks you to back it up.
There's a pretty big difference between:

1) wanting functionality that isn't provided and working around that
and
2) restoring such functionality in the face of countermeasures
The absence of functionality isn't a clear signal of intent, while countermeasures against said functionality are.
And then there is the distinction between the intent of the software publisher and the intent of the user. There is a big ethical difference between "Mozilla doesn't want advertisers tracking their users" and "those users don't want to be tracked". If these guys want to draw the line at "if there is a signal from the user that they want privacy, we won't track them", I think that's reasonable.
The presence of the "Do Not Track" header was a pretty clear indicator of the intent of the user. Fingerprinting persisted exactly in the face of such countermeasures.
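That signal was as simple as a single request header; a hypothetical server-side handler honoring it needs only:

```python
# Minimal sketch (hypothetical handler): the user's intent arrives as the
# plain request header "DNT: 1", meaning "do not track me".

def should_track(headers):
    """Return False when the client has signaled 'do not track'."""
    return headers.get("DNT") != "1"

print(should_track({"DNT": "1"}))  # False: the user asked not to be tracked
print(should_track({}))            # True: no signal either way
```

The header cost nothing to honor, which is what made ignoring it such a clear statement of intent.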
Even if the intent is clear, I don't think the act of reading an available field qualifies as exploiting a vulnerability. IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.

Here are the technical measures that are being worked around: https://blog.mozilla.org/en/firefox/fingerprinting-protectio...

> IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.

Even well-known vulnerabilities like SQL injection don't qualify under this definition?
Sure, my wording isn't perfect. I don't have a watertight definition ready to go. To my mind the spirit of the thing is that (for example) if a site has an HTTP endpoint that accepts arbitrary SQL queries and blindly runs them, then sending your own custom query doesn't qualify as an exploit any more than scraping publicly accessible pages does. Whereas if you have to cleverly craft an SQL query in a way that exploits string escapes in order to work around the restrictions that the backend has in place, then that's technically an exploit (although it's an incredibly minor one against a piece of software whose developer has put on a display of utter incompetence).
The point isn't my precise wording but the underlying concept that making use of freely provided information isn't exploiting anything even if both the user and the developer are unhappy about the end result. Security boundaries are not defined post hoc by regret.
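The string-escape distinction above can be sketched concretely (table names and values invented for illustration): one lookup concatenates input into SQL, the other parameterizes it.

```python
import sqlite3

# Toy database for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # String concatenation: crafted input can escape the quoted literal
    # and rewrite the query's logic.
    return conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name):
    # Parameterized query: input stays a literal value, never becomes SQL.
    return conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # [('s3cret',)] -- the WHERE clause is bypassed
print(lookup_safe(payload))    # [] -- treated as a (nonexistent) literal name
```

The payload works precisely because it works *around* the intended restriction, which is the line being drawn between an exploit and merely using what's offered.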
How would you frame it?

Side channels that enable intended behavior, versus a flat-out bug like the above, though the line can often be muddied by perspective.
An example I've seen is an anonymous app that allows for blocking users: you can programmatically block users, query all posts, and diff the sets to identify stable identities. However, the ability to block users is desired by the app developers; they just may not have intended this behavior, and there's no immediate solution to it. This is different from 'user_id' simply being returned in the API for no reason, which is a vulnerability. Then there's maybe a case of the user_id being returned in the API for some reason that MIGHT be important too, but that could be implemented more sensibly another way; this leans more towards a vulnerability.
Ultimately most fingerprinting technologies use features that are intended behavior; Canvas/font rendering is useful for some web features (and the web target means you have to support a LOT of use cases), IP address/cookies/useragent obviously are useful, etc. (though there's a case to be made about Google, an advertising company, pushing for these features!).
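The block-and-diff attack described two paragraphs up can be sketched as follows (all endpoint names and data invented): the feed hides posts authored by blocked users, so blocking a suspect and diffing the anonymous feed reveals which posts are theirs.

```python
def visible_posts(posts, blocked):
    """Simulated feed endpoint: anonymous post ids, minus blocked authors."""
    return {pid for pid, author in posts.items() if author not in blocked}

def deanonymize(posts, suspect):
    before = visible_posts(posts, blocked=set())
    after = visible_posts(posts, blocked={suspect})
    return before - after  # whatever vanished must belong to the suspect

# Toy data: the server knows authors; the attacker only sees post ids.
posts = {"p1": "alice", "p2": "bob", "p3": "alice"}
print(sorted(deanonymize(posts, "alice")))  # ['p1', 'p3']
```

Every API call here is an intended feature; only the *combination* deanonymizes, which is what makes this kind of side channel so hard to patch.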
> Ultimately most fingerprinting technologies use features that are intended behavior
Strong disagree.
> IP address/cookies/useragent obviously are useful
Cookies are an intended tracking behavior. IP Address, as a routing address, is debatable.
> Canvas/font rendering is useful for some web features
These two are actually wonderful examples of taking web features and using them as a _side channel_ in an unintended way to derive information that can be used to track people. A better argument would be things like Language and Timezone, where you could argue "The browser clearly makes these available and intends to provide this information without restriction." Using side channels to determine what fonts a user has installed... well, there's an API for doing just that[0], and we (Firefox) haven't implemented it for a reason.
n.b. I am Firefox's tech lead on anti-fingerprinting so I'm kind of biased =)
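As a toy model of that kind of font side channel (all widths invented; in a browser they would come from measuring rendered text via canvas or the DOM): an installed font renders a probe string at a different width than the generic fallback, and that difference betrays its presence.

```python
FALLBACK_WIDTH = 100  # width of the probe string in the generic fallback font

def measured_width(font, installed):
    # Invented per-font probe widths; only fonts we model are listed.
    widths = {"Consolas": 92, "Comic Sans MS": 113}
    return widths[font] if font in installed else FALLBACK_WIDTH

def detect_fonts(candidates, installed):
    # A width that differs from the fallback's betrays the font's presence.
    return [f for f in candidates if measured_width(f, installed) != FALLBACK_WIDTH]

print(detect_fonts(["Consolas", "Comic Sans MS", "Wingdings"], {"Consolas"}))
# ['Consolas']
```

Nothing here asks "what fonts are installed?"; the answer leaks out of a rendering feature doing exactly what it was designed to do.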
The thing is, technology is either enabling something or not. The exploration space might be huge, but once an exploit is found, the exploitation code / strategy / plan can trivially proceed and be shared worldwide. So you have to deal with this when you design and patch systems.
Example: preserving paths in URLs. Safari ITP aggressively removes "utm_" and other well-known query-string parameters, even in links clicked from email. Well, it is trivial to embed them in a path instead, so that first-party websites can still track attribution, e.g. for campaign performance or email verification links.

In theory, Apple and Mozilla could actually play a cat-and-mouse game with links across all their users and remove high-entropy path segments, or confuse websites so much that they give up on all attribution. Browser makers, email client makers, or messenger makers could argue that users don't want attribution of their link clicks tracked silently without their permission. They could then say that if users really wanted, they could manually enter a code (assisted by the OS or browser) into a website, or simply grant interactive permission to be tracked after clicking a link; otherwise the website will receive some dummy results and break. Where is the line, after all?
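The evasion described above is trivial to sketch (hypothetical helper; a real deployment would also need the server to parse the path segments back out):

```python
from urllib.parse import urlsplit, parse_qsl, urlunsplit

# If a privacy filter strips well-known "utm_" query parameters, a
# first-party site can carry the same values in the path instead, where
# generic stripping is much harder without breaking ordinary URLs.

def move_utm_to_path(url):
    parts = urlsplit(url)
    params = parse_qsl(parts.query)
    utm = [(k, v) for k, v in params if k.startswith("utm_")]
    rest = [(k, v) for k, v in params if not k.startswith("utm_")]
    path = parts.path.rstrip("/") + "".join(f"/{k}/{v}" for k, v in utm)
    query = "&".join(f"{k}={v}" for k, v in rest)
    return urlunsplit((parts.scheme, parts.netloc, path, query, parts.fragment))

print(move_utm_to_path("https://example.com/landing?utm_source=mail&x=1"))
# https://example.com/landing/utm_source/mail?x=1
```

This is why parameter stripping alone can't end attribution: the tracking data just moves somewhere a filter can't safely delete.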
A vulnerability is distinct from unintended behavior.
Unintended identification is less than ideal but frankly is just the nature of doing business and any number of niceties are lost by aggressively avoiding fingerprinting.
In software intentionally optimized to avoid any fingerprinting however it is a vulnerability.
The distinction being that fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy but in something like Tor Browser that fingerprinting can be life or death for a whistleblower, etc. It's the distinction between an annoyance and an execution.
> fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy
In what way is collecting a record of a person's browsing history a "minor loss" of privacy? For many people, tracking everywhere they go online would easily expose the most sensitive personal information they have.
I think HN needs a refresher on responsible disclosure: even vulnerability scanners engage in this practice, for obvious reasons, in that it benefits both parties. One party gains exposure, and the other gets exposure and their bug squashed without the bug wreaking havoc while they try to squash it.
Logically, they are doing correlation via publicly available information - maybe better than others can - and an identifier would hurt their business, since competition can use it as well.
The real reason is that fingerprint.com's selling point is tracking over longer periods (months, their website claims), and this doesn't help them with that.
I’m going to go out on a limb and guess that you define “vulnerability” as something like “thing that will be fixed soon”. After all, Joe Random not liking a behavior doesn’t make it a vuln, there needs to be a litmus test. Am I close?
Should not, true, but in the case of many websites the reality is that allowing JS means you lose your privacy. Just like one can no longer allow WebGL and canvas by default.
Thanks to all the web devs who helped create this web dystopia.
You can't go out in public naked and just ask everyone to look away. If you want someone you don't trust to run unvetted general-purpose code on your machine, you have to accept that you are trading away some privacy. You can sandbox them (wear clothes), but that doesn't give you strict privacy.
When I go to https://noscriptfingerprint.com/ all I see is a blank page. My browser is pretty locked down in other ways which probably helps, but I'm still taking that as a good sign.
It means they are suspect. I think it's right to be wary of motives if they are involved in the very thing they aim to bring awareness to. Questions arise in my mind as to why they would do something like this in the first place.
It's been my experience that the general public doesn't seem to follow patterns and instead focuses on which switch is toggled at any given moment for a company's ethical practices. This is the main reason why we are constantly gamed by orgs that have a big-picture view of crowd psychology.
I don't trust them more because of this and maybe they've disclosed it for the wrong reasons, like not allowing a competitor to use it when they don't, but at the end of the day they did disclose a serious issue, and that's good for users.
I understand where you're coming from, by the way, but sometimes the worst person you know does the right thing and it's not fair to criticize them for doing it (you could say nothing, don't have to change your opinion about them, etc). We also don't want someone to go "if I'm bad no matter what I do, then might as well make some money with this" and sell the exploit.
> I understand where you're coming from, by the way, but sometimes the worst person you know does the right thing and it's not fair to criticize them for doing it (you could say nothing, don't have to change your opinion about them, etc). We also don't want someone to go "if I'm bad no matter what I do, then might as well make some money with this" and sell the exploit.
I hear you. I guess I just want to promote more vigilance. Looking at patterns and motives helps us stay balanced about these things IMHO.
What are you even saying? It's like getting upset at somebody who criticizes a criminal because that criminal once helped some grandma across the street. I'm not upset at the criminal because they helped a grandma across the street; obviously that's not the fucking point.
I'm not upset, I just don't think we should criticize someone for doing something good. Maybe they're a terrible org, maybe they deserve criticism most of the time, but not in this instance.
It's not like you can't point out that they did a good deed, but that they're still in the shitty business of fingerprinting users.
Also, if people only get the stick no matter what they do, then eventually some will embrace the dark side and at least make money out of it. And that's not good for you.
And like a broken clock that is right twice a day, sometimes a corporation also does the right thing, even if for the wrong reasons.
Nothing wrong with pointing out hypocrisy and bullshit, but criticizing something they did right? That's not how I operate. You are, of course, free to do things differently.
The inverse is also true: letting them whitewash their image by pretending they care about your privacy and seek to protect you will be good for their public relations, but only if we let them. I refuse to be this gullible and run to their defense for no apparent reason.
They can pretend all they want. I know what their business is; my opinion on their practices hasn't changed.
And yet, they did a good thing. I will criticize everything else, but not what they did right. It doesn't mean I'll go out of my way to praise them either... if it wasn't your comment, I wouldn't have said anything at all.
It's more like criticising a criminal when they are helping some grandma across the street, thereby treating them more harshly than the criminals that don't do that.
If you take their claim that they don’t use vulnerabilities in their products as true, then I don’t see a contradiction. If it isn’t true, then obviously there is a contradiction.
But your considering all methods that enable fingerprinting to be vulnerabilities is your own opinion. There are definitely measurable signals that are based on a user's behavior, rather than on data exposed by the browser itself.
I don't understand what you mean. What separates this from other fingerprinting techniques your company monetizes?
No software wants to be fingerprinted. If it did, it would offer an API with a stable identifier. All fingerprinting is exploiting unintended behavior of the target software or hardware.
It makes sense to me, they're likely not trying to actually fingerprint Tor users. Those users will likely ignore ads, have JS disabled, etc. the real audience is people on the web using normal tooling.
They can just flag all Tor users as high risk. They don't strictly need to fingerprint them when it's generally fine for websites to just block signups for Tor users or require further identification via phone number or something.
You want fingerprinting to identify low risk users to skip the inconvenient security checks.
Uhh okay, so they do exploit vulnerabilities, they just try to target victims who can be served ads? What a weird distinction.
Most users seem to not care about ad tech/tracking as much as technical users. Even further, most seem to want to enable more tracking to [protect the children or whatever the reason is] pretty regularly (at least in opinion polls about various legislation). ToR users are not at all like that + could be harmed in a very different way... so I think it's fair to frame them differently even if I'd personally say people should be wanting to treat both as similar offenses because neither should be seen as okay in my eyes.
Most people don't understand that they're being tracked. The ones that do generally don't understand to what extent.
You tend to get one of two responses: surprise or apathy. When people say "what are you going to do?" They don't mean "I don't care" they mean "I feel powerless to do anything about it, so I'll convince myself to not care or think about it". Honestly, the interpretation is fairly similar for when people say "but my data isn't useful" or "so what, they sell me ads (I use an ad blocker)". Those responses are mental defenses to reduce cognitive overload.
If you don't buy my belief then reframe the question to make things more apparent. Instead asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast." The number of people that are going to be okay with that will plummet. As soon as you change it from "Meta" to "some guy named Mark". You'll still get nervous jokes of "you're wasting money, I'm boring" but you think they wouldn't get upset if you actually hired a PI to do that?
The problem is people don't actually understand what's being recorded and what can be done with that information. If they did they'd be outraged because we're well beyond what 1984 proposed. In 1984 the government wasn't always watching. The premise was more about a country wide Panopticon. The government could be watching at any time. We're well past that. Not only can the government and corporations do that but they can look up historical records and some data is always being recorded.
So the reason I don't buy the argument is because 1984 is so well known. If people didn't care, no one would know about that book. The problem is people still think we're headed towards 1984 and don't realize we're 20 years into that world
> If you don't buy my belief then reframe the question to make things more apparent. Instead asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast."
Yes and no, because people still will think that when it's done at scale it's different from some stalker following YOU explicitly, and not just following everybody. Also, the mental model is "they just want to sell me something, but I can just ignore and don't buy if I'm not really interested". And especially going down this second rabbit-hole opens a whole world about consumerism that not many people are comfortable with. At the same time there are people that are totally against consumerism that should be more informed and care more about tracking and privacy; with those people it's probably easier to have that conversation.
Some good counterpoints. But you're suggesting more people would be okay with 'PI following them' hypothetical than GP suggests—simply with the knowledge that others are subject to the same degree of surveillance?
I'm not so sure that counterpoint in particular holds. I think to say the "number of people that are going to be okay with that will [still] plummet" is an understatement. I'd go so far as to say no one, at least no rational person, would be okay with a "record [of] who you talk to, when, how long, where you go, what you do, what you say, when you sleep", etc., just because of the scale.
> If you don't buy my belief then reframe the question to make things more apparent. Instead asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person.
This is exactly what I was saying - if you look at the polls, people actually tend to support things like the UK's Online Safety Act. Explaining it more does not usually result in a change of that. The difference with a PI is you're asking about them individually instead of everyone - of course they trust themselves, they just want everyone surveilled for that same feeling of confidence.
This is a lot of text to say that people don't recognize digital tracking as a threat, even when it is explained to them. Which is basically exactly what parent post you replied to said.
People don't care. This is demonstrably true.
My read of the comment is that it's almost never actually fully explained to them. And that they would almost certainly care if they actually understood what was happening. That's my experience. Once you explain that it's more information than a private investigator tailing you all day, stealing your phone could gather people usually wise up to the fact that they actually don't like it.
In my experience those users express a mix of surprise and irritation when they get ads about something they did minutes or hours before, but they accept that's the way things are.
I joke that I'm a no-app person, because I install very few apps and I use anti tracking tech on my phone that's even hard to explain or recommend to non technical friends. I use Firefox with uMatrix and uBlock Origin and Blockada. uMatrix is effective but breaks so many sites unless one invests time in playing with the matrix. Blockada breaks many important apps (banking) less one understands whitelisting.
> Most users seem to not care about ad tech/tracking as much as technical users.
Part of the problem is the misconception that the data being collected is only being used to determine which ads to show them. Companies love to frame it that way because ultimately people don't actually care that much about which ads they get shown. The more people get educated on the real world/offline uses of the data they're handing over the more they'll start to care about the tracking being done.
This is definitely a point that should be emphasized more in this discussion. Even still, where it ultimately falls flat (currently) is the lack of hard proof to show people that it's truly happening.
Also, the degree to which some are more comfortable with the personal privacy/'feeling of personal safety' tradeoff notwithstanding, the examples that do get media traction are predictably extremes that the average person doesn't feel applies to them.
Ad tracking data has been used to target ICE raids.
Well presumably they want to make money.
Painting fingerprinting as vulnerability exploit is your own very biased and very out-of-norm framing.
Instead of trying convince-by-assertion, maybe you could try offering an actual objection to the argument raised up-thread?
On what basis do you claim that software developers, who did not establish a means of for third parties to get a stable identifier, nevertheless intended that fingerprinting techniques should work?
> Instead of trying convince-by-assertion
TBF the idea that any and all fingerprinting falls under the umbrella of exploiting a vulnerability was also presented as an assertion. At least personally I think it's a rather absurd notion.
Certainly you can exploit what I would consider a vulnerability to obtain information useful for fingerprinting. But you can also assemble readily available information and I don't think that doing so is an exploit though in most cases it probably qualifies as an unfortunate oversight on the part of the software developer.
For the readers convenience I restated the argument also in my post, but if you look you can see it was also stated much earlier in the thread.
You haven’t made an actual argument. You’ve made a repeated assertion that you feel so religiously about that you simultaneously can’t justify it and get very abrasive when someone asks you to back it up.
There's a pretty big difference between:
1) wanting functionality that isn't provided and working around that
and
2) restoring such functionality in the face of countermeasures
The absence of functionality isn't a clear signal of intent, while countermeasures against said functionality is.
And then there is the distinction between the intent of the software publisher and the intent of the user. There is a big ethical difference between "Mozilla doesn't want advertisers tracking their users" and "those users don't want to be tracked". If these guys want to draw the line at "if there is a signal from the user that they want privacy, we won't track them", I think that's reasonable.
The presence of the "Do Not Track" header was a pretty clear indicator of the intent of the user. Fingerprinting persisted exactly in the face of such countermeasures.
Even if the intent is clear I don't think the act of reading an available field qualifies as exploiting a vulnerability. IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.
Here's the technical measures that are being worked around: https://blog.mozilla.org/en/firefox/fingerprinting-protectio...
> IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.
Even well-known vulnerabilities like SQL injection don't qualify under this definition?
Sure, my wording isn't perfect. I don't have a watertight definition ready to go. To my mind the spirit of the thing is that (for example) if a site has an http endpoint that accepts arbitrary sql queries and blindly runs them then sending your own custom query doesn't qualify as an exploit any more than scraping publicly accessible pages does. Whereas if you have to cleverly craft an sql query in a way that exploits string escapes in order to work around the restrictions that the backend has in place then that's technically an exploit (although it's an incredibly minor one against a piece of software whose developer has put on a display of utter incompetence).
The point isn't my precise wording but the underlying concept that making use of freely provided information isn't exploiting anything even if both the user and the developer are unhappy about the end result. Security boundaries are not defined post hoc by regret.
How would you frame it?
Side channels on intended behavior, versus a flat-out bug like the above, though the line can often be muddied by perspective.
An example I've seen: an anonymous app that allows blocking users. You can programmatically block a user, query all posts before and after, and diff the two sets to link posts to stable identities. The ability to block users is desired by the app developers; they just didn't intend this side effect, and there's no immediate fix for it. That's different from `user_id` simply being returned in the API for no reason, which is a vulnerability. Then there's maybe a middle case where the user_id is returned for some reason that MIGHT be important but could be implemented more sensibly another way; that leans more toward vulnerability.
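The block-and-diff attack described above can be sketched in a few lines. Everything here is a hypothetical stand-in for the app's real API: `visible_posts` models "fetch the feed", and blocking an account makes its posts disappear, which is exactly the signal the attacker diffs on.

```python
# Toy model: posts that vanish from the feed after blocking one
# pseudonymous account must belong to that account, linking otherwise
# anonymous posts to a stable identity.

def visible_posts(all_posts: dict[int, str], blocked: set[str]) -> set[int]:
    """Return the IDs of posts whose author is not blocked."""
    return {pid for pid, author in all_posts.items() if author not in blocked}

all_posts = {1: "anon_a", 2: "anon_b", 3: "anon_a", 4: "anon_c"}

before = visible_posts(all_posts, blocked=set())
after = visible_posts(all_posts, blocked={"anon_a"})

linked = before - after  # posts attributable to the account we just blocked
print(sorted(linked))  # [1, 3]
```

Repeating this for each account partitions the whole "anonymous" feed by author, using only the intended block feature.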
Ultimately, most fingerprinting technologies use features that are intended behavior: Canvas/font rendering is useful for some web features (and targeting the web means you have to support a LOT of use cases), IP addresses/cookies/user agents are obviously useful, etc. (though there's some case to be made about Google's pushing for these features as an advertising company!).
> Ultimately most fingerprinting technologies use features that are intended behavior
Strong disagree.
> IP address/cookies/useragent obviously are useful
Cookies are an intended tracking behavior. IP Address, as a routing address, is debatable.
> Canvas/font rendering is useful for some web features
These two are actually wonderful examples of taking web features and using them as a _side channel_ in an unintended way to derive information that can be used to track people. A better argument would be things like Language and Timezone which you could argue "The browser clearly makes these available and intends to provide this information without restriction." Using side channels to determine what fonts a user has installed... well there's an API for doing just that[0] and we (Firefox) haven't implemented it for a reason.
n.b. I am Firefox's tech lead on anti-fingerprinting so I'm kind of biased =)
[0] https://developer.mozilla.org/en-US/docs/Web/API/Local_Font_...
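The classic font-detection side channel works by rendering a probe string in "CandidateFont, fallback" and comparing its width against the fallback alone: if the widths differ, the candidate font must be installed. Here's a toy simulation of that logic (all widths are invented for illustration; in a real browser the measurements would come from canvas or DOM layout):

```python
# Width of the probe string rendered in the fallback font alone.
FALLBACK_WIDTH = 100

# Hypothetical measurements of the probe string in "Candidate, fallback":
# installed fonts render at their own width; missing fonts silently fall
# back and match FALLBACK_WIDTH exactly.
MEASURED_WIDTHS = {"Helvetica": 97, "Comic Sans MS": 100, "Ubuntu": 104}

def probe(font: str) -> bool:
    """Infer installation from the width difference side channel."""
    return MEASURED_WIDTHS[font] != FALLBACK_WIDTH

installed = [f for f in MEASURED_WIDTHS if probe(f)]
print(installed)  # ['Helvetica', 'Ubuntu']
```

Note the built-in false negative: a font whose metrics happen to match the fallback (here "Comic Sans MS") is invisible to the probe, which is part of why this is a side channel rather than an API, and why the resulting bits still add up to a usable fingerprint.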
Security by obscurity through morality? :)
The thing is, technology either enables something or it doesn't. The exploration space might be huge, but once an exploit is found, the exploitation code/strategy/plan can trivially be shared and reused worldwide. So you have to deal with this when you design and patch systems.
Example: preserving paths in URLs. Safari ITP aggressively removes "utm_" and other well-known query-string parameters, even in links clicked from email. Well, it is trivial to embed the token in a path segment instead, so that first-party websites can still track attribution, e.g. for campaign performance or email verification links. In theory, Apple and Mozilla could play a cat-and-mouse game with links across all their users, removing high-entropy path segments or confusing websites so much that they give up on attribution entirely. Browser makers, email client makers, or messenger makers could argue that users don't want their link clicks silently attributed without permission. They could then say that if users really wanted it, they could manually enter a code (assisted by the OS or browser) into a website, or grant interactive permission to be tracked after clicking a link; otherwise the website receives dummy results and breaks. Where is the line, after all?
A vulnerability is distinct from unintended behavior.
Unintended identification is less than ideal, but frankly it's just the nature of doing business, and any number of niceties are lost by aggressively avoiding fingerprinting.
In software intentionally optimized to avoid any fingerprinting however it is a vulnerability.
The distinction being that fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy but in something like Tor Browser that fingerprinting can be life or death for a whistleblower, etc. It's the distinction between an annoyance and an execution.
> fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy
In what way is collecting a record of a person's browsing history a "minor loss" of privacy? For many people, tracking everywhere they go online would easily expose the most sensitive personal information they have.
Iffy vs grossly unethical.
Someone discovering this and making it public doesn't mean others haven't independently discovered it.
I think HN needs a refresher on responsible disclosure, and that even vulnerability scanners engage in this practice for obvious reasons: it benefits both parties. The discloser gains exposure, and the vendor gets their bug squashed without it wreaking havoc while they fix it.
Logically, they are doing correlation via publicly available information, maybe better than others can, and an identifier would hurt their business since competition can use it as well.
The real reason is that fingerprint.com's selling point is tracking over longer periods (months, their website claims), and this doesn't help them with that.
It allows you to track a browser forever because it is a stable fingerprinting signal. This helps a great deal with long-term tracking.
If I understand correctly, it was only stable until you restarted Firefox / your computer.
OK, that changes it a bit, but on the other hand I’ve had my browser open for weeks now and I only restart it when the “update” button turns red lol
Correct. The ordering persists for as long as the original process continues to run.
I’m going to go out on a limb and guess that you define “vulnerability” as something like “thing that will be fixed soon”. After all, Joe Random not liking a behavior doesn’t make it a vuln, there needs to be a litmus test. Am I close?
All fingerprinting is a vulnerability, unless the client opts-in.
The opt in checkbox is labeled "Enable Javascript"
Ridiculous comment. People should not have to choose between functionality and privacy.
Should not, true, but in the case of many websites the reality is that allowing JS means you lose your privacy. Just like one can no longer allow WebGL and canvas by default. Thanks to all the web devs who helped create this web dystopia.
You can't go out in public naked and just ask everyone to look away. If you want someone you don't trust to run unvetted general-purpose code on your machine, you have to accept that you are trading away some privacy. You can sandbox them (wear clothes), but that doesn't give you strict privacy.
Implement it then.
Ah yes, the age old reply when people exhausted all arguments.
https://fingerprint.com/blog/disabling-javascript-wont-stop-...
https://github.com/jonasstrehle/supercookie
When I go to https://noscriptfingerprint.com/ all I see is a blank page. My browser is pretty locked down in other ways which probably helps, but I'm still taking that as a good sign.
The site seems to have been taken offline, but the code is here: https://github.com/fingerprintjs/blog-nojs-fingerprint-demo/
Any method of “fingerprinting” and invading a browser’s privacy is inherently an exploit.
[flagged]
Would you prefer that they kept this for themselves instead of disclosing it?
I get criticizing their business and what they do wrong, but it doesn't seem right to criticize them for doing the right thing.
It means they are suspect. I think it's right to be wary of motives when they are involved in the very thing they aim to bring awareness to. Questions arise in my mind as to why they would do something like this in the first place.
It's been my experience that the general public doesn't follow patterns and instead focuses on which switch is toggled at any given moment for a company's ethical practices. This is the main reason we are constantly gamed by orgs that have a big-picture view of crowd psychology.
I don't trust them more because of this and maybe they've disclosed it for the wrong reasons, like not allowing a competitor to use it when they don't, but at the end of the day they did disclose a serious issue, and that's good for users.
I understand where you're coming from, by the way, but sometimes the worst person you know does the right thing and it's not fair to criticize them for doing it (you could say nothing, don't have to change your opinion about them, etc). We also don't want someone to go "if I'm bad no matter what I do, then might as well make some money with this" and sell the exploit.
> I understand where you're coming from, by the way, but sometimes the worst person you know does the right thing and it's not fair to criticize them for doing it (you could say nothing, don't have to change your opinion about them, etc). We also don't want someone to go "if I'm bad no matter what I do, then might as well make some money with this" and sell the exploit.
I hear you. I guess I just want to promote more vigilance. Looking at patterns and motives helps us stay balanced about these things IMHO.
What are you even saying? It's like getting upset at somebody who criticizes a criminal because they once helped some grandma across the street. I'm not upset at the criminal because they helped a grandma across the street obviously that's not the fucking point.
I'm not upset, I just don't think we should criticize someone for doing something good. Maybe they're a terrible org, maybe they deserve criticism most of the time, but not in this instance.
It's not like you can't point out that they did a good deed, but that they're still in the shitty business of fingerprinting users.
Also, if people only get the stick no matter what they do, then eventually some will embrace the dark side and at least make money out of it. And that's not good for you.
This isn't a someone. It's a corporation, a legal fiction explicitly designed to dissolve responsibility.
And like a broken clock that is right twice a day, sometimes a corporation also does the right thing, even if for the wrong reasons.
Nothing wrong with pointing out hypocrisy and bullshit, but criticizing something they did right? That's not how I operate. You are, of course, free to do things differently.
The inverse is also true, letting them whitewash their image by pretending they care about your privacy and seek to protect you will be good for their public relations, but only if we let them. I refuse to be this gullible and run to their defense for no apparent reason.
They can pretend all they want. I know what their business is; my opinion of their practices hasn't changed.
And yet, they did a good thing. I will criticize everything else, but not what they did right. It doesn't mean I'll go out of my way to praise them either... if it wasn't your comment, I wouldn't have said anything at all.
It's more like criticising a criminal when they are helping some grandma across the street, thereby treating them more harshly than the criminals that don't do that.
(Also known as the "Copenhagen Interpretation of Ethics": https://gwern.net/doc/philosophy/ethics/2015-06-24-jai-theco... )
Responsible disclosure and commercial fingerprinting aren't contradictory.
[flagged]
If you take their claim that they don’t use vulnerabilities in their products as true, then I don’t see a contradiction. If it isn’t true, then obviously there is a contradiction.
But considering all methods that enable fingerprinting to be vulnerabilities is your own opinion. There are definitely measurable signals based on a user's behavior, rather than on data exposed by the browser itself.
It's a little bit disingenuous to call intentional wont-fix features "vulnerabilities".