It's generally good advice, but I don't see that Safe Browsing did anything wrong in this case. First, it sounds like they actually were briefly hosting phishing sites:
> All sites on statichost.eu get a SITE-NAME.statichost.eu domain, and during the weekend there was an influx of phishing sites.
Second, they should be using the public suffix list (https://publicsuffix.org/) to avoid having their entire domain tagged. How else is Google supposed to know that subdomains belong to different users? That's what the PSL is for.
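To make the mechanics concrete, here's a rough hand-rolled sketch (not the real PSL algorithm, which also handles wildcard and exception rules, and not any browser's actual code) of how a reputation system decides which part of a hostname is "the site". The statichost.eu entry is hypothetical, standing in for what listing it would change:

    # Toy longest-suffix matcher, loosely in the spirit of how browsers use the
    # Public Suffix List. Real implementations parse the full list; this
    # hardcoded set is just for illustration.
    PUBLIC_SUFFIXES = {
        "com",
        "eu",
        "github.io",       # real entry from the PSL's private section
        "statichost.eu",   # hypothetical: what adding the entry would do
    }

    def registrable_domain(host: str) -> str:
        """Return the public suffix plus one extra label (the 'site')."""
        labels = host.lower().split(".")
        for i in range(len(labels)):
            candidate = ".".join(labels[i:])
            if candidate in PUBLIC_SUFFIXES:
                # The label immediately left of the matched suffix is the owner.
                return ".".join(labels[max(i - 1, 0):])
        return ".".join(labels[-2:])  # fallback: treat the last label as the suffix

    # Listed as a public suffix: each customer subdomain is its own "site".
    print(registrable_domain("phisher.statichost.eu"))  # -> phisher.statichost.eu
    # Not listed: everything under example.eu shares one reputation.
    print(registrable_domain("phisher.example.eu"))     # -> example.eu

With the entry present, a Safe Browsing-style system can flag phisher.statichost.eu without tainting statichost.eu itself; without it, the whole domain shares one fate.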
From my reading, Safe Browsing did its job correctly in this case, and they restored the site quickly once the threat was removed.
I'm not saying that Google, or Safe Browsing in particular, did anything wrong per se. My point is primarily that Google has too much power over the internet. I know that what actually happened in this case happened because I didn't put enough effort into fending off the bad guys.
The new separate domain is pending inclusion in the PSL, yes.
Edit: the "effort" I'm talking about above refers to more real-time moderation of content.
> My point is primarily that Google has too much power over the internet.
That is probably true, but in this case I think most people would think that they used that power for good.
It was inconvenient for you and the legitimate parts of what was hosted on your domain, but it was blocking genuinely phishing content that was also hosted on your domain.
Any website-operator employee worth their salary in this area would have told the site's operator this beforehand, and this incident could have been avoided. Hell, even ChatGPT could tell you that by now. The word that comes to mind is incompetence on someone's part, though I don't know the details of who exactly the incompetent one was in this situation. Thankfully, they've learned a lesson and ideally won't make the same mistake going forward.
I disagree, as a professional in this field for over a decade.
For that to be a legitimately backed statement, professionals would have needed to know about the PSL. That condition is largely unmet.
For it to be met, there would need to be documentation in the form of RFCs and whitepapers from industry working groups. That didn't happen.
M3AAWG has only two blog post mentions, both after the big layoffs of 2023, and they say only that it's being used by volunteers and needs support. There's no discussion of the organization, what it's being used for, process/due process, etc.
The PSL wholly lacks the professional outreach needed for such a statement to be true.
I mean, it's a very big field, and it's easy enough for me to armchair quarterback and call it a skill issue without being vulnerable and putting my own credentials into question. There's a whole big world of things to know about making and running websites, and I'll readily admit I don't know everything. I don't do a lot of CSS or website SEO or run ad campaigns, so someone experienced there will run circles around me.
Putting user generated content on its own domain is more on the security side of things to know about running a website, and our industry doesn't regulate who's allowed to build websites. Everyone's got their own set of different best practices.
Regardless of the exact date GitHub moved which kinds of user generated content (UGC) to which domain(s), I do expect a curious webdev in 2025 to have used GitHub, to have wondered at some point what's up with stuff coming from e.g. raw.githubusercontent.com, and to have asked Google about it. They should have walked away with the idea that GitHub puts UGC on a separate domain intentionally for security reasons, even if they never hear mention of the PSL or how exactly it works and is implemented. The /r/webdev post you'll find links to a GitHub blog post that gives a lot of detail as to why they did that, and it doesn't mention the PSL once.
It's fair to point out the PSL isn't common knowledge; I'd agree that it isn't. I don't think that knowledge is necessary, though. All it takes is being a user of GitHub and a modicum of curiosity. I expect anyone who calls themselves a webdev in 2025 to be able to explain to me what git and GitHub are and why they're different. They don't need to know where git came from, but I don't think I'm being unreasonable in asking that much. From there, I expect someone in an interview to be able to come up with an answer as to why raw.githubusercontent.com exists and mumble something about security, even if they can't give specific details about cookies and phishing and how that all works.
It's possible I'm being unreasonable here, but I don't think I am. This isn't knowledge that requires attending W3C meetings about browser standards to come across. Either way, everyone who's read this thread should now know that UGC goes on its own domain, even if they can't give details as to why.
I agree this isn't knowledge that takes a lot. The problem is that these companies don't explain why they do what they do; in fact, a lot of security work along these lines has historically been tight-lipped, secrecy-bound stuff. You can wonder, but the answer isn't out there unless you know an insider willing to break a broadly worded NDA (not gonna happen, and some of them are quite broad).
The idea of segmenting certain types of traffic onto different domains isn't that new. For example, segmenting mail servers into subdomains by type (marketing vs. transactional) was being done as far back as 2010, but it wasn't explained in whitepapers until around 2016 or 2017, by which point there was already irrefutable evidence that reputational systems had been put in place, and that their rules damaged people running small email servers who were illegitimately blocked from delivery for years, with no recourse or disclosure, just imposed cost.
Once they published the whitepapers on that, professionals were on board because they specified what they were looking for, and how it should function. Basic Engineering stuff that people who manage and build these systems need to know to interoperate.
These things need professional outreach that standardizes them in some form or another, which isn't a one-off blog post imo, and that outreach must fully specify function, requirements, feedback mechanisms, and expectations of how it's supposed to work; basic engineering stuff.
The PSL is just the same thing all over again. Big Tech just starts doing something silently that directly imposes cost on others, they don't say what they are doing. Then when it becomes too costly they try to offload it to others calling for support, though if they only do halfsies in a blog post buried in noise, they are only looking for plausible deniability.
The benefit in doing this is in anti-competitive behavior.
Incidentally, while separating email traffic onto subdomains has been standard practice for a while now, these companies recently changed the reputational weights again, and they aren't talking. Now it's the whole domain as a single reputational namespace, not just breakage at the subdomain (bb.aa.com.). No outreach on that as far as I've seen.
There are ways to do things correctly, and then there are ways to do things anti-competitively and coercively. The incentives matched to the outcomes point to which one that happens to be.
How you do something is more important than that you did something in these cases.
If you as a company don't do professional outreach about such changes or standards, and you arbitrarily require something that isn't properly disclosed, punishing everyone who hasn't received that disclosure, then in my mind that is a fair and reasonable case for either gross negligence (supplying the general intent needed to prove malice) or tortious interference with third-party companies' businesses.
That question which you mentioned about asking in an interview (iirc) was actually asked in an Ignite interview, but was cut out from the recordings later, and the answer was we can't talk about what other departments are doing. They may have followed-up on that elsewhere but I never saw anything related to it.
It is critically important to know the reasons why things are structured a certain way or happen; in order to be able to interoperate. This is and has been known and repeated many times since the adoption of OSI & TCP in the 80s/90s with regards to interoperability of systems.
Blindly copying what others do is a recipe for disaster and isn't justifiable in terms of cost, and competent professionals don't roll the dice like that on large projects of that caliber of expense.
This stuff isn't straightforward either: knowing where the reputational namespace stops, what the ramp-up rate (dm/dt) is for volume metrics to warm up a server at each provider, and what objective indicators tell you when you've gone above that arbitrarily designed rate (hint: non-deterministic hidden states). If it takes an insider who knows the system a month to perfectly warm up a new server without reputational consequences, that's extra cost imposed on the company by that platform (which you are competing against for email services).
No disclosure means starting over every time trying to guess at what they are doing, and having breakage later when they change things.
> reddit...
A lot of professionals no longer use reddit because it's a bot-filled echo chamber that wastes valuable time.
Moderators there regularly remove posts over simple disagreement, over conflicts of interest, or to remove access to detailed solutions or methodology.
For an example of all that's wrong there, look at the CodingBootCamp subreddit. There's a moderator there who has, in all probability, been using a bot to destroy a competitor's reputation and harass them for years, attacking the owners and execs and going so far as to harass and stalk their children, all while violating the Moderator Code of Conduct. Crazy and toxic stuff.

---
You can't ever meet professional standards if you don't communicate or properly disclose interop requirements when complex systems are involved.
"Google does good thing, therefore Google has too much power over the internet" is not a convincing point to make.
This safety feature saves a nontrivial number of people from life-changing mistakes. Yes we publishers have to take extra care. Hard to see a negative here.
I respectfully disagree with your premise. In this specific case, yes, "Google does good thing" in a sense. That is not why I'm saying Google has too much power. "Too much" is relative, and whether they do good or bad is debatable, of course, but it's hard to argue that they don't have a gigantic influence on the whole internet, no? :)
Helping people avoid potentially devastating mistakes is of course a good thing.
What point are you trying to make here? You hosted phishing sites on your primary domain, which was then flagged as unsafe. You chose not to use the tools that would have marked those sites as belonging to individual users, and the system worked as designed.
Please note that this tool (PSL) is not available until you have a significant user base. Which probably means a significant amount of spam as well.
Where'd you see/hear that? It hasn't been my experience at least - but maybe I've just been lucky or undercounting the sites.
There are required steps to follow but none are "have x users" or "see a lot of spam". It's mostly "follow proper DNS steps and guidelines in the given format" with a little "show you're doing this for the intended reason rather than to circumvent something the PSL is not meant for/for something the public can't get to anyways" (e.g. tricking rate limits, internal only or single user personal sites) added on top.
https://github.com/publicsuffix/list/wiki/Guidelines#validat...
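For what it's worth, the validation steps in those guidelines are mostly mechanical. If I'm remembering the process right, part of it is publishing a `_psl` TXT record on the requested domain that points at the submission PR; a rough sketch of checking for one (using the third-party dnspython package; the record name and workflow here are my recollection of the guidelines, not gospel):

    # Hedged sketch: checks whether a _psl TXT record resolves for a domain.
    # The exact record contents required are defined by the PSL guidelines,
    # not by this script. Requires: pip install dnspython
    import dns.resolver

    def psl_txt_records(domain: str) -> list[str]:
        """Return the TXT record(s) published at _psl.<domain>, if any."""
        try:
            answer = dns.resolver.resolve(f"_psl.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return []
        return [rdata.to_text().strip('"') for rdata in answer]

    print(psl_txt_records("statichost.eu"))  # e.g. a github.com/publicsuffix/list pull request URL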
"Projects that are smaller in scale or are temporary or seasonal in nature will likely be declined. Examples of this might be private-use, sandbox, test, lab, beta, or other exploratory nature changes or requests. It should be expected that despite whatever site or service referred a requestor to seek addition of their domain(s) to the list, projects not serving more then thousands of users are quite likely to be declined."
Maybe the rules have changed, or maybe you were lucky? :)
Ah yeah, looks like it was added in 2022 https://github.com/publicsuffix/list/wiki/Guidelines/_compar...
Thanks for the note!
You're not wrong. You just picked a poor example which illustrates the opposite of the point you're making.
Fair enough! :)
> but it's hard to argue that they don't have a gigantic influence on the whole internet, no? :)
Then don't relate this to safe browsing. What is the connection?
You could have just written a one liner. Google has too much power. This has nothing to do with safe-browsing.
In fact you could write...
- The USA/China/EU etc. has too much power...
You use the word "relative" in another reply...
In the same way, my employer has relatively too much power...
Is it? Companies like Google coddle users instead of teaching them how to browse smarter and detect phishing for themselves. Google wants people to stay ignorant so they can squeeze them for money instead of phishers.
How does Google get money out of people in that case? As a corporation, Google contributes greatly to the education sector and also profits greatly from it, so they seem pro-education to me, and are merely making the best of a bad situation. But I'd love to hear how Google extracts money from the people it has protected from phishing schemes in some secret way that I haven't considered. I do happen to have Google stock in my portfolio, though, so maybe that indicts my entire comment in your eyes.
This is a fine mentality when it takes a certain amount of "Internet street smarts" (a term used in the article) to access the internet - at least beyond AOL etc.
But over half of the world has internet access, mostly via Chrome (largely thanks to its inclusion in Android). Some frontline protection (that can be turned off) is warranted when you need to cater to the millions of people who just started accessing the internet today, and the billions who don't/can't/won't put in the effort to learn those "Internet street smarts".
How does flagging a domain that was actively hosting phishing sites demonstrate that Google has too much power? They do, but this is a terrible example, undermining any point you are trying to make.
The thing about Google is that they regularly get this stuff wrong, and there is no recourse when they do.
I think most people working in tech know the extent to which Google can screw over a business when they make a mistake, but the gravity of the situation becomes much clearer when it actually happens to you.
This time it's a phishing website, but what if the same happens five years down the line because of an unflattering page about a megalomaniac US politician?
Then that would be an example of a system having failed and one that needs to change. Instead, this is an example of a hosting company complaining about the consequences of skipping some of the basic, well-documented safety and security practices that help to isolate domains for all sorts of reasons, from reputation to little things like user cookies.
This article shows an example of this process working as intended though.
The user's site was hosting phishing material. Google showed the site owner what was wrong, provided concrete steps to remedy the situation, and removed the warning within a few hours of being notified that it was resolved.
Google's support sucks in other ways, but this particular example went very smoothly.
> Oh my god, my site was unavailable for 7 hours because I hosted phishing!
Won't someone please think of the website operator?
Maybe "Google can have a large impact" is a more accurate way of putting it than "power".
There are two aspects to the Internet: the technical and the social.
In the social, there is always someone with most of the power (distributed power is an unstable equilibrium), and it's incumbent upon us, the web developers, to know the current status quo.
Back in the day, if you weren't testing on IE6 you weren't serving a critical mass of your potential users. Nowadays, the nameplates have changed but the same principles hold.
The social side wasn't always dominated by a single power; that only started with the later social networks, not the early ones. And now people are retreating to smaller communities anyway.
Testing on IE6 wasn't the requirement; testing on all browsers was. IE shipped by default on Windows and basically forced itself into the browser conversation with an incomplete browser.
I don't mean social as in social network. I mean that people have always been a key aspect of the technology and how it practically works.
Yes, yes, IE6 shipped by default on Windows. And therefore, if you wanted a website that worked, you tested against IE6. Otherwise people would try to use your website, it wouldn't work, and they wouldn't blame the browser; they would blame your website.
Those social aspects introduce a bunch of not necessarily written rules that you just have to know and learn as you develop for the web.
> Google has too much power over the internet.
In this case they did use it for good cause. Yes, alternatively you could have prevented the whole thing from happening if you cared about customers.
Exactly.
> Second, they should be using the public suffix list (https://publicsuffix.org/) to avoid having their entire domain tagged.
NO, Google should be "mindful" (I know companies are not people but w/e) of the power it unfortunately has. Also, Cloudflare. All my homies hate Cloudflare.
It is mindful.
... by using the agreed-upon tool for tracking domains that treat themselves as TLDs for third-party content: the public suffix list. Microsoft Edge and Firefox also use the PSL, and their mechanisms for protecting users would be similarly suspicious that attacks originating from statichost.eu came from the owners of that domain and not from some third party that happened to independently control foo.statichost.eu.
Getting on the public suffix list is easier said than done [1]. They can simply say no if they feel like it, and they make sure to keep that right by operating as a "project" rather than a "business" [2], which has its pros and cons.
[1] https://github.com/publicsuffix/list/blob/main/public_suffix...
[2] https://groups.google.com/g/publicsuffix-discuss/c/xJZHBlyqq...
> Getting on the public suffix list is easier said than done [1].
Can you elaborate on this? I didn't see anything in either link that would indicate unreasonable challenges. The PSL naturally has a series of validation requirements, but I haven't heard of any undue shenanigans.
Is it great that such vital infrastructure is held together by a ragtag band of unpaid volunteers? No; but that's hardly unique in this space.
> Second, they should be using the public suffix list (https://publicsuffix.org/) to avoid having their entire domain tagged. How else is Google supposed to know that subdomains belong to different users? That's what the PSL is for.
How is this kinda not insane? https://publicsuffix.org/list/public_suffix_list.dat
A centralized list, where you have to apply to be included and it's up to someone else to decide whether you will be allowed in? How is this what they went for: "You want to specify some rules around how subdomains should be treated? Sure, name EVERY domain that this applies to."
Why not just something like https://example.com/.well-known/suffixes.dat at the main domain or whatever? Regardless of the particulars, this feels like it should have been an RFC and a standard that avoids such centralization.
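Purely as a strawman of what that decentralized version might look like (nothing here is a real standard; the path and format are made up for illustration), a client could fetch a self-declared rules file:

    # Hypothetical sketch only: there is no such standard. It imagines a client
    # fetching a self-declared suffix list from a well-known path and treating
    # each listed entry as its own registrable boundary. The obvious catch,
    # raised downthread, is that the site itself controls this file.
    import urllib.request

    def fetch_declared_suffixes(domain: str) -> set[str]:
        """Fetch a (made-up) /.well-known/suffixes.dat listing self-declared suffixes."""
        url = f"https://{domain}/.well-known/suffixes.dat"
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        return {
            line.strip()
            for line in body.splitlines()
            if line.strip() and not line.startswith("//")  # same comment style as the PSL
        }

    # A browser would then have to decide how far to trust a statement like
    # {"statichost.eu"} coming from statichost.eu itself.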
There was an IETF working group that was working on a more distributed alternative based on a DNS record (so you could make statements in the DNS about common administrative control of subdomains, or lack of such common control, and other related issues). I believe the working group concluded its work without successfully creating a standard for this, though.
The problem is that you then have to trust the site's own statement about whether its subdomains are independent.
Yes, it's generally good advice to keep user content on a separate domain.
That said, there are a number of IT professionals who aren't aware of the PSL, since these initiatives had essentially no visibility before 2023, still don't get much advertisement, and were never stated as a requirement. They largely just started being used silently by big players, which itself presents issues.
There are hundreds if not thousands of industry whitepapers, and afaik there are only one or two places where the PSL is mentioned in industry working groups, and those were blog posts, not whitepapers (at M3AAWG). There's no real documentation of the organization, what it's for, or how it should be used in any of the working group whitepapers. Just that it's being used and needs support; not something professionals would pay attention to imo.
> Second, they should be using the public suffix list
This is flawed reasoning as it stands. It's hard to claim this with any basis when professionals don't know about the PSL and only a small subset just arbitrarily started using it; it reads more like after-the-fact justification for throwing the baby out with the bath water.
Security is everyone's responsibility, and Google could have narrowly targeted the offending subdomains instead of blocking the top level. They didn't do that. Worse, that behavior could have been automated in a way that extends the process and gives the top-level provider a notice period before the block starts hitting everyone's devices. Apparently they didn't do that either.
Regardless, no single entity should be able to dictate what other people perceive or see arbitrarily from their devices (without a choice; opt-in) but that is what they've designed these systems to do.
Enumerating badness doesn't work. Worse, say the domain names get reassigned to another unrelated customer.
Those are different people, but they still get blocked, as happens with small mail servers quite often. Who is responsible when someone who hasn't engaged in phishing is arbitrarily punished without due process? And who is to say Google isn't doing this purposefully to protect its monopolies in services it also provides?
It's a perilous, tortuous path where trust cannot be given, because they've violated that trust in the past and have little credibility, with every net incentive pointing toward their own profit at the expense of others. They are even willing to regularly break the law, and have never been held to account for it (e.g. the Google Street View Wi-Fi wiretapping).
Hanlon's razor was intended as a joke, but there are people who use it literally and inappropriately to deceitfully take advantage of others.
Gross negligence coupled with some form of loss is sufficient for general intent, which makes the associated actions malicious.
Throwing out the baby with the bath water, without telling anyone and without warning, is gross negligence.
I'm not sure what to tell you. I'm a professional with nearly two decades of experience in this industry, and I don't read any white papers. I read web publications like Smashing Magazine or CSS Tricks, and more specifically authors like Paul Irish, Jake Archibald, Josh Comeau, and Roman Komarov. Developers who talk about the latest features and standards, and best practices to adopt.
The view that professionals in this industry exclusively participate in academic circles runs counter to my experience. Unless you're following the latest AI buzz, most people are not spending their time on arXiv.
The PSL is surely an imperfect solution, but it's solving a problem for the moment. Ideally a more permanent DNS-based solution would be implemented to replace it. Though some system akin to SSL certificates would be necessary to provide an element of third-party trust, as bad actors could otherwise abuse it to segment malicious activity on their own domains.
If you're opposed to Safe Browsing as a whole, both Chromium and Firefox allow you to disable that feature. However, making it an opt-in would essentially turn off an important security feature for billions of users. This would result in a far greater influx of phishing attacks and the spread of malware. I can understand being opposed to such a filter from an idealistic perspective, but practically speaking, it would do far more harm than good.
You seem to have misunderstood what I said, conflating whitepapers with academia, and then constructed the rest of your argument on that faulty foundation.
Whitepapers aren't the sole domain of academia. What we are talking about aren't hosted on Arxiv. We are talking about industry working groups.
The M3AAWG working group and the CA/Browser Forum publish RFCs and whitepapers that professionals in this area do read regularly.
There's been insufficient (or no) professional outreach about the PSL. Large players can't just start doing things without disclosing the interop requirements, because doing so harms others; it neglects the fallout that the lack of disclosure imposes on everyone else within their sphere of influence, which, for a company running the second most popular browser, is global.
When you do so without first doing certain reasonable and expected things (expected of any professional organization), you are being grossly negligent. That is sufficient to prove general intent for malice in many cases; a reasonable person in such circumstances should have known better.
This paves the way for proving tortious or vexatious interference with a contract, which is a tort and actionable when brought against the entity.
> The PSL is surely an imperfect solution, but it's solving a problem for the moment.
It is not, because the disclosure needed for interop hasn't happened, and in such circumstances that predictably creates a mountain of problems without visibility: a time bomb/poison pill where crisis arises later from the brittle structure, following shock doctrine and the snowball effect (a common tactic of the corrupt and deceivers alike).
Your entire line of reasoning is critically flawed. You presume that trust is important here and that such systems require trust, but trust has nothing to do with the reputational metrics these systems use to impose cost. Apples to oranges.
You can't enumerate badness. Lots of professionals know this. Historic reputational blacklists also punish the innocent after the fact when they aren't properly disclosed or engineered for due process. A permanent record deprives anyone of using a blacklisted entry after it changes hands from the criminal to some unsuspecting person.
Your reasoning specifically frames a false dichotomy about security. It follows almost exactly the same reasoning the Nazis used (ref at the bottom).
No one is arguing that Safe Browsing and other mechanisms are useful as mitigation, but they are temporary solutions that must be disclosed to a detailed level that allows interoperability to become possible.
If you only tell your friends, and impose those draconian costs on everyone else, you are abusing your privileged position of trust for personal gain (a form of corruption) and causing harm to others, even if you can't see it.
Chrome does not have an opt-out. You have to re-compile the browser from scratch to turn those subsystems off. Same with Firefox. That is not allowing you to disable that feature since users aren't reasonably expected to be able to recompile their software to change a setting.
There is no idealism/pacifism here. I'm strictly being pragmatic.
You neglect the harm you don't directly see, in the costs imposed on businesses. Second-, third-, and n-order effects must be considered but have not been (and the consideration must grow with the scope/scale of impact).
There are a few areas where doing such blind things may directly threaten existential matters (e.g. food production, where failure of logistics leads to shortages, which whipsaw into chaos). It won't happen immediately, and we live in an increasingly brittle but still somewhat resilient society, but it will happen eventually if such harm is adopted and allowed as standard practice; though the method is indirect, the scope starts off large.
If you only look at the small part of the cycle of dynamics that favors the argument you set in motion, ignoring everything else, that is cherry-picking, also known as the fallacy of isolation.
Practically speaking, that line of reasoning is unsound and without foundational support. It's important to discern and reason about things as they actually exist in reality.
Competent professionalism is not an idealistic perspective. The harm naturally comes when one doesn't meet well established professional requirements. When the rule of law fails to hold destructive people to account for their actions; that's a three-alarm fire as a warning sign of impending societal collapse. The harms of which are incalculable.
Ref: "Of course the people don't want war. But after all, it's the leaders of the country who determine the policy, and it's always a simple matter to drag the people along whether it's a democracy, a fascist dictatorship, or a parliament, or a communist dictatorship. Voice or no voice, the people can always be brought to the bidding of the leaders.
(Your implications follow this part closely): That is easy. All you have to do is tell them they are being attacked, and denounce the pacifists for lack of patriotism, and exposing the country to greater danger."