Move to the cloud they said. It will be more secure than your intranet they said. Only fools pay for their own Ops team they said.
I’m so old and dumb that I don’t even understand why an app for internal Microsoft use is even accessible from outside its network.
The last decade has seen an increasing push toward what Google started calling "Zero Trust"[0] and toward dropping VPNs entirely. The issue being that once someone got into a VPN, it was much, much harder to prevent them from accessing important data.
So everything "internal" is now also external and required to have its own layer of permissions and the like, making it much harder to use one exploit to access another service, as happened in the article.
[0] https://cloud.google.com/learn/what-is-zero-trust
I don’t really see that as an argument for this. You should still use a VPN as an additional layer of security, assuming you use a proper protocol. Zero trust then applies to the internal network.
In the bad old days, if your company-internal tools were full of XSS bugs, fixing them wasn't a priority, because the tools could only be accessed with a login and VPN connection.
So outside attackers have already been foiled, and insider threats have a million attack options anyway, what's one more? Go work on features that increase revenue instead.
In principle the idea of "zero trust" was to write your internal-facing webapps to the same high standards as your externally-facing code. You don't need the VPN, because you've fixed the many XSS bugs.
In practice zero trust at most companies means buying something extremely similar to a VPN.
> In principle the idea of "zero trust" was to write your internal-facing webapps to the same high standards as your externally-facing code. You don't need the VPN, because you've fixed the many XSS bugs.
But why stop there? If these apps don't need to be accessible from the public internet, then by setting up a VPN an attacker needs to exploit both the VPN and the service to have an impact. Denial of a specific service is harder, and exploiting known CVEs is harder.
Because the protection that the VPN provides decreases the risk of having bugs to the point where they won't get prioritized, ever.
That is just bad management, to be fair. Companies need to intentionally increase risks before they can fix them?
Eh, at the same time VPNs can be a huge mess of problems on top of all the other problems that exist. Asymmetric routing is always a fun one in complex topologies.
Rule #1 of business, government, or education: Nobody, ever, ever, does what they “should.”
Even here: Hacker News “should” support 2 factor authentication, being an online forum literally owned by a VC firm with tons of cash, but they don’t.
Should they? From a threat modeling perspective, what are the consequences for HN of a user having their password compromised? Are those consequences serious enough to warrant the expense and added complexity of adding MFA?
I don't really understand this reasoning.
HN allows for creating a user. HN requires every post and comment to be created by a user. HN displays the user for each post and comment. HN allows for browsing users' post and comment history. HN allows for flagging posts and comments, but only by users. HN allows for voting on posts and comments, but only by users. HN also has some baseline guardrails for fresh accounts. Very clearly, the concept of user accounts is central to the overall architecture of the site.
And you ask if it is in HN's interest to ensure people's user accounts remain in their control? Literally every mutative action you can take on HN is bound to a user as far as I can tell, covering all content submission. They even turn on captchas from time to time for combating bots. [0] How could it not be in their interest to ensure people can properly secure their user accounts?
And if I further extend this thinking, why even perform proper password practices at all (hashing and salting)? Heck, why even check passwords, or even have user accounts at all?
So in my thinking, this is not a reasonable question to ponder. What is, is that maybe the added friction of more elaborate security practices would deter users, or at least that's what [0] suggests to me. But then the importance of user account security or the benefit of 2FA really isn't even a question, it's accepted to be more secure, it's more a choice of giving up on it in favor of some perceived other rationale.
[0] https://news.ycombinator.com/item?id=34312937
TBF I didn't ask if it was in their interests, I asked if the consequences of a password related attack were serious enough to warrant the expense of implementing MFA.
Let's look at some common attacks:
- Single user has their password compromised (e.g. by a keylogger). Here the impact to HN is minimal, the user may lose their account if they can't get through some kind of reset process to get access to it. MFA may protect against this, depending on the MFA type and the attacker.
- Attacker compromises HN service to get the password database. MFA's not really helping HN here at all and assuming that they're using good password storage processes the attacker probably isn't retrieving the passwords anyway.
- Attacker uses a supply chain attack to get MITM access to user data via code execution on HNs server(s). Here MFA isn't helping at all.
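On the "good password storage processes" point in the second scenario: roughly what that means, as a sketch using only the Python standard library (the iteration count and key length here are illustrative choices, not anything HN is known to use):

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Hash a password with a fresh per-user random salt (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes,
                    *, iterations: int = 600_000) -> bool:
    """Recompute the hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("hunter2")
assert verify_password("hunter2", salt, stored)
assert not verify_password("wrong", salt, stored)
```

With per-user salts and a slow KDF like this, a stolen password database doesn't readily yield the passwords themselves, which is why MFA buys little extra in that particular scenario.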
It's important to recognize that secure is not a binary state, it's a set of mitigations that can be applied to various risks. Not every site will want to use all of them.
Implementing mechanisms has a direct cost (development and maintenance of the mechanism) and also an indirect cost (friction for users), each service will decide whether a specific mitigation is worth it for them to implement on that basis.
Whether they are "serious enough" is a perceived attribute, so it is on them to evaluate, not on any one of us. Depending on that evaluation, it could mean a blank check or a perpetual zero. Given the way HN is architected (as described prior), and it being a community space, it makes no sense to me not to do it in general, and even considering costs, I'm not aware of e.g. TOTP 2FA being particularly expensive to implement at all.
Certainly, not doing anything will always be the more frugal option, and people are not trading on here, so financial losses of people are not a concern. The platform isn't monetized either. Considering finances is important, but reversing the arrow and using it as a definitive reason to not do something is not necessarily a good idea.
Regarding the threat scenarios, MFA would indeed help the most against credential reuse based attacks, or in cases of improper credential storage and leakage, but it would also help prevent account takeovers in cases of device compromise. Consider token theft leading to compromised HN user account and email for example - MFA involving an independent other factor would allow for recovery and prevent a complete hijack.
yes it would help against some attack scenarios, no argument there. The question is, does HN regard it as sufficiently important? Changing the codebase to implement MFA would at the least require some development effort/additional code, which has a cost. Whilst I'm not privy to HN's development budget, given that it doesn't seem to change much, my guess is they're not spending a lot at the moment...
MFA can also add a support cost, where a user loses their MFA token. If you allow e-mail only reset, you lose some security benefits, if you use backup tokens, you run the risk that people don't store those securely/can't remember where they put them after a longer period.
As there's no major direct impact to HN that MFA would mitigate, the other question is, is there a reputational impact to consider?
I'd say the answer to that is no, in that all the users here seem fine with using the site in its current form :)
Other forum sites (e.g. reddit) do offer MFA, but I've never seen someone comment that they use reddit and not HN due to the relative availability of that feature, providing at least some indication that it's not a huge factor in people's decision to use a specific site.
> what's the consequences for HN of a user having their password compromised
HN does not enforce anonymity, so some user accounts (many startup founders, btw) are tied to their owners' real identities.
A compromised password could allow a bad actor to impersonate those users. That could be used to scam others or to kickstart some social engineering that could be used to compromise other systems.
Indeed a consequence for the individual user could be spammed posts, but for scams, I'd guess that HN would fall back on their standard moderation process.
The question was though, what are the consequences for HN, rather than individual users, as it's HN that would take the cost of implementation.
Now if a lot of prominent HN users start getting their passwords compromised and that leads to a hit on HNs reputation, you could easily see that tipping the balance in favour of implementing MFA, but (AFAIK at least) that hasn't happened.
Now ofc you might expect orgs to be pro-active about these things, but having seen companies that had actual financial data and transactions on the line drag their feet on MFA implementations in the past, I kind of don't expect that :)
I think this conversation would benefit from introducing scale and audience into the equation.
Individual breaches don't really scale (e.g. device compromise, phishing, credential reuse, etc.), but at scale everything scales. At scale then, you get problems like hijacked accounts being used for spam and scams (e.g. you can spam in comment sections, or replace a user's contact info with something malicious), and sentiment manipulation (including vote manipulation, flagging manipulation, propaganda, etc.).
HN, compared to something like Reddit, is a fairly small-scale operation. Its users are also more on the technically involved side. It makes sense then that due to the lower velocity and unconventional userbase, they might still have this under control via other means, or can dynamically adjust to the challenge. But on its own, this is not a technical trait. There's no hard and fast rule to tell when they cross the boundary into the territory where adding manpower is worse than just spending the days or weeks to implement better account controls.
I guess if I really needed to put this into some framework, I'd weigh the amount of time spent on chasing the aforementioned abuse vectors compared to the estimated time required to implement MFA. The forum has been operating for more than 18 years. I think they can find an argument there for spending even a whole 2 week sprint on implementing MFA, though obviously, I have no way of knowing.
And this is really turning the bean counting to the maximum. I'm really surprised that one has to argue tooth and nail about the rationality of implementing basic account controls, like MFA, in the big 2025. Along with session management (the ability to review all past and current sessions, to retrieve an immutable activity log for them, and a way to clear all other active sessions), it should be the bare minimum these days. But then, even deleting users is not possible on here. And yes, I did read the FAQ entry about this [0]. It misses the point hard: deleting a user doesn't necessarily have to mean the deletion of their submissions, and no, not deleting submissions doesn't render the action useless, because as described, user hijacking can and I'm sure does happen. A disabled user account "wouldn't be possible" to hijack, however. I guess one could reasonably take issue with calling this user deletion though.
[0] https://news.ycombinator.com/newsfaq.html
It's interesting you suggest a two week sprint for this. How large do you think HN's development team is? Do you know if they even have a single full time developer?
I don't, but the lack of changes in the basic functionality of the site in the years I've used it makes me feel that they may not have any/many full time devs working on it...
I really don't think the site is like this because they lack capacity. It's pretty clearly an intentional design choice in my view, like with Craigslist.
But no, I do not have any information on their staffing situation. I presume you don't either though, do you?
Indeed I don't. However, if we examine the pace of new features over the last several years (I can't think of a single way this site has changed over that time period), it's reasonable to surmise that there isn't a lot of development of the user accessible/visible portions of the site, and that leads me to guess that they don't have much in the way of dev resources.
Oh boy, this should be good. Mark my words, this will be followed by a "proof" of nonexistence, in the following form:
"Well, let's build a list of attacks that I can think of off-the-cuff. And then let's iterate through that list of attacks: For each attack, let's build a list of 'useful' things that attackers could possibly want.
Since I'm the smartest and most creative person on the planet, and can also tell the future, my lists of ideas here will actually be complete. There's no way that any hacker could possibly be smart enough or weird enough to think of something different! And again, since I'm the smartest and most creative --and also, magically able to tell the future-- and since I can't think of anything that would be 'worth the cost', then this must be a complete proof as to why your security measure should be skipped!"
I'm firmly in the pro 2FA camp, but merely as a point of discussion: the Arc codebase is already so underwater with actual features that would benefit a forum, and if I changed my password to hunter2 right now the only thing that would happen is my account would shortly be banned when spammers start to hate-bomb or crypto-scam-bomb discussion threads. Dan would be busy, I would be sad, nothing else would happen
For accounts that actually mean something (Microsoft, Azure, banking, etc), yes, the more factors the better. For a lot of other apps, the extra security is occupying precious roadmap space[1]
1: I'm intentionally side-stepping the "but AI does everything autonomously" debate for the purpose of this discussion
Everyone else: I need unique 128-character passwords for every site I ever visit with unphishable FIDO keys for MFA.
Me: I didn't give the store website permission to save my credit card. If someone logs in, they'll know I ordered pants there.
I am currently having this debate at $DAYJOB, having come from a zero trust implementation to one using fucking Cloudflare Warp. The cost of your "just use a VPN" approach (or, if I'm understanding your point correctly, "use a VPN and zero trust"(?!)) is that VPNs were designed for on-premises software. In modern times, the number of cases where one needs to perform a fully authenticated, perfectly valid action from a previously unknown network on previously unconfigured compute is bigger than in the "old days".
GitHub Actions are a prime example. Azure's network, their compute, but I can cryptographically prove it's my repo (and my commit) OIDC-ing into my AWS account. But configuring a Warp client on those machines is some damn nonsense
If you're going to say "self hosted runners exist," yes, so does self-hosted GitHub and yet people get out of the self-hosted game because it eats into other valuable time that could be spent on product features
> is that VPNs were designed for on-premises software.
The way I see this is that a VPN is just a network extender. Nothing to do with being designed for on-premises software. By using a VPN as an additional layer, most vulnerability scanners can't scan your services anymore. It reduces the likelihood that you are immediately impacted by some publicly known CVE. That is the only purpose of the VPN here.
A VPN may also have vulnerabilities, but to have an impact, both a VPN and a service vulnerability are required at the same time. The more different services/protocols you have behind the VPN, the more useful it is. It might not make sense if you only need SSH, for example. Then you have a 1:1 protocol ratio, and SSH may be the more secure protocol.
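The "both at the same time" argument is just multiplying probabilities, assuming the VPN flaw and the service flaw occur independently. With made-up illustrative numbers:

```python
# Hypothetical, independent exploit probabilities over some window.
# The numbers are illustrative only, not measurements.
p_vpn = 0.05      # an exploitable VPN flaw appears and gets used
p_service = 0.20  # same, for a given internal service

# Service exposed directly: the service flaw alone suffices.
p_direct = p_service

# Service behind the VPN: the attacker needs both flaws in the same window.
p_layered = p_vpn * p_service

assert p_layered < p_direct
print(f"direct: {p_direct:.2f}, behind VPN: {p_layered:.3f}")
```

The independence assumption is the weak spot: if the VPN concentrator itself is the vulnerable service (as with several enterprise VPN 0-days), the two events are correlated and the layering buys much less.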
In theory, for automated traffic like that you should probably be using a plain Access application with a service token rather than WARP
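For what that looks like in practice: a Cloudflare Access service token is just a pair of headers (`CF-Access-Client-Id` / `CF-Access-Client-Secret`) attached to each request, so automated clients never need a WARP agent. A sketch with the standard library; the URL and environment variable names are placeholders:

```python
import os
import urllib.request

def access_request(url: str) -> urllib.request.Request:
    """Build a request carrying a Cloudflare Access service token.

    Access authenticates non-interactive clients via these two headers,
    issued per-service in the Zero Trust dashboard.
    """
    return urllib.request.Request(url, headers={
        "CF-Access-Client-Id": os.environ["CF_ACCESS_CLIENT_ID"],
        "CF-Access-Client-Secret": os.environ["CF_ACCESS_CLIENT_SECRET"],
    })

# Usage (hypothetical internal endpoint):
# with urllib.request.urlopen(access_request("https://internal.example.com/api")) as r:
#     print(r.status)
```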
Does having a VPN/intranet preclude zero trust? It seems you could do both with the private network just being an added layer of security.
It doesn't, but from my perspective the thinking behind zero trust is partly to stop treating networking as a layer of security. Which makes sense to me - the larger the network grows, the harder to know all its entry-points and the transitive reach of those.
A VPN? Yes, by definition. Zero trust requires that every connection is authenticated and users are only granted access to the app they request. They never “connect to the network” - something brokers that connection to the app in question.
VPN puts a user on the network and allows a bad actor to move laterally through the network.
It doesn't have to. There's nothing to stop you using a VPN as an initial filter to reduce the number of people who have access to a network and then properly authenticating and authorizing all access to services after that.
In fact, I'd say is a good defence-in-depth approach, which comes at the cost of increased complexity.
It also prevents the whole world from scanning your outdated public interfaces. Before they can do that, they need to bypass the VPN.
If there are tens of different services, is it more likely that one of them has a vulnerability than that both the VPN and a service do? And a vulnerability in the VPN alone does not matter if your internal network is built as if it were facing the public world. You might be able to patch it before a vulnerability in the other services is found.
I’m not saying you can’t have your own definition.
But I am saying that a VPN isn’t zero trust, by the agreed upon industry definition. There’s no way to make a VPN zero trust, and zero trust was created specifically to replace legacy VPNs.
The zero trust architecture implies (read: requires) that authentication occurs at every layer. Token reuse constitutes a replay attack that mandatory authentication is supposed to thwart. Bypass it and the system's security profile reverts to perimeter security, with the added disadvantage of that perimeter being outside your org's control.
The big problem with the ZT approach is that smaller shops don't have a lot of developers and testers (some maybe with a security inclination) to be certain to a somewhat high degree that their app is written in a secure manner. Or be able to continuously keep abreast of every new security update Microsoft or other IdP makes to their stack.
It is easy for Google/Microsoft and any other FAANG like company to preach about Zero Trust when they have unlimited (for whatever value of unlimited you want to consider) resources. And even then they get it wrong sometimes.
The simpler alternative is to publish all your internal apps through a load balancer / API gateway with a static IP address, put it behind a VPN and call it a day.
> publish all your internal apps through a load balancer / API gateway with a static IP address, put it behind a VPN and call it a day.
Or just use Cognito. It can wrap up all the ugly Microsoft authentication into its basic OAuth, and API Gateway can use and verify Cognito tokens for you transparently. It's as close to the Zero Trust model as we could get in a small developer shop.
Zero trust is a good concept turned into a dumb practice. Basically people buying Google's koolaid for this forgot about "defense in depth". Yeah, authenticating every connection is great, throwing a big effing moat around it too is better.
The other thing is most companies are not Google. If you're a global company with hundreds of thousands of people who need internal access, moats may be non-ideal. For a business located in one place, local-only on-premise systems which block access to any country which they don't actively do business with is leaps and bounds better.
> Move to the cloud they said. It will be more secure then your intranet they said. Only fools pay for their own Ops team they said.
It seems that the fundamental issue surfaced in the blog post is that developers who work on authorization in resource servers are failing to check basic claims in tokens such as the issuer, the audience, and the subject.
If your developers are behind this gross oversight, do you honestly expect an intranet to make a difference?
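For concreteness, the "basic claims" checks in question amount to something like the sketch below. It operates on an already-decoded claims dict (signature verification is assumed to have happened upstream), and `TRUSTED_ISSUER`/`MY_AUDIENCE` are placeholder values, not anything from the post:

```python
TRUSTED_ISSUER = "https://login.example.com/my-tenant/v2.0"  # placeholder
MY_AUDIENCE = "api://my-internal-app"                        # placeholder

def authorize(claims: dict) -> bool:
    """Reject tokens minted by another issuer or intended for another app."""
    if claims.get("iss") != TRUSTED_ISSUER:
        return False  # token from a foreign (possibly attacker-controlled) tenant
    aud = claims.get("aud")
    if MY_AUDIENCE not in (aud if isinstance(aud, list) else [aud]):
        return False  # token was issued for some other API
    if not claims.get("sub"):
        return False  # no subject: nobody to authorize
    return True

assert authorize({"iss": TRUSTED_ISSUER, "aud": MY_AUDIENCE, "sub": "user-1"})
assert not authorize({"iss": "https://login.example.com/evil/v2.0",
                      "aud": MY_AUDIENCE, "sub": "user-1"})
```

Skipping any one of those three lines is exactly the kind of "gross oversight" an intranet would merely hide rather than fix.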
Listen, the underlying issue is not cloud vs self-hosted. The underlying issue is that security is hard and in general there is no feedback loop except security incidents. Placing your apps on an intranet, or behind a VPN, does nothing to mitigate this issue.
But of course it does provide an additional layer of security that indeed could have reduced the likelihood of this issue being exploited.
For me, the core of the discovered issue was that applications intended purely for use by internal MS staff were discoverable and attackable by anyone on the Internet, and some of those applications had a mis-configuration that allowed them to be attacked.
If all those applications had been behind a decently configured VPN service which required MFA, any attacker who wanted to exploit them would first need access to that VPN, which is another hurdle to cross and would reduce the chance of exploitation.
With a target like MS (and indeed most targets of any value) you shouldn't rely solely on the security provided by a VPN, but it can provide another layer of defence.
For me the question should be, "is the additional security provided by the VPN layer justified against the costs of managing it, and potentially the additional attack surface introduced with the VPN".
I work at a corporation that uses Fortinet, not just for VPN but for AV and web filtering. It aggregates traffic, increases the attack surface, and makes us vulnerable to zero-day attacks. All to protect sensitive data that is almost entirely composed of connections from Microsoft software to Microsoft servers, using all the normal SSO/authorisation stuff. It probably is required from a compliance perspective, but it just seems like a massive security tradeoff.
Everything in security is a tradeoff, and unfortunately compliance risks are real risks :D
That said yep corps over-complicate things and given the number of 0-days in enterprise VPN providers, it could easily be argued that they add more risk than they mitigate.
That's not to say a good VPN setup (or even allow-listing source IP address ranges) doesn't reduce exposure of otherwise Internet visible systems, reducing the likelihood of a mis-configuration or vulnerability being exploited...
Yeah agreed. And some of these products can be configured to be more specific in allow-listing users to particular services. But only if they are actually configured to do that.
"The underlying issue is that security is hard and in general there is no feedback loop except security incidents."
this is true tbh. Computer architecture is already hard enough, and cyber security is like a whole different field, especially if the system/program is complex.
For me, I don't think that the application being publicly exposed (i.e. not on an intranet) is really the problem.
I think the real problem is that these applications (Entra ID) are multi-tenant, rather than a dedicated single-tenant instance.
Here, we have critical identity information that is being stored and shared in the same database with other tenants (including malicious attackers). This makes multi-tenancy violations common. Even if Entra ID had a robust mechanism to perform tenancy checks, i.e. that an object belongs to some tenant, there are still vulnerabilities. For example, as you saw in the blog post, multi-tenant requests (requests that span two or more tenants) are fundamentally difficult to authorize. A single mistake can lead to complete compromise.
Compare this to a single-tenant app. First, the attacker would need to be authenticated as a user within your tenant. This makes pre-auth attacks more difficult.
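The tenancy check being discussed is conceptually simple, which is part of why missing it is so dangerous: it has to be applied on every single object access, not just at login. A toy sketch (the in-memory "database" and names are mine, purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Resource:
    id: str
    tenant_id: str

# Illustrative store; a real system would scope the query itself by tenant
# rather than filter after the fetch.
DB = {"obj-1": Resource(id="obj-1", tenant_id="tenant-a")}

def fetch(resource_id: str, caller_tenant: str) -> Resource:
    """Tenancy check: an object is only visible to callers from its own tenant."""
    obj = DB[resource_id]
    if obj.tenant_id != caller_tenant:
        raise PermissionError("cross-tenant access denied")
    return obj

assert fetch("obj-1", "tenant-a").id == "obj-1"
try:
    fetch("obj-1", "tenant-b")  # attacker authenticated in another tenant
    raise AssertionError("should have been denied")
except PermissionError:
    pass
```

In a genuinely single-tenant deployment this check is vacuous, which is the parent's point: the multi-tenant design is what makes every one of these comparisons security-critical.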
I guess the term "defense in depth" has fallen out of fashion?
That is probably still good advice for most companies. Joe's roof fixing business may be the best roof fixing business in 3 states, but would you want them to run their own server for their website, email, and booking?
Anyone who is on this forum is capable of building their own stuff, and running their own server, but that is not most people.