It is unfortunate, but this is evidence (IMO) that Trusted Publishing is still ~~not secure~~ not enough by itself to securely publish from CI, since an attacker inside your CI pipeline or with stolen repo admin creds can easily publish. This isn't new information, and TP is not meant to guarantee against this, but migrating to TP away from local publishing with 2FA introduces this class of attack via compromise of CI. (edit: changed "still not secure" to "still not enough by itself" because that is the point I want to make)
Going to Trusted Publishing / pipeline publishing removes the second factor that typically gates npm publish when working locally.
The story here, while it is evolving, seems to be that the attacker compromised the CI/CD pipeline, and because there is no second factor on the npm publish, they were able to steal the OIDC token and complete a publish.
Interesting, though perhaps unrelated: the publish job failed. So the payload in the malicious commit must have included a script that was able to publish itself with the OIDC token from the workflow.
What I want is for CI publishing to still have a second factor outside of GitHub, while still relying on the token-less Trusted Publisher model (no long-lived tokens). AKA, what I want is staged publishing, so someone must go and use 2FA to promote an artifact to published on the npm side.
Otherwise, if a publish can happen only within the Github trust model, anyone who pwns either a repo admin token or gets malicious code into your pipeline can trivially complete a publish. With a true second factor outside the Github context, they can still do a lot of damage to your repo or plant malicious code, but at least they would not be able to publish without getting your second factor for the registry.
The Astral blog recently pointed out how they do release gates (manual approvals on release workflows) even with trusted publishing. And sadly, the documentation for trusted publishing (npm/PyPI/RubyGems) doesn't even mention this possibility, let alone default to it.
I have not read that blog post. But unfortunately (and I'd love to be wrong!) it doesn't matter if a repo admin's token gets exfiltrated, because if you put your gates within GitHub, a repo admin token is sufficient to defang all of them from the API without a 2FA challenge.
That is why I want 2FA before publish at the registry: with my gh CLI token as a repo admin, an attacker can disable all the GitHub branch protection, rewrite my workflows, disable required reviewers on environments (one method people use as a second factor for releases is to have workflows run in a GH environment which requires approval and prevents self-review), enable self-review, etc etc.
It's what I call a "fox in the hen house" problem: your security gates live within the same trust model as the credential you expect to get stolen (in this case, a repo admin token exfiltrated from my local machine).
Exfiltrating an admin token is a big "if"; you shouldn't issue admin tokens at all, and GitHub does (at least for me) pop a proper MFA challenge when attempting to issue one.
(I wrote that Astral post.)
Edit: separately, I'll note that the risk of long-lived, highly privileged credentials is the primary motivating reason for Trusted Publishing: a developer's machine has (by necessity) a much higher degree of access than an ephemeral runner does, making it a much juicier target for an attacker. It also runs all kinds of stuff in a mostly unsandboxed manner, making it easier (in principle) to exploit. That's not to say there shouldn't be additional guards on publishing, but that I'm not remotely convinced that local publishing is any better by default.
https://docs.github.com/en/actions/how-tos/deploy/configure-... is the feature they use.
> We impose tag protection rules that prevent release tags from being created until a release deployment succeeds, with the release deployment itself being gated on a manual approval by at least one other team member. We also prevent the updating or deletion of tags, making them effectively immutable once created. On top of that we layer a branch restriction: release deployments may only be created against main, preventing an attacker from using an unrelated first-party branch to attempt to bypass our controls.
> https://astral.sh/blog/open-source-security-at-astral
From what I understand, you need a website login, and not a stolen API token to approve a deployment.
But I agree in principle: the registry should be able to enforce web 2FA. And the defaults could be safer as well.
I tested approving a deployment via the API last week with my gh CLI token (well, I had Claude do it while I watched). Again, I really want to be wrong about this, but my testing showed that it is indeed trivial to use the default token from my gh CLI to approve via the API (repo admin scope, which I have because I am an admin on said repo).
Nothing in this link [1] proves what I said, but it is the test repo I was conducting this on, and it was an approval-gated GHA job that I had Claude approve using my GH CLI token.
I also had Claude use the same token to first reconfigure the environment to enable self-approval (I had configured it off manually before testing). It also put self-approval back to disabled when it was done hehe
[1] https://github.com/jonchurch/deploy-env-test/actions/runs/25...
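For the curious, a sketch of the calls involved (RUN_ID, ENV_ID, and the environment name `release` are placeholders, not a transcript of my actual test):

```shell
# Reconfigure the environment to allow self-approval (no 2FA prompt anywhere).
gh api --method PUT repos/jonchurch/deploy-env-test/environments/release \
  -F prevent_self_review=false

# See which environment(s) the run is waiting on.
gh api repos/jonchurch/deploy-env-test/actions/runs/RUN_ID/pending_deployments

# Approve the pending deployment with the same OAuth token gh already holds.
gh api --method POST \
  repos/jonchurch/deploy-env-test/actions/runs/RUN_ID/pending_deployments \
  -F "environment_ids[]=ENV_ID" -f state=approved -f comment="approved via API"
```

All three are documented REST endpoints; none of them triggers a web 2FA challenge.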
You're right. Found the relevant docs+API calls:
https://docs.github.com/en/rest/actions/workflow-runs?apiVer...
Also for a Pending Deployment: https://docs.github.com/en/rest/actions/workflow-runs#review...
Both of these need `repo` scope, which you can avoid giving on org-level repos. For fine-grained tokens: "Deployments" repository permissions (write) is needed, which I wouldn't usually give to a token.
sigh Github's idiotic fractal of authentication types.
What upthread is talking about is the GitHub CLI app, `gh`; it doesn't use fine-grained tokens, it uses OAuth app tokens. I.e., if you look at fine-grained tokens (Settings → Developer settings → Personal access tokens → Fine-grained tokens), you will not see anything corresponding to `gh` there, as it does not use that form of authentication. It is under Settings → Applications → Authorized OAuth Apps as "GitHub CLI".
I just ran through the login sequence to double-check: the permissions you grant it are not configurable during login, and it requests an all-encompassing token, as the comment upthread suggests.
Another way to come at this is to look at the token itself: gh's token is prefixed with `gho_` (the prefix for such OAuth apps), and fine-grained tokens are prefixed with `github_pat_` (sic)¹
¹(PATs are prefixed with `ghp_`, though I guess fine-grained tokens are also sometimes called fine-grain PATs… so, maybe the prefix is sensible.)
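Those prefixes make for a quick sanity check. A small illustrative helper (the sample token string below is made up):

```shell
# Classify a GitHub token by its documented prefix.
classify_token() {
  case "$1" in
    gho_*)        echo "OAuth app token (what gh stores)" ;;
    github_pat_*) echo "fine-grained PAT" ;;
    ghp_*)        echo "classic PAT" ;;
    *)            echo "unknown" ;;
  esac
}

classify_token "gho_16C7e42F292c6912E7710c838347Ae178B4a"  # OAuth app token (what gh stores)
```

Running it against `$(gh auth token)` on a typical setup should report an OAuth app token.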
I'm paranoid, but I never authenticate the GitHub CLI - there should be no tokens lying around on my system. If needed, I have some scoped PATs in pass, which I can source as env variables. Git pushes happen over SSH with a YubiKey.
I'd like to have touch to sign from a YubiKey or similar. The whole idea of trusting the cloud to manage credentials on your behalf seems like a mistake.
> The whole idea of trusting the cloud to manage credentials on your behalf seems like a mistake.
Isn't this what the "trusted" in "trusted publishing" implies? Maybe you're saying that trusted publishing itself seems like a mistake, but if so you don't need to use it: you can publish your packages the old-fashioned way and npm will make you go through the 2fa flow.
"TanStack maintainer Tanner Linsley said the attacker used an orphaned commit to gain access to the workflow run that stores the OIDC token, effectively bypassing the project's existing publishing protections. He noted that two-factor authentication is enabled for everyone on the team"
2FA being enabled for people on the team is different from 2FA being required for publishing. It is not currently possible to enforce (or use) 2FA for publishing with trusted publishing.
Apologies if this is a dumb question but how does this attack work? (I know what an orphaned commit is but not how you use one to bypass project access control).
TLDR is that the attacker leveraged actions/cache to cache a poisoned pnpm store containing something that triggers during the package.json lifecycle. All it required was for someone to merge any PR; running what's in the cache triggered the second stage of the exploit: mint an OIDC token, build evil tarballs, and publish.
GitHub holding on to orphaned commits has been a noted issue for a while now.
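The lifecycle-script part is just ordinary npm/pnpm behavior; any package in the poisoned store can carry something like this hypothetical manifest (names are made up for illustration):

```json
{
  "name": "poisoned-dep",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./stage2.js"
  }
}
```

When install runs in CI, `stage2.js` executes inside the workflow, where it can request the OIDC token and publish.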
It’s a wonderful feature when you accidentally nuke your one and only local copy.
Depending on how badly you nuked it, it's probably still in your `git reflog` locally. Normal git hangs on to orphaned commits too (until `git gc` runs).
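A quick demo of that recovery path in a throwaway repo (everything here is local and disposable):

```shell
set -e
# Create a repo, make a commit, then orphan it by amending.
tmp=$(mktemp -d) && cd "$tmp" && git init -q -b main
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m "precious"
sha=$(git rev-parse HEAD)
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty --amend -m "oops"

# "precious" is now orphaned, but the reflog still knows it;
# re-attach it to a branch before git gc can prune it.
git branch recovered "$sha"
git log -1 --format=%s recovered   # -> precious
```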
Yeah I have one semi-popular package and I am still doing local publish with 2fa because all this "trusted publishing" stuff seems really complicated and also seems to get hacked constantly. Maybe it's just too complicated for us to do securely and we should go back to the drawing board.
I still think that Trusted Publishing is a significant win but I do like the idea of requiring a second factor to mark a release as truly published. It would make these CI worms very hard to pull off.
The way I see it - if you're pushing a change to an NPM package with more than [N] daily downloads/downstream packages, and you don't have a human online who's able to approve a two-factor for the release on their phone... then you also don't have a human online who's able to hotfix or rollback in case of a breaking bug, much less a compromise. Even setting security aside - that's in service of a stable ecosystem.
And the two-factor approver should see a human-written changelog message alongside an AI summary of what was changed, that goes deeply into any updated dependencies. No sneaking through with "emergency bugfix" that also bumps a dependency that was itself social-engineered. Stop the splash radius, and disincentivize all these attacks.
Edit: to the MSFT folks who think of the stock ticker name first and foremost - you'd be able to say that your AI migration tools emit "package suggestions that embed enterprise-grade ecosystem security" when they suggest NPM packages. You've got customers out there who still have security concerns in moving away from their ancient Java codebases. Give them a reason to trust your ecosystem, or they'll see news articles like this one and have the opposite conclusion.
I was always confused at why people claimed trusted publishing would make any difference to this kind of supply chain attack.
Because it does. The attack has to involve the CI pipeline rather than the dev environment, there's no token to revoke after (if you evict the attacker you're done, the OIDC credentials expire), it's easier to monitor for externally, you can build things like branch protections in and isolate things like "run tests" from "publish", etc. Trusted Publishing is not itself a solution to all supply chain issues but it is a massive improvement.
I agree with you that TP is an improvement over long lived npm tokens in CI.
However, the threat I'm most afraid of still does involve dev environment compromise. Because if your repo admin gets their token stolen from their gh CLI, an attacker can trivially undo via the API (without a 2FA gate!) any GitHub-level gate you have put in place to make TP safe. I want so badly to be wrong about that; we have been evaluating TP in my projects and I want to use it. But without a second factor to promote a release, at the end of the day if you have TP configured and your repo admin gets pwned, you cannot stop a TP release unless you race their publish and disable TP at npm.
TP is amazing at removing long-lived npm tokens from CI, but the class of compromise that has historically plagued the ecosystem does not at all depend on the token being long lived; it depends on an attacker getting a token which doesn't require 2FA.
I am begging for someone to prove me wrong about this, not to be a shit, but because I really want to find a secure way to use TP in lodash, express, body-parser, cors, etc
Yes, that is the threat I'm most worried about as well. But look at your description of it - a repo admin has to be compromised. Not just "random engineer". Although, in this case, the attacker leveraged a cache poisoning attack to move into the privileged workflow and I suspect this sort of thing will be commonplace.
I'm in agreement that a second factor would be ideal, to be clear. I think it's a good idea, something like "package is released with Trusted Publishing, then 'marked' via a 2FA attestation". But in theory that 2FA is supposed to be necessary anyways since you can require a 2FA on Github and then require approvals on PRs - hence the cache poisoning being required.
Not to beat the dead horse, but this floored me when I realized it, so I keep trying to shout it at the top of my lungs.
There is no gate you can put on a Trusted Publisher setup in GitHub which requires 2FA to remove. Full stop. 2FA on GitHub gates some actions, but with a token with the right scope you can simply disable approval-gated workflow runs, branch protection, anything besides (I think) repo deletion and renaming.
And in my experience most maintainers will have repo admin perms, by nature of the maintainer team being small and high trust. Your point is well taken, however, that the stolen token does need to have high enough privileges. But if you are the lead maintainer of your project, your gh token just comes with admin scope on your repo.
I'm looking forward to the analysis how the attacker managed to compromise CI. I was reading through the workflow and what immediately jumped out was a cache poisoning attack. Seems plausible, given https://github.com/TanStack/config/pull/381
edit: two hard things in computer science: naming things, cache invalidation, off-by-one errors, security. something something
Yes it is a GitHub actions cache poisoning attack
Almost all these recent compromises seem to involve either cache poisoning or prompt injection via untrusted variables.
I use GitHub environments to require a manual approval (which includes MFA) in GitHub, prior to a pipeline running with an OIDC token capable of publishing.
Would this have caught the cache poisoning? Unsure, though it at least means I'm intentionally authorising and monitoring each publish for anything unexpected.
https://docs.github.com/en/actions/deployment/targeting-diff...
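A minimal sketch of that setup, assuming an environment named `release` has been configured with required reviewers in the repo settings (names and trigger are illustrative):

```yaml
name: publish
on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    # The job pauses here until a reviewer approves, because the
    # "release" environment has required reviewers configured.
    environment: release
    permissions:
      id-token: write   # mint the OIDC token for trusted publishing
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: npm publish --provenance
```

Note the caveat discussed upthread: with a sufficiently scoped token, the environment's required-reviewers setting can itself be changed via the API.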
Yeah, it's kinda weird - it's not like GitHub uses a particularly secure stack, formal verification or anything. It's just a regular build server with the power to compromise millions of software packages.
Bitcoin people solved this problem a decade ago with deterministic builds: Bitcoin Core is considered published when 5+ devs get a bit-exact build artifact, each individually signing its hash. Replicating that model isn't hard; it's just that nobody cares. People just want to trust the cloud because it's big.
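The core of that quorum check is tiny. A hypothetical sketch (the function name and threshold are made up; real systems like Bitcoin's Guix builds also verify signatures, not just hash equality):

```shell
# Approve a release only when at least N of the independently produced
# build hashes are bit-identical to the first builder's hash.
verify_release() {
  required=$1; shift
  first=$1; count=0
  for h in "$@"; do
    [ "$h" = "$first" ] && count=$((count + 1))
  done
  if [ "$count" -ge "$required" ]; then
    echo "release approved"
  else
    echo "release blocked"
  fi
}

verify_release 3 aaa aaa aaa   # release approved
verify_release 3 aaa bbb aaa   # release blocked
```

The hard part isn't the check; it's making the build deterministic so that independent machines produce the same bytes at all.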