While Python is more widely used than JS, it's interesting that the majority of attacks and breaches come from npm. The consensus seems to be that Python's standard library greatly reduces the attack surface compared to JS. I tend to agree with this: a decently large Flask Python app I am working on has 15 entries in requirements.txt (many of which are Flask plugins).
https://www.sonatype.com/blog/pytorch-namespace-dependency-c...
https://socket.dev/blog/pypi-package-disguised-as-instagram-...
https://socket.dev/blog/monkey-patched-pypi-packages-steal-s...
https://socket.dev/blog/malicious-pypi-package-targets-disco...
https://socket.dev/blog/typosquatting-on-pypi-malicious-pack...
The most important packages in the Python world don't have many dependencies of their own. Numpy has none, for example. The bulk of Numpy is non-Python code and interfaces/wrappers for it; AFAIK the standard library isn't pulling a whole lot of weight there.
Numpy depends on BLAS and LAPACK.
while those are obviously huge dependencies, i think the claim was about _python_ dependencies
I think the same. While the Java stdlib lacks a few things, Apache Commons long ago became the de-facto complement to it, was replaced/complemented by other libraries over time, and eventually even became obsolete with newer versions of Java. But I always had the impression that having Apache Software Foundation components (with a good release/security process) helped Java mitigate a lot of attacks.
JavaScript is also hindered by the fact that you have to "pay" for every library you download. This encourages a culture of reinventing the wheel, because "I don't need all that," preventing de-facto stdlib supplements from existing.
The large attack surface with npm is partly because of all the transitive dependencies used, which means that even if you only pull in a dozen packages directly, you're also using hundreds of other packages. Running `pip freeze` will list a lot of transitive dependencies as well, but I'm sure it'll be less than an equivalent JS project.
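To see this concretely, you can walk an installed package's transitive dependency graph using only the standard library. This is a rough sketch: the extras-skipping heuristic and the name-parsing regex are simplistic, and "pip" is just an example target.

```python
# Sketch: collect transitive dependencies of an installed package using only
# the standard library (importlib.metadata). Heuristics here are simplistic.
import re
from importlib.metadata import requires, PackageNotFoundError

def transitive_deps(name, seen=None):
    """Recursively collect distribution names reachable from `name`."""
    if seen is None:
        seen = set()
    for req in requires(name) or []:
        if "extra ==" in req:  # skip optional extras
            continue
        # Strip version specifiers/markers to get the bare distribution name.
        dep = re.split(r"[ ;<>=!~\[(]", req, maxsplit=1)[0].strip()
        if dep and dep.lower() not in seen:
            seen.add(dep.lower())
            try:
                transitive_deps(dep, seen)
            except PackageNotFoundError:
                pass  # dependency not installed in this environment
    return seen

# pip vendors its dependencies, so its graph is tiny; a typical JS lockfile
# would enumerate hundreds of entries for a similarly sized project.
print(sorted(transitive_deps("pip")))
```

Running this against a real Flask or npm-equivalent project makes the difference in graph size obvious.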
If you can change a GitHub Actions workflow to exfiltrate a token, what prevents you from changing the workflow that uses Trusted Publishing to make changes to the package before publishing it? Perhaps by adding an innocent looking use of an external Action?
Nothing.
However, exfiltrating a token is much easier than modifying the workflow itself. A token is usually just stored in an environment variable.
In general, yes, it is easier to exfiltrate the token because if you can control some of the code that runs with the token available as an env var, you can do whatever.
In the specific case of the attack described in the blog post, though, the attackers added an extra GitHub Actions workflow that sent the token to an external server. That means they had enough privileges to change GHA workflows, and could just as easily change a workflow that used Trusted Publishing.
(It may be possible to configure branch protections or rules limiting who/when can trigger the Trusted Publishing workflow, but it's about as difficult as limiting the secret tokens to only be available to some maintainers.)
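For reference, a Trusted Publishing workflow can be gated behind a GitHub environment whose protection rules (required reviewers, deployment branch restrictions) limit who and what can trigger a publish. A minimal sketch, with illustrative names:

```yaml
name: publish
on:
  release:
    types: [published]

jobs:
  pypi:
    runs-on: ubuntu-latest
    environment: release      # protection rules on this environment gate the job
    permissions:
      id-token: write         # required for OIDC-based Trusted Publishing
    steps:
      - uses: actions/checkout@v4
      - run: python -m pip install build && python -m build
      - uses: pypa/gh-action-pypi-publish@release/v1
```

The environment gate doesn't stop someone with full repo write access, but it does raise the bar in the same way restricting secret scope does.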
> Attackers targeted a wide variety of repositories, many of which had PyPI tokens stored as GitHub secrets, modifying their workflows to send those tokens to external servers. While the attackers successfully exfiltrated some tokens, they do not appear to have used them on PyPI.
It's wild to me that people entrust a third-party CI system with API secrets, and then also entrust that same system to run "actions" provided by other third parties.
it's even worse than that
the CI system itself encourages you to import random third party code into your CI workflow, based on mutable tags
which then receives full privileges
the entire thing is insane
That's why I stick mostly with GitHub's own actions and pin the commit SHA instead of the tag version.
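For anyone unfamiliar, the difference is just the ref after the `@` (the SHA below is a placeholder, not a real commit):

```yaml
steps:
  # Mutable: "v4" is a tag the action's owner can move at any time.
  - uses: actions/checkout@v4

  # Immutable: a full commit SHA can't be changed out from under you.
  # (Placeholder SHA; tools like Dependabot can update it and keep the
  # trailing version comment in sync.)
  - uses: actions/checkout@0123456789abcdef0123456789abcdef01234567 # v4.x.y
```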
yes, it supports it, but it's not the default, it's a pain, and it fills your build file with a load of noise
so very few use it
it's not made obvious that the tag isn't immutable
although you might be happy with the contents of what you've imported right now, who says it won't be malicious in a year's time?
people inadvertently give full control of their build and all their secrets to whoever controls that repository (now, and in the future)
making it easy to do the right thing is an important part of API design and building secure systems, and these CI systems fail miserably there
Immutable releases are in public preview and hopefully will make it easier to do the right thing.
https://github.blog/changelog/2025-08-26-releases-now-suppor...
I don't see how that solves this problem as long as the attacker can delete and recreate a repository
sigstore's main design goal seems to be to increase the lock-in of "trusted" providers
(the idea that Microsoft should be trusted for anything requiring any level of security is entirely ludicrous)
It’s a good first step, but a significant number of GitHub Actions pull a Docker image from a repository such as Docker Hub. In those cases, the GitHub Action being immutable wouldn’t prevent the downstream Docker image from being mutated.
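The same pinning idea extends to container images: reference them by digest rather than by tag. A sketch, with a placeholder digest:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      # A tag like "python:3.12" can be re-pushed at any time; a digest
      # identifies exact image content and cannot silently change.
      image: python@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```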
Huge kudos to Mike for handling this attack and appropriately contacting the maintainers.
I’m also glad to see yet another case where having Trusted Publishing configured would have prevented the attack. That’s a cheap defense that has proven effective once again!
Incident report of a recent attack campaign targeting GitHub Actions workflows to exfiltrate PyPI tokens, our response, and steps to protect your projects.