I worked for one company where we were super conservative. Every external component was versioned, and nothing was updated without review, usually only after plenty of soak time. Pretty much everything was built from source (compilers, kernel, etc.). Build servers/infra couldn't reach the Internet at all, and there was process around getting any change in. We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them.
Then I moved to another company where builds had Internet access and things were upgraded as soon as they came out. People thought this was good practice because we were getting the latest bug fixes. CVEs were reviewed by a security team.
Then a startup with a mix of other practices, some very good, but we also had a big CVE debt. E.g. we had secure boot on our servers and encrypted drives, and a pretty good grasp on securing components talking to each other.
Everyone seems to think they are doing the right thing. It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues. We as an industry could really use a better set of practices. For me, company #1's approach to dependency management is the better one. In general, company #1 had well-established security practices and really secure products.
You forgot case #4: worked at a startup where the frontend team thought it was a good idea to use lock files during development, but to do a "fresh" install of all dependencies during the deployment step.
And yes, they still thought they were doing the right thing.
To be fair, npm makes (made?) it weirdly hard to use lock files, so a lot of people did that by mistake. And even when you do use a lockfile, it reinstalls every time, so a retagged package can just silently update.
FYI a retagged package would result in a different SHA512 integrity sum and fail the installation process. It won't "just silently update".
Anyway, the parent's point and mine wasn't that it was considered a "mistake", but that people think they "are doing the right thing".
doesn't `npm ci` prevent that? it fails if something doesn't match the lockfile, and wipes node_modules before running
This is on some ancient Node 16 build I was trying to clean up CI for, so not a very recent npm.
npm ci does indeed prevent that. The issue isn't really with npm specifically; rather, it's with build tools like Microsoft's Oryx, which gets pushed in GitHub Actions if you're using Azure App Service. Older versions of it default to `npm install` (this has since been changed, but Azure's generated action files have a bad habit of using older versions of the actions they reference), even though the tool is specifically meant for CI usage.
In general, use of npm ci is sparsely documented - most Node projects just recommend npm install during setup, suggesting a failure to promote its availability (I only know of it because I got frustrated that the lockfile kept clogging up git commits with what looked like auto-generated build-time junk whenever I added dependencies).
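For anyone unfamiliar, a minimal sketch of the difference in a CI script (assuming a project with a package-lock.json already checked in):

```shell
# "npm install" resolves the version ranges in package.json and may
# rewrite package-lock.json if newer compatible versions have been published.
npm install

# "npm ci" installs exactly what package-lock.json specifies, deletes
# node_modules first, and fails if the lockfile and package.json disagree.
npm ci
```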
I can’t comment on the behavior of ancient npm versions, but with modern npm I would not even know how to skip using a lockfile.
As for the parent comment about not using the lockfile for the production build, that’s just incredibly incompetent.
Maybe they should hire someone who knows what they are doing. Contrary to popular belief among backend engineers online, you also need some competency to do frontend properly.
In this case what's needed is "npm ci" instead of "npm install", or better, "pnpm install --frozen-lockfile".
Pnpm will also do that automatically if the CI environment variable is set.
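Concretely, a sketch of the two equivalent invocations (flag name as in pnpm's docs; most CI providers set the CI variable automatically):

```shell
# Fail instead of modifying pnpm-lock.yaml when it is out of sync
pnpm install --frozen-lockfile

# Same effect: pnpm defaults to --frozen-lockfile when CI is set
CI=true pnpm install
```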
> In this case what's needed is "npm ci" instead of "npm install", or better, "pnpm install --frozen-lockfile".
The grugbrain developer says, "I can use git-add to keep a version controlled copy of the library in my app's source tree with no extra steps after git-clone."
(Pop quiz: what problem were the creators of NPM's lockfile format trying to solve?)
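For illustration, the grug approach looks something like this (hypothetical package choice; assumes node_modules isn't otherwise committed):

```shell
# Install once, then commit the resolved tree alongside the app
npm install left-pad
git add -f node_modules/left-pad
git commit -m "Vendor left-pad"
# After git-clone, the dependency is already there: no install step needed
```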
That breaks if the library uses build scripts, like for setting up native binaries, or native modules linked against the specific Node version.
If you want a vendored-deps model, you can look at Yarn Plug'n'Play, which does this via .zip files.
However, I would just stick with regular pnpm installs.
Lock files were begrudgingly introduced after people who aren't playing around with "move fast and break things" cried foul about dependencies being updated unexpectedly. The "semantic versioning" dogma and the illusion of safety it brings was the original motivation. At npm's creation, mature dependency-management ecosystems did not have floating versions; they were always pinned.
When you check your dependencies into the source tree, you are effectively pinning exact versions rather than using floating caret/tilde version syntax.
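To illustrate with a hypothetical package.json (real package names, versions chosen arbitrarily):

```json
{
  "dependencies": {
    "lodash": "4.17.21",
    "express": "^4.18.2",
    "chalk": "~5.3.0"
  }
}
```

Here "4.17.21" is an exact pin and installs only that version; "^4.18.2" floats to any compatible 4.x.y at or above 4.18.2; "~5.3.0" floats across patch releases only (5.3.x). A lockfile exists precisely to freeze what the floating ranges resolved to.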
This is one of those bizarre "how did you even get that idea" mistakes that ironically replacing developers with AI slop farmers might actually improve on. If you ask Claude to set up a project with NPM and CI, it's not going to do weird shit like that.
I asked Claude to set up a new NPM project and it configured the install task as “npm ci || npm install”, which is stupid. That was on Opus4.7 xhigh. When I pointed out that doing so defeats the purpose, it said “oh yeah of course.”
Turns out there is no equivalent of "npm ci" that doesn't clear node_modules first, and you can't call npm install in a way that simulates npm ci's behavior (sans the clean).
I would rather work with a company that updates continuously, while also building security into multiple layers so that weaknesses in one layer can be mitigated by others.
For example, at one company I worked for, they created an ACL model for applications that essentially enforced rules like: "Application X in namespace A can communicate with me." This ACL coordinated multiple technologies working together, including Kubernetes NetworkPolicies, Linkerd manifests with mTLS, and Entra ID application permissions. As a user, it was dead simple to use and abstracted away a lot of things I do not know that well.
The important part is not the specific implementation, but the mindset behind it.
An upgrade can both fix existing issues and introduce new ones. However, avoiding upgrades can create just as many problems over time, if not more.
At the same time, I would argue that using software backed by a large community is even more important today, since bugs and vulnerabilities are more likely to receive attention, scrutiny, and timely fixes.
> Everyone seems to think they are doing the right thing
I like to think people would agree more on the appropriate method if they saw the risk as large enough.
If you could convince everyone that a nuclear bomb would get dropped on their heads (or a comparably devastating event) if a vulnerability gets in, I highly doubt a company like #2 would still believe they're doing things optimally, for example.
> if they saw the risk as large enough.
If you expose people to the true risks instead of allowing them to be ignorant, the conclusion that they might come to is that they shouldn’t develop software at all.
Really? You think the alternate mode where you're running 5-year-old versions of stuff with tons of known security flaws is better?
What part of "We reviewed all relevant CVEs as they came out to make a call on if they apply to us or not and how we mitigate or address them" gave you that impression?
>running 5-year-old versions of stuff with tons of known security flaws
No one in this thread proposed that, or anything that could be reasonably assumed to have meant that.
> It's impossible to convince the "frequent upgrader" that maybe that's a risk in terms of introducing new issues
I would count myself as a "frequent upgrader": I admin a bunch of Ubuntu machines and typically set them to auto-update each night. I am aware of the risks of introducing new issues, but that's offset by the risk of not upgrading when new bugs are found and patched. There's also the issue of organisations that fall far behind on software versions, which creates an even bigger problem, though this is more common with Windows/proprietary software, where you have less control. At least with Linux, you can generally find ways to install e.g. the old versions of Java that specific tools may require.
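For reference, that nightly auto-update on Ubuntu is usually the unattended-upgrades package; a minimal sketch of the standard enabling config (file path per Debian/Ubuntu packaging):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which packages actually get upgraded (e.g. security-only vs. all updates) is then controlled by the Allowed-Origins list in 50unattended-upgrades.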
There's no simple one-size-fits-all and it depends on the organisation's pool of skills as to whether it's better to proactively upgrade or to reluctantly upgrade at a slower pace. In my experience, the bugs introduced by new versions of software are easier to fix/workaround than the various issues of old software versions.
Do you ride an R1?