I automate updates with a cooldown, security scanning, and the usual tests. If an update passes all that, I don't worry about merging it. When something breaks, it is usually because the tests were not good enough, so I fix them. The next step up would be to deploy the update into a canary cluster and observe it for a while. Better that than accruing tech debt. When you update on "your schedule" you should still do all of the above, so why not just make it robust enough to automate? Works for me.
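Roughly, the merge gate is just this (a minimal sketch; the helper signature and the seven-day cooldown are made up, not any particular tool's API):

```python
from datetime import datetime, timedelta, timezone

COOLDOWN = timedelta(days=7)  # hypothetical cooldown before an update is eligible

def should_automerge(released_at: datetime,
                     scan_passed: bool,
                     tests_passed: bool) -> bool:
    """Gate an automated dependency update: merge only if the release
    has aged past the cooldown and both the security scan and the
    test suite passed."""
    age = datetime.now(timezone.utc) - released_at
    if age < COOLDOWN:
        return False  # too fresh: wait out the cooldown window
    return scan_passed and tests_passed
```

Everything else (canary deploys, rollback) layers on top of that same pass/fail decision.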
For regular updates, because you can minimize but not eliminate risk. As I say in the article, that might or might not work for your requirements and practices. For libraries, you also cause compounding churn for your dependents.
For security vulnerabilities, I argue that updating might not be enough! What if your users’ data was compromised? What if your keys should be considered exposed? The only way to have the bandwidth for proper triage is to first minimize false positives.
>For libraries, you also cause compounding churn for your dependents.
This is the thing that I don't really understand but that seems really popular and gaining ground. The article's section "Test against latest instead of updating" seems like the obvious thing to do: keep a range of compatible versions of dependencies and only restrict it when necessary, in contrast to treating the deployment or lockfile as the requirement, which restricts liberally. Maybe it's just a bigger deal for me because of how disruptive UI changes are.
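A small illustration of range-vs-pin using Python's packaging library (the version numbers are made up):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Library style: declare the range you actually support,
# and only tighten it when a real incompatibility shows up.
compatible = SpecifierSet(">=2.0,<4")

# Lockfile style: one exact version, restricted liberally.
pinned = SpecifierSet("==2.7.1")

for v in ["2.0", "2.7.1", "3.5"]:
    print(v, Version(v) in compatible, Version(v) in pinned)
# 2.0   True  False
# 2.7.1 True  True
# 3.5   True  False
```

The range keeps working as the dependency moves; the pin forces every dependent to churn in lockstep with you.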