It seems like you're reading things that people aren't writing.
I don't know how the author's company manages their stack, so I can't speak to how they do their testing. But I do know that in many companies, production runtime environment management is not owned by engineering, and it's common for ops and developers to use different methods to install runtime dependencies in CI and in production. In companies that work that way, testing changes to the production runtime environment isn't done in CI; it's done in staging.
If that's at all representative of how they work, then "we didn't test this with the automated tests that Engineering owns as part of their build" does not in any way imply "we didn't test this at all."
Tangentially, the place I worked that maintained the highest quality and availability standards (by far) did something like this, and it was a deliberate reliability engineering choice. As part of a defense-in-depth strategy, they wanted a separate testing phase and a runtime environment management policy that developers couldn't unilaterally control. Jamming everything into a vertically integrated, heavily automated CI/CD pipeline is also a valid choice, but it's one with roots in Silicon Valley culture, and it therefore arrives at different solutions to the same problems than you'd see in older industries and companies.