Rereading that section, I'd agree it's probably not the best-argued point, since it implies security concerns... I guess what I'm saying is: for something I'm setting up to keep around for a while, I'd like to know a bit about what's in the package before I deploy it. In that sense, the shell script serves as a table of contents... and if the table of contents is 800 lines, that makes me wonder how many moving parts there are, and how many of them might break at inconvenient times.

Personally, I'd just run it on a clean cluster/VM somewhere (to be destroyed afterwards) to see what happens. If you have no local resources to spare, an hour of even very high-end (to save time) VMs or a cluster at a provider like AWS costs next to nothing.

That solution didn't apply to me at the time, since I was in an environment that combined security-consciousness with thick layers of bureaucracy, meaning that hardware came at a premium (and had to be on-premises).

Sure, but I'm not suggesting running it there, just testing it there. We also have to run in specific providers in specific locations, but nothing stops us from renting a clean large VM in AWS for an hour or two to test things without touching any customer data. Hell, that costs pretty much nothing, so if my employer didn't allow it, I'd just pay out of my own pocket. It's much better for your work efficiency to work out the kinks this way than to do ten cleanups after failed deployments; deleting a VM is far easier.
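
For what it's worth, the whole loop fits in a few lines of shell. Here's a rough sketch with the AWS CLI; the AMI ID, instance type, key pair, login user, and install.sh are all placeholders for whatever your case actually uses, and it assumes a default VPC that hands the instance a public IP:

    #!/usr/bin/env bash
    # Throwaway test box: launch, run the install script, inspect, destroy.
    set -euo pipefail

    AMI="ami-xxxxxxxx"            # any recent base image in your region
    TYPE="c5.4xlarge"             # oversized on purpose; it only lives an hour

    ID=$(aws ec2 run-instances \
          --image-id "$AMI" --instance-type "$TYPE" \
          --key-name my-test-key --count 1 \
          --query 'Instances[0].InstanceId' --output text)

    aws ec2 wait instance-running --instance-ids "$ID"
    IP=$(aws ec2 describe-instances --instance-ids "$ID" \
          --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)

    # Run the vendor's script on a machine with nothing to lose.
    scp -i ~/.ssh/my-test-key.pem ./install.sh "ec2-user@$IP:"
    ssh -i ~/.ssh/my-test-key.pem "ec2-user@$IP" 'bash ./install.sh'

    # Cleanup is a single call, versus unpicking a failed install by hand.
    aws ec2 terminate-instances --instance-ids "$ID"

The point being: when the script blows up halfway through, the fix is terminate-instances, not an afternoon of forensics on a box you care about.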