I like the opposite too: `--commit` or `--execute`, where running with the defaults is assumed to be immutable, i.e. the dry run. That keeps validation simple and makes going live explicit.
I've leaned heavily towards this for the last 8 or so years.
I've yet to see anyone mistakenly modify anything when they need to pass --commit, whereas I've repeatedly seen people accidentally modify stuff because they forgot --dry-run.
I wouldn’t want most things to work this way:
There is a time and a place for it, but it should not be the majority of use cases.

Totally agree it shouldn't be for basic tools; but if I'm ever developing a script that performs any kind of logic before reaching out to a DB or vendor API and modifying 100k user records, a flag that just verifies the sanity of the logic is a necessity.
set -o noclobber
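For anyone who hasn't used it: `noclobber` makes `>` refuse to truncate an existing file, and `>|` is the explicit override. A quick demo:

```shell
cd "$(mktemp -d)"            # scratch directory for the demo
set -o noclobber             # make '>' refuse to truncate an existing file

echo "original" > file.txt   # fine: file.txt does not exist yet
if ! echo "oops" > file.txt 2>/dev/null; then
  echo "blocked: '>' refused to overwrite file.txt"
fi
echo "forced" >| file.txt    # '>|' explicitly bypasses noclobber
```

Same spirit as --commit: the default is safe, and the destructive path needs an extra character.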
Yep. First thing I do for this kind of thing is make a preview=true flag so I don’t accidentally run destructive actions.
Now I like that idea as an environment variable that takes precedence over the command parameters.
For most of these local data manipulation type of commands, I'd rather just have them behave dangerously, and rely on filesystems snapshots to rollback when needed. With modern filesystems like zfs or btrfs, you can take a full snapshot every minute and keep it for a while to negate the damage done by almost all of these scripts. They double as a backup solution too.
I used to have alias rm='rm -i' for a few years to be careful, but I took it out once I realised that I had just begun adding -f all the time
Yeah, but that's because it's implemented poorly. It literally asks you to confirm deletion of each file individually, even for thousands of files.
What it should do is generate a user-friendly overview of what's to be deleted, by grouping files together by some criteria, e.g. by directory, so you'd only need to confirm a few times regardless of how many files you want to delete.
See also rm -I (capital i), which only prompts when deleting directories or >3 files
Even in those basic examples, it probably would be useful. `cp` to a blank file? No problem. `cp` over an existing file? Yeah, I want to be warned.
`rm` a single file? Fine. `rm /`? Maybe block that one.
That last one would error without doing anything anyway because it's not recursive.
Uhuh:
That's a special case that is a) easy to call accidentally from a script when variables end up being unset and b) almost never a sensible thing to do.
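The unset-variable case is also cheap to guard against in the script itself. A sketch using `set -u` and `${var:?}` (the function and names here are made up for illustration):

```shell
# A cleanup routine that cannot quietly degrade into 'rm -rf /*'.
cleanup() (
  set -u                                           # abort on any unset variable
  target_dir="${1:?cleanup: no target dir given}"  # refuse a missing argument
  rm -rf "${target_dir:?}"/*                       # ${var:?} guards the expansion again
  echo "cleaned $target_dir"
)
```

With this, a forgotten or empty argument aborts loudly instead of silently expanding to the filesystem root.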
--dry-run should default to true
In PowerShell there's a setting for this:
https://learn.microsoft.com/en-us/powershell/module/microsof...
Yeah I'm more of a `--wet-run` `-w` fan myself. But it does depend on how serious/annoying the opposite is.
I've done that, but I hate the term "wet run."
I use "live run" now, which I think gets the point across without being sort of uncomfortable.
--with-danger
--make-it-so
--do-the-thing
--go-nuts
--safety-off
So many fun options.
I'm a fan of --safety-off. It gives off an 'aim away from face' or 'mishandle me and I'll blow a chunk out of your DB' vibe.
I find it important to include system information here as well, so that an invocation copy-pasted from system A to system B won't run.
For example, our database restore script has a parameter `--yes-delete-all-data-in` and it needs to be parametrized with the PostgreSQL cluster name. So a command with `--yes-delete-all-data-in=pg-accounting` works on exactly one system and not on other systems.
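A rough sketch of what that guard can look like (the `confirm_cluster` function and the `LOCAL_CLUSTER` lookup are stand-ins, not the real restore script; only the flag name comes from the comment above):

```shell
# Refuse to run unless the destructive flag names *this* system's cluster.
confirm_cluster() {
  case "$1" in
    --yes-delete-all-data-in=*) target="${1#--yes-delete-all-data-in=}" ;;
    *) echo "refusing: pass --yes-delete-all-data-in=<cluster>" >&2; return 1 ;;
  esac
  if [ "$target" != "$LOCAL_CLUSTER" ]; then
    echo "refusing: this system is '$LOCAL_CLUSTER', not '$target'" >&2
    return 1
  fi
}

LOCAL_CLUSTER=pg-accounting   # stand-in for however the script reads the real name
confirm_cluster --yes-delete-all-data-in=pg-accounting && echo "proceeding"
```

The flag doubles as documentation: the command line itself records which cluster the operator intended to wipe.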
It's in the UI not the command line, but I like Chromium's thisisunsafe
I've used `--execute --i-know-what-im-doing` for some of the more dangerous scripts.
May I recommend --I-take-responsibility-for-the-outcome-of-proceeding and require a capital I?
--commit is solid too
Moist run is the way.
This is something I learnt here.
My latest script, which deletes the entire content of a downloaded SharePoint (locally only) and removes the relevant MS365 account from the computer, runs in a read-only mode by default. You have to run it with an explicit flag to allow changes.
Also, before it actually deletes the account, you need to explicitly type DELETE-ACCOUNT in order to confirm that this is indeed your intent.
So far, nobody has managed to screw up, even in heated situations at a client's place.
There was a tool I used some time ago that required typing in a word or phrase to acknowledge that you know it's doing the run for real.
Pros and cons to each but I did like that because it was much more difficult to fat finger or absentmindedly use the wrong parameter.
Github will do that when you delete a repo.
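A typed-phrase gate like that is only a few lines of shell. A sketch (the phrase and prompt wording are placeholders):

```shell
# Require the operator to type an exact phrase before anything destructive runs.
confirm_phrase() {
  printf 'Type %s to confirm: ' "$1" >&2
  read -r answer
  [ "$answer" = "$1" ]
}

# usage:
#   confirm_phrase DELETE-ACCOUNT || { echo "aborted." >&2; exit 1; }
#   ...destructive work goes below this line...
```

Unlike a flag, a typed phrase can't live in shell history or a copy-pasted one-liner, which is exactly why it's harder to fat-finger.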
I have a few shell scripts using `getopts` that have a `-!` flag to make it go; the default is dry-run.
And it's pretty nice. The downside is if you get used to that behavior in things that don't have it the consequences can be bad. (Like the common `alias rm='rm -i'`. No, that's just a trap; don't do it.)
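For reference, here's roughly what that pattern looks like, sketched as a function with an echo standing in for the real work:

```shell
# Dry-run by default; '-!' is the only switch that makes it actually go.
run() {
  dry_run=true
  OPTIND=1                       # reset so the function can be called repeatedly
  while getopts '!' opt "$@"; do
    case "$opt" in
      !) dry_run=false ;;
      *) echo "usage: run [-!] file..." >&2; return 2 ;;
    esac
  done
  shift $((OPTIND - 1))
  if "$dry_run"; then
    echo "dry run: would delete $# file(s)"
  else
    echo "deleting $# file(s)"   # stand-in for the real destructive work
  fi
}

run a.txt b.txt      # → dry run: would delete 2 file(s)
run -! a.txt b.txt   # → deleting 2 file(s)
```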
I have a parallel directory deduper that uses hard links and adopted this pattern exactly.
By default it'll only tell you which files are identical between the two parallel directory structures.
If you want it to actually replace the files with hard links, you have to use the --execute flag.
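The core of that pattern fits in a short sketch. This is an illustration rather than the actual tool: `cmp` does the identity check and `ln -f` the replacement:

```shell
# Report identical files across two parallel trees; hard-link only on --execute.
dedupe() {
  execute=false
  if [ "$1" = "--execute" ]; then execute=true; shift; fi
  a_root=$1; b_root=$2
  find "$a_root" -type f | while IFS= read -r a; do
    b="$b_root${a#"$a_root"}"         # same relative path in the other tree
    [ -f "$b" ] || continue
    if cmp -s "$a" "$b"; then         # byte-for-byte identical?
      if "$execute"; then
        ln -f "$a" "$b"               # replace b with a hard link to a
      else
        echo "identical: $a <-> $b"   # default: report only
      fi
    fi
  done
}
```

Running it once without --execute gives you a reviewable list; the second invocation with the flag is the explicit go-live.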
rmlint takes this further: the real run will only execute what a previous dry run recorded.
Just don’t randomly mix and match the approaches or you are in for a bad time.
Agree with this tactic; it's very useful when working in a team. I use a --dry-run=false approach.