These may be objectively superior (I haven't tested), but I have come to realize (like so many others) that if you ever change your OS installation, set up VMs, or SSH anywhere, preferring these is just an uphill battle that never ends. I don't want to have to set these up in every new environment I operate in, or even use a mix of these on my personal computer and the traditional ones elsewhere.
Learn the classic tools, learn them well, and your life will be much easier.
Some people spend the vast majority of their time on their own machine. The gains of convenience can be worth it. And they know enough of the classic tools that it's sufficient in the rare cases when working on another server.
Not everybody is a sysadmin manually logging into lots of independent, heterogeneous servers throughout the day.
Yeah, this is basically what I do. One example: I use neovim with a bunch of plugins as a daily driver, but whenever I land on a server that has neither it nor my settings/plugins, it isn't a huge problem to run vim or even vi; most stuff works the same.
Same goes for a bunch of other tools that have "modern" alternatives but the "classic" ones are already installed/available on most default distribution setups.
Also that workflow of SSH'ing into a machine is becoming rarer. Nowadays systems are so barren they don't even have SSH.
Someone might have ssh access, just not you :) VPSes will still be VPSing, even though people tend to go for managed Kubernetes or whatever the kids are doing today. But if you're renting instances/"machines", then you're most likely still using ssh.
That's a cute thought not grounded in reality.
The infra may be cattle but debugging via anal probe err SSH is still the norm.
Some are so vastly better that it's worth whatever small inconvenience comes with getting them installed. I know the classic tools very well, but I'll prefer fd and ripgrep every time.
For my part, the day I was confused about why "grep" couldn't find some files that were obviously there, only to realize that "ripgrep" was ignoring files listed in the gitignore, was the day I removed "ripgrep" from my system.
I never asked for such behaviour, and I have no time for pretty "modern" opinions in base software.
Often, when I read "modern", I read "immature".
I am not ready to replace my stable base utilities with immature ones whose behaviour changes.
The scripts I wrote 5 years ago must work as is.
You did ask for it though. Because ripgrep prominently advertises this default behavior. And it also documents that it isn't a POSIX compatible grep. Which is quite intentional. That's not immature. That's just different design decisions. Maybe it isn't the software you're using that's immature, but your vetting process for installing new tools on your machine that is immature.
Because hey guess what: you can still use grep! So I built something different.
Sounds like the problem you have here is that `grep` is aliased to `ripgrep`. ripgrep isn't intended to be a drop-in replacement for POSIX grep, and the subjectively easier usage of ripgrep can never replace grep's maturity and adoption.
Note: if you want to make ripgrep not do .gitignore filtering, set `RIPGREP_CONFIG_PATH` to point to a config file that contains `-uu`.
Sources:
- https://github.com/BurntSushi/ripgrep/blob/master/GUIDE.md#c...
- https://github.com/BurntSushi/ripgrep/blob/master/GUIDE.md#a...
So I stand corrected. I did indeed use ripgrep as a drop-in replacement.
That's on me!
I've been playing around with this over the years and this is what I put in my .rgrc:
--smart-case --no-messages --hidden --ignore-vcs
and then point to it from my .zshenv with
export RIPGREP_CONFIG_PATH="$HOME/.rgrc"
Not perfect, and sometimes I reach for good old-fashioned escaped \grep, but most of the time it's fine.
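For anyone wanting to replicate this, a minimal sketch of that setup (the flags are the ones from the comment above; note that ripgrep config files expect one flag per line):

```shell
# write the ripgrep config file, one flag per line
cat > "$HOME/.rgrc" <<'EOF'
--smart-case
--no-messages
--hidden
--ignore-vcs
EOF

# tell rg where to find it (usually done once in .zshenv/.bashrc)
export RIPGREP_CONFIG_PATH="$HOME/.rgrc"
```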
The very first paragraph in ripgrep's README makes that behaviour very clear:
> ripgrep is a line-oriented search tool that recursively searches the current directory for a regex pattern. By default, ripgrep will respect gitignore rules and automatically skip hidden files/directories and binary files. (To disable all automatic filtering by default, use rg -uuu.)
https://github.com/BurntSushi/ripgrep
+100
One of the reasons I really like Nix: my setup works basically everywhere (as long as the host OS is either Linux or macOS, but those are the only two environments I care about). I don't even need root access to install Nix, since there are multiple ways to install it rootless.
But yes, in the event that I don't have Nix, I can very much use the classic tools. It is not a binary choice; you can have both.
are you going to install nix in a random docker container?
That is why I said that I still know how to use basic Unix tools. If I am debugging something so frequently that I feel I need to install my Nix configuration just to get productive, there is clearly something going wrong.
For example, in $CURRENT_JOB we have a bastion host that gives access to the databases (not going to discuss whether this is a good idea or not; this is how my company does it). 90% of the time I can do whatever I need with just what the bastion host offers (which doesn't have Nix); if I need to go deeper, I can copy some files between the bastion host and my computer for further analysis.
mise is a good middle ground.
I am not sure how mise would be a "good middle ground" compared to Nix, considering it is really easy to get a static binary version of Nix. Nowadays it even works standalone without creating a `/nix` directory, you can simply run the binary and it will create everything you need in `~/.local/state/nix` if I remember correctly. And of course Nix is way more powerful than mise.
> that if you ever change your OS installation
apt-get/pacman/dnf/brew install <everything that you need>
You'll need to install those and other tools (your favorite browser, your favorite text editor, etc.) anyway if you're changing your OS.
> or SSH anywhere
When you connect through SSH you don't have a GUI, and that's not a reason for avoiding GUI tools, for example.
> even use a mix of these on my personal computer and the traditional ones elsewhere
I can't see the problem, really. I use some of those tools and they are convenient, but it's not like I can't work without them. For example, bat: it doesn't replace cat, it only outputs data with syntax highlighting. It makes my life easier, but if I don't have it, that's ok.
> apt-get/pacman/dnf/brew install <everything that you need>
If only it were so simple. Not every tool comes from a package with the same name (delta is git-delta, "z" is zoxide), which I'm not sure I'd remember off the top of my head when installing on a new system. On top of that, you might not like the defaults of every tool, so you'll have config files that you need to copy over or recreate (and hopefully sync between the computers where you use these tools).
That said, I do think nix provides some good solutions for this. It gives you a nice clean way to list the packages you want in a nixfile, and also to set their defaults and/or provide some configuration files. It does still require some maintenance (and I choose to install the config files as editable, which is not very nix-y, but I'd rather edit them and then commit the changes to my configs repo for future deploys than have to edit and redeploy for every minor or exploratory change), but I've found it's much better than trying to maintain some sort of `apt-get install [packages]` script.
After installing it, git clone <dotfiles repo> and then stow .
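For illustration, here is roughly what that flow achieves. GNU Stow just creates symlinks from $HOME into the repo; the effect is approximated below with `ln -s` so it runs anywhere (the repo layout and the .vimrc file are hypothetical):

```shell
# hypothetical dotfiles layout: one directory per tool, as stow expects
mkdir -p "$HOME/dotfiles/vim"
printf 'set number\n' > "$HOME/dotfiles/vim/.vimrc"

# `cd ~/dotfiles && stow vim` would create this symlink;
# shown here with ln -s to make the effect explicit
ln -sf "$HOME/dotfiles/vim/.vimrc" "$HOME/.vimrc"
```

The upshot is that editing `~/.vimrc` edits the file inside the repo, so changes can be committed and pushed like any other.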
Chezmoi makes this really easy: https://www.chezmoi.io/
I only skimmed through the website, but it looks like it only does dotfiles. So I'd need to maintain a separate script to keep my packages in sync. And a script with install commands wouldn't be enough - maybe I decided to stop using abcxyz, I'd like for the script to remove it. Versioning between the package and the dotfile can also sometimes be an issue.
it takes less than a second, or less than 10 seconds with a Google search, to adapt...
Strongly agreed. I don't understand why I'd want to make the >99% of my time doing things less convenient in order to make the <1% of the time I'm on a machine where I can't install things (even in a local directory for the user I'm ssh'd into) feel less bad by comparison. It's not even a tradeoff where I'm choosing which part of the curve to optimize for; it's literally flattening the high part to make the overall convenience level constant and lower.
> When you connect through SSH you don't have GUI and that's not a reason for avoiding using GUI tools, for example.
One major difference can emerge from the fact that using a tool regularly inevitably builds muscle memory.
You’re accustomed to a replacement command-line tool? Then your muscle memory will punish you hard when you’re logged into an SSH session on another machine because you’re going to try running your replacement tool eventually.
You’re used to a GUI tool? Will likely bite you much less in that scenario.
> You’re accustomed to a replacement command-line tool?
Yes.
> Then your muscle memory will punish you hard
No.
I'm also used to pt-BR keyboards; it's easier to type in my native language, but it's ok if I need to use a US keyboard. In terms of muscle memory, keyboards are far harder to adapt to.
A non-tech example: if I go to a Japanese restaurant, I'll use chopsticks and I'm ok with them. At home, I use forks and knives because they make my life easier. I won't force myself to use chopsticks everyday only for being prepared for Japanese restaurants.
> You'll need to install those and other tools (your favorite browser, your favorite text editor, etc.) anyway if you're changing your OS.
The point is that sometimes you're SSHing to a lightweight headless server or something and you can't (or can't easily) install software.
Because 'sometimes' doesn't mean you should needlessly handcuff yourself the other 80% of the time.
I personally have an ansible playbook to set up ~all my commonly used tooling on ~any CLI machine I will use significantly; (almost) all local installs to avoid needing root. It runs in about a minute, and I have all the niceties. If it's not worth spending that minute to run it, then I won't be on the machine long enough for it to matter.
That's a niche case. And if you need to frequently SSH into a lightweight server, you'll probably be ok with the default commands even though you have the others installed in your local setup.
It does seem like a lot of these tools basically have the same “muscle memory” options anyway.
That goes against the UNIX philosophy IMO. Tools doing "one thing and doing it well" also means that tools can and should be replaced when a superior alternative emerges. That's pretty much the whole point of simple utilities. I agree that you should learn the classic tools first as it's a huge investment for a whole career, but you absolutely should learn newer alternatives too. I don't care much for bat or eza, but some alternatives like fd (find alt) or sd (sed alt) are absolute time savers.
> Learn the classic tools, learn them well, and your life will be much easier.
Agreed, but that doesn't stop you from using/learning alternatives. Just use your preferred option, based on what's available. I realise this could be too much to apply to something like a programming language (despite this, many of us know more than one) or a graphics application, but for something like a pager, it should be trivial to switch back and forth.
And when those classic tools need a little help:
Awk and sed.
I like the idea of new tools though. But knowing the building blocks is useful. The “Unix power tools” book was useful to get me up to speed.. there are so many of these useful mini tools.
Miller is one I’ve made use of (it also was available for my distro)
I do prefer some of these tools, due to a much better UX, but the only one I do install in every unix box is ripgrep.
I wanted to say we should just stick with what Unix shipped forever. But doesn't GNU already violate that idea?
IMO this is very stupid: don't let the past dictate the future. UNIX is history. History is for historians; it should not be the basis that shapes the environment for engineers living in the present.
The point is that we always exist at a point on a continuum, not at some fixed time when the current standard is set in stone. I remember setting up Solaris machines in the early 2000s with the painful SysV tools that they came with and the first thing you would do is download a package of GNU coreutils. Now those utils are "standard", unless of course you're using a Mac. And newer tools are appearing (again, finally) and the folk saying to just stick with the GNU tools because they're everywhere ignore all of the effort that went into making that (mostly) the case. So yes, let's not let the history of the GNU tools dictate how we live in the present.
Well, even “Unix” had some differences (BSD switches vs SysV switches). Theoretically, POSIX was supposed to smooth that out, but it never went away. Today, people are more likely to be operating in a GNU Linux environment than anything else (that just a market share fact, not a moral judgement, BSD lovers). Thus, for most people, GNU is the baseline.
I tend to use some of these "modern" tools if they are a drop-in replacement for existing tools.
E.g. I have ls aliased to eza as part of my custom set of configuration scripts. eza pretty much works like ls in most scenarios.
If I'm in an environment which I control and is all configured as I like it, then I get a shinier ls with some nice defaults.
If I'm in another environment then ls still works without any extra thought, and the muscle memory is the same, and I haven't lost anything.
If there's a tool which works very differently to the standard suite, then it really has to be pulling its weight before I consider using it.
When I got my first Unix account [1] I was in a Gnu emacs culture and used emacs from 1989 to 2005 or so. I made the decision to switch to vi for three reasons: (1) less clash with a culture where I mostly use GUI editors that use ^S for something very different than what emacs does, (2) vim doesn't put in continuation characters that break cut-and-paste, (3) often I would help somebody out with a busted machine where emacs wasn't installed, the package database was corrupted, etc and being able to count on an editor that is already installed to resolve any emergency is helpful.
[1] Not like the time one of my friends "wardialed" every number in my local calling area and posted the list to a BBS and I found that some of them could be logged into with "uucp/uucp" and the like. I think Bell security knew he rang everybody's phone in the area but decided to let billing handle the problem because his parents had measured service.
i learned ansible and i run 1 command, wait 10 minutes, and a new linux machine is configured with all the stuff i want
I started a new job and spent maybe a day setting up the tools and dotfiles on my development machine in the cloud. I'm going to keep it throughout my employment so it's worth the investment. And I install most of the tools via nix package manager so I don't have to compile things or figure out how to install them on a particular Linux distribution.
Learn Ansible or similar, and you can be ~OS (OSX/Linux/even Windows) agnostic with relatively complex setups. I set mine up before agentic systems were as good as they are now, but I assume it would be relatively effortless today.
IMO, it's worth spending some time to clean up your setup for smooth transition to new machines in the future.
I know my way around vi well enough, because although XEmacs was my editor during the 1990s when working on UNIX systems, when visiting customers there was a very high probability that they only had ed and vi installed on their server systems.
Many folks nowadays don't get how lucky they are, not having to do UNIX development on a time-sharing system, although cloud systems kind of replicate the experience.
Ed is the standard text editor.
Doing edlin as high-school typing exam was already enough, and ed wasn't much better, which was an opinion shared by our customers back then.
And not installed by default in many distros. FML.
> And not installed by default in many distros. FML.
ed (pronounced as distinct letters, /ˌiːˈdiː/) is a line editor for Unix and Unix-like operating systems. It was one of the first parts of the Unix operating system to be developed, in August 1969. It remains part of the POSIX and Open Group standards for Unix-based operating systems.
so it is a bug in those distros.
This is how I feel as well. I spent some time "optimizing" my CLI with oh-my-zsh etc. when I was young.
Only to feel totally handicapped when logging into a busybox environment.
I'm glad I learned how to use vi, grep, sed..
My only change to an environment is the keyboard layout. I learned Colemak when I was young. Still enjoying it every day.
Not a comment on these particular tools, but I keep non-standard utilities that I use in my ~/bin/ directory, and they go with me when I move to a different system. The tools mentioned here could be handled the same way, making the uphill a little less steep.
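A minimal sketch of that ~/bin habit (the `greet` script is a hypothetical stand-in for any non-standard utility you carry around):

```shell
# keep personal utilities in ~/bin and make sure it's on PATH
mkdir -p "$HOME/bin"
printf '#!/bin/sh\necho hi from my bin\n' > "$HOME/bin/greet"
chmod +x "$HOME/bin/greet"

# usually done once in ~/.profile so it survives new shells
export PATH="$HOME/bin:$PATH"

greet   # prints "hi from my bin"
```

Moving to a new system is then just copying (or rsyncing) one directory and adding one PATH line.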
I have some of these tools, and they are not "objectively superior". A lot of them make things prettier with colors, bar graphs, etc. That is nice on a well-configured terminal, not so much in a pipeline. Some of them are full TUIs, essentially graphical tools that run in a terminal rather than traditional command-line tools.
Some of them are smart, but sometimes I want dumb. For example, ripgrep respects gitignore, and often I don't want that. Though in this case there is an option to turn it off (-uuu). That's a common theme with these tools too: they try to be smart by default, and you need options to make them dumb.
So no, these tools are not "objectively superior", they are generally more advanced, but it is not always what you need. They complement classic tools, but in no way replace them.
Never will I ever set up tools and a home environment directly on the distro. Only in a rootfs that I can proot/toolbx/bwrap into. Not only do I not want to set things up again on a different computer, but distro upgrades have nuked "fancy" tools enough times for it to not be worth it.
https://proot-me.github.io/
Wow, that is so cool. This looks a lot more approachable than other sandboxing tools.
Agreed, but some are nice enough that I'll make sure I get them installed where I can. 'ag' is my go-to fast grep, and I get it installed on anything I use a lot.
How hard is it to set up your tooling?
I have a chef cookbook that sets up all the tools I like to have on my VMs. When I bootstrap a VM it includes all the stuff I want like fish shell and other things that aren’t standard. The chef cookbook also manages my SSH keys and settings.
For some people the "uphill battle" is the fun part
I indeed would not want to feel stranded with a bespoke toolkit. But I also don't think shying away from good tools is the answer. Generally I think using better tools is the way to go.
Often there are plenty of paths open to getting a decent environment as you go:
Mostly, I rely on ansible scripts to install and configure the tools I use.
One fallback I haven't seen mentioned that can get a lot of mileage: use sshfs to mount the target system locally. This lets you use your local tools & setup effectively against another machine!
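The sshfs trick looks roughly like this (a sketch only: the host `prodbox`, the paths, and the search are hypothetical, and it assumes sshfs/FUSE is installed locally):

```shell
# mount a remote directory locally over SSH (FUSE, typically no root needed)
mkdir -p ~/mnt/prodbox
sshfs user@prodbox:/var/log ~/mnt/prodbox

# now run your local, fully configured tools against the remote files
rg -i 'error' ~/mnt/prodbox/syslog

# unmount when done (Linux; on macOS use `umount ~/mnt/prodbox`)
fusermount -u ~/mnt/prodbox
```

The remote box only needs a working sshd; all the "modern" tooling stays on your side of the connection.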
Along those lines, Dvorak layouts are more efficient, but I use qwerty because it works pretty much everywhere. (Are small changes like AZERTY still a thing? Certainly our French office uses an "international" layout, and generally the main pains internationally are "@" being in the wrong place and \ not working; for the latter you can use user@domain when logging into a Windows machine, rather than domain\user.)
I've been using Dvorak for 24 years. 99% of the time I'm using my own machines, so it's fine. For the other 1% I can hunt-and-peck QWERTY well enough.
I do it at least for ripgrep.
Fzf has saved me so much time over the years. It's so good.
Yes, good point. I use it all the time too. Plus fzf-lua in neovim which depends on it.
"I don't want to be a product of my environment. I want my environment to be a product of me."
so right.