Every time such a list is posted it tends to generate a lot of debate, but I do think there are at least two tools that are a genuinely good addition to any terminal:
`fd`: first, I find the argument semantics way better than `find`'s, but that is more a bonus than a real killer feature. It also being much, much faster than `find` on most setups is something I'd consider a valuable feature. But the killer feature for me is the `-x` argument. It calls another command on each individual search result, which `find` can also do with `xargs` and co. But `fd` provides a very nice placeholder syntax[0], which removes the need to mess with `basename` and co. to parse the filename and build a new one, and it executes in parallel. For example, it makes converting a batch of images a fast and readable one-liner: `fd -e jpg -x cjxl {} {.}.jxl`
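For comparison, here is roughly what the same batch conversion looks like with plain `find` (assuming GNU find and a POSIX shell; this is the kind of incantation the placeholders save you from):

    # Strip the .jpg extension by hand with parameter expansion,
    # one file at a time, no parallelism:
    find . -name '*.jpg' -exec sh -c 'cjxl "$1" "${1%.jpg}.jxl"' _ {} \;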
`rg`, a.k.a. `ripgrep`: honestly, it is just about the speed. It is so much faster than `grep` when searching through a directory that it opens up a lot of possibilities. Like, searching for `isLoading` on my frontend (~3444 files) is instant with rg (less than 0.10s) but takes a few minutes with grep.
But there is one other thing that I really like about `ripgrep` and that I think should be a feature of any "modern" CLI tool: it can format its output as JSON. Not that I am a big fan of JSON, but at least it is a well-defined exchange format. "Classic" CLI tools just output a "human-readable" format which might happen to be "machine-readable" if you mess with `awk` and `sed` enough, but that makes piping and scripting that much more annoying and error-prone. Being able to output JSON, `jq` it and feed it to the next tool is so much better and feels like the missing link of the terminal.
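For instance, ripgrep's `--json` flag emits one JSON object per event (begin, match, end, summary), which `jq` can then filter; reusing the pattern from above:

    # List only the files containing a match, via the JSON output:
    rg --json isLoading | jq -r 'select(.type == "match") | .data.path.text' | sort -u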
The big advantage of the CLI is that it is composable and scriptable by default. But it is missing a common exchange format to pass data around, and that is what you have to wrangle with a lot of the time when scripting. Having JSON, never mind all the gripes I have with the format, really joins everything together.
Also, an honorable mention for `zellij`, which I find to be a much saner alternative to `tmux` UX-wise, and the `helix` text editor, which for me is neovim but with, again, a better UX (especially for beginners) and a lot more batteries-included features, while remaining faster (in my experience) than nvim with the matching plugins for feature parity.
EDIT: I would also add difftastic ( https://github.com/Wilfred/difftastic ), which is a syntax-aware diff tool. I don't use it much, but it does make some diffs so, so much easier to read.
[0] https://github.com/sharkdp/fd?tab=readme-ov-file#placeholder...
I briefly resisted the notion that fd and ripgrep were useful when a friend suggested them.
Then I tried them and it was such a night and day performance difference that they're now immediate installs on any new system I use.
> Like, searching for `isLoading` on my frontend (~3444 files) is instant with rg (less than 0.10s) but takes a few minutes with grep.
grep will try to search inside .git. If your project is JavaScript, it might be searching inside node_modules, or .venv if it's Python. ripgrep ignores hidden files, .gitignore and .ignore. You could try using `git grep` instead; ripgrep will still be faster, but the difference won't be as dramatic.
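To approximate ripgrep's default filtering with the classic tools (the exclude flags are GNU grep's, and the brace expansion assumes bash/zsh):

    # GNU grep, skipping the usual offenders:
    grep -r --exclude-dir={.git,node_modules,.venv} isLoading .

    # git grep only looks at tracked files, so it skips them automatically:
    git grep isLoading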
While that's sometimes a plausible explanation for huge differences in search time, there are at least two other possible explanations.
First is parallelism. If you do `grep -r`, most greps (like GNU grep and the BSD grep found on macOS) will not use parallelism. ripgrep will. This alone could easily account for a large perceived difference in search time.
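If you want to isolate that variable, ripgrep lets you pin the thread count, so something like this makes for a fairer comparison:

    # Force ripgrep down to a single thread before comparing:
    time rg -j1 'isLoading' .
    time grep -r 'isLoading' .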
Second is the grep implementation. While GNU grep is generally considered to be quite fast on its own (especially in the POSIX locale), many other grep implementations are not. And indeed, they may be extraordinarily slow. For example, consider this comparison between FreeBSD grep (on macOS) and ripgrep (I clipped the `time` output for the `wc -l` command, which I just used to make the output smaller):
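In outline, with a placeholder pattern and corpus file standing in for the real ones (the actual timings are deliberately not reproduced here):

    # Same pattern, same single file, one tool at a time:
    time grep 'pattern' corpus.txt | wc -l
    time rg 'pattern' corpus.txt | wc -l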
This is just a straight-up comparison that removes things like parallelism and filtering completely. This is purely algorithms, and the difference is an order of magnitude. Any one of these explanations, on its own, can account for huge differences in perceived performance. It's not really possible to know which one (perhaps even all three) is relevant from pithy descriptions of ripgrep being so much faster than other tools.
Version info for above commands:
No, I specifically made sure to run it in a dir without a .git, node_modules, etc. It is just that slow.
> But the killer feature for me is the `-x` argument. It calls another command on each individual search result, which `find` can also do with `xargs` and co. But `fd` provides a very nice placeholder syntax[0], which removes the need to mess with `basename` and co. to parse the filename and build a new one, and it executes in parallel. For example, it makes converting a batch of images a fast and readable one-liner: `fd -e jpg -x cjxl {} {.}.jxl`
That was inherited from `find`, which has `-exec`. It even uses the same placeholder, `{}`, though I'm not sure about `{.}`.
`find` only supports `{}`; it does not support `{/}`, `{//}`, `{.}`, etc., which is why you often need to do some parsing magic to replicate basic things such as "the full path without the extension", "only the filename without the extension", etc.
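From fd's README[0], the full set looks like this:

    # fd's placeholders:
    #   {}    full path                  {/}   basename
    #   {//}  parent directory           {.}   path without extension
    #   {/.}  basename without extension
    fd -e jpg -x echo 'stem: {/.}'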
I think GNU parallel has similar placeholders, but I do prefer to just use `fd`.
I think it does, and tbf, `fd` is basically `find` + `parallel`, but I do find it nice that it is just one tool and I don't need GNU parallel :)
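For what it's worth, GNU parallel uses the same replacement strings ({}, {.}, {/}, {//}, {/.}), so the jpg-to-jxl one-liner above could be approximated as:

    # find feeds the file list; parallel handles placeholders and concurrency:
    find . -name '*.jpg' | parallel cjxl {} {.}.jxl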
Great tools like dasel are format agnostic on input and output.
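e.g. (from memory of dasel's flags, so treat the exact syntax as a sketch; `-r` picks the input format and `-w` the output format):

    # Convert JSON on stdin to YAML on stdout:
    echo '{"user": {"name": "alice"}}' | dasel -r json -w yaml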