curl|sh and iwr|iex chill my spine; no one should recommend these methods of installation in 2025. I'm against closed computers, but I'm also against reckless installs. Even setting aside the security concerns, this style of installation tends to scatter files in random places, making them hard to manage and clean up.

Installing an out-of-distro deb/rpm/msi/dmg/etc. package is just as unsafe as curl|sh. Or even less safe, as packages tend to require root/admin.

A package is at least a signable, checksummable artefact. The curl | sh thing could have been anything, and after running it you have no record of what you actually executed.

There have also been PoCs of servers that serve malicious content only when the response is being piped to sh rather than saved to a file.

If you want to execute shell code from the internet, at the very least save it to a file first, and keep that file somewhere persistent before executing it. It will make forensics easier.
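
A minimal sketch of that pattern (the function name, directory layout, and pager handling are all illustrative, not a blessed tool): save the script somewhere persistent, record its checksum, read it, and only then run it.

```shell
# fetch_then_run: download an install script to a persistent location,
# keep a checksum as a forensic record, and read it before executing.
fetch_then_run() {
  url="$1"
  dest="$HOME/installers/$(date +%Y%m%d)-$(basename "$url")"
  mkdir -p "$(dirname "$dest")"
  curl -fsSL "$url" -o "$dest"         # save to a file, do NOT pipe to sh
  sha256sum "$dest" > "$dest.sha256"   # record what you actually ran
  ${PAGER:-less} "$dest"               # inspect it first
  sh "$dest"                           # only now execute
}

# Usage: fetch_then_run https://example.com/install.sh
```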

I've seen deb files that do everything in a post-install script. There's no way to identify this before downloading them (they came from a hosted repo and were signed). Some of them download files based on an internal manifest, others just run `pip install`, and others download a list of other files that need to be downloaded.
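
For what it's worth, you can at least read those maintainer scripts after downloading but before installing; a .deb is just an ar(1) archive. A rough sketch (`package.deb` is a placeholder filename; `dpkg-deb -e package.deb ctrl` does the same job and also handles xz/zstd-compressed control archives):

```shell
# A .deb is an ar archive; preinst/postinst live in the control tarball.
ar x package.deb control.tar.gz   # pull out the control archive
tar -tzf control.tar.gz           # list members: control, md5sums, postinst, ...
tar -xzf control.tar.gz           # extract, then read postinst/preinst in a pager
```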

There's no guarantee packages are actually making use of package features in any reasonable way, other than convention.

If you're going to run code without inspecting it, though, the two methods are similar. One case has HTTPS; the other has a signature (which you trust because you obtained the key over HTTPS). In neither case can you reliably inspect it; you only learn what it did after you've hypothetically been compromised.

Security and auditability is not the core problem, it's versioning and uninstalling. https://docs.sweeting.me/s/against-curl-sh

Also file conflicts. Installing an RPM/ALPM/APK package should warn you before it clobbers existing files. But for a one-off install script, all it takes is a missing environment variable, a stray space, or a typo (`mv /etc/$INSTAALCONF /tmp`, `chown -R root /$MY_DATA_PATFH`), and suddenly you can't log in.
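
The unset-variable failure mode in miniature (variable names are made up, and the dangerous command is echoed rather than run). A script that opts into `set -u` (nounset) dies on the unset variable instead of silently doing the wrong thing:

```shell
# An unset variable silently expands to nothing...
unset INSTALL_CONF
echo mv "/etc/$INSTALL_CONF" /tmp    # prints: mv /etc/ /tmp

# ...unless the script uses `set -u`, which aborts instead:
( set -u; echo "/etc/$INSTALL_CONF" ) 2>/dev/null || echo "caught unset variable"
```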

Of course, unpredictability is itself a security problem. I'm not even supposed to run partial updates of packages that at least come from the same repository. I ain't gonna shovel random shell scripts into the mix and hope for the best.

Uninstalling can be a problem.

Versioning, OTOH, is often more problematic with distro package managers, which can't support multiple versions of the same package.

Also, the inability to do per-user installs is a big problem with distro package managers.

That is still checked against its signature; the only things you bypass are the automatic download over HTTP and, by default, dependency resolution.

While I do share the sentiment, I firmly believe that for open source, no one should require the author to distribute their software, or even ask them to provide OS-specific installation methods. They wrote it for free; use it or don't. They provide a handy install script - don't like it? Sure, grab the source and build it yourself. Oops, you don't know what the software does? Gotta read every line of it, right?

Maybe if you trust the software, then trusting the install script isn't that big of a stretch?

For a small open-source project with a CLI audience, why bother with an install script at all? Just provide tarballs/ZIP files and assume the CLI audience is smart enough to untar/unzip them to somewhere on their PATH.

Also, many of the "distribution" tools like brew, scoop, winget, and more are just "PR a YAML file with your zip file URL, the name of the EXE to add to PATH, and a checksum of the zip to this git repository". We're at about the minimum effort ever needed to produce a "distribution" point in software history, so it's interesting that shell scripts that install things seem to have picked up instead.

> Maybe if you trust the software, then trusting the install script isn't that big of a stretch?

The software is not written in a scripting language where a forgotten pair of quote marks regularly causes silent `rm -rf /` incidents. And even then, I generally don't explicitly point the software at my system root/home and tell it to go wild.

Part of writing software involves writing a way to deploy that software to a computer. Piping a web URL into a bash interpreter is not good enough. If that's the best installer you can do, the rest of your code is probably trash.

It's not the best installer they can come up with. It's just the most OS/distro-agnostic one-step installer they can come up with.

It's so not, though. Half the time, if you read one of those install scripts, it's just an `if`-chain covering the small number of platforms the developer has tested, and it breaks if you use a different distro or version.
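
A sketch of the typical shape (the target-triple names are modeled on common release naming; the real scripts vary): one `case` over `uname` output, and everything else falls straight into the error branch.

```shell
# The heart of many one-line installers: map `uname` output to a
# pre-built-binary name, with a hard failure for everything else.
detect_target() {
  case "$1-$2" in
    Linux-x86_64)   echo "x86_64-unknown-linux-gnu" ;;
    Linux-aarch64)  echo "aarch64-unknown-linux-gnu" ;;
    Darwin-x86_64)  echo "x86_64-apple-darwin" ;;
    Darwin-arm64)   echo "aarch64-apple-darwin" ;;
    *)              echo "unsupported platform: $1-$2" >&2; return 1 ;;
  esac
}

# Usage: detect_target "$(uname -s)" "$(uname -m)"
```

Anything the developer didn't enumerate (a BSD, an unusual arch, a musl-only distro the binaries weren't linked for) simply isn't handled.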

uv is pretty self-contained; there aren't a lot of ways a weird Linux distro could break it or its installer, aside from not providing any of the three user-owned paths it tries to install uv into (it doesn't try to do anything with elevated privileges or install for anyone other than the current user). Expecting $HOME and your own shell profile to be writable just isn't something that's going to break very often.

Looking at the install script or at a release page (eg. https://github.com/astral-sh/uv/releases/tag/0.9.6 ) shows they have pretty broad hardware support in their pre-compiled binaries. The most plausible route to being disappointed by the versatility of this install script is probably if you're running an OS that's not Linux, macOS, or Windows—but then, the README is pretty clear about enumerating those three as the supported operating systems.

What would you suggest as a recommended method of installation in 2025?

You can `pip install uv`, or manually download and extract the right `uv-*.tar.gz` file from GitHub: https://github.com/astral-sh/uv/releases

That iwr|iex example is especially egregious because it hardcodes the pre-7.0 PowerShell EXE name in order to pass `-ExecutionPolicy Bypass`. So it'll fail on Linux or macOS, but more importantly, iwr|iex is already an execution-policy bypass, so including a second one seems like a red flag to me. (What else is it downloading?)

Also, most reasonable developers should already be running with ExecutionPolicy RemoteSigned. It would be nice if code-signing these install scripts were a little more common, too. (There was even a proposal for icm [Invoke-Command] to take signed script URLs directly, as a much safer, code-golfed alternative to iwr|iex. Maybe that proposal should be picked back up.)

Maybe there will be a .deb one day.

That doesn't fix the core issue. You can put anything inside a .deb file; even a preinstall script can send your ~/.aws/credentials to China. The core concern is getting a package that's been verified by a volunteer human not to contain anything malicious, and then getting that package into the Debian repository or equivalent.

How is it even different from running a pre-compiled binary?

Can't you just do curl|more and then review what it's going to do? Then, once you're convinced, go back to curl|sh.

/just guessing, haven't tried it

A malicious server could detect whether the user is actually running "curl | sh" instead of just "curl" and only serve a malicious shell script when the code is executed blindly. See this thread for reference: https://news.ycombinator.com/item?id=17636032

Well, you still have to execute the shell script at some point. You could do `curl > install.sh`, open it up to inspect it, and then run the install script, which would still trigger the callback to the server mentioned in the link you posted. I guess it's really up to the user to decide what programs to run and not run.

For real. You want to pipe a random URL into my bash interpreter to install?

No. That's how you get malware. Make a package. Add it to a distro. Then we'll talk.