I was hoping for a review from a server perspective. That's where Debian shines, in my opinion; I feel like the desktop is a secondary priority for them. That's not a criticism: there's no other distribution I would use in production if it were my choice. On the desktop though they are a bit too stable. Even if one uses testing or unstable, the focus on long-term versions is still there.

Long term usually equates to a bit stale/out of date with distributions that only release every few years. Appropriate for stuff you don't really care about.

That's why I use rolling-release distributions on my desktop. For Debian, people usually recommend Debian testing, and that's fine. Maybe they should just call it Debian Rolling and rename stable to Debian LTS; I think that better matches how people actually use these things.

Manjaro is not without issues, but I've had it on one of my laptops for the last four years and it's nice to have the latest driver updates, kernels, etc. working together. It also helps that the community is focused on current versions of things and on fixing minor integration issues with released packages, rather than working around issues in some long-forgotten release with distribution-specific patches, etc. You find relatively little of that in Arch (which underlies Manjaro).

For production servers, the server just needs to boot my Docker containers and get out of the way. IMHO, there's no need to support >10K packages for god knows what there; most of that stuff probably has no business being installed on a server. I'm actually leaning towards immutable distributions on servers for that reason. Manually fiddling with servers in a production environment is something I'm trying to avoid/do less of. They shouldn't need a package manager if they are properly immutable.
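
To illustrate, that's roughly the host's entire job. A sketch, assuming Docker is already installed; the image name and ports are made up:

    # start the workload as a self-restarting container
    # and let the host do nothing else
    docker run -d \
      --name myapp \
      --restart unless-stopped \
      -p 80:8080 \
      registry.example.com/myapp:1.4.2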

> On the desktop though they are a bit too stable.

You're obviously correct here. But perhaps there are users who prefer stable packages on the desktop too, corporate users most likely (yes, there are such users too). It helps with their security strategy and gives them a development environment similar to their servers.

To be very honest, I think the stable, security-oriented approach is better than that of a rapid-update distro. You should probably use an overlay package manager like Flatpak, mise (for dev tools) or even Nix/Guix for anything modern; preferably something with minimal installs and good sandboxing. If anybody has better suggestions, please let us know.
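
A rough sketch of what I mean, with package names purely as examples; each of these layers newer software over the stable base without touching apt:

    # Flatpak: sandboxed desktop apps from Flathub
    flatpak install flathub org.mozilla.firefox
    # mise: per-user dev toolchains, pinned globally or per project
    mise use -g node@22
    # Nix (with flakes enabled): user-level packages alongside apt
    nix profile install nixpkgs#ripgrep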

I'm such a user. I've been mostly running Debian stable since the '90s, at work and privately. I cheated when I got a new computer at the beginning of August this year and installed Trixie a couple of weeks before release.

My reasoning is quite simple: I really don't need the latest versions of everything. Were computers useful two years ago? Yeah? OK, then a computer is obviously useful today with software that is two years old. I'll get the new software eventually, with most of the kinks ironed out, and I'll have had time to read up on the changes before they just hit me in the face.

Sure, it was a bit painful with hardware support some twenty years ago, but I can barely remember the last time that was an issue.

For the very few select pieces of software where stable doesn't quite cut it, there's backports, fasttrack and other side channels.
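
For the curious, backports are just an extra apt source plus an explicit target release. Something like this (using trixie as the example release; the package name is a placeholder):

    # enable backports for the current stable
    echo 'deb http://deb.debian.org/debian trixie-backports main' | \
        sudo tee /etc/apt/sources.list.d/backports.list
    sudo apt update
    # backported packages are only installed when requested explicitly
    sudo apt install -t trixie-backports <package>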

I prefer stable packages on my desktop and laptop, both for professional and for personal use. I hate the current JavaScript/Python/Rust bleeding-edge, left-pad, if-you-haven't-updated-to-yesterday's-latest-version-which-breaks-compatibility-with-everything culture.

I like to build things which last. I like to craft a software system and then use it for decades, moving it from machine to machine and intentionally upgrading the components at my pace.

Same opinion. I tried Fedora and I really liked it, but the constant package-cache updating frustrated me quickly. I just want something that works, that I can update without doing more than running the command.

I use Debian stable on my laptop and workstation. For most packages you don't need newer versions; I don't need the latest version of GNOME or gedit or whatever.

I don't understand why people like the rigmarole of constantly updating their systems. The only things that come down the wire are security updates.

Installing newer software can be managed. I use the following strategy (a rough sketch of the commands follows the list):

- Discord / Slack / <something that needs to be the newest>: I can normally use Flatpak.

- Third-party repos: for Brave, Node and some other things, I use the vendor's repository.

- Smaller open-source stuff that is easy to build, e.g. Vim / Neovim: I just compile it from source so I have the newest versions.

- Python apps / npm tooling: I install them in my local user directory.

- Docker is installed in rootless mode.
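
Roughly, the commands behind that list look like this (app and package names are just examples):

    # Flatpak for fast-moving desktop apps
    flatpak install flathub com.discordapp.Discord
    # Python / npm tools kept in the home directory
    pipx install httpie
    npm config set prefix "$HOME/.local"
    # rootless Docker for the current user
    # (the setup tool ships with Docker's rootless extras)
    dockerd-rootless-setuptool.sh install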

> On the desktop though they are a bit too stable.

>> You're obviously correct here.

It's neither obvious nor correct; the expected trade-off between stability and features is completely subjective. I run Debian stable on my desktop because I've almost never needed newer versions of anything, and when I did I could usually jump to testing (i.e. the upcoming release) rather than unstable, and even then the next release usually wasn't that far away, so it was still very stable.

As other commenters have pointed out, you can run Debian Sid (unstable), but I'll also agree that if that is what you want long-term then maybe running something like Arch makes more sense anyway.

I'm one of those users, but only because I don't need to be on the bleeding edge.

The only problem I had on the Debian 11 desktop was related to the new OpenSSL libraries: I could not install the latest Nodes and Rubies because 11 had older libraries. However, there are workarounds based on setting some environment variables (from memory: some legacy_providers_*), so after a little googling I made them work on my dev machine (and on an old server belonging to a customer of mine). I'm installing Debian 13 these days, so no more workarounds, for a few years.

Everything else worked fine. I don't install much on this machine: no Flatpaks, no AppImages, no snaps (I left Ubuntu because of them). Only debs and Docker images. I install languages through their version managers, never through the OS: through the OS I could have only one version of them, which is useless. Same for databases; there are hardly two projects on the same language and DB version. I could be using LibreOffice and GIMP from 20 years ago: they already had all the features I need.
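
Concretely, that means each project pins its own runtime and database, along these lines (versions and names made up; rbenv shown as one example of a version manager):

    # per-project language version via a version manager
    rbenv install 3.3.4 && rbenv local 3.3.4
    # per-project database as a container pinned to the version
    # the project needs, on its own port
    docker run -d --name projA-pg -p 5433:5432 \
        -e POSTGRES_PASSWORD=dev postgres:15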

I use incus for my dev needs. But for work computers, I’ve mostly needed one version of everything.

In my experience, corporate users have moved on to using containers (or VMs) for their development environments.
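
For example, either of these already gets you most of the way (container names made up; assuming incus or Docker is set up):

    # a disposable Debian dev environment with incus...
    incus launch images:debian/12 devbox
    incus exec devbox -- bash
    # ...or a throwaway Docker container with the project mounted in
    docker run -it --rm -v "$PWD":/src -w /src debian:stable bash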

It's a tricky thing to solve. On the one hand, you don't want your system to stop working due to an update, but you also want to keep the software you use updated, both in terms of security and functionality.

Mark Shuttleworth talked about this many years ago, before snaps were introduced as a solution to it. The idea at the time was that a rolling-release distro is too much of a hassle to maintain and even the six-month cycle was getting to be too much. So he talked about having a stable core with a long release cycle and rolling releases for software that needs to be frequently updated, both desktop and server software. The idea was great, but the details of the execution left a bitter taste for many users.

Atomic distributions can be a nice solution for that. But the current portal ecosystem is still a bit poor for integration between Flatpaks.

Indeed, although with the tmpfs move (/tmp in RAM) it sounds like they have desktops more in mind.

You don't want to use RAM for tmp files whose size you probably can't capacity-plan, and you don't want to enable swap on a server either.

I honestly don't understand that change, as most desktops are RAM-limited as well, especially since Debian is regularly used on older machines, which aren't supported by Windows 11 anymore.
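
For what it's worth, the change is easy to opt out of on those machines; as far as I understand, masking systemd's tmp.mount unit keeps /tmp on disk:

    # keep /tmp on disk instead of tmpfs; takes effect after the next boot
    sudo systemctl mask tmp.mount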

Is it common for scripts to download multiple gigabytes to /tmp?

I sometimes manually changed /tmp to be in memory, or used /dev/shm, which is in memory by default. I haven't run into any problems just yet, but then again it's just a home server.

Not sure about scripts, but I download and store everything I know I'll only need until the next reboot in /tmp, and naturally that tends to be quite a lot from time to time. That worked fine for decades, so I'm not sure what the benefit of storing the contents of /tmp in memory is.

Now you can use /var/tmp, I think.