Had a lot of fun reading that.
I'm a bit of a hoarder when it comes to technology; truth be told, there's a certain rose-tinted nostalgia I get from thinking about early 00s technology.
It was still the era when UIs felt immediate and snappy. Anything related to actual computation or the internet was janky and slow, but it had a whiff of a hopeful future about it. Every PC upgrade made things snappier back then... Now I dread upgrades.
Hey ho.
It's endearing to know that one more bit of early 00s technology has been given a new lease on life. Would be cool to write some native software for it!
This speaks to me painfully. On the hoarding front, I was convinced to get rid of some old stuff and now honestly regret giving up my old Bondi Blue PM G3 400. But on a more recent note, I do still have an original Mac Pro and recently had reason to set it up again with 10.6 Server.
Firing that up and getting back in really hit me like a ton of bricks. Yes, there are all sorts of capabilities available now on the newest systems, but the UX there was just so much more pleasant in a number of tiny ways.
You've said it well too; in fact, I'd go further and say there was far more than a mere "whiff" of a hopeful future. I looked forward very much to each new release, with concrete, useful improvements and the promise of more. In retrospect, that was actually the pinnacle of Apple's (genuinely quite decent and promising) server efforts, of a potential alternate worldline where they positioned themselves as a strong solution for running things without internet dependencies and required subscriptions. The MP continued to iterate along reliably and nicely until 2010, at which point it died off as a focus at Apple with absolutely agonizing slowness, and with it the dream of a continued upward progression in market support for value-oriented hardware capabilities beyond what Apple themselves deigned to offer.
Sigh.
I’m sure that snappiness is possible in modern software, but nobody really seems to pursue it outside of hyper-minimal Linux desktops which aren’t everybody’s cup of tea.
Not that GNOME, KDE, XFCE, etc. on modern machines are bad exactly, but you definitely feel a considerable amount of extra latency everywhere vs. e.g. a 500MHz PowerBook G3 running OS 9 or OS X 10.2-10.4, which drags the experience down. I’m sure some degree of the latency increase is unavoidable thanks to all of the layers involved in the Linux stack as well as compositing and all that, but I’d bet that a considerable amount could be optimized away if there were a concerted effort to do so.
> you definitely feel a considerable amount of extra latency everywhere vs. e.g. a 500MHz PowerBook G3 running OS 9 or OS X 10.2-10.4
Odd thing...
While I agree regarding the snappiness of older OSes, the Mac was for me always a bit of an odd exception.
I started on Macs in the 680x0 era and Mac System 6, and I worked on them through 7.x, 8.x, 9.x and into OS X.
For me, no PowerPC edition of either Classic or OS X ever felt as responsive as Classic on a 680x0 Mac. I narrowly missed out on a Quadra 840AV on Freecycle over 15 years ago and still regret it -- that was the fastest 68040 Mac ever made.
NeXTstep was of course originally built on and shipped on 68030 -- it's a CISC native OS. PowerPC Classic was always mostly running emulated 680x0 code.
I read analyses of Mach API calls explaining that calls on RISC were less efficient in their register usage, or something along those lines.
But then, Intel Macs came along. Mac OS X returned to x86 from PowerPC. And suddenly Mac OS X felt snappy again in a way it never did for me on PowerPC.
As an old-time Motorola user I was conflicted about Intel Macs. Macs weren't meant to be PCs. I didn't want Windows on a Mac. But the feeling of using 10.4 on Intel converted me: it felt snappy and responsive in a way Windows NT never did on Intel.
(NT was built on RISC and ported to Intel, the reverse of NeXTstep.)
This matches my experience, actually.
I thought I didn't like Apple computers, but in hindsight I remember feeling that the eMac/G5s I was using at school were clunky and slow compared to the contemporary Windows XP machines.
This was 2005, so XP on period-correct hardware was extremely lean in comparison.
In retrospect, I think the latency was a pretty substantial reason for this. I did not have nearly the same experience in 2012 when I bought my first MacBook Pro (which I purchased because it was a UNIX that could run Microsoft Office and our VPN software... and I've been a macOS user on and off ever since).
I never used a PowerPC Mac; I bought the first new computer they unveiled after the iPhone. But I did go through the Apple Silicon transition, and let me tell you, computing is great!
That M1 MacBook Air killed any sort of desire to get an iPad or any other computer, for that matter. I'm looking forward to upgrading this year or next, but somehow even that feels superfluous. Except for RAM. Damn low RAM.
I've actually been pleasantly surprised by how well most Linux stacks work in resource constrained environments.
I have a first-generation Framework - the ones with the shitty Intel CPUs that don't support proper S3 sleep, only the awful "modern standby" - and I often throttle the clock to 400 MHz to save battery. GNOME's performance doesn't degrade at all; the only place it feels "un-snappy" is when starting heavier apps.
I think I may be misremembering, but I got S3 working quite well in both Linux and Windows. This comment from Nirav Patel suggests it should work quite well (although unsupported by Intel as with all other modern Intel CPUs): https://news.ycombinator.com/item?id=31435132
Right now, I have enough small but annoying hardware issues with my Framework laptop that I'm not bothering to improve anything for myself, either software or hardware related. I currently use it only as a desktop, and don't have anything but Windows installed at the moment.
In the past I've always recommended Framework, and I still feel I can't just leave my negative tone above sitting without further comment.
<Start of rant>
All the issues I've had would be trivial fixes, except that I'm not in a country that Framework ships to, and they can't send to anywhere outside that list, or allow freight-forwarding. Even for simple, inexpensive parts. I don't know the reason for this, but I do genuinely trust that it is because of a real problem for them, not just a lack of effort.
Excepting that, I've always had very high-quality, prompt responses. I'm currently waiting until Framework expands their supported countries, I move country again, or I can arrange to send parts to someone I know who can forward them on to me.
From soon after purchase, I was silently enduring a rattling fan and the 1st-gen 1TB expansion card overheating issue. Recently, the 1st-gen backup battery design issue has suddenly made itself known to me, and now I'm more pessimistic. It seems all it takes is moving to an unsupported country to turn what was designed to be a repairable laptop into something less convenient to fix than my previous preferred choice (any of the more repairable, common enterprise laptops).
For anyone living in a supported country, I would still suggest a Framework laptop, but with slightly more hesitation than before. <End of rant>
Linux desktops certainly handle low-resource environments well, particularly compared to Windows, but even so they carry some omnipresent elevated latency relative to late-90s/early-00s commercial operating systems, which can be felt even on powerful hardware.
It’s hardly a dealbreaker and not even really a problem (which is probably why it’s still there) but making software instantaneous does wonders for improving how it feels to use.
Slow CPUs and GPUs work much better on Linux than on Windows, but when the problem is a shortage of RAM, Windows will hold out much longer before grinding to a halt.
In an era of soldered-on RAM, this is becoming a rather annoying problem. Unfortunately, Linux doesn't offer the same APIs Windows does for taking preventative action before running out of RAM. Windows' auto-growing page file also makes for a less crashy experience than Linux's static swap partitions (or fixed-size swap files). Plus, Windows ships with memory compression (its equivalent of zram/zswap) out of the box, configured to use both compression and disk swap to catch these situations.
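For the curious, here's a rough sketch of what wiring that up by hand looks like on Linux, assuming a kernel that ships the zram module and root privileges; in practice you'd more likely reach for zram-generator or your distro's packaged equivalent:

    #!/usr/bin/env python3
    # Rough sketch: set up a compressed-RAM swap device (zram) on Linux,
    # approximating the memory compression Windows enables by default.
    # Assumes the zram kernel module is available and the script runs as root.
    import subprocess
    from pathlib import Path

    def setup_zram(size: str = "4G", algo: str = "zstd") -> None:
        # Load the zram module with a single device, /dev/zram0.
        subprocess.run(["modprobe", "zram", "num_devices=1"], check=True)

        dev = Path("/sys/block/zram0")
        # Pick the compression algorithm (must be set before sizing the device)
        # and the uncompressed capacity; older kernels may only offer "lzo".
        (dev / "comp_algorithm").write_text(algo)
        (dev / "disksize").write_text(size)

        # Format the device as swap and enable it at a higher priority than any
        # disk-backed swap, so pages are compressed into RAM before hitting disk.
        subprocess.run(["mkswap", "/dev/zram0"], check=True)
        subprocess.run(["swapon", "--priority", "100", "/dev/zram0"], check=True)

    if __name__ == "__main__":
        setup_zram()

The priority bit is what approximates the behavior described above: compressed RAM absorbs memory pressure first, with disk swap as the backstop.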
I wish Windows wasn't such a slog on the CPU and GPU because making Linux work in low-memory situations is an absolute pain.
> making software instantaneous does wonders for improving how it feels to use.
When you phrase it like that, I'm actually more surprised Linux suffers from the extra latency. When most software is written to scratch an itch and optimization beyond "works on my machine" is for fun, you'd expect latency to evaporate over time.
It’s beyond my realm of expertise so I can’t speak confidently, but my hunch is that many of the responsiveness papercuts are cross-domain in nature, which makes them more difficult to track down and fix (not to mention muddies who’s responsible for fixing them).
Another big chunk likely comes down to the tradeoffs all the big DEs have made in favor of making development easier or improving DX. That's understandable, but at the same time it would be nice if at least one took a hardline stance against committing performance regressions and treated responsiveness as a chief concern, much as the operating systems of yesteryear did.
GNOME and KDE Plasma or whatever are horrible steps back in my experience. Laggy or slow loading of content in apps such as their app catalogs, plus hitching and lockups, feels gross on a capable modern system even with 24GB of RAM. Windows 11 isn't much better, honestly; I relate to the video showing Win2k vs. Win11, where apps take forever to load now.
Sure, they're pretty-looking, but so was stuff like Aero, which felt faster on worse hardware, so what is going on? I feel like around the mid-2000s we went down the path of putting heavy, slow, web-like user interfaces into the OS and dragged everything down with them, in favor of DPI scalability and easier (?) development. It's not really the OS either; it's the apps. We've been making trade-offs at the cost of usability and speed.
Some of the lighter DEs have problems too. Just about all of them take an extra beat or two to display their start-menu equivalents, for example, which feels like a solvable problem considering that systems of yore could do the same with almost no delay on very limited hardware.
I used my 2003-ish Power Mac G4 MDD sometime around 2014 before giving it to a friend, and I noticed how quickly everything responded. It kinda annoyed me how we've regressed in that regard.
> It was still the era when UIs felt immediate and snappy
I'll never forget my first smartphone. Whenever somebody called, it took so long for the screen to turn on that the other person would often hang up before I could answer. I'd just sit there with a ringing phone in my hand, waiting for its screen to turn on.
I'd really like to know what phone you had. My first attempt to buy one was a Windows-based phone; I remember asking them to let me check it out, so they grabbed a new one and turned it on... 5 minutes later it was still booting.
I deeply miss my Palm m100, with its snappy interface and ability to run all week on some AAA batteries.
It was a Samsung Galaxy Mini 2. I think it was the worst phone that was still vaguely usable.