For me the interesting alternate reality is where CPUs got stuck in the 200-400 MHz range for speed, but somehow continued to become more efficient.
It’s kind of the ideal combination in some ways. It’s fast enough to competently run a nice desktop GUI, but not so fast that you can get overly fancy with it. Eventually you’d end up with OSes that look like highly refined versions of System 7.6/Mac OS 8 or Windows 2000, which sounds lovely.
I loved System 7 for its simplicity yet all of the potential it had for individual developers.
HyperCard was absolutely dope as an entry-level programming environment.
The classic Mac OS model in general is, I think, the best there has been or ever will be in terms of sheer practical user power/control/customization, thanks to its extension- and control-panel-based architecture. Sure, it was a security nightmare, but there was practically nothing that couldn’t be achieved by installing some combination of third-party extensions.
Even modern desktop Linux pales in comparison: although it’s technically possible to change anything imaginable about it, to do a lot of what extensions did you’re looking at, at minimum, writing your own DE/compositor/etc., and at worst tweaking a whole stack of layers or wading through kernel code. Not really accessible to the general user.
Because extensions could change anything imaginable, often did so with tiny niche tweaks, and all targeted the same system, any moderately technical person could stack extensions (or, conversely, disable system-provided ones that implemented a lot of stock functionality) and end up with a hyper-personalized system without ever writing a line of code or opening a terminal. It was beautiful, even if it was unstable.
I’m not too nostalgic for an OS that only had cooperative scheduling. I don’t miss the days of Conflict Catcher, or of having to order my extensions correctly. Illegal instruction? Program accessed a dangling pointer? A bomb message locked up your whole computer and you had to restart (unless you had a non-stock debugger attached and could run ExitToShell, but no promises there).
It had major flaws for sure, but also some excellent concepts that I wish could've found a way to survive through to the modern day. Modern operating systems may be stable and secure, but they're also far more complex, inflexible, generic, and inaccessible and don't empower users to anywhere near the extent they could.
Given enough power and space efficiency, you’d start putting multiple CPUs together for specialized tasks. Distributed computing could have looked very different.
This is more or less what we have now. Even a very pedestrian laptop has 8 cores. If, 10 years ago, you had wanted to develop software for today’s laptop, you’d have needed a 32-gigabyte, 8-core machine with a high-end GPU, plus a very fast RAID array to get close to an NVMe drive.
Computers have been “fast enough” for a very long time now. I recently retired a Mac not because it was too slow but because the OS is no longer getting security patches. While CPUs haven’t gotten twice as fast at single-threaded code every couple of years, cores have become more numerous, and extracting performance requires writing code that distributes work well across increasingly large core pools.
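To make “distributing work across the core pool” concrete, here’s a rough Swift/GCD sketch (my own toy example, not anything from the parent comment): a big reduction split into one slice per core.

```swift
import Foundation

// Rough sketch: sum a big array by giving each core its own slice.
// The workload is a stand-in; the shape (chunk, run in parallel, merge)
// is the part that matters.
let data = (0..<10_000_000).map { Double($0) }
let cores = ProcessInfo.processInfo.activeProcessorCount
let chunk = (data.count + cores - 1) / cores

var partials = [Double](repeating: 0, count: cores)
partials.withUnsafeMutableBufferPointer { buffer in
    // concurrentPerform blocks until every iteration finishes; each
    // iteration writes only its own slot, so no locking is needed.
    DispatchQueue.concurrentPerform(iterations: cores) { i in
        let lo = i * chunk
        let hi = min(lo + chunk, data.count)
        buffer[i] = lo < hi ? data[lo..<hi].reduce(0, +) : 0
    }
}

print("sum:", partials.reduce(0, +))
```

The point is less the particular GCD call than the shape of the code: the work has to be chunked and made independent before any of those extra cores do you any good.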
This was the Amiga. Custom coprocessors for sound, video, etc.
The Commodore 64 and Ataris had intelligent peripherals. Commodore’s drives knew about the filesystem and could stream the contents of a file to the computer without the computer ever becoming aware of where the files were on the disk. They could also copy data from one disk to another without the computer being involved.
Mainframes are also like that: while a PDP-11 would be interrupted every time a user at a terminal pressed a key, IBM systems offloaded that to the terminals, which kept one or more screens in memory and sent the data to another computer, a terminal controller, which would then, and only then, disturb the all-important mainframe with the mundane needs of its users.
There's something to this. The 200-400MHz era was roughly where hardware capability and software ambition were in balance — the OS did what you asked, no more.
What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free. An alternate timeline where CPUs got efficient but RAM stayed expensive would be fascinating — you'd probably see something like Plan 9's philosophy win out, with tiny focused processes communicating over clean interfaces instead of monolithic apps loading entire browser engines to show a chat window.
The irony is that embedded and mobile development partially lives in that world. The best iOS and Android apps feel exactly like your description — refined, responsive, deliberate. The constraint forces good design.
Lots of good practices! I remember how aggressively iPhoneOS would kill your application when you got close to running out of physical memory, or how you had to quickly serialize state when the user switched apps (no background execution, after all!). And, for better or for worse, it was native code, because you couldn’t, and still can’t, get a “good enough” JITing language.
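For anyone who never wrote for that era, here’s roughly what those two habits looked like, sketched with today’s UIApplicationDelegate method names for the old callbacks (the “restorableState” key and the toy state dictionary are placeholders, not anything Apple defines):

```swift
import UIKit

// Sketch of the two habits mentioned above, using modern
// UIApplicationDelegate method names for the old iPhoneOS callbacks.
class AppDelegate: UIResponder, UIApplicationDelegate {

    // Low-memory warning: drop anything you can rebuild, or the system
    // kills the process outright.
    func applicationDidReceiveMemoryWarning(_ application: UIApplication) {
        URLCache.shared.removeAllCachedResponses()
    }

    // The user is switching away; with no background execution you get
    // one shot at persisting state before being suspended or killed.
    func applicationWillResignActive(_ application: UIApplication) {
        let state: [String: Any] = ["lastScreen": "inbox"] // placeholder state
        UserDefaults.standard.set(state, forKey: "restorableState")
    }
}
```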