The Megahertz Wars were an exciting time. Going from 75 MHz to 200 MHz meant that everything (CPU limited) ran 2x as fast (or better with architectural improvements).
Nothing since has packed nearly the impact with the exception of going from spinning disks to SSDs.
In my experience, SSDs had a bigger impact. Thanks to Wirth's Law (https://en.wikipedia.org/wiki/Wirth%27s_law), the steady across-the-board increase in processing power didn't translate into programs running much faster, e.g. Discord running on a modern computer isn't any more responsive (if anything, less) than an ICQ client was on a computer 25 years ago.
SSDs provided a huge bump in performance to each individual computer, but trickled their way into market saturation over a generation or two of computers, so you'd be effectively running the same software but in a much more responsive environment.
Anytime you upgraded from a 4-year-old computer to a new one back then - from 16 MHz to 90 MHz, or 75 MHz to 333 MHz, or 333 MHz to 1 GHz, or whatever - it was immediate, it was visceral.
SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.
The software, in those days, was similarly making much bigger leaps every few years: 256 colors to millions, higher resolutions, new capabilities (real-time spellcheck! a miracle at the time). A chat app isn't a great comparison. Games are the most extreme example - SimCity to SimCity 2000; Doom to Quake; Unreal Tournament to Battlefield 1942 - but consider also a 1995 web browser vs a 1999 one.
For me, at 52, I recall the SSD transformation as near miraculous. I never once felt that way about a CPU upgrade until getting an M1. I went from a Cyrix 5x86 133 (which was effectively a fast 486) to a Pentium II 266 and it just wasn't that impressive.
The drag down of swapping became almost a non-issue with the SSD changeover.
I suppose going from a //e to a IIgs was that kind of leap but that was more about the whole computer than a cpu.
Now I have to say, swapping to an SSD on my Windows machines at work was far less impressive than going to an SSD with my Macs. I sort of wrote that off as all the antivirus crap that was running. It was very disappointing compared to the transformation on the Mac. On my Macs it was like I suddenly heard the hallelujah chorus when I powered on.
I went from a 386DX-33 to a Pentium 75, and that wasn't a wild span of time. I'd argue that jump was way bigger than when I got an SSD (though I agree the SSD was a huge improvement).
I agree. There were only 2 game changing upgrades for me. One was hard disk to SSD. The other was x86 laptop to M1.
You really didn't feel Pentium 4 to Core 2 Duo was a 'game changer'?
Moving from floppy disk to hard disk was pretty big for me. :)
Hey, moving from cassette tape to floppy was also pretty awesome - random access speed demon!
That's my point, the software was getting bloated at least as fast as the CPUs were getting faster, so you had to upgrade to a new CPU every few years to run the latest software. With SSDs, there was a huge overlap in CPU speeds that may or may not have an SSD, so upgrading to one meant a huge performance boost, within the same set of runnable software.
Also, going from SimCity to SimCity 2000 was pre-bloat. Over the course of five years, the new version became significantly better than the original, yet both targeted the same 486 processor generation, which was brand new when the original SimCity was released and rather old by the time SimCity 2000 came out. Another five years later, SimCity 3000 added minimal functionality but required not just a Pentium processor, but a fast one.
I guess what I'm getting at is that a faster CPU means programs released after it will run better, but faster storage means that all programs, old and new, will run better.
I wouldn't call that bloat; certainly we've been complaining about software bloat as long as I've been into computers, but at that time, software was simply pushing the capabilities of the hardware, and often running into walls.
These days, we value developer productivity over performance optimization, so we have stuff like Electron apps. The reason behind it is that CPUs (and RAM quantity, for the most part) are so far ahead of regular desktop applications that it doesn't matter. In the 80s and 90s, the hardware could barely keep up with decently-optimized software that wanted to do anything interesting.
> That's my point, the software was getting bloated at least as fast as the CPUs were getting faster
I think there's a difference between bloat and actually useful features or performance.
For example, I started making music with computers in the early 90s. They were only powerful enough to control external equipment like synthesizers.
Nowadays, I can do everything I could do with all that equipment on an iPad! I would not call that bloat.
On the other hand, comparing MS Teams to say ICQ, yeah, a lot of that is bloat.
> in the early 90s. They were only powerful enough to control external equipment like synthesizers.
Tell that to ScreamTracker!
In case anyone's wondering:
https://youtu.be/roBkg-iPrbw
ScreamTracker was sampling. Great for the time, and much more accessible for the teenager I was than buying and controlling synths, but it was not exactly the same thing. More a competitor to the early Akai MPCs.
And we were mostly ripping those samples from records on cassettes and CDs, or other mods.
Well now that you mention that, my very first steps actually were with Soundmonitor on a C64, one of the OG trackers probably (even though not called tracker yet IIRC). I kind of forgot about that, as that was still very amateurish (I mean what I made with it, not the software).
https://www.c64-wiki.de/images/f/f1/rockmon3.png
Or also at https://www.youtube.com/watch?v=roBkg-iPrbw&t=400s in the video already linked below. And yes, I had to type in that listing.
There is definitely bloat. A few months ago I was messing about with making a QWERTY piano in a web page, and it was utterly unplayable due to the bloat-induced latency between the fingers and the ears.
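For reference, the bare Web Audio API version of this is tiny; a minimal sketch (the key-to-frequency map and envelope values are made up for illustration), where the remaining latency is roughly keydown delivery plus the audio output buffer:

    // Minimal QWERTY piano on the bare Web Audio API.
    const ctx = new AudioContext();
    const notes = { a: 261.63, s: 293.66, d: 329.63, f: 349.23, g: 392.00 }; // C4..G4
    document.addEventListener("keydown", (e) => {
      const freq = notes[e.key];
      if (!freq || e.repeat) return;
      if (ctx.state === "suspended") ctx.resume(); // browsers gate audio on a user gesture
      const osc = ctx.createOscillator();
      const gain = ctx.createGain();
      osc.frequency.value = freq;
      gain.gain.setValueAtTime(0.2, ctx.currentTime);
      gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.4); // short decay
      osc.connect(gain).connect(ctx.destination);
      osc.start();
      osc.stop(ctx.currentTime + 0.4);
    });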
> SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.
For me they were.
I still remember the first PC I put together for someone with a SSD.
I had a quite beefy machine at the time and it would take 30 seconds or more to boot Windows, and around 45s to fully load Photoshop.
Built this machine for someone with entirely low-end (think "i3", not "Celeron") components, but it was more than enough for what they wanted it for. It would hit the desktop in around 10 seconds, and Photoshop was ready to go in about 2 seconds.
(Or thereabouts--I did time it, but I'm remembering numbers from like a decade and a half ago.)
For a _lot_ of operations, the SSD made an order of magnitude difference. Blew my mind at the time.
SSDs came out after CPUs had stopped doubling (single-threaded) performance every 12-18 months or so.
So they were the only way left to get the kind of visceral improvement in user experience that CPU and platform upgrades delivered from the mid '90s to the very early '00s.
Just slapping a new SSD into a 3-year-old machine gave a different generation of computer nerds a similar experience.
Nothing could really match the night and day difference of an entire machine being double to triple the performance in a single upgrade though. Not even the upgrade from spinning disks to SSD. You'd go from a game being unplayable on your old PC to it being smooth as butter overnight. Not these 20% incremental improvements. Sure, load times didn't get too much better - but those started to matter more when the CPU upgrades were no longer a defining experience.
Sure, but what about once Photoshop was open? Aka where you spend most of your day after you start up your stuff?
Would you take the SSD and a 500 MHz processor, or a 2 GHz dual-core with a 7,200 or 10,000 RPM HDD? It's "some operations are faster" vs the "every single thing is wildly faster" of the every-few-years quadrupling (or more) of CPU performance, memory amounts, disk space, etc.
(45sec to load Photoshop also isn't tracking with my memory, though 30s-1min boot certainly is, but I'm not invested enough to go try to dig up my G4 PowerBook and test it out... :) )
C64 (1982) to Amiga (1985).
Never witnessed anything before or after with that kind of jump in specs.
Nah, I agree with him. Spinning disks were always a huge bottleneck (remember how long MS Word took to open?) and SSDs basically fixed that overnight. The CPU advancements were big, but software had a chance to "catch up" (i.e. get less efficient) because it was a gradual change. That didn't really happen with SSDs because the change was so sudden and big.
I'd say software never really "caught up" to the general slowness that we had to endure in the HDD era either. Even my 14 year old desktop starts Word in a few seconds compared to upwards of 60s in the 90s.
The closest I've seen is the shitty low-end Samsung Android tablet we got for our kids. It's soooo slow and laggy. I suspect it's the storage. And that was actually an upgrade over the Amazon Fire tablet we used to have, which was so slow it was literally unusable. Again, I suspect slow storage is the culprit.
> Discord running on a modern computer isn't any more responsive (if anything, less) than an ICQ client was on a computer 25 years ago.
The only thing more impressive than hardware engineers delivering continuous massive performance improvements for the past several decades is software engineers' ability to completely erase them with ever more bloated programs that do essentially the same thing.
You joke, but it really is more work. I've developed software in languages from assembly to JavaScript, and for any given functionality it's been easier to write it in RISC assembly language running directly on the hardware than to get something working reliably in JavaScript running on a framework in an interpreter in a VM in a web browser, where it's impossible to reliably know what a call is going to do, because everything is undocumented and untested.
One of the co-signers of the Agile Manifesto had previously stated that "The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer." (https://en.wikipedia.org/w/index.php?title=Ward_Cunningham#L...) I'm convinced that the Agile Manifesto was an attempt to post the most-wrong way to manage a software project, in hopes someone would correct it with the right answer, but instead it was adopted as-is.
Even with older, lower-level languages like C and COBOL '02, it's easier to do simple things like find a file, read the file, and draw the file on the screen as a raster image using a resizable canvas than it is to write the JavaScript to do the same thing.
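For concreteness, here's roughly what the browser-side version of that task looks like (a minimal sketch; the element IDs and the <input type="file"> picker are assumptions, not anyone's actual code):

    // Pick a file, decode it, draw it on a canvas that tracks its CSS size.
    // Assumes <input type="file" id="pick"> and <canvas id="view"> in the page.
    const input = document.getElementById("pick");
    const canvas = document.getElementById("view");
    const ctx2d = canvas.getContext("2d");
    let bitmap = null;

    function draw() {
      if (!bitmap) return;
      canvas.width = canvas.clientWidth;    // match backing store to display size
      canvas.height = canvas.clientHeight;
      ctx2d.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
    }

    input.addEventListener("change", async () => {
      if (!input.files[0]) return;
      bitmap = await createImageBitmap(input.files[0]); // decode the raster image
      draw();
    });

    new ResizeObserver(draw).observe(canvas); // redraw whenever the canvas resizes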
The mangling of JavaScript to fit through every hole seems to be the biggest mistake made in modern programming, and I'm not sure what even keeps it going aside from momentum. At first it regained ground because Flash was going EOL, but now?
What makes Agile the most-wrong way to manage, in your opinion? I'm curious.
What’s the most complex thing you wrote in RISC assembly?
> In my experience, SSDs had a bigger impact.
When SSDs became mainstream, yes, I agree they had a bigger impact than any CPU speed increases at that particular time.
But back in the double-digit MHz days of CPU speeds, upgrading your CPU was king when it came to better performance, and I'd argue that effect was more pronounced than the HDD-to-SSD transition was. It's hard to convey what huge jumps CPUs were making during that time period, and how big a difference it made.
I also remember a time, somewhere in the middle of that, when adding more RAM could be a bigger boost than a CPU upgrade. But back in the 80s and 90s (and prior, but I have no personal experience with that), there was only so much RAM you could add, and the CPU was still often what was holding you back.
But CPUs just haven't been the bottleneck for most home user workloads for a long time now. These days when I buy a new laptop, I certainly want the best CPU I can get, but I'm more concerned about how much RAM I can put in it, and the iGPU's specs. (SSDs are a given, so I don't need to think much about it.)
> Discord running on a modern computer isn't any more responsive (if anything, less) than an ICQ client was on a computer 25 years ago.
I feel this. Humanity has peaked.
Every time Discord updates (which is often) I'm like "cool, slightly more code to run on the same hardware..."
Agree 100%. The compute was always bottlenecked by insanely high I/O latency. SSDs opened up fast computers like no processor ever did.
Eh. In the 1980s and 1990s, the capabilities of the software you could run on your new computer were changing dramatically every two years or so. Completely new types of computer games and productivity software, vastly improved audio and video, more and more real-time functionality.
Nowadays, you really don't get these magical moments when you upgrade, not on the device itself. The upgrade from Windows 10 to Windows 11 was basically just more ads. Games released today look about as good as games released 5-10 years ago. The music-making or photo-editing program you installed back then is still good. Your email works the same as before. In fact, I'm not sure I have a single program on my desktop that feels more capable or more responsive than it did in 2016.
There's some magic with AI, but that's all in the cloud.
I mean, HDDs were much faster than floppy disks. Which were in turn much faster than tape cassettes. And so on...
This is silly. That's like saying that machines haven't gotten any better because a helicopter doesn't eat any less hay than a horse did.
I don't follow your analogy. Can you elaborate?
Debian Sarge, Kopete with KDE 3: 256 MB of RAM, AMD Athlon 2000.
Windows 11, Discord: 4 GB is not enough to run it well.
FYI, Kopete allowed inline LaTeX, YouTube videos (low-res, OK, 480p maybe, but it worked), emoticons, animations, videoconferencing, themes, maybe basic HTML tags and whatnot. And it ran fast.
> Nothing since has packed nearly the impact with the exception of going from spinning disks to SSDs.
"Bananas" core-counts gave me the same experience. Some year ago I moved to Ryzen Threadripper and experienced similar "Wow, compiling this project is now 4x faster" or "processing this TBs of data is now 8x faster", but of course it's very specific to specific workloads where concurrency and parallism is thought of from the ground up, not a general 2x speed up in everything.
> The Megahertz Wars were an exciting time.
About a week ago, completely out of the blue, YouTube recommended this old gem to me: https://www.youtube.com/watch?v=z0jQZxH7NgM
A Pentium 4, overclocked to 5GHz with liquid nitrogen cooling.
Watching this was such an amazing throwback. I remember clearly the last time I saw it, which was when an excited friend showed it to me on a PC at our schools library. A year or so before YouTube even existed.
By 2005, my Pentium 4 Prescott at home had some 3.6GHz without overclocking, 4GHz models for the consumer market were already announced (but plagued by delays), but surely 10GHz was "just a few more years away".
IIRC, part of the GHz problem is that very long pipelines like the Pentium 4's only show their benefits at higher clocks. If you can keep the pipeline full, the system reaps the rewards. Sort of like a drag racer - very fast in a straight line but terrible on corners.
But with longer pipelines comes larger penalties when the pipeline needs to be flushed, so the P4 eventually hit a wall and Intel returned to the late Pentium 3 Tualatin core, refining it into the Pentium M which later evolved into the first Core CPUs.
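The back-of-envelope version of that trade-off (all rates and depths below are illustrative, not measured P4 numbers):

    // Effective IPC when branch mispredictions flush the pipeline:
    // each misprediction costs roughly the pipeline depth in cycles.
    function effectiveIPC(baseIPC, branchFreq, mispredictRate, flushCycles) {
      const stallPerInsn = branchFreq * mispredictRate * flushCycles;
      return 1 / (1 / baseIPC + stallPerInsn);
    }
    console.log(effectiveIPC(1.0, 0.2, 0.05, 10).toFixed(2)); // ~0.91 (shortish pipeline)
    console.log(effectiveIPC(1.0, 0.2, 0.05, 30).toFixed(2)); // ~0.77 (deep pipeline)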
Only just last year did someone goose a PC CPU to 9.13 GHz:
https://www.tomshardware.com/pc-components/cpus/core-i9-1490...
I still remember my first CPU with a heatsink. It seemed like a temporary dumb hack.
Well, it kinda was! Seeing how power-efficient iPhone chips are despite hovering near the top of single-core benchmarks.
I had the same inclination back in the 90s when I upgraded my Cyrix 486 SLC2 50 MHz without a heatsink (which seems like a no-no in retrospect) to a Cyrix MediaGX 133 MHz. The stock fan was immediately noticeable. I thought I had done something wrong.
Upgrading and Repairing PCs, 4th edition, even says directly that some shady resellers will put a heatsink on a chip that they're running beyond spec, but that Intel designs all their processors to run at rated speed without one.
I had a PC with an old PII or PIII cartridge.
The cpu and heatsink was fully integrated into what looked like a NES cart, with an integrated fan and everything. It was not really possible to separate the cpu and the heatsink as the locking mechanism to keep the cart in place on the motherboard interfaced with the heatsink assembly.
So I'm a little dubious of that no-heatsink claim.
I've never seen a Xeon without a heat sink, I don't believe they are designed to run without one.
Indeed, even the oldest, slowest Xeons shipped in SECC cartridges with integrated heatsinks.
But that was several years after the book cited by the GP was published (1994, shortly after the release of the original Pentium).
The first Xeon looks to have been released in 1998, so that sounds about right.
SSDs were such a revolution, and a really rewarding upgrade. I'd fit SSDs to friends' and family's computers as an upgrade.
Getting my first SSD was absolutely the best computer upgrade I've ever bought. I didn't even realise how annoying load times were because I was so used to them, and coming from C64s and Amigas even spinning rust seemed fairly quick.
It took a long time before I felt a need to improve my PC's performance again after that.
There were quite a few mind blowing upgrades back in the day. The first sound card instead of PC beeper was one of my most memorable moments.
I remember loading up Doom, plugging in my shitty earbuds that had a barely-long-enough cable, and hearing the “real” shotgun sound for the first time. Oo-wee
I once had a decade old Thinkpad that suddenly became my new work laptop once more thanks to an SSD. It's a true shame they simply don't make them like this anymore.
I owe much of my career to an SSD. I had a work laptop that I upgraded myself with an 80GB Intel SSD, which was pretty exotic at the time. It was so fast at grepping through code that I could answer colleagues’ questions about the code in nearly real time. It was like having a superpower.
Just before I installed an SSD was the last time I owned a computer that felt slow.
When Alder Lake finally made a sizable jump, I looked at decades of old tests I'd done along the way with CPUs and tried to bridge them together reasonably.
Between IPC (~50 to 100-fold improvement) and clock speed increases (1000-fold alone), I estimated that single-thread performance has increased on the order of 50,000x - 100,000x since the 4.77 MHz 8088.
In human terms this is like one minute compared to one month!
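The arithmetic behind the analogy:

    // ~50-100x IPC times ~1000x clock:
    console.log(50 * 1000, 100 * 1000);  // 50,000x .. 100,000x
    console.log(50000 / (60 * 24));      // 50,000 minutes ≈ 34.7 days, about a month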
Then Windows 11 came along to slow everything down, and we are back to the Stone Age.
Recently got a new Surface laptop at work - Windows 11 gives me the same feeling I had from Vista. Hilarious how modern computers are more powerful than ever, but Windows 11 now feels worse than Windows 7 did ten years ago.
I think the single biggest jump I ever experienced was my first dedicated GPU — a GeForce 2 MX if I'm not mistaken.
Agreed. That was the next big boost! I installed my first SSD in this HP workstation-grade laptop that we got "for free" from college. It was like getting a brand new computer! In fact, I ended up giving that computer to my sister who ran it into the ground.
I didn't feel any huge speed boosts like that until the M1 MacBook in 2020.
GPUs for 3d graphics were a game changer.
I can see why you wouldn’t consider it as impactful if you weren’t into gaming at the time.
My first Pentium was clocked at 60 MHz.
That wasn't how it worked.
Up until the 486, the clock speed and bus speed were basically the same and topped out at about 33MHz (IIRC). The 486 started the thing of making the CPU speed a multiple of the bus speed eg 486dx2/66 (33MHz CPU, 66MHz bus), 486dx4/100 (25MHz CPU, 100MHz bus). And that's continued to this day (kind of).
But the point is the CPU became a lot faster than the IO speed, including memory. So these "overdrive" CPUs were faster but not 2-4x faster.
Also, in terms of impact, yeah, there was a massive increase in performance through the 1990s, but let's not forget the first consumer GPUs, namely 3dfx Voodoo and later NVidia and ATI. Oh, Matrox Millennium, anyone?
It's actually kind of wild that NVidia is now a trillion-dollar company. It listed in 1999 for $12/share and, adjusted for splits, Google is telling me it's ~3700x now.
You got your multipliers backwards with the 486DX. The multiplier was on the CPU core rather than the bus. A DX2 ran at twice the memory bus speed. The DX4 was (confusingly) three times the bus speed. So a 486DX4/100 was a 33 MHz bus with a 100 MHz core.
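(I.e. core clock = bus clock × multiplier: 33 MHz × 2 = 66 MHz for the DX2/66, and 33.3 MHz × 3 ≈ 100 MHz for the DX4/100.)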
I remember our school getting new computers to replace the 233 MHz G3 iMac computer lab during the Megahertz Wars, and the vice principal announcing the purchase of new "screaming fast" 600 MHz Dell OptiPlex GX100s. The nice thing is that the G3 iMacs then got pushed out to the classrooms, but it was sad to see Apple lose its spot in the lab. I miss the wonder of playing Pangea Software games for the first time, like Bugdom and Nanosaur.
I don't know. I felt this way when switching from an Intel laptop to an Apple M1. I am still using it today and I prefer it over a desktop PC.
I also went from an Intel MacBook Pro to an M1 and appreciated it, but that leap was exaggerated by how bad the last few generations of Intel MacBook Pros were.
The Apple Silicon chassis finally got to house an appropriate cooling solution, too. They are much quieter than the equivalent Intel laptops when dissipating the same power levels.
Have you ever used proper desktop computers? I suppose such a move would feel significant if you've mostly been using laptops.
But that's the thing; a laptop is fundamentally different. Of course if there's the equivalent of a thermopump under my desk I'm going to get crazy performance. The magic was that Apple brought the uncompromised experience to a laptop.
> The magic was that Apple brought the uncompromised experience to a laptop.
Apple’s power efficiency was a great bump forward, but the performance claims were a little exaggerated. I love my Apple Silicon devices but I still switch over to a desktop for GPU work because it’s so much faster, for example.
Apple had that famously misleading chart at launch, showing their M1 GPU keeping pace with a flagship Nvidia card; it misled everyone. In practice they’re not even close to flagship desktop accelerators, unfortunately.
They have excellent idle power consumption though. Great for a laptop.