> There's no reason we can't be writing code that lasts 100 years. Code is just math.
In theory, yes. In practice, no, because code is not just math, it's math written in a language with an implementation designed to target specific computing hardware, and computing hardware keeps changing. You could have the complete source code of software written 70 years ago, and at best you would need to write new code to emulate the hardware, and at worst you're SOL.
Software will only stop rotting when hardware stops changing, forever. Programs that refuse to update to take advantage of new hardware are killed by programs that do.
This is a total red herring: x86 has over 30 years of backwards compatibility, and the same goes for the basic peripherals.
The real reason for software churn isn't hardware churn, but hardware expansion. It's well known that software expands to use all available hardware resources (or even more, according to Wirth's law).
30 years ago, right before Windows 95 came out, Windows was a 16-bit OS, and modern versions of Windows no longer support 16-bit programs. PCIe came out only in 2003, and I don't know that PCIe slots can support PCI. SATA is also from 2003. Even USB originally came out in 1996, and the only pre-USB connector slot I have on my computer is a PS/2 port (which honestly surprises me). For monitor connections, VGA and DVI (1999!) have died off, and their successors (HDMI, DisplayPort) only date from the 2000s.
So pretty much none of the peripherals--including things like system memory and disk drives, do note--from a 1995 computer can talk using any of the protocols a modern computer supports (save maybe a mouse and keyboard) without compatibility adapters, and likewise pretty much none of the software works without going through custom compatibility layers. And based on my experience trying to get a 31-year-old Win16 application running on a modern computer, those compatibility layers have some issues.
PCIe is mostly backwards compatible with PCI, and bridge chips used to be quite common. ISA to PCI is harder, but not unheard of.
"SATA" stands for "serial ATA", and has the same basic command set as the PATA from 1984 - bridge chips were widely used. And it all uses SCSI, which is also what USB Mass Storage Devices use. Or if you're feeling fancy, there's a whole SCSI-to-NVMe translation standard as well.
HDMI is fully compatible with single-link DVI-D; you can find passive adapters for a few bucks.
There's one port you forgot to mention: ethernet! A brand-new 10Gbps NIC will happily talk with an ancient 10Mbps NIC.
It might look different, but the pc world is filled with ancient technology remnants, and you can build some absolutely cursed adapter stacks. If anything, the limiting factor is Windows driver support.
Slight caveat that a lot of Ethernet PHYs faster than 1G don't go down to 10 Mb, some don't go down to 100 Mb, and some are only the speed they want to be (though luckily that's not very common). There exist 6-speed PHYs (10/100/1000/2500/5000/10000 Mb), but that doesn't mean everything will happily talk.
You're confusing quite a few things together.
The basic peripherals (keyboard and monitor) of today still present the same interface as they did back in the IBM PC era. Everything else is due to massive hardware expansion, not hardware churn.
How often do you update your drivers compared to your typical internet-connected app? Software that handles the idiosyncrasies of the hardware (aka drivers) generally has a much longer lifespan than most other software; I don't see how you can reasonably say hardware breaking backwards compatibility is why software keeps changing.
Python programs do not care about SATA/PCI-E.
Python programs run on an interpreter, which runs on an OS, which has drivers that run on a given piece of hardware. All of the layers of the stack need to be considered and constantly maintained in order for preservation to work.
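To make the layering concrete, here's a rough sketch of what a Python program sees versus what sits underneath it (the /etc and /sys paths are Linux-specific assumptions; details vary by system):

    # Rough sketch (Linux-specific paths; purely illustrative).
    # The code at the top never mentions SATA, NVMe, or PCIe; it only works
    # because every layer underneath it is still being maintained.
    import platform
    from pathlib import Path

    text = Path("/etc/hostname").read_text()   # just "a file" as far as the program knows
    print("read", len(text), "bytes without naming a bus or a driver")

    print("interpreter:", platform.python_implementation(), platform.python_version())
    print("kernel:     ", platform.system(), platform.release())

    # The hardware detail the program never had to care about lives down here:
    for dev in sorted(Path("/sys/block").iterdir()):
        link = dev / "device"
        print("block dev:  ", dev.name, "->", link.resolve() if link.exists() else "virtual")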
Some do, most don't. (Don't generalize)
Try running software from 1995 on a brand new system and you'll find all sorts of fun reasons why it's more complicated than that.
I don’t think I can take that claim by itself as necessarily implying the cause is hardware. Consumer OSes were on the verge of getting protected memory at that time, as an example of where things were, so if I imagine “take an old application and try to run it” then I am immediately imagining software problems, and software bit rot is a well-known thing.

If the claim is “try to run Windows 95 on bare metal”, then…well actually I installed win98 on a new PC about 10 years ago and it worked. When I try to imagine hardware changes since then that a kernel would have to worry about, I’m mostly coming up with PCI Express and some brave OEMs finally removing BIOS compatibility and leaving only UEFI.

I’m not counting lack of drivers for modern hardware as “hardware still changes” because that feels like a natural consequence of having multiple vendors in the market, but maybe I could be convinced that is a fundamental change in and of itself…however even then, that state of things was extremely normalized by the 2000s.
Drivers make up a tiny portion of the software on our computers by any measure (memory or compute time), and they're far longer-lived than your average GUI app.
On the other hand, the main reason Y2K happened was that a lot of major orgs would rather emulate software from the 60s forever than rewrite it. I'm talking like ancient IBM mainframe stuff, running on potentially multiple layers of emulation and virtualization.
We rewrite stuff for lots of reasons, but virtualization makes it easy enough to take our platforms with us even as hardware changes.
Pretty sure if I downloaded and compiled Tcl/Tk 7.6.x source code on a modern Linux box, it would run my Tcl/Tk 7.6.x "system monitor" code from 1995 or 1996 just fine.
Do you have any examples that aren't because of the OS (as in, not trying to run a 90's game on Windows 11) or specialized hardware (like an old Voodoo GPU or something)?
The whole point is that everything changes around software. Drivers, CPUs, GPUs, web browsers, OSs, common libraries, etc. Everything changes.
It doesn't matter if x86 is backwards compatible if everything else has changed.
No code can last 100 years in any environment with change. That's the point.
If you restrict yourself to programs that don't need an OS or hardware, you're going to be looking at a pretty small set of programs.
I don't, but I do require that you run it on the same OS it was designed for.
Backwards-compatibility in OSes is the exception, not the rule. IBM does pretty well here. Microsoft does okay. Linux is either fine or a disaster depending on who you ask. MacOS, iOS, and Android laugh at the idea. And even the OSes most dedicated to compatibility devote a ton of effort to ensuring it on new hardware.
x86 doesn't have magical backwards compatibility powers.
The amazing backwards compatibility of Windows is purely due to the sheer continuous effort of Microsoft.
> x86 doesn't have magical backwards compatibility powers.
I never said it did; other ISAs have similar if not longer periods of backwards compatibility (IBM's Z systems architecture is backwards compatible with the System/360 released in 1964).
> The amazing backwards compatibility of Windows is purely due to the sheer continuous effort of Microsoft.
I never mentioned Windows, but it's ridiculous to imply its backwards compatibility is all on Microsoft. Show me a single example of a backwards-incompatible change in x86 that Windows has to compensate for to maintain backwards compatibility.
>I never mentioned Windows, but it's ridiculous to imply its backwards compatibility is all on Microsoft.
I never said that. Windows was just an easy example.
>Show me a single example of a backwards-incompatible change in x86 that Windows has to compensate for to maintain backwards compatibility.
- The shift from 16-bit to 32-bit protected mode with the Intel 80386 processor that fundamentally altered how the processor managed memory.
- Intel 80286 introduced a 24-bit address bus to support more memory, but this broke the address wraparound behavior of the 8086.
- The shift to x86-64 that Microsoft had to compensate with emulation and WOW64
And many more. That you think otherwise just shows how much effort has gone into it.
> The shift from 16-bit to 32-bit protected mode with the Intel 80386 processor that fundamentally altered how the processor managed memory.
I said x86 has "over 30 years of backwards compatibility". The 80386 was released in 1985, 40 years ago :)
> Intel 80286 introduced a 24-bit address bus to support more memory, but this broke the address wraparound behavior of the 8086.
This is the only breaking change in x86 that I'm aware of, and it's a rather light one, as it only affected programs relying on addresses wrapping around at the top of the 8086's 2^20 (1 MB) address space. And, again, that was over 40 years ago!
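For the curious, the wraparound is easy to see with a bit of arithmetic (a toy sketch; phys_addr is just an illustrative helper, not real firmware code):

    # Real-mode addressing: physical = (segment << 4) + offset,
    # truncated to however many address lines the CPU actually has.
    def phys_addr(segment, offset, address_lines):
        return ((segment << 4) + offset) & ((1 << address_lines) - 1)

    # FFFF:0010 points just past the 1 MB mark.
    print(hex(phys_addr(0xFFFF, 0x0010, 20)))  # 8086 (20 lines): 0x0, wraps to low memory
    print(hex(phys_addr(0xFFFF, 0x0010, 24)))  # 80286 (24 lines): 0x100000, no wrap; breaks code relying on it

(This difference is what the infamous A20 gate hack was invented to paper over.)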
> The shift to x86-64 that Microsoft had to compensate with emulation and WOW64
No, I don't think so. An x86-64 CPU starts in 16-bit real mode and has to be brought up through 32-bit protected mode into 64-bit long mode (I'd know, I spent many weekends getting that transition right for my toy OS). That 32-bit mode is absolutely backwards compatible AFAIK.
WOW64 is merely a part of Microsoft's OS design to allow 32 bit programs to do syscalls to a 64 bit kernel, as I understand it.
The bare minimum cost of software churn is the effort of one human being, which is far less than hardware churn (multiple layers of costly design and manufacturing). As a result, we see hardware change gradually over the years, while software projects can arbitrarily deprecate, change, or remove anything at a whim. The dizzying number of JS frameworks, the replacement of X with Wayland or init with systemd, removal of python stdlib modules, etc. etc. have nothing to do with new additions to the x86 instruction set.
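As a concrete example of that kind of churn, here's roughly what coping with a removed stdlib module looks like (cgi was deprecated in Python 3.11 and removed in 3.13; the fallback below is a simplified stand-in I made up, not a drop-in replacement):

    # Sketch: surviving the removal of a stdlib module this code used to lean on.
    try:
        from cgi import parse_header  # stdlib until Python 3.12
    except ImportError:
        # Minimal stand-in for the one function we actually used.
        def parse_header(line):
            main, _, rest = line.partition(";")
            params = {}
            for part in rest.split(";"):
                if "=" in part:
                    name, _, value = part.partition("=")
                    params[name.strip().lower()] = value.strip().strip('"')
            return main.strip().lower(), params

    print(parse_header('text/html; charset="utf-8"'))  # ('text/html', {'charset': 'utf-8'})

The point isn't this particular module; it's that a working program had to change for reasons that have nothing to do with hardware.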
> and computing hardware keeps changing.
Only if you can't reasonably buy a direct replacement. That might have been a bigger problem in the early days of computing, when the market was spread across many vendors, leaving a lot of business failures and thus defunct hardware. But nowadays we mostly settle on common architectures that are very likely to still be around in the distant future, because that mass adoption gives someone a strong incentive to keep producing them.
TeX is written in a literate programming style which is more akin to a math textbook than ordinary computer code, except with code blocks instead of equations. The actual programming language in the code blocks and the OS it runs on matter a lot less than in usual code, where at best you get a few sparse comments. Avoiding bit rot in such a program is a very manageable task. In fact, IIRC the code blocks which end up getting compiled and executed for TeX were ported from Pascal to C at some point without introducing any new bugs.
The C version of TeX is also terrible code in the modern day (arbitrary limits, horrible error handling, horrible macro language, no real Unicode support, etc. etc), hence LuaTeX (et al.) and Typst and such.
The backward-compat story is also oversold because, yes, baseline TeX is backward compatible, but I bet <0.1% of "TeX" documents don't use some form of LaTeX plus any number of packages... which sometimes break, at which point the stability of base TeX doesn't matter for actual users. It certainly helps LaTeX package maintainers, but that doesn't matter to users.
Don't get me wrong, TeX was absolutely revolutionary and has been used for an insane amount of scientific publishing, but... it's not great code (for modern requirements) by any stretch.
This is correct when it comes to bare metal execution.
You can always run code from any time with emulation, which gives the “math” the inputs it was made to handle.
Here’s a site with a ton of emulators that run in browser. You can accurately emulate some truly ancient stuff.
https://www.pcjs.org/
Given how mature emulation is now, why couldn't that just continue to be possible into the future?
Each new layer of emulation is new code that needs to be written that wasn't required when the original program in question was written. It's a great approach for software preservation, but the fact that it's necessary shows why the approach of "if it ain't broke, don't fix it" doesn't work. The context of computing is changing around us at all times, and hardware has a finite lifespan.
Eh. Emulators are often tiny in comparison to the programs they emulate. Especially when performance isn't so much of a concern - like when you're emulating software written for computers from many decades ago. A good emulator can also emulate a huge range of software. Just look at programs like dosbox and the like. Or Apple's great work with Rosetta and Rosetta2 - which are both complex, but much less complex than all the software they supported. Software like Chrome, Adobe Photoshop and the Microsoft office suite.
Arguably modern operating systems are all sort of virtual machine emulators too. They emulate a virtual computer which has special instructions to open files, allocate memory, talk to the keyboard, make TCP connections, create threads and so on. This computer doesn't actually exist - it's just "emulated" by the many services provided by modern operating systems. That's why any Windows program can run on any other Windows computer, despite the hardware being totally different.
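To put the "emulators are often tiny" point in perspective, here's a toy fetch-decode-execute loop for a made-up accumulator machine (purely illustrative, nothing like a real ISA):

    # Toy illustration: a complete fetch-decode-execute loop for a made-up
    # three-instruction machine, to show how small an emulator core can be.
    def run(program, memory):
        acc, pc = 0, 0
        while pc < len(program):
            op, arg = program[pc]
            pc += 1
            if op == "LOAD":       # acc = memory[arg]
                acc = memory[arg]
            elif op == "ADD":      # acc = acc + memory[arg]
                acc += memory[arg]
            elif op == "STORE":    # memory[arg] = acc
                memory[arg] = acc
            else:
                raise ValueError(f"unknown opcode {op!r}")
        return memory

    mem = [0] * 16
    mem[0], mem[1] = 2, 3
    run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], mem)
    print(mem[2])  # 5

Real emulators are obviously far more involved, but the core loop grows much more slowly than the pile of software it can keep alive.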
Or get an IBM 360 and have support for the next two thousand years, which is the choice our parents made.
This is possible, and ubiquitous. Your terminal runs on an emulator of an emulator of a teletype.