Although the Time Capsule is more than a decade old, it serves nicely with Time Machine (automatic backups). Sad to see that going away permanently for Apple Silicon.
"Dropping support for things just because they are old" is typical commercial software behavior. I can run the latest Linux kernel and still have access to an internal floppy disk drive if I wanted to, yet billion dollar companies can't seem to manage to support 10 year old stuff.
I still am sore from when I "upgraded" macOS and suddenly support for my 1080i TV was gone. Yesterday it worked fine, today it's gone. All because they can't be bothered to maintain a code path.
The economics make the reasoning obvious, though.
With closed source IP, every bit of support, from bug fixes, to feature requests, to compatibility fixes to integrate with newer mainline/foundational tooling, costs money.
With open source projects (and in particular ones like Linux where there's a huge number of contributors and interested parties), support for would-be niche facilities can keep going as long as there's someone with the knowledge and spare time to do it.
AFAIK, Linux has a policy that any change you make must not break existing kernel features, and if it does, you have to fix them yourself.
With that said, kernel maintainers have recently indicated that some unused subsystems are likely to be removed soon, as AI is now finding (real) security vulnerabilities in them that nobody is willing to fix.
> The economics make the reasoning obvious, though.
Looking through Apple’s financial statements, they could theoretically support these old systems. I’m not saying a cut doesn’t make sense, just that, economics-wise, they could keep one guy on it.
There’s also a halo effect when support extends for a longer-than-typical product life that gives a sense of commitment to a platform.
There's somewhere in the ballpark of 166,000 employees at Apple, just unfathomable scale [1]. It is not unreasonable to ask that someone specific is responsible for each particular small feature and ensuring it keeps working. Trying to apply an economic analysis to such a "free as in beer" operating system does not seem to work well. Consider the question of "how many small holes can you have in your wooden sailing ship"?
[1] https://stockanalysis.com/stocks/aapl/employees/
Not that it impacts your argument significantly, but for the sake of completeness, Apple employs a huge number of retail employees.
Yes. A more useful number would be how many employees are working on macOS specifically. Hard to find a definitive number for that.
Less than 1% of that number. Of course this is hard to actually count properly since there is a lot of shared work across platforms.
It’s not unreasonable to ask but they can and are saying “no”.
Ideally, at a certain point, you'd have some sort of upstream FLOSS project where you could let John Q. Public do that sort of low-level, maintenance-only stuff, while the proprietary "value adds" are closed source, until it becomes financially attractive to FLOSS them.
IIRC, that could exist for macOS in the form of Darwin.
> With open source projects (and in particular ones like Linux where there's a huge number of contributors and interested parties), support for would-be niche facilities can keep going as long as there's someone with the knowledge and spare time to do it.
And that increasingly gets difficult to do. i386 support went down the drain in the kernel in 2012, i486 is probably going down the drain as well this year [1] and soon-ish another bunch of really really old stuff will go as well because it isn't maintained [2] - good luck finding someone still running IPX networks or ISDN hardware.
[1] https://www.theregister.com/2026/04/06/patch_to_end_i486_sup...
[2] https://lwn.net/Articles/1068928/
> The economics make the reasoning obvious, though
These arguments fall apart when you remember that Apple has several trillion dollars at hand. It's not some shoestring startup.
Ironic, considering Linux is dropping a LOT of old devices from 7.1
It's my understanding that those are (mostly?) devices where they legitimately have reason to believe there are zero users. In particular, there's a pattern where someone will discover that Linux has a driver that hasn't actually worked for a long time, and nobody's complained, so then they remove it.
I'm not suggesting they keep it all... just ironic as a statement considering Linux is literally removing a bit lately... <= 486, the bus drivers for mice, etc.
I'm mostly okay with cleaning out a lot of legacy and unsupported devices. It may not be great for people who want to support really old hardware, but they're most likely stuck on older kernel versions for other reasons anyway.
I don't think it is ironic, though; Linux isn't "Dropping support for things just because they are old", it's dropping unused things when they cause code quality problems. That's rather different than features being dropped because the vendor doesn't want to bother supporting them even though they still worked and have active users.
Features being dropped because nobody wants to support them is a prominent feature of free software. That's part of "no warranty". If it does bother you, you're supposed to step up to support it yourself, or pay someone to.
Okay, but that's the exact opposite of what we're discussing here? Linux, which is free software, isn't dropping features because nobody wants to support them, but because nobody's using them. Meanwhile, macOS, developed as a commercial product and with a much weaker showing of open source or even source availability, is dropping features because Apple doesn't want to support them.
> Linux, which is free software, isn't dropping features because nobody wants to support them, but because nobody's using them.
I disagree. They are dropping support because nobody is maintaining them. There may very well be people still using these features, but they haven't been motivated or aren't properly skilled to offer to maintain them going forward, and haven't motivated some other skilled person via payments.
Rather, the core difference is that Apple does not offer a way to have external people take over providing support.
If anybody would care to keep these drivers up, it would be easy to revive them as kernel modules. It's not that Linux is going to lose an upstream interface to publish events from a bus mouse.
Support for the 486 is another thing, but, frankly speaking, running a modern Linux kernel on a 486 makes no sense, either from a practical or a preservationist / museum perspective.
Absolutely--Linux is by no means perfect.
What is the age of the 486SX code vs the code paths Apple is removing right now?
Just this week we've seen Linux talking about dropping support for some older hardware precisely because attacks against it were becoming easier with LLMs.
Do you have a detailed source for this? I want to read more about it.
Because I noticed my old Core 2 Quad PC with an Nvidia 8600GT, which my parents use as their email and Facebook machine, doesn't boot with any Linux kernel newer than 6.1, even though I can get Windows 11 to boot on it.
So the myth that "Linux is great for old PCs" depends heavily on what HW you have.
> even though I can get Windows 11 to boot on it
But by modifying it, right? Because the Core 2 does not support SSE4.2.
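That claim is easy to check on Linux, where CPU feature flags are exposed in /proc/cpuinfo. A minimal sketch (the exact flag listing is of course hardware-dependent):

```shell
# Check whether the CPU advertises SSE4.2, which Core 2 chips predate
# and which Windows 11 nominally requires. /proc/cpuinfo lists the
# feature flags the kernel detected; -w matches the whole flag name.
if grep -qw sse4_2 /proc/cpuinfo 2>/dev/null; then
    echo "SSE4.2 supported"
else
    echo "SSE4.2 not supported"
fi
```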
Sounds like an Nvidia driver module issue more than anything else. If I had to guess, simply removing the Nvidia module should fix that and still get you video through one of the various fallback paths (nouveau, etc.).
You can boot with nomodeset to get video output during the boot/installation phase, but then you're stuck with 800x600. That's with the FOSS nouveau driver in the kernel.
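For anyone else hitting this, a sketch of how nomodeset is typically made permanent on a GRUB-based distro. The file path and update command vary by distro, so treat this as an illustration rather than a tested fix for this particular card:

```shell
# Prepend "nomodeset" to the default kernel command line in the GRUB
# config, then regenerate the boot configuration. Back up first.
sudo cp /etc/default/grub /etc/default/grub.bak
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&nomodeset /' /etc/default/grub
sudo update-grub   # on some distros: grub2-mkconfig -o /boot/grub2/grub.cfg
```

Note that nomodeset disables kernel mode setting entirely, which is why you end up on a low-resolution framebuffer instead of a full driver.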
There's no fixes that I could find. My LLM research says nouveau dropped support for that Nvidia architecture on newer kernels. Bummer.
Ok, what do you suggest? That every feature ever written should be supported in perpetuity even if 3 people are using it? Clearly you didn't think this through. Should 2026 computers have an ISA interface as well?
Supporting old hardware and software has a substantial cost that only grows exponentially. Companies exist to print money, not to cater to the smallest niches.
It would be great if they could support things, but I most definitely understand why they don't.
macOS Tahoe still has floppy drive support.
Really? Like actual internal floppy drives, and not just USB floppy drives (which even Windows still supports)?
I actually wouldn't expect macOS to support actual floppy drives since the OS's list of supported devices doesn't include any that shipped with floppy drives. The fact that I cannot install the latest macOS on any devices older than 2019 is a related, but separate problem.
In this case, what would internal floppy drive mean? The last Macs with floppy drives (I think Old World G3s?) used a custom Apple controller, integrated into the chipset, with a bespoke 20-pin cable.
Even on the old world G3s, Mac OS X never had floppy drive support. There was a driver someone had ported from BSD you could install.
Yes! And Zip Disk support. I have an app that has to detect different external media types and have a pile of old drives that work just fine.
USB floppy drives indeed.
A USB floppy drive behaves almost identically to a USB hard drive: yet another SCSI block device. The cost of keeping support for them is minimal.
This is very different from legacy PC floppy controllers, which spoke a completely different protocol that was complex and full of footguns.
Legacy floppy controllers also had various legacy features almost nobody used, like soft deletion of sectors (IBM added this in the 70s for use with primitive database systems), or attaching tape drives using the floppy interface (nowadays if you buy a brand new tape drive, the interface options are SAS or Fibre Channel)
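To make the "yet another SCSI block device" point concrete, here's roughly what that looks like from Linux. The device node /dev/sdb and the mount point are placeholders; check the actual node before mounting anything:

```shell
# A USB floppy enumerates as a USB Mass Storage device, so the generic
# SCSI/block layer handles it -- no floppy-controller driver involved.
lsblk                            # the floppy shows up alongside hard drives
sudo mkdir -p /mnt/floppy
sudo mount /dev/sdb /mnt/floppy  # /dev/sdb is a placeholder device node
```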
And soon I won't be able to run old 32bit binaries with the latest Linux Kernel. We all move on.
Umm no?
> There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost. Please, he asked, do not run 32-bit kernels on 64-bit processors.
https://lwn.net/Articles/1035727/
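A quick way to see which situation you're in, assuming a typical Linux system with getconf available: on a 32-bit user space running over a 64-bit kernel, the kernel architecture and the userland word size disagree.

```shell
# Kernel architecture vs. userland word size:
uname -m            # kernel architecture, e.g. x86_64 or aarch64
getconf LONG_BIT    # userland word size: prints 32 on a 32-bit user space
```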
> "Dropping support for things just because they are old" is typical commercial software behavior.
You are deluding yourself if you think open source folks are better. You can't compile and run a modern version of GCC on Solaris 10 on SPARC, for example. And we just had a story here last week about the removal of bus mouse support. It's only a mild exaggeration to say that lots of folks will check the commit activity on GitHub, and if a project doesn't have commits this week, it should be banned from the internet and the universe.
Then you have the problem that many dev tools are not forward compatible. CMake is a huge issue: an Ubuntu system from 2020 has CMake on it, but it won't configure anything that uses CMake released in recent years, because the CMakeLists.txt files require a newer minimum version.
CMake is a bad example, you can build latest CMake and run it on Debian Jessie. It will work perfectly. CMake is the thing you can build on really old compilers.
Open source is better because as long as you have a single developer caring to maintain the device, it will still be there.
Bus mouse support isn't removed because it's old but because it's been broken since 2015 and nobody noticed.
Open source is better because if you need the device driver then you can step up to maintain it yourself. It doesn't mean someone else will magically do it for you. I've used devices with very obscure incantations to get some random person's hack to run on Linux that worked natively on Windows.
Given the MTBF of disks, I wouldn’t risk doing backups on a device discontinued in 2018.
It may not be the easiest surgery in the world, but you can replace the hard drive in a Time Capsule. You'll probably want to replace the power supply too after this much time.
Disks can be replaced.
Wasn't it capped at 3TB? Is the drive swappable to something bigger? They discontinued them in 2018, the WiFi in them is old, and it's a single disk (no RAID). Better to just pick up a multi-drive NAS or use cloud backups. What we should be asking for is Time Machine backends for cloud providers.
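For what it's worth, macOS can already point Time Machine at any SMB share that advertises Time Machine support, so a generic NAS works today via `tmutil`. A sketch with placeholder names (the host, share, and user are made up):

```shell
# macOS: register a NAS share as a Time Machine destination.
# The share must advertise Time Machine support (e.g. Samba with
# "fruit:time machine = yes" in its vfs_fruit configuration).
sudo tmutil setdestination "smb://backupuser@nas.local/TimeMachine"
tmutil destinationinfo    # confirm the destination was registered
```

A cloud backend would still be nicer, but this covers the self-hosted case.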
It's not "officially" supported, but iFixit has a guide for swapping the drive on a time capsule. I used mine with a 4TB drive for years with no trouble.
Sure, but still just a single drive.
My old trusty ReadyNAS should still work, I think... probably. It supports SMB for Time Machine and SMB3 generally. If it doesn't, I might finally be pushed onto a NAS that isn't discontinued.
I had an early ReadyNAS that was a champ for years. I wonder if the fact that it was based on SPARC had anything to do with its longevity.
The one I have is my second ReadyNAS. It's a later one and is x86, but it's still kickin'. The first failed suddenly, so I bought the second hoping to migrate the disks, but they changed the architecture so that wouldn't work. I determined that all that had happened to the first was that the power supply gave up. I sourced one from eBay and it was back to working, but I went ahead and did a migration, then gave the old one to a friend. It's apparently also still doing just fine.
From a risk assessment standpoint, I’ve seen my Time Machine backups corrupted much more frequently than I’ve experienced drive failure. Happened with both my Time Capsule and then my Synology RAID.
It’s a “nice to have” automatic backup, but not a primary backup destination for me.