Not sure why you'd want this over an Apple setup. M4 Max is 545GB/s of memory bandwidth - $2k for an entire Mac Studio with 48GB of RAM vs 32GB for the B70.
Being able to keep infrastructure on Linux is a big advantage.
How many compatibility issues would macOS realistically cause? The Windows developer experience felt unusable to me without a Linux VM (and later WSL), but on macOS most tooling just kinda seems to work the same.
It’s not the tooling for me; macOS is just bad as a server OS for many reasons: weird collisions with desktop security features, aggressive power saving you have to fight against, root not being allowed to do root stuff, no sane package management, no out-of-band (OOB) management, ultra-slow OS updates, and most importantly, the UNIX underbelly of macOS has clearly not been a priority for a long time and is rotting, with weird, inconsistent, undocumented behaviour all over the place.
> Weird collisions with desktop security features
Linux is not immune to BIOS/UEFI firmware attacks either. Secure Boot, TPM, and LUKS can work well together, but you still depend on proprietary firmware that you do not fully control. LogoFAIL is a good example of that risk, especially in an evil maid scenario involving temporary physical access. I think Apple has tighter control over this layer.
Yeah... attacks like LogoFAIL hit during the DXE and BDS phases, when the firmware is acting as its own 'mini OS' before the handoff to the OS loader.
Easier to comprehend here - https://vectree.io/c/uefi-firmware-architecture-principles
Provisioning, remote management, containers, virtualization, networking, graphics (and compute), storage, all very different on Mac. The real question is what you would expect to be the same.
For server usage? macOS is the least-supported OS in terms of filesystems, hardware and software. It uses multiple gigabytes of memory to load unnecessary user runtime dependencies, wastes hard drive space on statically-linked binaries, and regularly breaks package management on system upgrades.
At a certain point, even WSL becomes a more viable deployment platform.
My thinking is that I'd pick this, because I can't just plug a Mac into a slot in my server and have it easily integrate with all my other hardware across an ultra fast bus.
If they made an M4 on a card that supported all the same standards and was price competitive, though, that might be a good option.
With those $2k you can have 2x B70, with 1.2 TB/s and 64GB of VRAM, on Linux (and you can scale further, while Mac price increases are not linear).
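Back-of-envelope on the numbers being thrown around in this thread. The per-card B70 figures below are inferred from the "2x B70 = $2k / 1.2 TB/s / 64GB" claim, not from a datasheet, so treat them as placeholders:

```python
# $/spec comparison using the numbers quoted in this thread.
# B70 per-card figures are inferred placeholders, not datasheet values.
mac_price, mac_bw, mac_ram = 2000, 545, 48   # Mac Studio M4 Max config
b70_price, b70_bw, b70_ram = 1000, 600, 32   # one B70 (inferred from the 2x claim)

def dollars_per(price, spec):
    return price / spec

# Mac Studio: ~$3.67 per GB/s of bandwidth, ~$41.7 per GB of RAM
print(dollars_per(mac_price, mac_bw), dollars_per(mac_price, mac_ram))
# 2x B70: ~$1.67 per GB/s, ~$31.3 per GB of VRAM
print(dollars_per(2 * b70_price, 2 * b70_bw), dollars_per(2 * b70_price, 2 * b70_ram))
```

On these assumed figures the dual-card box wins on both $/bandwidth and $/capacity, which is the parent's point; the Mac wins on having it all in one unified pool.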
You're absolutely right. And these Intel GPUs will also be much faster in terms of actual math than the M series GPUs that the Apple setup would have.
Because the B70 cards can pipeline 500 tok/s on concurrent workloads. Apple Silicon and Nvidia consumer cards only work well w/ serial workloads.
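For intuition on why concurrency matters here: bandwidth-bound decode streams the full weight set from VRAM once per step, so a batch of concurrent requests gets extra tokens almost for free. A toy model with hypothetical numbers (not measured B70 or M4 figures):

```python
# Toy model of memory-bandwidth-bound LLM decode throughput.
# All numbers are hypothetical placeholders, not B70/M4 measurements.
weights_gb = 16.0       # model weights streamed from VRAM once per decode step
bandwidth_gbs = 600.0   # assumed card bandwidth, GB/s

step_time = weights_gb / bandwidth_gbs   # seconds per decode step
serial_tps = 1 / step_time               # one request: 1 token per step
batch = 16
batched_tps = batch / step_time          # 16 concurrent requests share each step

print(serial_tps, batched_tps)
```

The serial number is what a single chat session sees; the batched number is aggregate throughput across concurrent requests, which is where "500 tok/s" style figures come from.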
Support for Single Root I/O Virtualization (SR-IOV) to enable compute and graphics workloads in virtualized environments.
Funny, I'm not sure why anyone would use Apple over Linux.
One can upgrade and swap parts in a computer running an Intel GPU, and Linux is very well supported there compared to Mac hardware.