In an ideal world, Apple would have released a Mac Pro with card slots for doing this kind of stuff.
Instead we get gimmicks over Thunderbolt.
I can imagine Apple shipping Mac Pros with add-ons that allow running local inference with minimal setup. "Look, just spend $50k on this machine and you get a usable LLM server the whole team can share." But they don't seem particularly interested in that market.