For FP16-native training of 100B+ models, you will probably still be offloading to swap unless you've got a $150,000 RDMA Mac Studio cluster. And even if you could fit the whole thing in memory, the workload would be deeply compute-bound anyway.
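The swap claim follows from a quick back-of-envelope calculation. A minimal sketch, assuming the standard mixed-precision Adam accounting of roughly 16 bytes per parameter (FP16 weights and gradients plus an FP32 master copy and two FP32 optimizer moments) and an assumed 192 GB of unified memory per node; activations and framework overhead are ignored, so this is a floor, not a full estimate:

```python
# Back-of-envelope memory floor for FP16 mixed-precision training with Adam.
# Per-parameter cost: FP16 weights + grads (2 + 2 bytes) plus FP32 master
# weights and two FP32 Adam moments (4 + 4 + 4 bytes) = 16 bytes/param.
# Activations, gradients in flight, and runtime overhead are NOT counted.

PARAMS = 100e9          # 100B-parameter model
BYTES_PER_PARAM = 16    # standard mixed-precision Adam accounting (assumption)
NODE_MEM_GB = 192       # assumed unified memory per Mac Studio node

total_gib = PARAMS * BYTES_PER_PARAM / 1024**3
nodes = total_gib / NODE_MEM_GB
print(f"~{total_gib / 1024:.1f} TiB of optimizer/weight state -> "
      f"at least {nodes:.0f} nodes at {NODE_MEM_GB} GB each, before activations")
```

That works out to roughly 1.5 TiB of state for 100B parameters, i.e. around eight fully-loaded nodes before you've stored a single activation, which is why a single box ends up paging to swap.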