Great to find this narrowly focused thing:
> We support the following backends:
> Metal is our primary target, starting from MacBooks with 96GB of RAM.
> NVIDIA CUDA, with special care for the DGX Spark.
> AMD ROCm is only supported in the rocm branch. It is kept separate from main since I (antirez) don't have direct hardware access, so the community rebases the branch as needed.
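For anyone wondering what "the community rebases the branch" involves in practice, here is a minimal sketch assuming a standard git workflow (only the repo URL and the rocm/main branch names come from the project; everything else is generic):

    git clone https://github.com/antirez/ds4
    cd ds4
    git checkout rocm
    # replay the ROCm-specific commits on top of the current main
    git rebase main
    # resolve any conflicts, rebuild and test on ROCm hardware, then publish the updated branch
    git push --force-with-lease origin rocm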
> This project would not exist without llama.cpp and GGML, make sure to read the acknowledgements section, a big thank you to Georgi Gerganov and all the other contributors.

Edit: aww, doesn't seem to support offloading to system RAM[0] (yet)
[0] https://github.com/antirez/ds4/issues/108
Guess I'll have to keep watching the llama.cpp issue[1]
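For comparison, llama.cpp's existing partial offload is layer-based: any layers you don't push to the GPU with --n-gpu-layers simply stay in system RAM and run on the CPU. A rough, hypothetical invocation (in recent builds the CLI binary is llama-cli; the model path and layer count here are made up):

    # offload 20 layers to the GPU, keep the rest in system RAM on the CPU
    ./llama-cli -m model.gguf --n-gpu-layers 20 -p "hello"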
> AMD ROCm is only supported in the rocm branch.
Has anybody tried it? There is a lot of emphasis on MacBook Pros in this thread, but I would like to use it with an AMD Strix Halo with 128GB of unified RAM.
If only you could still buy Macs with that much RAM
You can buy 128GB M5 MacBook Pros?
Configured one just now, delivers in 2 weeks
Interesting, there was news a week or so ago of Apple removing Mac mini options.
They removed the baseline 8GB RAM / 256GB storage model. My bet is that with increased RAM prices the markup on the lower end isn't enough to still make a profit.