I also added a Mac version: https://selfhostllm.org/mac/ so you can check which models your Mac can run and get an estimated tokens/sec.
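
For anyone curious about the kind of math behind an estimate like that, here's a minimal sketch of a common back-of-envelope approach: check whether the quantized weights fit in unified memory, then assume decode is memory-bandwidth-bound so tokens/sec is roughly bandwidth divided by model size. This is illustrative only, not the actual formula the site uses, and the hardware numbers in the example are assumptions.

```python
# Rough feasibility/speed estimate for local LLM inference on a Mac.
# NOTE: an illustrative sketch, not selfhostllm.org's actual method.

def estimate(params_billions: float, bits_per_weight: float,
             unified_memory_gb: float, bandwidth_gb_s: float,
             overhead_gb: float = 4.0) -> dict:
    # Weights dominate memory: size = parameter count * bytes per weight.
    model_gb = params_billions * bits_per_weight / 8
    # Leave headroom for the OS, KV cache, and activations (assumed overhead).
    fits = model_gb + overhead_gb <= unified_memory_gb
    # Decode is typically memory-bandwidth-bound: each generated token reads
    # roughly all weights once, so tokens/sec ~ bandwidth / model size.
    tokens_per_sec = bandwidth_gb_s / model_gb if fits else 0.0
    return {"model_gb": round(model_gb, 1),
            "fits": fits,
            "est_tokens_per_sec": round(tokens_per_sec, 1)}

# Example: a 70B model at 4-bit on an assumed 64 GB Mac with ~400 GB/s bandwidth.
print(estimate(params_billions=70, bits_per_weight=4,
               unified_memory_gb=64, bandwidth_gb_s=400))
```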