A simple calculator that estimates how many concurrent requests your GPU can handle for a given LLM, with shareable results.
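For rough intuition, here is a minimal sketch of how a memory-based estimate like this could work (an assumption on my part; the site's actual formula isn't shown here): concurrency is bounded by how many per-request KV caches fit in the VRAM left over after the model weights are loaded.

```python
def estimate_concurrency(vram_gb: float,
                         model_params_b: float,
                         bytes_per_weight: float,
                         num_layers: int,
                         hidden_size: int,
                         context_length: int,
                         kv_bytes: float = 2.0) -> int:
    """Rough upper bound on concurrent requests for a single GPU."""
    # Memory taken by the weights, e.g. a 7B model at FP16 -> ~14 GB
    weights_gb = model_params_b * bytes_per_weight
    free_gb = vram_gb - weights_gb
    if free_gb <= 0:
        return 0  # model doesn't even fit
    # KV cache per request: 2 (K and V) * layers * hidden size * tokens * bytes per value
    kv_per_request_gb = (2 * num_layers * hidden_size *
                         context_length * kv_bytes) / 1e9
    return int(free_gb // kv_per_request_gb)

# Example: a Llama-2-7B-like config (32 layers, 4096 hidden, 4K context) on an 80 GB card
print(estimate_concurrency(80, 7, 2, 32, 4096, 4096))  # ~30 concurrent requests
```

Note this kind of estimate is purely about memory capacity, which is also why it says nothing about throughput or latency differences between cards.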
I see it doesn't take GPU performance into consideration when showing the estimates: an H100 and an A100 come out the same. Am I doing something wrong?
I also added a Mac version: https://selfhostllm.org/mac/ so you can see which models your Mac can run and get an estimated tokens/sec.
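For context, a common back-of-envelope tokens/sec estimate for Apple Silicon looks like the sketch below (my assumption, not necessarily the formula the Mac page uses): single-stream decoding is usually memory-bandwidth-bound, since every generated token streams the full set of weights from unified memory once.

```python
def estimate_tokens_per_sec(memory_bandwidth_gbs: float,
                            model_params_b: float,
                            bytes_per_weight: float) -> float:
    """Upper-bound decode speed: bandwidth divided by model size in memory."""
    model_size_gb = model_params_b * bytes_per_weight
    return memory_bandwidth_gbs / model_size_gb

# Example: ~400 GB/s of bandwidth running a 7B model quantized to ~4 bits (0.5 bytes/weight)
print(round(estimate_tokens_per_sec(400, 7, 0.5), 1))  # ~114 tok/s upper bound
```

Real numbers land below this bound once compute, prompt processing, and overhead are factored in.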
Very useful, thanks. I'm missing a reset button though.