Something like this? (Llama 3.1-8B etched into custom silicon delivering 16,000 tok/s, doesn't use much PCIe bandwidth):
- https://taalas.com/the-path-to-ubiquitous-ai/
- https://chatjimmy.ai/
Wowsa, that’s amazing! Exactly what I was imagining. Doing that with 2,500 watts is incredible.