I love that you're doing this, Tananon.

We've been using Candle and Cudarc and having a fairly good time of it. We've built a real-time drawing app on a custom LCM stack, and Rust makes it feel rock solid. Python is way too flimsy for something like this.
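For the curious, the Candle side of a setup like this is pleasantly small. This isn't our app code, just a rough sketch of the kind of thing you end up writing (tensor creation and a matmul, on the GPU if one is available):

    // Minimal Candle sketch (illustrative only, not our app code):
    // create two random tensors and multiply them, preferring CUDA.
    use candle_core::{Device, Tensor};

    fn main() -> candle_core::Result<()> {
        let device = Device::cuda_if_available(0)?;          // falls back to CPU if no GPU
        let a = Tensor::randn(0f32, 1.0, (2, 3), &device)?;  // random 2x3 tensor
        let b = Tensor::randn(0f32, 1.0, (3, 4), &device)?;  // random 3x4 tensor
        let c = a.matmul(&b)?;                                // 2x4 result
        println!("{c}");
        Ok(())
    }

Everything is Result-based, which is a big part of why the whole stack feels so dependable in a real-time loop.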

The more the Rust ML ecosystem grows, the better. It's still fledgling right now, so every contribution counts.

If llama.cpp had instead been llama.rs, I feel like we'd have had a runaway success on our hands.

We'll be checking this out! Kudos, and keep it up!

Awesome to hear! It's great to see the Rust ML ecosystem growing, and we hope we can be a small part of it. Don't hesitate to reach out with any ideas or requests!