Thanks for sharing the details; that makes a lot of sense. Fine-tuning and exporting models for on-device use can be tedious today. We're planning to look into supporting popular on-device LLMs more directly, so that deployment feels much easier. We'll follow up here or reach out to you once we have something to share.