Congrats on the launch! I'm very bullish on how powerful <10B-param models are becoming, so the on-device angle is cool (and great for your bottom line too, as it's cheaper for you to run).

Something I find interesting about AI note-taking products is focus. How does it choose what's important vs what isn't? The better it is at distinguishing signal from noise, the more powerful it is. I wonder if there's a personalization angle here where you update the model weights (either directly or via a LoRA adapter) as you get to know the user better. And, of course, everything stays private and on-device.
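
Concretely, LoRA would keep the trainable footprint small enough for on-device updates. A rough sketch, assuming a Hugging Face-style stack (the model name, target modules, and hyperparameters are placeholders, not anything Hyprnote ships):

```python
# Rough sketch of on-device personalization with LoRA: freeze the base
# model and train only small low-rank adapters on the user's data.
# Model name, target modules, and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")  # any <10B model
lora = LoraConfig(r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapters train; the base stays frozen
# ...periodically fine-tune on the user's notes/edits, entirely locally
```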

> How does it choose what's important vs what isn't?

The idea behind Hyprnote is that you write a chicken-scratch raw note during the meeting (what you think is important), and the AI enhances it from there.
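
Roughly, the enhance step is a rewrite conditioned on both the raw note and the transcript. A toy sketch (the `llm` callable and prompt are stand-ins, not our actual pipeline):

```python
def enhance(raw_note: str, transcript: str, llm) -> str:
    """Expand the user's chicken-scratch note using the meeting transcript.

    The raw note signals what the user considered important; the
    transcript supplies the detail. `llm` is any local text-generation
    callable -- a stand-in, not Hyprnote's actual API.
    """
    prompt = (
        "Rewrite the raw meeting note below into clear, complete notes, "
        "keeping its emphasis. Use the transcript only for supporting detail.\n\n"
        f"Raw note:\n{raw_note}\n\nTranscript:\n{transcript}"
    )
    return llm(prompt)
```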

On-device learning is interesting too. Gboard, for example, trains its language models with federated learning and differential privacy: https://arxiv.org/abs/2305.18465
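
In toy form, the federated-averaging idea behind that kind of training looks like this (an illustration of the general pattern, not Gboard's actual system):

```python
import numpy as np

def local_update(global_w: np.ndarray, user_data: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One device's private training step (toy gradient on local data)."""
    grad = user_data.mean(axis=0) - global_w
    return global_w + lr * grad

def fed_avg(global_w: np.ndarray, device_datasets: list[np.ndarray]) -> np.ndarray:
    """Average the devices' updated weights; raw data never leaves the
    device. Real systems add clipping and noise for differential privacy."""
    updates = [local_update(global_w, d) for d in device_datasets]
    return np.mean(updates, axis=0)
```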

And yes - we are open to that too.