Congrats on the launch. I never understood why an AI meeting notetaker needed SOTA LLMs and subscriptions (talking about literally all the other notetakers) - thanks for making it local-first. I use a locally patched-up whisperx + qwen3:1.7 + nomic embed (of course with a Swift script that picks up the audio buffer from the microphone) and it works just fine. On the rare occasion that I create next steps / SOPs from the transcript, I use Gemini 2.5 and export it as a PDF. I'll give Hyprnote a try soon.
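
Roughly, the Swift side is just an AVAudioEngine input tap - a simplified sketch of the pattern (not my exact script; the buffer handling here is a placeholder):

```swift
import AVFoundation

// Minimal sketch of that kind of capture script. AVAudioEngine taps the
// default input device and hands you raw PCM buffers to forward into a
// transcription pipeline.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// The tap callback fires as buffers arrive from the microphone.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    // Placeholder: push the buffer into whatever consumes it
    // (e.g. a ring buffer that whisperx reads from).
    print("captured \(buffer.frameLength) frames")
}

do {
    try engine.start()
} catch {
    fatalError("audio engine failed to start: \(error)")
}
RunLoop.main.run()  // keep the script alive; macOS will prompt for mic access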

I hope, since it’s open source, you’re thinking about exposing an API / hooks for downstream tasks.

> I never understood why an AI meeting notetaker needed sota LLMs and subscriptions

I’m the opposite: If something is expected to accurately summarize business content, I want to use the best possible model for it.

The difference between a quantized local model that can run on the average laptop and the latest models from Anthropic, Google, or OpenAI is still very significant.

For summarization, it's not that far off. I've summarized notes with both Claude 3.7 Sonnet and Qwen3 8B, and there is a difference, but it's not huge.

Can you share the Swift script? I was thinking of doing something similar but was banging my head against the audio side of macOS.

What kind of API/hooks would you expect us to expose? We're down to do that.

The ability to receive live transcripts via a webhook, including speaker diarization metadata, would be super useful.
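
To make that concrete, the payload could look something like this (hypothetical schema - field names are made up for illustration, not an actual Hyprnote API):

```swift
import Foundation

// Hypothetical webhook payload for live transcripts with diarization.
struct TranscriptSegment: Codable {
    let speaker: String   // diarization label, e.g. "SPEAKER_01"
    let start: Double     // seconds from meeting start
    let end: Double
    let text: String
}

struct TranscriptEvent: Codable {
    let meetingId: String
    let isFinal: Bool     // partial vs. finalized segment
    let segments: [TranscriptSegment]
}

// A downstream consumer would decode each POST body like so:
let body = """
{"meetingId": "m-123", "isFinal": false,
 "segments": [{"speaker": "SPEAKER_01", "start": 12.4, "end": 15.1,
               "text": "Let's review the action items."}]}
""".data(using: .utf8)!
let event = try! JSONDecoder().decode(TranscriptEvent.self, from: body)
print(event.segments[0].speaker, event.segments[0].text)
```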

A webhook POSTing to a localhost server, right?

Registering an MCP server and calling an MCP tool on transcript completion (and/or summary completion) would help (check out actionsperminute.io for the vision there).
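
For the MCP side, the call on completion would just be a standard JSON-RPC `tools/call` request - a sketch of what a client might send (tool name and arguments are hypothetical, not a real Hyprnote tool):

```swift
import Foundation

// Sketch of the JSON-RPC request an MCP client would send to invoke a
// tool when a transcript completes.
let request: [String: Any] = [
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": [
        "name": "on_transcript_complete",   // hypothetical tool name
        "arguments": [
            "meetingId": "m-123",
            "transcriptPath": "/tmp/transcript.json"
        ]
    ]
]
let data = try! JSONSerialization.data(withJSONObject: request,
                                       options: [.prettyPrinted])
print(String(data: data, encoding: .utf8)!)
```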

Calendar integration would be nice to link transcripts to discrete meetings.

That makes sense.

Please add more details here: https://github.com/fastrepl/hyprnote/issues/1203

For calendars, we already have native Apple Calendar integration on macOS.
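
For reference, the EventKit lookup that kind of linking relies on looks roughly like this (illustrative sketch, not Hyprnote's actual code; permission prompts and entitlements omitted):

```swift
import EventKit

// Sketch: find the calendar event whose window contains a transcript's
// start time, so the transcript can be linked to a discrete meeting.
// (requestFullAccessToEvents is the macOS 14+ API.)
let store = EKEventStore()
store.requestFullAccessToEvents { granted, _ in
    guard granted else { return }
    let recordingStart = Date()  // stand-in for the actual capture start
    let predicate = store.predicateForEvents(
        withStart: recordingStart.addingTimeInterval(-3600),
        end: recordingStart.addingTimeInterval(3600),
        calendars: nil)
    if let meeting = store.events(matching: predicate).first(where: {
        $0.startDate <= recordingStart && recordingStart <= $0.endDate
    }) {
        print("linking transcript to:", meeting.title ?? "untitled")
    }
}
RunLoop.main.run()  // keep the script alive for the async callback
```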