Wait, I'm confused. The text here says all data remains on device and emphasises how much you can trust that (you're obsessed with local-first software, etc.). Clicking on the demo video, step one is... configuring access tokens for external services? Are the services shown at 0:21 (Groq, OpenAI, Anthropic, Google, ElevenLabs) doing the actual transcription, listening to everything I say, and is only the resulting text they return subject to "it all stays on your device"? Because that's not at all what I expected after reading this description.
Great catch Aachen, I should have clarified this better. The app supports both external APIs (Groq, OpenAI, etc.) and, more recently, local transcription (via whisper.cpp, OWhisper, Speaches, etc.), in which case your audio never leaves your device.
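To make the split concrete, here's a rough TypeScript sketch of what a provider abstraction like this can look like. This is illustrative only, not Whispering's actual code; the model names, placeholder key, and local port are assumptions (your Speaches setup may differ):

```typescript
// Illustrative sketch only -- not Whispering's actual implementation.
// Cloud and local providers can share one shape because services like
// Groq and Speaches both expose an OpenAI-compatible transcription API.

type TranscriptionProvider = {
  name: string;
  baseUrl: string; // where the audio is sent
  apiKey?: string; // cloud providers need one; local servers usually don't
  model: string;
};

// Cloud path: audio goes from your machine straight to Groq.
const groq: TranscriptionProvider = {
  name: "Groq",
  baseUrl: "https://api.groq.com/openai/v1",
  apiKey: "YOUR_GROQ_KEY", // placeholder -- bring your own key
  model: "whisper-large-v3",
};

// Local path: audio never leaves the machine.
// Port and model name are assumptions; check your Speaches config.
const speaches: TranscriptionProvider = {
  name: "Speaches (local)",
  baseUrl: "http://localhost:8000/v1",
  model: "Systran/faster-whisper-small",
};
```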
Like Leftium said, the local-first whisper.cpp implementation was just posted a few hours ago.
The local transcription feature via whisper.cpp was just released 2 hours ago: https://github.com/epicenter-so/epicenter/releases/tag/v7.3....
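If you want to poke at the local path outside the app, here's a minimal sketch of calling whisper.cpp's bundled example server directly. The default port (8080) and the /inference endpoint come from whisper.cpp's examples/server; double-check them against your build:

```typescript
// Minimal sketch: POST a WAV file to a locally running whisper.cpp
// example server (e.g. started with ./server -m models/ggml-base.en.bin).
// Requires Node 18+ for global fetch/FormData/Blob.
import { readFile } from "node:fs/promises";

async function transcribeLocally(wavPath: string): Promise<string> {
  const form = new FormData();
  form.append("file", new Blob([await readFile(wavPath)]), "audio.wav");
  form.append("response_format", "json");

  const res = await fetch("http://127.0.0.1:8080/inference", {
    method: "POST",
    body: form,
  });
  if (!res.ok) throw new Error(`whisper.cpp server returned ${res.status}`);
  const { text } = (await res.json()) as { text: string };
  return text.trim();
}

transcribeLocally("recording.wav").then(console.log);
```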
> All your data is stored locally on your device, and your audio goes directly from your machine to your chosen cloud provider (Groq, OpenAI, ElevenLabs, etc.) or local provider (Speaches, owhisper, etc.)
Their point is they aren’t a middleman with this, and you can use your preferred supplier or run something locally.
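You can even verify that from the wire: the upload is a single request from your machine to the provider you configured, authenticated with your own key. A hedged sketch against Groq's OpenAI-compatible endpoint (the model name is an assumption; pick whatever Groq currently offers):

```typescript
// Sketch of the "no middleman" path: the audio goes directly from this
// machine to api.groq.com, authorized by your own API key. No vendor
// server sits in between to see the request.
async function transcribeViaGroq(audio: Blob, apiKey: string): Promise<string> {
  const form = new FormData();
  form.append("file", audio, "recording.wav");
  form.append("model", "whisper-large-v3"); // assumption: a current Groq model

  const res = await fetch("https://api.groq.com/openai/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  if (!res.ok) throw new Error(`Groq returned ${res.status}`);
  const { text } = (await res.json()) as { text: string };
  return text;
}
```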
The issue is
> All your data is stored locally on your device,
is fundamentally incompatible with the second half of that same sentence.
I'd write it as
> All your data is stored locally on your device, unless you explicitly decide to use a cloud provider for dictation.
Great correction, wish I could edit the post! Updated the README to reflect this.
We've edited the top text to make this clearer now. Thanks for pointing this out!