Not sure why I should use this instead of the baked-in OS dictation features (which I use almost daily; just double-tap the globe key, and you're there). What's the advantage?

I haven't used this one, but WisprFlow is vastly better than the built-in functionality on macOS. Apple is way behind even startups on fundamental AI functionality like transcribing speech.

WisprFlow has a lot of good recommendations behind it, but the fact that they used Delve for SOC 2 compliance gives me major pause.

[deleted]

The fact that a company could slurp up all of your data and then use Delve for their SOC 2 is a great reason to use local models.

I use the baked-in Apple transcription and haven't had any issues. But what I do is usually pretty simple.

What makes the others vastly better?

I've rarely had macOS dictation produce a sentence I didn't have to edit.

With Whisper models, I barely bother checking the output anymore.

- Way more accurate, especially with technical jargon. Try saying JSON as part of a sentence to macOS dictation and see what comes out.

- macOS dictation mutes other sounds while it's running. This is a deal-breaker for me.