When you activate it, you agree that your voice input is sent to Apple. As far as I understand, this project runs fully locally. It's up to you to decide which suits your needs best.
No, that is not correct. It runs one hundred percent locally. You can verify this by turning off the internet on your phone and running it then. However, the built-in model isn't as good, so this is probably better.
Where did you get the idea that the voice input is sent to Apple / the cloud?
As far as I understand, Apple's voice model runs locally for most languages.
Siri commands can be used for training, but they are also executed locally and sent to Apple separately (and this can be disabled).
I couldn't believe it either, but when you enable it in the settings of macOS you get this popup:
> When you dictate text, information like your voice input and contact names are sent to Apple to help your Mac recognize what you’re saying.
Nothing comes close to LLM transcription though. I just tried this. I said "globe key dictation, does this work?". Here's the transcription, verbatim:
"Fucking dictation, does this work"
yup, this is how I 'type'
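(For anyone curious what "LLM transcription" involves in practice: the thread doesn't name the underlying model, but as a minimal local sketch, the open-source openai-whisper package runs a speech model entirely on-device after the initial weight download. The model size and file name below are placeholder assumptions, not details from this thread.)

```python
# Minimal local speech-to-text sketch with the open-source
# openai-whisper package (pip install openai-whisper).
# Requires ffmpeg on PATH for audio decoding.
import whisper

# "base" is a placeholder model size; larger models transcribe better.
model = whisper.load_model("base")  # downloads weights once, then runs offline

# "clip.wav" is a hypothetical recording, not a file from this thread.
result = model.transcribe("clip.wav")
print(result["text"])
```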