Was searching for this just this morning and settled on https://handy.computer/

Big fan of Handy, and it's cross-platform as well. Parakeet V3 gives the best experience, with very fast and accurate-enough transcriptions when talking to AIs that can read between the lines. It does have stuttering issues, though. My primary use for these is talking to coding agents.

But a few weeks ago someone on HN pointed me to Hex, which also supports Parakeet V3 and, incredibly enough, is even faster than Handy because it's a native macOS-only app that leverages CoreML and the Neural Engine for extremely quick transcriptions. Long ramblings transcribed in under a second!

It’s now my favorite fully local STT for macOS:

https://github.com/kitlangton/Hex

I installed a few different Parakeet-based STT apps at the same time and I think they conflicted with each other. Otherwise Hex would've won for me, I think. I want to reformat the Mac and try again (it's been a while anyway).

My comment on this from a month back: https://news.ycombinator.com/item?id=46637040

Hex is great, and I'm not trying to pull you away from them. I'd love to get your POV when you give these a spin next time; email or DM me.

I was on the same journey but landed on https://github.com/hoomanaskari/mac-dictate-anywhere

I just learned about Handy in this thread and it looks great!

I think the biggest difference between FreeFlow and Handy is that FreeFlow implements what Monologue calls "deep context", where it post-processes the raw transcription with context from your currently open window.

This fixes misspelled names if you're replying to an email / makes sure technical terms are spelled right / etc.

The original hope for FreeFlow was to use all local models like Handy does, but with the post-processing step the local pipeline took 5-10 seconds, versus under a second with Groq.

There's an open PR in the repo, which will be merged, that adds this support. Post-processing is an optional feature if you want to use it, and when it's enabled, end-to-end latency can still easily be under 3 seconds.

That’s awesome! The specific thing that was causing the long latency was the image LLM call to describe the current context. I’m not sure if you’ve tested Handy’s post-processing with images or if there’s a technique to get image calls to be faster locally.

Thank you for making Handy! It looks amazing, and I wish I'd found it before making FreeFlow.

Could you go into a little more detail about the deep context - what does it grab, and which model is used to process it? Are you also using a Groq model for the transcription?

It takes a screenshot of the current window and sends it to Llama on Groq, asking it to describe what you're doing and pull out any key info, like names with their spelling.

You can go to Settings > Run Logs in FreeFlow to see the full pipeline that ran on each request, with the exact prompt and LLM response, so you can see exactly what is sent and returned.
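
For anyone wondering what this kind of pipeline looks like in code, here's a rough sketch of the idea using Groq's Python client: one vision call to describe the window and pull out names, then one text call to clean up the raw transcript with that context. The model names, prompts, and helper functions are placeholders of mine, not FreeFlow's actual implementation (the Run Logs mentioned above show the real prompts and responses).

```python
# Hedged sketch of a "deep context" post-processing step: screenshot ->
# vision LLM on Groq -> key terms -> rewrite the raw transcript.
# Assumes GROQ_API_KEY is set; model names below are illustrative examples.
import base64
from groq import Groq  # pip install groq

client = Groq()

def describe_window(screenshot_path: str) -> str:
    """Ask a vision-capable Llama model what the user is doing and to
    list any names or technical terms with their exact spelling."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="meta-llama/llama-4-scout-17b-16e-instruct",  # assumption: any vision model on Groq
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe what the user is doing in this window and "
                         "list any names or technical terms with their exact spelling."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

def post_process(raw_transcript: str, context: str) -> str:
    """Rewrite the raw STT output, using the window context only to
    correct names and terminology, not to change meaning."""
    resp = client.chat.completions.create(
        model="llama-3.3-70b-versatile",  # assumption: any fast text model on Groq
        messages=[
            {"role": "system",
             "content": "Clean up this dictated text. Use the provided window "
                        "context only to fix names and technical terms."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nTranscript:\n{raw_transcript}"},
        ],
    )
    return resp.choices[0].message.content

# Usage (paths and transcript are made up):
# context = describe_window("active_window.png")
# fixed = post_process("email jon smyth about the kubernetes migration", context)
```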

You can try ottex for this use case - it has both context capture (app screenshots) and native LLM support, meaning it can send audio AND a screenshot directly to Gemini 3 Flash to produce the bespoke result.

As a very happy Handy user, I can confirm it doesn't do that. It will be interesting to see if it works better; I'll give FreeFlow a shot, thanks!

I haven't tried Handy, but I've been using Whisper-Key [1]. It's a super simple, stay-out-of-your-way, all-local, single-file executable (portable, so zero install too). That's for Windows; I don't know about a Mac version.

[1] https://github.com/PinW/whisper-key-local

The astroturfing here, off topic of the OP's post, is unbearable.

Not sure if it's just me, but Handy crashes on my Arch setup, no matter which version I run. It could be something with Wayland or PipeWire, but I didn't see anything obvious in the logs.

Have you tried https://github.com/goodroot/hyprwhspr? I have a nice new 64GB Linux machine waiting to be set up so I can kick the tires on this.

Pretty sure it's awesome. Sorry, OP, for mentioning another project; we're all learning here :)

Thanks, will take a look.

Handy's great! I find the latency to be just a bit too much for my taste. Like half the people in this thread, I built my own, but with a bit more emphasis on speed:

https://usetalkie.com

Thanks for the recommendation! I picked the smallest model (Moonshine Base @ 58MB), and it works great for transcribing English.

Surprisingly, it produced better output (at least I liked its version) than the recommended but heavier model (Parakeet V3 @ 478 MB).

Great feedback :) Also, support for the v2 versions of the Moonshine models should be out today!

Handy rocks. I recently had minor surgery on my shoulder that required me to be in a sling for about a month, and I thought I'd give Handy a try for dictating notes and so on. It works phenomenally well for most speech-to-text use cases - homonyms included.

Yes, I also use Handy. It supports local transcription via Nvidia Parakeet TDT v2, which is extremely fast and accurate. I also use Gemini 2.5 Flash Lite for post-processing via the free AI Studio API (post-processing is optional and can also use a locally hosted model).

Handy is genuinely great and it supports Parakeet V3. It’s starting to change how I "type" on my computer.

Handy is nothing short of fantastic, really brilliant when combined with Parakeet v2!

I use handy as well, and love it.