> I mean, what's the point of using local models if you can't trust the app itself?

And you think Ollama doesn't do telemetry or the like just because it's open source?

You're welcome to go through the source: https://github.com/ollama/ollama/
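
If you want a starting point for that kind of audit, here's a rough sketch: it walks a cloned checkout and flags Go files that mention network or telemetry keywords. The keyword list is an illustrative assumption, not anything exhaustive, and a hit only tells you where to start reading, not what the code does.

```python
#!/usr/bin/env python3
"""First-pass scan of a cloned source tree for outbound-network or
telemetry-related code. Keywords are illustrative; a real audit means
reading the flagged call sites (and everything else)."""

import os
import sys

# Hypothetical keyword list -- tune it for the codebase you're auditing.
KEYWORDS = ("telemetry", "analytics", "http.Post", "http.Get", "net/http")

def scan(root: str) -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".go"):  # ollama is mostly Go
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, start=1):
                        if any(k in line for k in KEYWORDS):
                            print(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                pass  # skip unreadable files

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Run it against a local clone (e.g. `python3 scan.py ollama/`). It won't prove anything on its own, but it narrows down where the networking code lives so you can read it yourself.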

That's why I suggested using llama.cpp in my other comment.