> Local Setup > Prerequisites > OpenAI API Key

This does not seem to be local, and it additionally appears to be tied to a single SaaS LLM provider?

Hey, we are actively working on improving support for Llama models. At the moment, CORE does not produce optimal results with Llama-based models, but we are making progress toward better compatibility and output in the near future.

Also, we first built CORE internally for our main project, SOL, an AI personal assistant. Along the journey of building a better memory for our assistant, we realised its importance and came to the opinion that memory should not be vendor-locked. It should be pluggable and belong to the user. Hence we built it as a separate service.

I definitely would not recommend Llama models; they were mostly outdated by the time they were released. The likes of Qwen, DeepSeek, etc. are much more useful.

Hey, we started with Llama, but since it was not giving good results, we fell back to using GPT and launched with that.

We will evaluate Qwen and DeepSeek going forward, thanks for mentioning them.

You can run OpenAI-compatible servers locally: vLLM, Ollama, LM Studio, and others. A sketch of the pattern follows.
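For anyone wanting to try this: the official OpenAI SDK lets you override the base URL, so any OpenAI-compatible local server works as a drop-in. Below is a minimal sketch assuming an Ollama server on its default port and a hypothetical local model name; whether CORE itself exposes a base-URL setting is a separate question.

```python
# Minimal sketch: pointing the OpenAI Python SDK at a local OpenAI-compatible server.
# The base_url, placeholder key, and model name are assumptions (Ollama's defaults);
# vLLM typically serves at http://localhost:8000/v1 and LM Studio at
# http://localhost:1234/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # local servers accept any placeholder key
)

response = client.chat.completions.create(
    model="qwen2.5",  # hypothetical local model; use whatever you have pulled
    messages=[{"role": "user", "content": "Hello from a local model!"}],
)
print(response.choices[0].message.content)
```

If an app uses the standard SDK, exporting `OPENAI_BASE_URL` and `OPENAI_API_KEY` as environment variables is often enough to redirect it to a local server without any code changes.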