I would like to add support for this, but I don't have a computer powerful enough to run an LLM at a reasonable speed, so I'm not able to test it myself.

Is it possible to use an OpenAI-compatible API locally, or how does that work?
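
Something like the following is what I have in mind — a rough sketch, assuming a local server such as Ollama or llama.cpp's server that exposes an OpenAI-compatible endpoint (the URL and model name below are guesses on my part):

```python
# Point the official OpenAI client at a local OpenAI-compatible server.
# The base_url and model name are assumptions; adjust to whatever the
# local server actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # hypothetical local endpoint (Ollama default port)
    api_key="not-needed-locally",          # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # whichever model the local server is serving
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(response.choices[0].message.content)
```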

https://github.com/simonw/llm offers some hints for running models locally.