Yes, there is: run `llmpm serve <model_name>`. This exposes an API endpoint at http://localhost:8080/v1/chat/completions and also hosts a chat UI at http://localhost:8080/chat where you can interact with the locally running model.
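
For example, here's a minimal sketch of hitting that endpoint from Python. I'm assuming the endpoint accepts an OpenAI-style chat completions request body (inferred from the `/v1/chat/completions` path); the exact request schema and field names may differ, so check the docs linked below.

```python
# Minimal sketch: query the locally served model via the chat completions endpoint.
# The request body shape is assumed to follow the OpenAI-style schema -- verify
# against the llmpm docs before relying on it.
import requests

response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "<model_name>",  # the same name you passed to `llmpm serve`
        "messages": [
            {"role": "user", "content": "Hello! Summarize what you can do."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json())
```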

Follow the docs here: https://www.llmpm.co/docs

Pro tip for your use case: check out the `llmpm serve` section of the docs.