Thanks for the feedback! When you say “custom,” do you mean additional integrations with LLM providers, or more documentation on how to build your own custom integration?
If you mean the former, we’re currently focused on stabilizing the API and reaching feature parity with FoundationModels (e.g., adding streaming). After that, we plan to add more integrations, such as Claude, Gemini, and on-device LLMs from Hugging Face.
There are no examples or documentation for `CustomLLM`. The README has examples for `SystemLLM` and `OpenaiLLM`, but there's no way for us to know whether we need to bring in GGUF files, Ollama, Hugging Face, etc.