Judging by LLM benchmarks, the open-source uzu [0] looks like a better choice than RunAnywhere's proprietary inference engine.
[0] https://github.com/trymirai/uzu