I work as an SAP integration consultant and built this as a side project. The friction point: most self-hosted LLM observability tools require Postgres, Redis, and non-trivial infrastructure. Teams just want to see what their agents are actually doing in production, and that setup cost discourages adoption. Torrix runs as a single Docker container backed by SQLite. The full install is:

curl -o docker-compose.yml https://raw.githubusercontent.com/torrix-ai/install/main/doc...
docker compose up

No external dependencies. All data stays in a local SQLite file on your machine.

It logs LLM calls through an HTTP proxy or a Python/Node SDK: tokens, cost, latency, full prompt and response traces, and reasoning-token capture. Works with OpenAI, Anthropic, Gemini, Groq, Mistral, Azure OpenAI, and any OpenAI-compatible endpoint.
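To illustrate the proxy mode: since Torrix sits in front of an OpenAI-compatible endpoint, using it should amount to repointing your client at the proxy. A minimal stdlib sketch of what such a request looks like, assuming a hypothetical local address and path (check the actual docs for the real port and route):

```python
import json
import urllib.request

# Hypothetical proxy address -- the real port/path come from the Torrix docs.
TORRIX_PROXY = "http://localhost:8080/v1/chat/completions"

def build_proxied_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request aimed at the local proxy.

    The proxy forwards it upstream and records tokens, cost, and latency
    as a side effect; the application code is otherwise unchanged.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        TORRIX_PROXY,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_proxied_request("sk-...", "gpt-4o-mini", "Hello")
```

With the official SDKs the same effect is usually one line: set the client's base URL to the proxy instead of the vendor endpoint.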

Things I added as I actually used it on real agent pipelines: cost forecasting and hard budget caps, PII masking, model routing rules, evals with golden runs, an AI judge, a prompt library with version history, run tags for filtering by environment, an MCP server so AI assistants can query your own logs, and OTLP/HTTP ingestion for apps already using OpenTelemetry.
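For a sense of what PII masking means in practice: the usual approach is pattern-based substitution on prompts and responses before they hit storage. A toy sketch of that idea (the patterns and placeholder format here are illustrative, not Torrix's actual rules):

```python
import re

# Illustrative patterns only -- a real masker needs a much wider rule set.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII with typed placeholders before the trace is stored."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

masked = mask_pii("Contact jane.doe@example.com, SSN 123-45-6789.")
# masked == "Contact <email>, SSN <ssn>."
```

Masking before the write (rather than at display time) means the raw values never land in the SQLite file at all.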

The Community edition is free for one user with 7-day retention. Pro adds teams, RBAC, 30-day retention, API key management, full-text search, and audit logs.

SQLite doesn't scale to high write throughput, so this is aimed at teams logging hundreds to low thousands of LLM calls per day, not millions. Happy to hear what people think and what's missing.
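To put the throughput claim in perspective, a quick sanity check: batch-inserting 5,000 call-log rows (more than a busy day at the stated scale) into a single-writer SQLite table takes a fraction of a second. The schema below is a made-up stand-in, not Torrix's actual one:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
# WAL mode helps concurrent readers on an on-disk file;
# in-memory databases ignore this pragma.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("""CREATE TABLE calls (
    id INTEGER PRIMARY KEY,
    model TEXT, prompt_tokens INT, completion_tokens INT,
    cost_usd REAL, latency_ms REAL
)""")

rows = [("gpt-4o-mini", 120, 80, 0.0002, 350.0)] * 5000
start = time.perf_counter()
with conn:  # one transaction for the whole batch
    conn.executemany(
        "INSERT INTO calls (model, prompt_tokens, completion_tokens, cost_usd, latency_ms) "
        "VALUES (?, ?, ?, ?, ?)", rows)
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM calls").fetchone()[0]
```

The bottleneck at higher volumes isn't inserts per se but concurrent writers, since SQLite serializes writes to a single file.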

GitHub / install: https://github.com/torrix-ai/install Website: https://www.torrix.ai

This is really cool! Since this can be hosted directly on my local machine, I would love to give it a try. I use GPT models a lot. Will let you know how it goes.