Running a local LLM sounds like a solid use case, but I'm not sure it actually performs as well as the description claims.