Most of these are new features, but then they have to integrate with the existing software so it's not really greenfield. (Not to mention that our clients aren't getting any faster at approving new features, either.)
Did you train a self-hosted/open-source LLM on your existing software and documentation? That should make it far more useful. It's not Claude Code, but some of those models are 80% of the way there. In six months they'll be today's Claude Code.
What would that help us with?
The LLM needs to understand your existing codebase if it's going to be useful for building features that integrate with that codebase seamlessly, without breaking things or assuming things that don't exist. That's not a codebase you want to hand over to a private AI company, so self-host an open-source model.
I understood how that would help the LLM; I asked how it would help us. What decision would it inform that we're currently having trouble with?
What "decision" are you talking about? You said you don't code, but you administer and maintain code. It sounds like you integrate new features into an existing codebase?