> There's always an error rate. DocBots are almost certainly wrong more frequently, but they're also almost certainly much much faster than reading the docs.
A lot of the discourse around LLM tooling right now boils down to "it's OK to be a bit wrong if you're wrong quickly" ... and what follows is an ever-further pushing of the bounds on how big "a bit" can be.
The promise of AI is "human-level (or greater)": we should only be using AI when it's as accurate as (or more accurate than) human-generated docs, and the tech simply isn't there yet.