The core argument here is: LLM docbots are wrong sometimes. Docs are not. That's not acceptable.

But that's not true! Docs are sometimes wrong, and even more so if you count errors of omission. From a user's perspective, dense / poorly structured docs are wrong, because they lead users to think the docs don't have the answer. If they're confusing enough, they may even mislead users.

There's always an error rate. DocBots are almost certainly wrong more frequently, but they're also almost certainly much much faster than reading the docs. Given that the standard recommendation is to test your code before jamming it in production, that seems like a reasonable tradeoff.
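
As a throwaway illustration (hypothetical scenario, not anyone's real docbot transcript): if a docbot tells you, say, that `urllib.parse.urljoin` drops the base URL's query string when you resolve a relative path, a ten-line test pins that claim against what the real library actually does before the assumption ships:

```python
# Sketch: treat a docbot answer as a claim to verify, not a fact.
# Claimed behavior (hypothetical docbot output): urljoin drops the base
# URL's query string when resolving a relative path segment.
import unittest
from urllib.parse import urljoin


class TestDocbotClaim(unittest.TestCase):
    def test_urljoin_drops_base_query(self):
        # Resolving "b" against a base that has a query string: the
        # result replaces the last path segment and carries no query.
        self.assertEqual(urljoin("https://example.com/a?x=1", "b"),
                         "https://example.com/b")


if __name__ == "__main__":
    unittest.main()
```

If that kind of check is in your loop anyway, the docbot's higher error rate costs you a lot less.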

YMMV!

(One level down: the feedback loop for getting docbots corrected is _far_ worse. You can complain to support that the docs are wrong, and most orgs will at least try to fix it. We, as an industry, don't yet know how to reliably fix a wrong LLM response in the same way.)

> There's always an error rate. DocBots are almost certainly wrong more frequently, but they're also almost certainly much much faster than reading the docs.

A lot of the discourse around LLM tooling right now boils down to "it's ok to be a bit wrong if you're wrong quickly" ... and what follows is an ever-further pushing of the bounds on how big "a bit" can be.

The promise of AI is "human-level (or greater)": we should only be using AI when it's as accurate as (or more accurate than) human-generated docs, and the tech simply isn't there yet.

Docs are reliably fixable, so with enough effort they will converge to correctness. Doc bots are not and will not.