> Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.

I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone asked, "Why can't they just generate documentation for it themselves with AI?" It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?

It isn't valuable if you generate it and toss it over the fence. The value comes in when the team verifies the content. Once that's done and corrections are made, the words carry the assurance that they match the code.

If you aren't willing to put in the time to verify it, then it is indeed no more useful than anyone else doing the same task on their own.

Having used AI to write docs before, I'd say the value is in the guidance and review.

I started out by telling the AI the common issues people get wrong and giving it the code. Then I read (not skimmed, not sped through, actually read and thought about) the entire thing and asked for changes. Then I repeated the read-everything, think, ask-for-changes loop until it was correct, which took about 10 iterations (most of a day).
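For concreteness, here's a minimal Python sketch of that generate-read-correct loop. Everything in it is an assumption for illustration: `call_llm` is a stand-in for whatever model API you use, and the pitfalls text and file path are made up, not the actual setup described above.

```python
# Hypothetical sketch of the generate / read / correct loop described above.
# call_llm() is a placeholder for your model API of choice; the prompt text
# and file path are illustrative only.

from pathlib import Path

COMMON_PITFALLS = """\
Common issues readers get wrong about this code:
- (list the misconceptions you want the docs to address up front)
"""


def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its reply."""
    raise NotImplementedError("wire this up to your model API")


def draft_docs(source_path: str) -> str:
    """Seed the first draft with the known pitfalls plus the actual code."""
    code = Path(source_path).read_text()
    return call_llm(
        f"{COMMON_PITFALLS}\n"
        f"Write developer documentation for the following code:\n{code}"
    )


def revise(draft: str, corrections: str) -> str:
    """Ask the model to apply the human reviewer's corrections."""
    return call_llm(
        f"Here is a documentation draft:\n{draft}\n\n"
        f"Apply these corrections from a human reviewer:\n{corrections}"
    )


if __name__ == "__main__":
    draft = draft_docs("module.py")  # hypothetical file name
    # The human step is the point: read the whole draft carefully, then
    # feed back corrections. Repeat until you can't find anything wrong
    # (the comment above reports roughly 10 iterations, most of a day).
    for _ in range(10):
        print(draft)
        corrections = input("Corrections (leave blank when correct): ")
        if not corrections.strip():
            break
        draft = revise(draft, corrections)
```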

I suspect the AI would have provided zero benefit to someone who is good at technical writing, but I am bad at writing long documents for humans, so I likely would just not have done it without the assistance.