Based on a lot of real-world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
The number of people I see having email conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize those ten paragraphs back into two sentences, is becoming genuinely alarming to me.
> The number of people I see having email conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize those ten paragraphs back into two sentences, is becoming genuinely alarming to me.
I remember in the early days of LLMs this was the joke meme. But now, seeing it happen in real life is more than just alarming. It's ridiculous. It's the opposite of compressing a payload over the wire: we're taking our output, expanding it, transmitting it over the wire, and then compressing it back down for input. Why do we do this?
> But now, seeing it happen in real life is more than just alarming. It's ridiculous. It's the opposite of compressing a payload over the wire: we're taking our output, expanding it, transmitting it over the wire, and then compressing it back down for input. Why do we do this?
I assume this is satire.
> Based on a lot of real-world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone asked, "Why can't they just generate documentation for it themselves with AI?" It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
Having used AI to write docs before, I'd say the value is in the guidance and the review.
I started by telling the AI the common issues people get wrong and giving it the code. Then I read (not skimmed, not speed-read, actually read and thought about) the entire thing and asked for changes. Then I repeated that read-everything, think, ask-for-changes loop (sketched below) until it was correct, which took about 10 iterations (most of a day).
I suspect the AI would have provided zero benefit to someone who is good at technical writing, but I am bad at writing long documents for humans, so I likely would just not have done it at all without the assistance.
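For the curious, the loop was roughly the one below. This is only a minimal sketch assuming an OpenAI-style chat client; the model name, prompts, pitfalls list, and file name are all placeholders, not my actual setup.

```python
# Minimal sketch of the draft -> human review -> redraft loop described above.
# Assumes an OpenAI-style client; model name, prompts, and file names are placeholders.
from openai import OpenAI

client = OpenAI()

# The "common issues people get wrong", written by a human before any drafting.
KNOWN_PITFALLS = """\
- Callers frequently forget to close the session handle.
- The retry flag is ignored unless a timeout is also set.
"""

def draft_docs(source_code: str, previous: str | None = None,
               feedback: str | None = None) -> str:
    """Ask the model for a docs draft; later rounds include the reviewer's notes."""
    messages = [
        {"role": "system",
         "content": "Write developer documentation for the code you are given. "
                    "Pay special attention to these known pitfalls:\n" + KNOWN_PITFALLS},
        {"role": "user", "content": source_code},
    ]
    if previous and feedback:
        messages.append({"role": "assistant", "content": previous})
        messages.append({"role": "user",
                         "content": "Revise the documentation based on this review:\n" + feedback})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    return resp.choices[0].message.content

# The human does the slow part: reading the entire draft and writing corrections.
source = open("module_under_doc.py").read()
docs = draft_docs(source)
for _ in range(10):  # roughly the ten iterations mentioned above
    notes = input("Read the whole draft, then enter corrections (empty = done): ")
    if not notes:
        break
    docs = draft_docs(source, previous=docs, feedback=notes)
```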
It isn't valuable if you just generate it and toss it over the fence. The value comes in when the team verifies the content. Once that's done and corrections are made, you have some assurance that the words match the code.
If you aren't willing to put in the time to verify it works, then it is indeed no more useful than anyone else doing the same task on their own.
LLM-generated documentation is great for LLMs to read so they can code better and/or more efficiently. You can write it manually, but as I've discovered over the decades, humans rarely read documentation anyway. So you'll be spending a lot of time writing good docs for the bots.
This is odd to hear. All the best programmers I know are avid readers of documentation; how else could you be a good programmer without reading the docs? I will say, devs admitting to not reading docs definitely explains how shit current big-tech software is.
Yesterday my manager sent LLM-generated code that did a thing. Of course I didn't read it; I only read Claude's summary of it. Then I died a little inside.
It was especially unfortunate because, to do its thing, the code required a third party's personal user credentials, including MFA, which is a complete non-starter in server-side code, but apparently the manager's LLM wasn't aware enough to know that.
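To illustrate the distinction (just a sketch: the endpoint, client id, and scope below are made up, and the pattern only applies if the provider actually offers machine credentials): server-side code can't sit there waiting for a human's one-time MFA code, but it can authenticate as itself, for example with an OAuth2 client-credentials grant.

```python
# Sketch of the contrast: server-side code can't drive a human's MFA login,
# but it can use machine credentials if the third party offers them.
# The URL, client id/secret, and scope are placeholders, not a real API.
import requests

# Non-starter: a person's password plus an MFA code. The code would need a
# fresh one-time code from a human on every run, so it can't run unattended.
# login(username, password, input("MFA code: "))   # <- roughly what the generated code needed

# Workable pattern (if the provider supports it): OAuth2 client-credentials
# grant, where the service authenticates as itself with a revocable secret.
resp = requests.post(
    "https://thirdparty.example.com/oauth/token",   # placeholder endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "my-service",                  # issued to the service, not a person
        "client_secret": "stored-in-a-secrets-manager",
        "scope": "read:things",
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]
```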