That might be exactly how they're using it. A lot of my LLM use is really just having it write something I would otherwise have spent a long time typing out, then making a few edits to it.
Once I get into stuff I haven't worked out how to do yet, the LLM often doesn't really know either unless I can work it out myself and explain it first.
That rubber-duck approach is a valid workflow. Keep iterating on how you want to explain something until the LLM can echo back (and expand upon) whatever the hell you are trying to get out of your head.
Sometimes I’ll do five or six edits to a single prompt to get the LLM to echo back something that sounds right. That refinement really helps clarify my thinking.
…it’s also risky, because you are basically trying to get the model to agree with you and go along with whatever you are saying. Gotta be careful not to let the model jerk you off too hard!
Yes, there have been times when I realised after a while that my proposed approach would never actually work because of some overlooked high-level issue, but the LLM never spots that kind of thing and just happily keeps trying.
Maybe that's a good thing - if it could think that well, what would I be contributing?