No, they're more like a completely nontechnical marketing person who has a big library of papers on related subjects and has been asked to generate a whitepaper by pulling phrases from them. What comes out will probably have proper grammar and seem perfectly reasonable to another person with no knowledge of the field, but to someone knowledgeable who actually reads it, the whole may be complete gibberish.

Individual sentences and paragraphs may mostly work, but the result is an edifice built on sand, out of poorly made bricks and mortar mixed in the wrong proportions (or from entirely the wrong ingredients).

LLM output is "truthy" - it looks like it might be true, and sometimes it even is accurate (see also: a stopped clock is right twice a day) - but depending on it is foolish, because whatever is generating the output doesn't actually understand what it's putting out. It's just producing text that looks like the kind of thing you requested.