Except it generally is shallow for any sufficiently advanced subject, and the scary part is that you don't know when it's reached the limit of its knowledge, because it'll come up with some hallucination to fill in the blanks.
If LLMs got better at just responding with "I don't know," I'd have less of an issue.
I agree, but it's a known limitation. I've been duped a couple times, but I mostly can tell when it's full of shit.
With some topics you learn to be wary and double-check, or ask it to cite sources. (For me, that's car repair. It's wrong a lot.)
I wish it had some kind of confidence-level assessment, or the ability to realize it doesn't know, and I think it eventually will. Most humans I know are also very bad at that.
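For what it's worth, a crude version of that signal is already computable from the per-token log-probabilities many LLM APIs can return alongside the generated text. This is only a sketch of the idea, not a feature of any particular product, and the numbers are made-up examples: average the log-probabilities of the answer's tokens and treat a low average as a hint that the model was guessing.

    import math

    def answer_confidence(token_logprobs):
        # Geometric-mean probability of the generated tokens, as a rough
        # "how sure was the model" proxy. token_logprobs is a list of
        # per-token log-probabilities returned with the answer.
        if not token_logprobs:
            return 0.0
        avg_logprob = sum(token_logprobs) / len(token_logprobs)
        return math.exp(avg_logprob)  # back onto a 0..1 probability scale

    # Hypothetical logprobs for a confident vs. a shaky answer
    print(answer_confidence([-0.05, -0.10, -0.02]))  # ~0.94: fairly sure
    print(answer_confidence([-1.8, -2.3, -1.1]))     # ~0.18: treat with suspicion

The obvious caveat is that this measures how sure the model was of its own wording, not whether it's true; a model can be fluently, confidently wrong, which is exactly the failure mode being complained about here.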