I agree, it can be incredibly frustrating at times. My rule is that if it “compiles” in my brain as an understood idea, then I accept it. I also push back a lot (sometimes it points out genuine errors in my thinking, sometimes it admits it hallucinated). Real humans hallucinate a lot as well, or confidently state subtly wrong ideas, so it's a good habit anyway. It's basically the same approach as when I'm presented with a “formula” for something in school: if I don't know how to derive/prove it, then I don't accept it into my memorized/accepted toolkit of things I use (and try to forget it); if it fits with the rest of my network of understood ideas, I do. It's annoying, but still more time-efficient than trawling through lecture slides full of domain-specific language, etc.
> Real humans hallucinate a lot as well or confidently state subtly wrong ideas, it’s a good habit anyway.
I think that's actually deeply different. If a human keeps apologizing because they keep getting caught in lies, or even just mistakes, you distrust them a LOT more. It's not normal to shrug off a problem and then REPEAT it.
I imagine the cost of a mistake is exponential, not linear. So when somebody says "oops, you got me there!" I don't distrust them just marginally more; I distrust them a LOT more, and it will take a ton of effort, if it's even feasible, to get back to the initial level of trust.
I do not think it's at all equivalent to what "real humans" do. Yes, we do make mistakes, but the humans you trust and want to partner with are precisely the ones who are accountable when they make mistakes.
> Real humans hallucinate…
You seem to have a different understanding of what that word means in the context of neural networks.
Real humans will not make up a nonexistent API and implement a solution with it (unless they do it on purpose).