How do you know which questions should be answered with "I don't know"? There are obvious questions that have no answer, but if only those are in the dataset, the model will answer "I don't know" only for unreasonable questions.

To train this effectively you would need a dataset of questions you know the model doesn't know. But if you have that... why not answer those questions and put them in the dataset so that the model will know?

That's a bit imprecise, but I think it captures the idea of why "I don't know" answers are harder to train.

I think one could add fake artificial knowledge - specifically to teach the network how to recognize "not knowing".
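For illustration, a minimal sketch of that "fake knowledge" idea, under the assumption that you generate questions about invented entities that cannot appear in the training corpus and pair them with a refusal. The templates and helper names here are made up, not from any particular pipeline:

    import random
    import string

    # Hypothetical question templates about an entity the model can't have seen.
    QUESTION_TEMPLATES = [
        "Who founded the company {e}?",
        "In what year was {e} discovered?",
        "What is the capital of {e}?",
    ]

    def make_fake_entity(rng):
        # Random lowercase token that is vanishingly unlikely to name a real thing.
        return "".join(rng.choice(string.ascii_lowercase) for _ in range(10)).capitalize()

    def synthetic_idk_examples(n, seed=0):
        rng = random.Random(seed)
        examples = []
        for _ in range(n):
            entity = make_fake_entity(rng)
            question = rng.choice(QUESTION_TEMPLATES).format(e=entity)
            # Every question about a fabricated entity gets the refusal label.
            examples.append({"prompt": question, "response": "I don't know."})
        return examples

    print(synthetic_idk_examples(3))

The point of the fabricated entities is that the refusal label can't accidentally suppress real knowledge, since there is nothing real to know about them.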

I hear the Epistemology Klaxon sounding, far in the distance...

But you just described how to turn the "I don't know" problem into "I know, and the answer is <>". That doesn't show that "I don't know" is inherently hard to solve for some reason.

It's difficult to fix because the incentive is to make sure the model has the answer, not to give it lots of questions with known answers and have it answer "I don't know" (if you did that, you'd bias the model to be unable to answer those specific questions). Ergo, at inference time, on questions not in the dataset, it's more inclined to make up an answer, because it has seen very few "I don't know" samples in general.
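One hedged way around that incentive, just a sketch rather than anything described above, is to probe the current model before labeling: sample it a few times per question, keep the real answer where it is consistently correct, and attach "I don't know" only where it isn't, so answerable questions keep their answers. This assumes a hypothetical generate(prompt) callable that samples one answer from the model:

    from collections import Counter

    def label_example(question, reference_answer, generate, samples=5):
        # Sample the model several times on the same question.
        answers = [generate(question) for _ in range(samples)]
        most_common, count = Counter(answers).most_common(1)[0]
        consistent = count >= samples - 1
        correct = most_common.strip().lower() == reference_answer.strip().lower()
        if consistent and correct:
            # The model already knows this: keep the real answer as the target.
            target = reference_answer
        else:
            # Inconsistent or wrong: label it as a refusal instead.
            target = "I don't know."
        return {"prompt": question, "response": target}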

Maybe it was trained on the 1980's Nickelodeon show "You Can't Do That On Television".

https://www.youtube.com/watch?v=eWiG3LirUDk