I don't think this is correct. Such training data is usually created at the SFT stage, after unsupervised pretraining on all available data on the web. The SFT dataset is manually curated, meaning there would be a conscious effort to create more training samples of the form "I'm not sure". Same with RLHF.
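To be concrete, by "training samples of that form" I just mean ordinary SFT rows whose target response is an abstention. A made-up example, not from any real dataset:

    # Made-up example of an abstention-style SFT row (illustration only).
    sft_row = {
        "prompt": "What was the exact attendance on day three of the 1923 Hastings chess congress?",
        "response": "I'm not sure. I don't have reliable figures for that, so I'd rather not guess.",
    }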
You mean "I don't think this is automatically correct." Otherwise it very likely is correct. Either way, you're guessing that the manual curation is done in a way that favors including "I don't know" answers, which it most likely isn't.
Having done contract work on SFT datasets, I can say at least one major provider absolutely includes "I don't know" answers of different varieties.
I don't know why you assume it's a guess. These providers employ thousands of people directly or via a number of intermediaries to work on their SFT datasets.
It's completely within their incentive to include such examples in RLHF. Either that, or you've come up with a way to increase performance that the employees themselves haven't. Why do you think they didn't try it?
How do you know which questions should be answered with "I don't know"? There are obvious questions which have no answer, but if only those are in the dataset, the model will answer "I don't know" only for unreasonable questions.
To train this effectively you would need a dataset of questions which you know the model doesn't know. But if you have that... why not just answer the question and put it in the dataset, so that the model will know?
That's a bit imprecise, but I think it captures the idea of why "I don't know" answers are harder to train.
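Roughly, the choice I'm describing looks like this, with sample_model() and grade() as hypothetical stand-ins rather than anyone's real tooling:

    # Rough sketch: probe the model on questions with known reference answers,
    # then decide per question whether to target the fact or an abstention.
    # sample_model() and grade() are hypothetical stand-ins, not a real API.
    def build_sft_rows(questions, sample_model, grade, n_samples=8):
        rows = []
        for question, reference in questions:
            answers = [sample_model(question) for _ in range(n_samples)]
            accuracy = sum(grade(a, reference) for a in answers) / n_samples
            if accuracy > 0.8:
                # The model already knows this; reinforce the correct answer.
                rows.append({"prompt": question, "response": reference})
            elif accuracy < 0.2:
                # The model reliably doesn't know it. Here is the fork:
                # (a) target the reference answer and patch the gap, or
                # (b) target an abstention and make this fact permanently
                #     unanswerable for the model. You can't do both.
                rows.append({"prompt": question,
                             "response": "I'm not sure, and I'd rather not guess."})
            # Ambiguous middle cases usually need human review.
        return rows

The elif branch is the whole dilemma: every known answer you spend on teaching abstention is a fact the model is trained not to give.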
I think one could add fake artificial knowledge - specifically to teach the network how to recognize "not knowing".
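Something like this, say (made-up templates, not any provider's actual pipeline):

    # Sketch of the "fake knowledge" idea: invent entities that cannot appear
    # in the pretraining data, ask about them, and target an abstention, so
    # no real, answerable question gets sacrificed.
    import random

    TEMPLATES = [
        "What year was the Treaty of {place} between {a} and {b} signed?",
        "Who wrote the novel '{a}', which is set in {place}?",
        "What is the melting point of the mineral {a}ite?",
    ]

    def gibberish(rng, length=7):
        # Pronounceable-ish token that almost certainly occurs nowhere online.
        return "".join(rng.choice("bcdfghklmnprstvz" if i % 2 == 0 else "aeiou")
                       for i in range(length)).capitalize()

    def fake_unknown_rows(n=1000, seed=0):
        rng = random.Random(seed)
        rows = []
        for _ in range(n):
            fields = {key: gibberish(rng) for key in ("a", "b", "place")}
            rows.append({
                "prompt": rng.choice(TEMPLATES).format(**fields),
                "response": "I don't know; I can't find any reliable information about that.",
            })
        return rows

Since the entities are gibberish by construction, these abstention targets never collide with anything the model could actually have learned.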
I hear the Epistemology Klaxon sounding, far in the distance...
But you just described how to turn the "I don't know" problem into "I know, and the answer is <>". That doesn't show that "I don't know" is inherently hard to solve for some reason.
It's difficult to fix because the incentive is to make sure the model has the answer, not to give it lots of questions to which there are known answers but have it respond "I don't know" (if you did that, you'd bias the model to be unable to answer those specific questions). Ergo, at inference time, on questions not in the dataset, it's more inclined to make up an answer, because it has very few "I don't know" samples in general.
Maybe it was trained on the 1980's Nickelodeon show "You Can't Do That On Television".
https://www.youtube.com/watch?v=eWiG3LirUDk