I don’t think this specific approach would work too well (you’re training the network to answer ‘dunno’ to that question, not to questions it can’t answer) but I think you’ve got the right general idea.

I’d try adding an output (or some special tokens or whatever) and then training it to predict the training loss on the current sample. Hopefully during inference this output would indicate how out-of-distribution the current inputs are.
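
To make that a bit more concrete, here's a rough sketch of the kind of thing I have in mind, using a small PyTorch classifier as a stand-in. All the names and the architecture are just placeholders, not anything from the original suggestion:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative only: a classifier with an extra head trained to predict
# the per-sample training loss.
class ClassifierWithLossHead(nn.Module):
    def __init__(self, in_dim, num_classes, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, num_classes)
        self.loss_head = nn.Linear(hidden, 1)  # predicts this sample's loss

    def forward(self, x):
        h = self.backbone(x)
        return self.classifier(h), self.loss_head(h).squeeze(-1)

def training_step(model, x, y, optimizer):
    logits, predicted_loss = model(x)
    # Per-sample task loss (no reduction) is the regression target.
    task_loss = F.cross_entropy(logits, y, reduction="none")
    # Train the extra head to match the current loss; detach so the
    # auxiliary objective doesn't feed gradients back into the task loss.
    aux_loss = F.mse_loss(predicted_loss, task_loss.detach())
    loss = task_loss.mean() + aux_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time, a high value from loss_head would (hopefully) flag
# inputs that look unlike the training distribution.
```

For a language model you'd presumably do the analogous thing with special tokens or an extra output channel rather than a separate head, but the training signal would be the same: regress onto the loss the model is currently getting on that sample.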