This isn't my area, but I'd expect that for a classification problem, if the right loss function is used (a proper scoring rule such as log loss), the NN should learn to output confidences that match the actual chance that the true answer lies within the intervals it provides. However...
> If we test the net and find such a confidence associated with an incorrect classification more frequently than predicted then we can say that the confidence is objectively incorrect.
Test-set performance will never match the NN's training-set performance, so by that standard it would always overestimate its confidence; the comparison only makes sense against held-out data.
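The quoted test ("a confidence associated with an incorrect classification more frequently than predicted") is essentially a calibration check. A minimal sketch of one common way to quantify it, expected calibration error, is below; the arrays `conf` and `hit` are hypothetical stand-ins for a model's held-out confidences and correctness flags:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence, then compare each bin's mean
    confidence to its empirical accuracy; return the weighted gap."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece

# Toy example: a net that always reports 0.9 confidence but is
# right only 60% of the time is objectively miscalibrated.
conf = np.full(1000, 0.9)
hit = np.arange(1000) % 10 < 6   # correct 60% of the time
print(expected_calibration_error(conf, hit))  # ≈ 0.3
```

A well-calibrated net would score near zero here on held-out data, which is the objective sense of "correct confidence" the quote is after.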