Why is my test accuracy higher than my validation accuracy?

I am using the Classification Learner app, and I get a test accuracy that is higher than the validation accuracy: for example, 94.61% accuracy (validation) vs. 94.81% accuracy (test). I'm sure I've split the train and test sets correctly. Why is the test accuracy higher? How can I solve this? I would be grateful for your help.

4 Comments

Would you agree that this is impossible for us to answer, since we don't know anything about what you are doing in your code?
Consider that the two numbers are within 0.2% of each other. What are the odds that, if you had split the sets differently (and also randomly), you might have gotten a subtly different result?
Anyway, you build a model using the test set. It is optimized to fit that data as well as possible. Then you give it another set of data (the validation set), that was not used to build the model. I would expect this second set to fit at least a little more poorly. And that is what you seem to have observed. I'm not at all surprised. But again, the difference is a small one.
@John D'Errico, I assume that @Deren is breaking out into three datasets:
  • training -- to fit the model
  • validation -- to tune hyperparameters
  • test -- to evaluate the final model choice
(This is oversimplified, for brevity.)
Typically, training performance > validation performance > test performance.
(Again, oversimplified for brevity.)
So, his result is slightly more surprising than the two-stage method you describe. (I expect he did not train on the test set, as you are describing.)
See my answer for my take on the whole thing, which is effectively your broader point: the difference is small and not surprising.
Ok. That makes sense. Regardless, the difference is tiny, and could easily have been the other way.
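The re-splitting argument in the comments is easy to check numerically. This is a hedged sketch, not the asker's actual data: the dataset size (2000) and the model's per-example correctness rate (~94.7%, chosen to mirror the accuracies in the question) are assumptions. It repeatedly splits a fixed set of "was this example classified correctly?" flags into two random halves and measures how far apart the two halves' accuracies land.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 2000 labeled examples on which a fixed model is
# correct ~94.7% of the time (numbers chosen to mirror the question).
n = 2000
correct = rng.random(n) < 0.947

gaps = []
for _ in range(10_000):
    perm = rng.permutation(n)
    half_a = correct[perm[: n // 2]].mean()   # plays the role of "validation"
    half_b = correct[perm[n // 2 :]].mean()   # plays the role of "test"
    gaps.append(half_b - half_a)

gaps = np.abs(gaps)
print(f"Median |accuracy gap| between random halves: {np.median(gaps):.3%}")
print(f"Fraction of splits with a gap larger than 0.2%: {np.mean(gaps > 0.002):.2f}")
```

Under these assumptions, the typical gap between two random halves is well above the 0.2% observed in the question, which is the commenters' point: a difference that small is comfortably within split-to-split noise, in either direction.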


 Accepted Answer

the cyclist
the cyclist on 29 Apr 2023
Edited: the cyclist on 29 Apr 2023
There is no mystery here. Although in general a classifier will perform a little less well on the test set, sampling error can lead to a "lucky" test set, and you end up classifying it better.
Think of it like this. Suppose your validation accuracy is 95%, but the true accuracy of your model is really only 93%. It is still the case that you could perform better than 95% on any given randomly drawn test set. You could even get 100% accuracy on the test set.
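The thought experiment above can be sketched numerically. This is an illustration under stated assumptions, not the asker's data: it assumes a true accuracy of 93%, a hypothetical test-set size of 500, and treats each test example as an independent Bernoulli trial, then asks how often a "lucky" test set shows 95% accuracy or better.

```python
import numpy as np

rng = np.random.default_rng(0)

true_acc = 0.93   # assumed true accuracy of the model
n_test = 500      # assumed test-set size
n_trials = 100_000

# Each simulated test set is n_test independent Bernoulli(true_acc)
# outcomes; the observed accuracy is the fraction answered correctly.
observed = rng.binomial(n_test, true_acc, size=n_trials) / n_test

# How often does sampling error alone produce a test accuracy >= 95%?
p_lucky = np.mean(observed >= 0.95)
print(f"P(observed test accuracy >= 95%) ~ {p_lucky:.3f}")
```

With these numbers the probability is small but far from negligible, so seeing a test accuracy a fraction of a percent above the validation accuracy is entirely consistent with ordinary sampling error.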
There is a nice, fairly comprehensive discussion of these points by Jason Brownlee. Quoting from Kuhn and Johnson (from that article): "The uncertainty of the test set can be considerably large to the point where different test sets may produce very different results."


Release: R2022b
Asked: 29 Apr 2023
Commented: 29 Apr 2023
