When using K-fold cross-validation with a pretrained CNN model to classify images, the dataset is split into k folds, for example k = 5. This means that in each fold, 80% of the dataset is used for training the model.
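To illustrate what I mean, here is a minimal sketch (assuming scikit-learn and NumPy are available; the array is just a placeholder for the image dataset) of how k = 5 produces an 80%/20% partition in each fold:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(100)                      # placeholder for 100 image samples
kf = KFold(n_splits=5, shuffle=True, random_state=42)

for fold, (train_idx, held_out_idx) in enumerate(kf.split(X), start=1):
    # train_idx covers 80% of the data, held_out_idx the remaining 20%
    print(f"fold {fold}: train={len(train_idx)} held-out={len(held_out_idx)}")
```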
My question is about the remaining 20% of the dataset in each fold. Is this remaining part used for testing (to evaluate the classifier) in each fold, or is it used to validate the algorithm in each fold?
I am asking because some researchers use K-fold cross-validation to train on (k-1) parts of the dataset and use the remaining part to validate the algorithm in each fold, while other researchers use the remaining part for testing in each fold.
I need to use K-fold cross-validation in my research, but I am confused about this part of the dataset. Should I use it for testing in each fold, or use it to validate the algorithm?
As you know, there is a big difference between a validation dataset and a testing dataset.
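For example, the first arrangement I described (a separate test set held out once, with the per-fold 20% used for validation) would look roughly like the sketch below. This is only an illustration, assuming scikit-learn; the 10% test fraction and the placeholder arrays are arbitrary choices, not a recommendation:

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

X, y = np.arange(100), np.zeros(100)    # placeholders for images and labels

# Hold out a separate test set first (its size here is illustrative only),
# then run 5-fold cross-validation on the remainder: in each fold the
# held-out 20% acts as validation data, and the test set is used only once.
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(X_dev):
    X_train, X_val = X_dev[train_idx], X_dev[val_idx]
    # ... fine-tune the pretrained CNN on X_train, tune/early-stop on X_val ...

# ... finally, evaluate the chosen model once on X_test ...
```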
Thank you very much. Your rapid response is highly appreciated.