Designing and testing a neural network with random subsampling
I'm trying to build a regression model using neural networks. I have read a lot about model validation procedures, and I realize that k-fold cross-validation is logically the best method; to select the model with the best hyperparameters (in this case, the number of hidden neurons) and estimate the error of that specific model, I should use nested cross-validation. But here is an alternative:
What if I randomly divide my dataset into training, validation, and test sets, train a model with a specific number of hidden neurons on the training set, and evaluate its performance on the validation set? I repeat this process 5 or 10 times, each time with new random training and validation sets, and calculate the average validation error. I then repeat the whole procedure for other numbers of hidden neurons and select the architecture with the minimum average error. Next, to estimate the performance (error) of the selected model, I again randomly divide the data into training, validation, and test sets, train the best architecture from the previous step from scratch, and calculate the error on the test set. This step is also repeated several times; each time I train the model from scratch and test it on random examples the model has not seen. Finally, I average the test errors.
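The two-step procedure above (repeated random subsampling, sometimes called Monte Carlo cross-validation) can be sketched as follows. This is a minimal illustration in Python, not the poster's actual code: the dataset is synthetic, and `fit_and_score` is a hypothetical placeholder (a random-feature network with a least-squares output layer) standing in for training a real MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (placeholder for the real dataset).
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

def fit_and_score(X_tr, y_tr, X_ev, y_ev, n_hidden):
    # Placeholder model: random tanh hidden layer + least-squares output
    # layer; stands in for training an MLP from scratch each repeat.
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    beta, *_ = np.linalg.lstsq(np.tanh(X_tr @ W), y_tr, rcond=None)
    return np.mean((np.tanh(X_ev @ W) @ beta - y_ev) ** 2)  # held-out MSE

def random_split(n, frac_train=0.6, frac_val=0.2):
    # Fresh random train/validation/test partition each call.
    idx = rng.permutation(n)
    n_tr, n_va = int(frac_train * n), int(frac_val * n)
    return idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]

# Step 1: select the architecture by averaging validation error
# over repeated random splits.
candidates, n_repeats = [2, 5, 10], 10
mean_val_err = {}
for h in candidates:
    errs = []
    for _ in range(n_repeats):
        tr, va, te = random_split(len(X))
        errs.append(fit_and_score(X[tr], y[tr], X[va], y[va], h))
    mean_val_err[h] = np.mean(errs)
best_h = min(mean_val_err, key=mean_val_err.get)

# Step 2: estimate the generalization error of the chosen
# architecture, retraining from scratch on fresh random splits.
test_errs = []
for _ in range(n_repeats):
    tr, va, te = random_split(len(X))
    test_errs.append(fit_and_score(X[tr], y[tr], X[te], y[te], best_h))
print(best_h, np.mean(test_errs))
```

Note that the test indices are drawn fresh in each repeat and never touch training, matching the description above; the only structural difference from k-fold is that the random splits may overlap between repeats.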
So if I follow the above process, are my results biased? I understand this is not k-fold cross-validation, but it is still better than a single data split, right?
Answers (0)