How can I assign my own training, validation and test sets when training a neural net with the Neural Network Toolbox, without relying on the GUI?

To find an optimal number of hidden neurons and layers for a feedforward net, I use cross-validation, splitting the data with the cvpartition function. My aim is now to use these train/validation/test splits to build a good neural network. How can I bypass the toolbox's default data division and force it to use my own subsets? (I have already normalised the dataset.) My code looks like the following:
output = voltage_norm;
input = data_KC_ip_norm1;
numInst = length(output); %size of dataset
%******************************************************
%# Split training/testing
idx = randperm(numInst);
numTrain = round(numInst*0.80);
numTest = numInst - numTrain;
trainData = input(idx(1:numTrain),:);
testData = input(idx(numTrain+1:end),:);
trainLabel = output(idx(1:numTrain));
testLabel = output(idx(numTrain+1:end));
%*********************************************************
param.hset = 3:12; % candidate numbers of hidden neurons
% this struct can later be extended with further parameters to optimize over
%*********************************************************
cv = cvpartition(numTrain,'KFold',5); % targets are continuous, so partition by count, not by class label
Rval = zeros(cv.NumTestSets, length(param.hset));        % validation MSE, one row per fold
tElapsed_cv = zeros(cv.NumTestSets, length(param.hset)); % training time, one row per fold
tic;
trainIdxAll = idx(1:numTrain);           % training-pool rows, in original dataset order
for i = 1:cv.NumTestSets
    trIdx = trainIdxAll(cv.training(i)); % this fold's learning indices, w.r.t. the full dataset
    teIdx = trainIdxAll(cv.test(i));     % this fold's validation indices, w.r.t. the full dataset
    for k = 1:length(param.hset)
        tStart_cvtrain = tic;
        param.h = param.hset(k);
        % build the model on the learning fold
        hiddenLayerSize = param.h;
        net = feedforwardnet(hiddenLayerSize);
        net.trainParam.showWindow = false;
        net.trainParam.showCommandLine = true;
        % bypass the toolbox's default random division and force our own subsets
        net.divideFcn = 'divideind';
        net.divideParam.trainInd = trIdx;
        net.divideParam.valInd = teIdx;
        net.divideParam.testInd = idx(numTrain+1:end);
        % train expects samples in columns, hence the transposes
        [net,tr] = train(net, input', output');
        % predict on the validation fold and record its MSE
        y_hat = net(input(teIdx,:)');
        perf = perform(net, output(teIdx)', y_hat);    % toolbox MSE on the fold
        Rval(i,k) = mean((y_hat - output(teIdx)').^2); % same quantity, computed directly
        tElapsed_cv(i,k) = toc(tStart_cvtrain);
    end
end
averageTime_CV = toc/cv.NumTestSets;
mse_val = mean(Rval,1); % average validation MSE per candidate hidden-layer size
[v1, i1] = min(mse_val);
optparam = param;
optparam.h = param.hset(i1);
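One way to verify that train honoured the supplied indices (rather than re-randomizing the division) is to inspect the training record tr right after a call to train; it stores the indices that were actually used for each subset. A minimal check, using the variable names from the loop above as they stand after its last iteration:
% sanity check: the training record echoes the indices actually used,
% so we can confirm the custom division was applied
assert(isequal(sort(tr.trainInd), sort(trIdx(:)')))
assert(isequal(sort(tr.valInd),   sort(teIdx(:)')))
assert(isequal(sort(tr.testInd),  sort(idx(numTrain+1:end))))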
I would also like to know how best to choose the number of hidden layers, and whether there are other parameters that could give me better-optimized results. Help of any kind will be greatly appreciated.

Answers (1)

Greg Heath on 27 Apr 2016
I have devised an easy way to minimize the number of hidden nodes and layers (to prevent over-fitting), subject to additional constraints on overtraining and maximum training error.
However, instead of strict cross-validation, both the data division and the initial weights are random.
I have posted zillions of examples in the NEWSGROUP and on ANSWERS.
Search using
Ntrials
or
Ntrials Greg.
I have also posted on strict cross-validation. However, since it was only one or two posts, I will find the references.
Hope this will be of some help.
Greg
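A minimal sketch of the Ntrials approach described above (not Greg's exact code): for each candidate hidden-layer size, train several nets with random data division and random initial weights, and keep the net with the lowest validation MSE. The names Hset, Ntrials, bestNet and bestH are illustrative; input and output are assumed to hold samples in rows, as in the question.
% Illustrative Ntrials-style search (a sketch, not Greg's exact script)
Hset = 3:12;                          % candidate hidden-layer sizes
Ntrials = 10;                         % random restarts per candidate
bestPerf = Inf;
for H = Hset
    for trial = 1:Ntrials
        net = feedforwardnet(H);      % fresh random initial weights each time
        net.divideFcn = 'dividerand'; % default random train/val/test division
        net.trainParam.showWindow = false;
        [net,tr] = train(net, input', output');
        if tr.best_vperf < bestPerf   % validation MSE at the best epoch
            bestPerf = tr.best_vperf;
            bestNet = net;            % keep the best net and its size
            bestH = H;
        end
    end
end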
  3 Comments
Srikar on 28 Apr 2016
Dear Greg,
Thanks for the reply. Yes, I have checked the tutorials, but I am not clear on exactly why both biased and unbiased MSE estimates are needed; could you offer some more explanation of this? Sorry, I am quite new to this area, hence the difficulty in understanding.
Thanks again Professor.
Greg Heath on 28 Apr 2016
Whenever you use a data subset to create a function that will be used on similar but different data, using that function on the training data will, in general, yield an optimistically biased estimate of performance on nontraining data.
That is EXACTLY why validation and test subsets are used.
If you do not have validation and test subsets, the training-data estimate is not reliable unless the number of equations is considerably larger than the number of unknown weights.
To mitigate the bias, the number of degrees of freedom, Ndof = Ntrn - Nw, replaces Ntrn, the number of input/target training examples, when estimating the average performance.
https://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics)
Hope this helps.
Greg
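To make the adjustment concrete (my notation, not from Greg's post): for a net with one hidden layer, I inputs, H hidden nodes and O outputs, the number of unknown weights is Nw = (I+1)*H + (H+1)*O, and the adjusted training MSE divides the training sum of squared errors by Ndof instead of Ntrn:
% Degrees-of-freedom-adjusted training MSE (illustrative)
[I, Ntrn] = size(x);    % x: training inputs, samples in columns
[O, ~]    = size(t);    % t: training targets, samples in columns
H  = 10;                % hidden nodes (example value)
Nw = (I+1)*H + (H+1)*O; % unknown weights for one hidden layer
Ndof = Ntrn - Nw;       % remaining degrees of freedom
e = t - y;              % y: net output on the training set
SSEtrn  = sum(e(:).^2);
MSEtrn  = SSEtrn/Ntrn;  % biased (optimistic) estimate
MSEtrna = SSEtrn/Ndof;  % bias-adjusted estimate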

SayedPedram Hosseini on 4 May 2020
Hi Greg,
I have the same problem as described above, but the Newsreader is no longer available, so the links do not work. I would be grateful if you could send me the posts if you have saved them, or any similar tutorial material.
Regards,

