Different deep learning training behavior between MATLAB 2020a and 2021b
I have been using this code to train semantic segmentation networks:
function train_deeplab(pth,classes,classNames,sz)
    % Train a DeepLab v3+ semantic segmentation network from images and
    % pixel labels stored under pth\training and pth\validation.
    pthTrain = [pth,'training\'];
    pthVal   = [pth,'validation\'];

    % Training datastores
    Trainim    = [pthTrain,'im\'];
    Trainlabel = [pthTrain,'label\'];
    imdsTrain  = imageDatastore(Trainim);
    pxdsTrain  = pixelLabelDatastore(Trainlabel,classNames,classes);
    pximdsTrain = pixelLabelImageDatastore(imdsTrain,pxdsTrain);
    tbl = countEachLabel(pxdsTrain);   % per-class pixel counts, used for class weighting

    % Validation datastores
    Valim    = [pthVal,'im\'];
    Vallabel = [pthVal,'label\'];
    imdsVal  = imageDatastore(Valim);
    pxdsVal  = pixelLabelDatastore(Vallabel,classNames,classes);
    pximdsVal = pixelLabelImageDatastore(imdsVal,pxdsVal);

    % Training options
    options = trainingOptions('adam', ...
        'MaxEpochs',8, ...
        'MiniBatchSize',5, ...
        'Shuffle','every-epoch', ...
        'ValidationData',pximdsVal, ...
        'ValidationPatience',6, ...
        'InitialLearnRate',0.0005, ...
        'LearnRateSchedule','piecewise', ...
        'LearnRateDropPeriod',1, ...
        'LearnRateDropFactor',0.75, ...
        'ValidationFrequency',128, ...
        'ExecutionEnvironment','gpu', ...
        'Plots','training-progress', ...
        'OutputFcn',@(info)savetrainingplot(info,pth));

    % Network: DeepLab v3+ with a ResNet-50 backbone and median-frequency
    % balanced class weights in the pixel classification layer
    numclass     = numel(classes);
    imageFreq    = tbl.PixelCount ./ tbl.ImagePixelCount;
    classWeights = median(imageFreq) ./ imageFreq;
    lgraph  = deeplabv3plusLayers([sz sz 3],numclass,"resnet50");
    pxLayer = pixelClassificationLayer('Name','labels', ...
        'Classes',tbl.Name,'ClassWeights',classWeights);
    lgraph  = replaceLayer(lgraph,"classification",pxLayer);

    % Train and save the result
    [net,info] = trainNetwork(pximdsTrain,lgraph,options);
    save([pth,'net.mat'],'net','info');
end

% Save a PNG of the training-progress figure when training finishes
function stop = savetrainingplot(info,pthSave)
    stop = false;
    if info.State == "done"
        exportapp(findall(groot,'Type','Figure'), ...
            [pthSave,'training_process_21.png']);
    end
end
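For reference, the class-weighting step above implements median frequency balancing: each class weight is the median class frequency divided by that class's frequency, so under-represented classes get weights above 1. A tiny worked example with made-up counts (not from my data) shows the effect:

% Made-up numbers to illustrate the weighting used in train_deeplab
PixelCount      = [ 900000;  80000;  20000];   % pixels belonging to each class
ImagePixelCount = [1000000; 500000; 400000];   % total pixels in images containing that class
imageFreq    = PixelCount ./ ImagePixelCount   % [0.90; 0.16; 0.05]
classWeights = median(imageFreq) ./ imageFreq  % [0.178; 1.00; 3.20]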
Since switching from MATLAB 2020a to 2021b, something strange is happening with the validation loss: training and validation accuracy remain very similar, but the validation loss is orders of magnitude higher than the training loss. To illustrate the problem, here is a sample network trained with the code above on identical training and validation datasets in MATLAB 2020a and in 2021b.
Trained using MATLAB 2020a (training and validation loss/accuracy are similar): [training-progress screenshot]
Trained using MATLAB 2021b (validation loss is much higher than training loss while accuracies remain similar): [training-progress screenshot]
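To rule out a pure reporting problem, the validation loss can be spot-checked by hand with the trained net. Below is a rough sketch (untested): it assumes the same pth, classes, and classNames passed to train_deeplab, that the class order of the labels matches the network output order, and it computes an unweighted cross-entropy, so it will not match the weighted loss exactly, only its order of magnitude.

% Rough spot check of validation loss with the trained network
load([pth,'net.mat'],'net');
imdsVal = imageDatastore([pth,'validation\im\']);
pxdsVal = pixelLabelDatastore([pth,'validation\label\'],classNames,classes);
nCheck  = 5;                                   % number of images to spot-check
losses  = zeros(nCheck,1);
for k = 1:nCheck
    I = readimage(imdsVal,k);
    L = readimage(pxdsVal,k);                  % categorical label image
    Y = predict(net,I);                        % H-by-W-by-numClasses softmax scores
    T = onehotencode(L,3);                     % one-hot targets, NaN for undefined pixels
    valid = ~any(isnan(T),3);                  % ignore unlabeled pixels
    perPx = -sum(T .* log(max(Y,eps)),3);      % per-pixel cross-entropy
    losses(k) = mean(perPx(valid));
end
fprintf('Mean manual validation cross-entropy: %g\n',mean(losses));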
I appreciate any help!
Answers (1)
yanqi liu
on 26 Feb 2022
Yes, sir. Maybe use rng('default') or rand('seed',0) to make the run environment the same.
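A minimal sketch of what that suggestion looks like in practice (my reading of it, untested; pth, classes, classNames and sz are assumed to be defined, and gpurng requires Parallel Computing Toolbox, which GPU training already needs):

% Fix the CPU and GPU random number generators before training so that
% weight initialization and every-epoch shuffling are reproducible
rng(0,'twister');     % seed the CPU generator (rng('default') is equivalent)
gpurng(0);            % seed the GPU generator used during training
train_deeplab(pth,classes,classNames,sz);

Note that even with identical seeds, losses are not guaranteed to match between releases, since layer implementations and GPU libraries change from version to version.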