Semantic Segmentation Issue with output size

Hello everyone,
I am trying to segment a brain tumor MRI dataset from the BRATS challenge, but after running my code I got the following error:
"Error using trainNetwork (line 140) Invalid training data. The output size (4) of the last layer doesn't match the number of classes (4).
Error in import_data_alternate2 (line 102) cnn = trainNetwork(trainingData,net,options);
Caused by: Error using nnet.internal.cnn.util.TrainNetworkDataValidator/assertCorrectResponseSizeForOutputLayer (line 217) Invalid training data. The output size (4) of the last layer doesn't match the number of classes (4)."
clear;
clc;
%Image dataset
pxl = dir('C:\Users\Osvaldo\Downloads\BRATS Data\Imagens\Patient 07\GT\*.png')';
img = fullfile('C:\Users\Osvaldo\Downloads\BRATS Data\Imagens\Patient 07\T1c\');
%Vector preallocation
ground_truth = cell(1,numel(pxl));
gt = cell(1,numel(pxl));
training_data = imageDatastore(img);
%Ground truth images
for k = 1:numel(pxl)
    image = imageDatastore(pxl(k).name);
    ground_truth{k} = image;
end
for k = 1:numel(pxl)
    loc = ground_truth{1,k}.Files;
    gt(k) = loc;
end
gt = gt';
classes = ["Edema" "Non-enhancing tumor" "Necrosis" "Enhancing tumor"];
labelIDs = [ ...
127 127 127; ... % "Edema"
190 190 190; ... % "Non-enhancing tumor"
63 63 63; ... % "Necrosis"
255 255 255; % "Enhancing tumor"
];
groundtruth = pixelLabelDatastore(gt,classes,labelIDs);
%CNN creation
inputSize = [429 492 3];
imgLayer = imageInputLayer(inputSize);
filterSize = 3;
numFilters = 32;
conv = convolution2dLayer(filterSize,numFilters,'Padding',1);
relu = reluLayer();
poolSize = 2;
maxPoolDownsample2x = maxPooling2dLayer(poolSize,'Stride',2);
downsamplingLayers = [
conv
relu
maxPoolDownsample2x
conv
relu
maxPoolDownsample2x
conv
relu
maxPoolDownsample2x
];
filterSize = 4;
transposedConvUpsample2x = transposedConv2dLayer(4,numFilters,'Stride',2,'Cropping',1);
upsamplingLayers = [
transposedConvUpsample2x
relu
transposedConvUpsample2x
relu
];
numClasses = 4;
conv1x1 = convolution2dLayer(1,numClasses);
finalLayers = [
conv1x1
softmaxLayer()
pixelClassificationLayer()
];
net = [
imgLayer
downsamplingLayers
upsamplingLayers
finalLayers
];
%CNN Training
trainingData = pixelLabelImageSource(training_data,groundtruth);
options = trainingOptions('sgdm', ...
'InitialLearnRate', 1e-3, ...
'MaxEpochs', 100, ...
'MiniBatchSize', 64);
cnn = trainNetwork(trainingData,net,options);
Can someone help me?

Accepted Answer

Arthur Fernandes on 7 Nov 2017
Hi Gabriel,
Apparently there is a mismatch between the output size of your network and your ground truth. It is difficult to debug because MATLAB doesn't explicitly report the size of the output matrix. For your model to work, the output needs to be the same size as the ground truth; in your network the input is downsampled by 2 three times but upsampled by 2 only twice, so the spatial dimensions no longer match. However, since you want to do segmentation, a better way to approach this in MATLAB is the segnetLayers function: it ensures the output agrees with the ground truth and automatically defines the number of nodes in each layer, while you still keep the flexibility to define the architecture of your network.
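For example, here is a minimal sketch (not from the original answer) of how segnetLayers could replace the hand-built layer array, reusing the trainingData and options already defined in the question. The encoder depth of 3 is an arbitrary choice here, and the input may need to be resized so each dimension divides evenly by 2^encoderDepth (check the segnetLayers documentation for your release):
imageSize = [429 492 3];     % input size from the question (may need resizing, see note above)
numClasses = 4;              % Edema, Non-enhancing tumor, Necrosis, Enhancing tumor
encoderDepth = 3;            % assumed value, not from the thread
lgraph = segnetLayers(imageSize,numClasses,encoderDepth);
cnn = trainNetwork(trainingData,lgraph,options);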
  2 Comments
Gabriel VH on 28 Nov 2017
You, mister, are a genius! Thank you very much!
Farrukh nazir on 15 Aug 2020
Yes, I was also getting an error about the output size not matching the network when using U-Net. After switching to SegNet, the network started training.
Thanks.
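For anyone who prefers to stay with U-Net, a minimal sketch (not from this thread) using the Computer Vision Toolbox function unetLayers. The image size below is an assumed example chosen so each dimension divides evenly by 2^EncoderDepth, which some releases and padding settings require, so the images and labels from the question would need to be resized accordingly:
imageSize = [432 496 3];     % assumed size, divisible by 2^4
numClasses = 4;
lgraph = unetLayers(imageSize,numClasses,'EncoderDepth',4);
cnn = trainNetwork(trainingData,lgraph,options);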


More Answers (1)

abdulkader helwan on 25 Dec 2017
Compute the number of classes from the training labels, for example:
numClasses = numel(categories(trainDigitData.Labels));
Then use this variable in the fully connected layer:
fullyConnectedLayer(numClasses)
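Applied to the segmentation code in the question (a sketch, not part of this answer), the equivalent is to derive numClasses from the class list and pass it to the final 1x1 convolution instead of hard-coding it:
classes = ["Edema" "Non-enhancing tumor" "Necrosis" "Enhancing tumor"];
numClasses = numel(classes);                 % 4 classes
conv1x1 = convolution2dLayer(1,numClasses);  % per-pixel class scores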
