DNN training 3D Parameters

Sia Sharma on 11 Apr 2024 at 17:30
Edited: Matt J on 11 Apr 2024 at 17:40
I am training a DNN on a small dataset of 3D MRI images with a network I built from scratch: 4 blocks of convolutional layer + batch normalization + ReLU + max pooling, followed by global average pooling and 2 fully connected layers with a dropout layer between them. Both my training and validation accuracy are low, and my loss curve does not decay; it stays roughly flat around 1. I have tried L2 regularization, changing the momentum, and adding a learn rate drop factor, but none of these improve the accuracy. The same model worked well with 2D images, but I cannot get above 60% accuracy with the 3D network. I would appreciate suggestions on which parameters I could try to change.
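For reference, the network is roughly the following layer array (the input size, filter counts, and number of classes shown here are placeholders for illustration, not my exact values):

% Placeholder input size and class count, for illustration only
inputSize = [64 64 64 1];   % assumed MRI volume size
numClasses = 2;             % assumed number of classes

layers = [
    image3dInputLayer(inputSize)

    % 4 blocks of conv + batch norm + ReLU + max pooling
    convolution3dLayer(3,8,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    convolution3dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    convolution3dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    convolution3dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling3dLayer(2,'Stride',2)

    globalAveragePooling3dLayer

    % 2 fully connected layers with dropout in between
    fullyConnectedLayer(64)
    dropoutLayer(0.5)
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];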

Answers (1)

Matt J on 11 Apr 2024 at 17:39
Edited: Matt J on 11 Apr 2024 at 17:40
The parameters you mention experimenting with do not cover all of the available training options (see below for a more complete list). You could also try a different training algorithm, e.g. adam. Because the 3D input has a much larger dimension, you may also need to change the network architecture so that it has more weights to work with.
options = trainingOptions('adam', ...
    'MiniBatchSize',5, ...
    'MaxEpochs',100, ...
    'InitialLearnRate',ilr, ...   % ilr: your chosen initial learning rate
    'L2Regularization',1e-4, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.8, ...
    'LearnRateDropPeriod',5);
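As a rough sketch of how this fits together (layers and dsTrain below are placeholders for your own layer array and 3D training datastore), you can also inspect how many learnable parameters the network has before training:

% Placeholder names: 'layers' is your layer array, 'dsTrain' your 3D training datastore
analyzeNetwork(layers)                        % inspect layer sizes and learnable parameter counts
net = trainNetwork(dsTrain,layers,options);   % train with the options above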
