MATLAB Answers

How to extract partial derivatives of some specific layer in the back-propagation of a deep learning model?

SC on 22 Nov 2019
Say I have a deep learning model, and after training I call this model net.
When I input some images into net, I want to obtain the partial derivatives ∂h/∂θ, where h is the output of the relu1 layer and θ is the set of all trainable weights of the layers before relu1.
If h (i.e. the output of relu1) is flattened to a vector of size m×1, and the trainable weights of the layers before relu1 are flattened to a vector θ of size n×1, then ∂h/∂θ should have size m×n.
How can I get ∂h/∂θ in the code? Many thanks!
My current code
%% Load Data
digitDatasetPath = fullfile(matlabroot,'toolbox','nnet','nndemos', ...
    'nndatasets','DigitDataset');
imds = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
numTrainFiles = 50;
[imdsTrain,imdsValidation] = splitEachLabel(imds,numTrainFiles,'randomize');

%% Define Network Architecture
inputSize = [28 28 1];
numClasses = 10;
layers = [
    imageInputLayer(inputSize)
    convolution2dLayer(5,20,'Name','conv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(numClasses,'Name','fc2')
    softmaxLayer('Name','softmax')
    classificationLayer];

%% Train Network
options = trainingOptions('sgdm', ...
    'MaxEpochs',4, ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',30, ...
    'Verbose',false, ...
    'Plots','training-progress');
net = trainNetwork(imdsTrain,layers,options);


Answers (1)

Dinesh Yadav on 26 Nov 2019
Hi,
Kindly go through the documentation for dlgradient and the examples in it.
After the relu1 layer you can use dlgradient to compute partial derivatives of the outputs of the relu layer.
Hope it helps.
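For illustration, here is a minimal sketch of that idea against the net trained above (it assumes R2019b or newer for dlarray, dlnetwork, dlfeval and dlgradient). It rebuilds a differentiable network that ends at relu1 and differentiates one scalar element of the relu1 output with respect to all learnable parameters. The names relu1Gradients, layersToRelu1 and idx are made up for the example, and the image input layer is recreated with 'Normalization','none' since some releases of dlnetwork do not accept normalizing input layers, so any normalization would have to be applied to the data yourself.
% Differentiable network that ends at relu1. net.Layers(2:4) are the trained
% conv1, bn1 and relu1 layers, so their learned weights are carried over.
layersToRelu1 = [
    imageInputLayer([28 28 1],'Normalization','none','Name','input')
    net.Layers(2:4)];
dlnet = dlnetwork(layerGraph(layersToRelu1));

% One input image as a formatted dlarray (spatial, spatial, channel, batch).
X   = readimage(imdsTrain,1);
dlX = dlarray(single(X),'SSCB');

% Gradient of a single relu1 output element with respect to all learnable
% parameters of the layers before relu1.
idx   = 1;                                    % which element of h to differentiate
grads = dlfeval(@relu1Gradients,dlnet,dlX,idx);

function gradients = relu1Gradients(dlnet,dlX,idx)
    h = forward(dlnet,dlX);                   % relu1 activations
    h = stripdims(h);                         % drop labels so h(idx) is a scalar dlarray
    gradients = dlgradient(h(idx),dlnet.Learnables);   % d h(idx) / d theta
end
grads comes back shaped like dlnet.Learnables (a table with one gradient per learnable parameter); looping idx over all elements of h would build up the full ∂h/∂θ asked about in the question, one element at a time.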

  3 Comments

SC on 27 Nov 2019
Hi,
Thanks for your reply. But it seems that the y in dlgradient(y,x) needs to be a scalar.
The code below doesn't work for me; I get "Value to differentiate must be a traced dlarray scalar".
Is there a way to get dydx without using loop(s)? Since the output of a layer is high-dimensional, I need to do dlgradient([y1 y2],x) without a loop.
Thanks
x0 = dlarray([-1,2]);
[gradval] = dlfeval(@rosenbrock,x0)

function [dydx] = rosenbrock(x)
    y1 = 100*(x(2) - x(1).^2).^2 + (1 - x(1)).^2;
    y2 = 0.3*(100*(x(2) - x(1).^2).^2 + (1 - x(1)).^2);
    dydx = dlgradient([y1 y2],x);   % errors: the value to differentiate must be a scalar
end
Dinesh Yadav on 27 Nov 2019
I don't think there is a way to do it with dlgradient without using loops. If you want to avoid loops, you will have to write your own custom gradient function.
SC on 27 Nov 2019
I think something like jacobian() would help.
jacobian() works for the simple rosenbrock() case, but I don't think it works for the deep learning objects...
I will use a for-loop with dlgradient() then (but I need to loop 10,000+ times since the output size of my desired layer is over 10,000, so it will waste many reusable values computed in the back-propagation and become much more time-consuming than the theoretical computational time). Thank you for your help.
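For reference, a minimal sketch of that loop-based workaround for the Rosenbrock example above: each dlfeval call re-runs the forward computation and differentiates one output element, which is the repeated work described in the comment. The helper name rosenbrockRow and the variables numOutputs and J are made up for the example.
x0 = dlarray([-1,2]);
numOutputs = 2;
J = zeros(numOutputs,numel(x0));             % Jacobian, one row per output element
for k = 1:numOutputs
    J(k,:) = extractdata(dlfeval(@rosenbrockRow,x0,k));
end

function dydx = rosenbrockRow(x,k)
    % Recompute both outputs, then differentiate only the k-th one (a scalar)
    y1 = 100*(x(2) - x(1).^2).^2 + (1 - x(1)).^2;
    y2 = 0.3*y1;
    y  = [y1 y2];
    dydx = dlgradient(y(k),x);
end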

