What do pruningActivations and pruningGradients refer to?

In the pruning using Taylor scores example, I can't understand what pruningActivations and pruningGradients refer to:
[loss,pruningActivations, pruningGradients, netGradients, state] = ...
dlfeval(@modelLossPruning, prunableNet, X, T);
  4 Comments
Matt J
Matt J on 19 Apr 2025
Edited: Matt J on 19 Apr 2025
No, it's not an answer to your question. It's to tell you that no human would be able to understand the context of your question from just one line of code taken from somewhere in the documentation, so maybe you thought you were on a GPT engine. If you are looking for an AI interface, you can find one here.
Otherwise, providing a link to the full code and example would be a good start.


Accepted Answer

Shantanu Dixit
Shantanu Dixit on 21 Apr 2025
Edited: Shantanu Dixit on 21 Apr 2025
Hi Muhamed, if I understood the query correctly, you're referring to the example https://www.mathworks.com/help/deeplearning/ug/prune-image-classification-network-using-taylor-scores.html, which covers pruning networks for resource-efficient inference.
In this context, 'pruningActivations' refers to the outputs of the layers designated as prunable (e.g., in convolution layers, each activation corresponds to the output of a filter). Similarly, 'pruningGradients' refers to the gradients of the loss with respect to the 'pruningActivations'; these measure sensitivity: the larger the gradient, the greater the impact on the loss. A sketch of where these two values come from is shown below.
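For concreteness, here is a minimal sketch of what a modelLossPruning function along the lines of that example can look like. It assumes prunableNet is a taylorPrunableNetwork, whose forward call returns the prunable-layer activations as its third output; treat it as an illustration of where the two values come from, not a verbatim copy of the example.
function [loss, pruningActivations, pruningGradients, netGradients, state] = ...
        modelLossPruning(prunableNet, X, T)
    % Forward pass: Y is the network prediction, state holds the updated
    % batch normalization statistics, and pruningActivations are the
    % outputs of the prunable (convolution) layers.
    [Y, state, pruningActivations] = forward(prunableNet, X);

    % Classification loss for this mini-batch.
    loss = crossentropy(Y, T);

    % Gradients of the loss with respect to the prunable-layer activations
    % (pruningGradients) and with respect to the learnable parameters
    % (netGradients).
    [pruningGradients, netGradients] = dlgradient(loss, ...
        pruningActivations, prunableNet.Learnables);
end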
Correspondingly, the Taylor score is computed from the element-wise product of 'pruningActivations' and 'pruningGradients' by 'updateScore'. Higher scores imply that the activation and its associated parameters are more critical (for more information you can refer to the References section: Article 2, Section 2.2, which details the criteria for pruning).
% Compute first-order Taylor scores and accumulate the score across
% previous mini-batches of data.
prunableNet = updateScore(prunableNet, pruningActivations, pruningGradients);
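In the surrounding pruning loop, the two outputs of dlfeval feed updateScore for each mini-batch, and updatePrunables then removes the filters with the lowest accumulated scores. A rough sketch of that loop follows; mbq, maxPruningIterations, and maxToPrune are placeholder names here, not taken verbatim from the example.
for pruningIteration = 1:maxPruningIterations
    shuffle(mbq);
    while hasdata(mbq)
        [X, T] = next(mbq);

        % Evaluate the loss, the prunable-layer activations, and their gradients.
        [loss, pruningActivations, pruningGradients, netGradients, state] = ...
            dlfeval(@modelLossPruning, prunableNet, X, T);

        % Accumulate first-order Taylor scores across mini-batches.
        prunableNet = updateScore(prunableNet, pruningActivations, pruningGradients);

        % (The example also fine-tunes the network at this point using
        % netGradients with an optimizer such as sgdmupdate.)
    end

    % Remove the filters with the lowest accumulated Taylor scores.
    prunableNet = updatePrunables(prunableNet, MaxToPrune=maxToPrune);
end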
For more information, you can refer to the following reference on network pruning:
Molchanov, Pavlo, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. “Pruning Convolutional Neural Networks for Resource Efficient Inference.” Preprint, submitted June 8, 2017. https://arxiv.org/abs/1611.06440.
Hope this helps.


