How do I prevent neural network training from stopping because of the Mu value?

When I train a neural network, training sometimes stops because the maximum Mu value is reached. How can I prevent this?
  1 Comment
Noor Thamir on 17 May 2022
Hello Doctors:
Please, I need clarification regarding the parameters in the attached image: performance, gradient, mu, and validation check.


Accepted Answer

Amy on 11 Aug 2017
Hi Keonghwan,
The mu value controls how the network weights are updated during backpropagation training (for 'trainlm' it is the damping parameter of the Levenberg-Marquardt algorithm). If your training stops with the message "Maximum MU reached", it is a sign that additional training will not improve learning.
If you are reaching a maximum MU value too quickly, it might be because the data set you are using for training is too complex for the number of neurons in your network to successfully model. You can increase the number of neurons by increasing the value of 'hiddenLayerSize' in the script you used for training.
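For example, a minimal sketch assuming a script like the one generated by the fitting tool (fitnet, the variable name hiddenLayerSize, and the inputs x and targets t below are only placeholders; your script may use patternnet or feedforwardnet instead):

    hiddenLayerSize = 20;                     % e.g. increased from the default of 10
    net = fitnet(hiddenLayerSize, 'trainlm');
    [net, tr] = train(net, x, t);             % x = your inputs, t = your targets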
It is possible to set your own initial and maximum MU values for the 'trainlm' training function. See http://www.mathworks.com/help/nnet/ref/trainlm.html for information on the initial parameters and how they are updated.
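As a rough sketch, assuming the default 'trainlm' training function (the parameter names follow the trainlm documentation; the specific values are only illustrative):

    net = fitnet(10, 'trainlm');              % example network; use your own
    net.trainParam.mu     = 0.001;            % initial mu (trainlm default)
    net.trainParam.mu_dec = 0.1;              % factor used to decrease mu after a successful step
    net.trainParam.mu_inc = 10;               % factor used to increase mu after a failed step
    net.trainParam.mu_max = 1e12;             % raise the ceiling so "Maximum MU reached" fires later
    [net, tr] = train(net, x, t);             % x = your inputs, t = your targets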
You might also find the discussions of regularization and early stopping on this page helpful: http://www.mathworks.com/help/nnet/ug/improve-neural-network-generalization-and-avoid-overfitting.html
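For reference, a minimal sketch of both techniques (the values are illustrative; see the linked page for guidance on choosing them):

    net = fitnet(10, 'trainlm');              % example network; use your own
    % Regularization: mix mean squared weights into the 'mse' performance
    % measure (0 = no regularization, 1 = weights only).
    net.performParam.regularization = 0.1;
    % Early stopping: hold out a validation set and stop training after
    % max_fail consecutive increases in the validation error.
    net.divideFcn = 'dividerand';
    net.divideParam.trainRatio = 0.70;
    net.divideParam.valRatio   = 0.15;
    net.divideParam.testRatio  = 0.15;
    net.trainParam.max_fail    = 6;           % default number of validation failures
    [net, tr] = train(net, x, t);             % x = your inputs, t = your targets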
  1 Comment
Chris P on 9 Aug 2020
Edited: Chris P on 9 Aug 2020
Finally, a helpful answer to a neural network-related question. A lot better than most of the answers. Not going to name any names but you all know who I'm talking about.


More Answers (0)
