
Ensuring reproducibility when training YOLOv2 in the Deep Learning Toolbox

1 view (last 30 days)
I'm using the YOLOv2 network in the Deep Learning Toolbox. We are seeing significant variation in test results when running the same training code more than once.
Is it possible to ensure reproducibility in training? If so, which options/flags would need to be set to make training reproducible?
One option I see already is to set the "Shuffle" training option to 'never' (its default is 'once').
But are there other flags/random seeds that I need to set to ensure repeatability?
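For reference, a hedged sketch of how a fixed seed and the "Shuffle" option might be combined; the datastore and layer-graph variable names here are placeholders, not names from the question:

```matlab
% Sketch: fix the global random seed and disable mini-batch shuffling
% before training. "trainingData" and "lgraph" are placeholders for your
% own datastore and YOLOv2 layer graph.
rng(0, 'twister');                % fix the global random number generator

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 16, ...
    'MaxEpochs', 20, ...
    'Shuffle', 'never');          % valid values: 'once' (default), 'never', 'every-epoch'

% detector = trainYOLOv2ObjectDetector(trainingData, lgraph, options);
```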
Mohammad Sami on 30 Apr 2020
Edited: Mohammad Sami on 30 Apr 2020
You can try calling rng with a fixed seed as the first step.
I could not find documentation that addresses training deep learning models directly, but I am assuming that seeding the global generator applies to them as well.
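A minimal illustration of what rng does: seeding (or saving and restoring) the global generator state makes subsequent random draws repeat exactly. This demonstrates the mechanism only; whether it covers every source of randomness in a given training pipeline is the open question above.

```matlab
% Seed MATLAB's global random number generator before any random draws.
rng(42);          % shorthand for rng(42, 'twister')

s = rng;          % save the current generator state
a = rand(1, 3);   % draw some random numbers

rng(s);           % restore the saved state
b = rand(1, 3);   % repeat the same draws

isequal(a, b)     % returns logical 1 (true): the sequences match
```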


Answers (1)

Ryan Comeau on 10 May 2020
What you are experiencing is very normal for deep learning. Network initialization assigns initial weights to each of your layers, and those initial weights can be fixed by fixing the random seed, as mentioned in the comment above. That may not fully resolve your problem, however. The algorithm that minimizes your loss function is stochastic gradient descent, which is by definition not deterministic, so there will always be some variance in your results. This can be a good thing: we don't want to get stuck in a local minimum, which is more likely when the optimization has no randomness.
If you want training to behave as deterministically as possible, set the mini-batch size to 1. This removes much of the randomness that helps the optimizer escape local minima, and you will likely see a drop in performance.
The "Shuffle" option you are describing reorders the data so that your mini-batches do not always contain the same examples.
Lastly, if you do want "consistent" training results, simply redefine what consistent means in this case: run your training 10 times, and the result that occurs most frequently is your reproducible result.
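The repeated-runs idea above could be sketched as follows; `trainAndEvaluate` is a hypothetical placeholder for your own training-plus-evaluation code returning, say, average precision, not a Toolbox function:

```matlab
% Hypothetical sketch: train several times and summarize the spread of a
% validation metric. "trainAndEvaluate" is a placeholder you would write
% yourself (train the detector, then score it on a held-out set).
numRuns = 10;
ap = zeros(1, numRuns);
for k = 1:numRuns
    rng(k);                       % a different seed per run
    ap(k) = trainAndEvaluate();   % placeholder: returns e.g. average precision
end
fprintf('AP over %d runs: mean %.3f, std %.3f\n', numRuns, mean(ap), std(ap));
```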
Hope this helps,



