Learning curve: training error decreases as training data size increases
Qiang Wang on 2 Feb 2021
Answered: Shashank Gupta on 5 Feb 2021
I’ve learned and observed that training loss/error increases with training data size, as stated in Dr Andrew Ng’s ML course.
I’ve recently run into an anomaly: both the training error and the test error curves were decreasing while the training data size was increasing.
Is this normal?
Some posts say it can be because of regularization. In my case I use trainbr (Bayesian regularization backpropagation).
Could this be the reason?
Thank you.
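For reference, a learning-curve experiment of this kind can be set up with a loop like the minimal sketch below; the data, network size, and hold-out split here are placeholders, not the actual setup.

% Minimal sketch of a learning curve with trainbr.
% Data, network size, and hold-out split are placeholders.
x = linspace(-1, 1, 500);                    % inputs (1 x N)
t = sin(3*x) + 0.1*randn(size(x));           % noisy targets
xTest = x(401:end);  tTest = t(401:end);     % hold out the last 100 points
sizes = 50:50:400;                           % training-set sizes to try
trainErr = zeros(size(sizes));
testErr  = zeros(size(sizes));
for k = 1:numel(sizes)
    idx = 1:sizes(k);
    net = feedforwardnet(10, 'trainbr');     % Bayesian regularization backprop
    net.divideFcn = 'dividetrain';           % train on all supplied points
    net.trainParam.showWindow = false;
    net = train(net, x(idx), t(idx));
    trainErr(k) = perform(net, t(idx), net(x(idx)));   % MSE on training subset
    testErr(k)  = perform(net, tTest, net(xTest));     % MSE on held-out set
end
plot(sizes, trainErr, '-o', sizes, testErr, '-s');
legend('Training error', 'Test error');
xlabel('Training set size'); ylabel('MSE');

Each iteration retrains a fresh network on a larger subset and records its training and held-out MSE, which are then plotted against the training-set size.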
Accepted Answer
Shashank Gupta on 5 Feb 2021
Hi,
This all boils down to the number of learnable parameters versus the training data size. Regularization does have an impact on the loss, and yes, it is possible that this is what you are seeing. There can also be other reasons: in the curves you plotted, are the losses actually at their optimum? Are all the hyperparameters tuned properly? Prof. Andrew Ng talks about the case where optimality is reached: if you increase the training data while keeping the same number of learnable parameters, the optimal training loss will be higher. It is a tradeoff. The explanation given in the link you shared also makes sense; there is no contradiction there.
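As a hypothetical illustration of that tradeoff, here is a small sketch with a fixed-capacity model and no regularization: a cubic polynomial fit to a growing number of noisy samples. Its training MSE rises toward the noise level as the data size grows, which is the behavior described in the course; regularization and a heavily over-parameterized network can change this picture.

% Hypothetical illustration: fixed-capacity model, no regularization.
rng(0);
sizes = [5 10 20 50 100 200];
trainMSE = zeros(size(sizes));
for k = 1:numel(sizes)
    n  = sizes(k);
    xk = linspace(0, 1, n);
    tk = sin(2*pi*xk) + 0.2*randn(1, n);      % noisy targets
    p  = polyfit(xk, tk, 3);                  % same 4 coefficients every time
    trainMSE(k) = mean((polyval(p, xk) - tk).^2);
end
semilogx(sizes, trainMSE, '-o');
xlabel('Training set size'); ylabel('Training MSE');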
I hope my insight gave you enough help.
Cheers.
More Answers (0)