Solving ODE using Deep Learning
Hi all,
I am trying to understand how to solve ODEs using deep learning and how to code this in MATLAB, based on this tutorial:
When I modified the code to solve a Lotka-Volterra model:
I could not get the loss to converge. I suspect this is because the tutorial uses the sgdmupdate optimizer. If I want to switch to the adam optimizer, how should I change the code?
Answers (1)
Antoni Woss
on 14 Sep 2023
To use the adam optimizer in this custom training loop example, you can follow the example set out in the documentation page for the adamupdate function - https://uk.mathworks.com/help/deeplearning/ref/adamupdate.html.
Note that the adamupdate function takes some different required input and return arguments, so you will need to map those differences onto the ODE example you are solving. For example, initialize empty averageGrad and averageSqGrad variables outside the custom training loop so that you can update them at each call to adamupdate. Here is a snippet showing where these quantities would be used.
averageGrad = [];      % Adam moving average of gradients
averageSqGrad = [];    % Adam moving average of squared gradients
...
% adamupdate returns the updated Adam state alongside the network,
% and additionally requires the iteration counter
[net,averageGrad,averageSqGrad] = adamupdate(net,gradients,averageGrad,averageSqGrad,iteration);
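To make the mapping concrete, here is a minimal sketch of how the custom training loop could look with adamupdate in place of sgdmupdate. It assumes net is a dlnetwork and that modelLoss is the tutorial's loss function evaluated via dlfeval; the names t, T0, numIterations, and learnRate are placeholders you would take from your own script.

```matlab
% Sketch only - variable names below are placeholders, not from the tutorial
averageGrad = [];      % Adam state: moving average of gradients
averageSqGrad = [];    % Adam state: moving average of squared gradients
learnRate = 0.001;     % typical Adam default; tune for your problem

for iteration = 1:numIterations
    % Evaluate the loss and gradients, as in the original example
    [loss,gradients] = dlfeval(@modelLoss,net,t,T0);

    % Replace the sgdmupdate call with adamupdate, passing in and
    % returning the Adam state plus the iteration counter
    [net,averageGrad,averageSqGrad] = adamupdate(net,gradients, ...
        averageGrad,averageSqGrad,iteration,learnRate);
end
```

Unlike sgdmupdate, which only carries a velocity term, adamupdate maintains two moving averages and needs the iteration number for bias correction, which is why the extra arguments appear on both sides of the call.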