How to save an RL agent after every 1000 episodes?

I am training a DDPG agent where the training runs over 1000 episodes. To see how it evolves, I would like to save the agent after every 1000 episodes. As I see the options available in rlTrainingOptions, it is only possible to save the agent once a criterion crosses a critical value, and from then on an agent is saved every episode. This slows down the training process significantly because saving every agent takes a lot of time. Is there an efficient way to save the agent only after every 1000 episodes?
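For reference, the closest option I can find is threshold-based saving, something like this (a minimal sketch; the threshold and directory names are just examples, not my actual values):
trainOpts = rlTrainingOptions;
trainOpts.SaveAgentCriteria = "EpisodeReward";  % criterion checked after every episode
trainOpts.SaveAgentValue = 100;                 % example threshold
trainOpts.SaveAgentDirectory = "savedAgents";   % example folder
trainingStats = train(agent,env,trainOpts);     % saves an agent for every episode that meets the criterion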
Thank you.
  1 Comment
Heesu Kim on 12 Mar 2021 (edited 19 Mar 2021)
I agree with this. I don't understand why such a useful option is not available, and I'm disappointed that this question still doesn't have an answer.


Accepted Answer

Madhav Thakker on 19 Mar 2021
Hi Guru,
I understand that currently in rlTrainingOptions there is no option to save the agent after a specific number of episodes. I have raised an enhancement request for this, and it might be considered in future releases.
Hope this helps.

More Answers (2)

Manuel Sebastian Rios Beltran
@Madhav Thakker But a year later, they still have not done it :(

Lance on 23 Jun 2023 (edited 29 Jun 2023)
From what I understand, the only other workaround would be to issue another training command. You would have to predefine this for every "checkpoint", i.e. 10, 20, 30 episodes. The Training Progress plot will continue to be updated. (Note: I am using R2022a.)
% Define all agents, observations, actions, environment, etc....
maxepisodes=500;
trainingOpts=rlMultiAgentTrainingOptions;
trainingOpts.SaveAgentCriteria="EpisodeCount";
trainingOpts.SaveAgentValue=maxepisodes;
trainingStats=train([agent1,agent2],environment,trainingOpts); % Will train to max episodes and save agent
% Edit TrainingOptions to increase MaxEpisodes, SaveAgentValue, and StopTrainingValue for the next checkpoint
trainingStats(1,1).TrainingOptions.MaxEpisodes=1000;
trainingStats(1,1).TrainingOptions.SaveAgentValue=[1000,1000];
trainingStats(1,1).TrainingOptions.StopTrainingValue=[1000,1000];
trainingStats(1,2).TrainingOptions.MaxEpisodes=1000;
trainingStats(1,2).TrainingOptions.SaveAgentValue=[1000,1000];
trainingStats(1,2).TrainingOptions.StopTrainingValue=[1000,1000];
% Resume training -- Will train to 1000 episodes and save agent again
trainingStats2=train([agent1,agent2],environment,trainingStats); % Note: you pass trainingStats here, not trainingOpts, to resume training
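If you do not want to edit these fields by hand for every checkpoint, the same idea can be wrapped in a loop. The following is only a sketch for a single agent with rlTrainingOptions (the chunk size, variable names, and file names are made up for illustration); since agents are handle objects, successive train calls should continue learning from the current parameters, although the episode counter and Training Progress plot restart with each call.
% Sketch only: train in chunks of 1000 episodes and save a copy of the agent after each chunk.
% "agent" and "env" stand for your own agent and environment.
saveEvery = 1000;                                % episodes per checkpoint
numChunks = 10;                                  % total training = numChunks*saveEvery episodes
chunkOpts = rlTrainingOptions;
chunkOpts.MaxEpisodes = saveEvery;               % each train call runs one chunk
for k = 1:numChunks
    trainingStats = train(agent,env,chunkOpts);  % agent is updated in place (handle object)
    save(sprintf('agentAfter%dEpisodes.mat',k*saveEvery),'agent');
end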
Let me know if this helps!

Release

R2020b
