
Train Reinforcement Learning Agents

Once you have created an environment and reinforcement learning agent, you can train the agent in the environment using the train function. To configure your training, use an rlTrainingOptions object. For example, create a training option set opt, and train agent agent in environment env.

opt = rlTrainingOptions(...
    MaxEpisodes=1000,...
    MaxStepsPerEpisode=1000,...
    StopTrainingCriteria="AverageReward",...
    StopTrainingValue=480);
trainResults = train(agent,env,opt);

If env is a multi-agent environment created with rlSimulinkEnv, specify the agent argument as an array. The order of the agents in the array must match the agent order used to create env. Multi-agent training is not supported for MATLAB® environments.
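For example, the following sketch trains two agents in a multi-agent Simulink environment. The agent names agentA and agentB are hypothetical; the order in the array must match the agent order used when creating env with rlSimulinkEnv.

agents = [agentA agentB];              % order must match the order used to create env
trainResults = train(agents,env,opt);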

For more information on creating agents, see Reinforcement Learning Agents. For more information on creating environments, see Reinforcement Learning Environments and Create Custom Simulink Environments.

Note

train updates the agent as training progresses. This is possible because each agent is a handle object. To preserve the original agent parameters for later use, save the agent to a MAT-file:

save("initialAgent.mat","agent")
If you copy the agent into a new variable, that variable is just another reference to the same underlying object, so it always reflects the most recent agent parameters. For more information about handle objects, see Handle Object Behavior.
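The following sketch illustrates this handle behavior, assuming agent, env, and opt exist as in the earlier example. Only the saved MAT-file preserves the initial parameters; agentCopy does not.

agentCopy = agent;                     % another handle to the same agent object
save("initialAgent.mat","agent")       % snapshot of the initial parameters
trainResults = train(agent,env,opt);   % after training, agentCopy is also updated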

Training terminates automatically when the conditions you specify in the StopTrainingCriteria and StopTrainingValue options of your rlTrainingOptions object are satisfied. You can also terminate training before any termination condition is reached by clicking Stop Training in the Reinforcement Learning Episode Manager.

When training terminates, the training statistics and results are stored in the trainResults object.

Because train updates the agent at the end of each episode, and because trainResults stores the last training results along with data to correctly recreate the training scenario and update the episode manager, you can later resume training from the exact point at which it stopped. To do so, at the command line, type:

trainResults = train(agent,env,trainResults);
This resumes the training using the agent parameters and the training results obtained at the end of the previous train call.

The trainResults object contains, as one of its properties, the rlTrainingOptions object opt specifying the training option set. Therefore, to restart the training with updated training options, first change the training options in trainResults using dot notation. If the maximum number of episodes was already reached in the previous training session, you must increase the maximum number of episodes.

For example, disable the display of training progress in the Episode Manager, enable the Verbose option to display training progress at the command line, increase the maximum number of episodes to 2000, and then restart the training, returning a new trainResults object as output.

trainResults.TrainingOptions.MaxEpisodes = 2000;
trainResults.TrainingOptions.Plots = "none";
trainResults.TrainingOptions.Verbose = 1;
trainResultsNew = train(agent,env,trainResults);

Note

When training terminates, each agent reflects its state at the end of the final training episode. The rewards obtained by the final agents are not necessarily the highest achieved during the training process, due to continuous exploration. To save agents during training, create an rlTrainingOptions object specifying the SaveAgentCriteria and SaveAgentValue properties and pass it to train as the trainOpts argument.

Training Algorithm

In general, training performs the following steps; a schematic code sketch of this loop follows the list.

  1. Initialize the agent.

  2. For each episode:

    1. Reset the environment.

    2. Get the initial observation s0 from the environment.

    3. Compute the initial action a0 = μ(s0), where μ(s) is the current policy.

    4. Set the current action to the initial action (a←a0), and set the current observation to the initial observation (s←s0).

    5. While the episode is not finished or terminated, perform the following steps.

      1. Apply action a to the environment and obtain the next observation s' and the reward r.

      2. Learn from the experience set (s,a,r,s').

      3. Compute the next action a' = μ(s').

      4. Update the current action with the next action (a←a') and update the current observation with the next observation (s←s').

      5. Terminate the episode if the termination conditions defined in the environment are met.

  3. If the training termination condition is met, terminate training. Otherwise, begin the next episode.
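The following self-contained MATLAB sketch mirrors the loop above using a toy scalar environment and a fixed policy. It is only an illustration of the sequence of steps, not the implementation inside train; all names are illustrative.

mu = @(s) -0.5*s;                        % current policy, a = mu(s)
stepEnv = @(s,a) deal(s + a, -abs(s));   % toy environment: returns next observation s' and reward r
maxEpisodes = 5;
maxStepsPerEpisode = 10;

for episode = 1:maxEpisodes
    s = randn;                           % reset and get the initial observation s0
    a = mu(s);                           % compute and set the initial action a0 = mu(s0)
    episodeReward = 0;
    for t = 1:maxStepsPerEpisode
        [sNext,r] = stepEnv(s,a);        % apply a, obtain s' and r
        episodeReward = episodeReward + r;
        % a learning agent would update itself here from the experience (s,a,r,s')
        a = mu(sNext);                   % compute the next action a' = mu(s')
        s = sNext;                       % update the current observation (s <- s')
    end
end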

The specifics of how the software performs these steps depend on the configuration of the agent and environment. For instance, resetting the environment at the start of each episode can include randomizing initial state values, if you configure your environment to do so. For more information on agents and their training algorithms, see Reinforcement Learning Agents. To use parallel processing and GPUs to speed up training, see Train Agents Using Parallel Computing and GPUs.
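For example, a minimal sketch that enables parallel training by setting the UseParallel option of rlTrainingOptions (requires Parallel Computing Toolbox software):

opt = rlTrainingOptions(...
    MaxEpisodes=1000,...
    UseParallel=true);
trainResults = train(agent,env,opt);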

Episode Manager

By default, calling the train function opens the Reinforcement Learning Episode Manager, which lets you visualize the training progress.

Episode manager window showing the completion of the training for a DQN agent on the predefined pendulum environment.

The Episode Manager plot shows the reward for each episode (EpisodeReward) and a running average reward value (AverageReward).

For agents with a critic, Episode Q0 is the estimate of the discounted long-term reward at the start of each episode, given the initial observation of the environment. As training progresses, if the critic is well designed and learns successfully, Episode Q0 approaches, on average, the true discounted long-term reward, which can be offset from the EpisodeReward value because of discounting. For a well-designed critic using an undiscounted reward (DiscountFactor equal to 1), Episode Q0 approaches, on average, the true episode reward, as shown in the preceding figure.
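The following illustrative calculation, using an assumed reward of 1 per step over a 500-step episode, shows why a well-estimated Episode Q0 can be offset from EpisodeReward when the discount factor is less than 1.

rewards = ones(1,500);                            % hypothetical reward sequence
gamma = 0.99;                                     % DiscountFactor
episodeReward = sum(rewards)                                   % 500
discountedReturn = sum(gamma.^(0:numel(rewards)-1).*rewards)   % about 99.3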

The Episode Manager also displays various episode and training statistics. You can also use the train function to return episode and training information. To turn off the Reinforcement Learning Episode Manager, set the Plots option of rlTrainingOptions to "none".

Save Candidate Agents

During training, you can save candidate agents that meet conditions you specify in the SaveAgentCriteria and SaveAgentValue options of your rlTrainingOptions object. For instance, you can save any agent whose episode reward exceeds a certain value, even if the overall condition for terminating training is not yet satisfied. For example, save agents when the episode reward is greater than 100.

opt = rlTrainingOptions(SaveAgentCriteria="EpisodeReward",SaveAgentValue=100);

train stores saved agents in a MAT-file in the folder you specify using the SaveAgentDirectory option of rlTrainingOptions. Saved agents can be useful, for instance, to test candidate agents generated during a long-running training process. For details about saving criteria and saving location, see rlTrainingOptions.
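As a sketch, you might load one of the saved candidates for testing. The file name and the variable name inside the MAT-file below are assumptions; check the folder specified by SaveAgentDirectory for the actual names that train produced.

data = load(fullfile(opt.SaveAgentDirectory,"Agent100.mat"));   % hypothetical file name
candidateAgent = data.saved_agent;                              % assumed variable name in the MAT-file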

After training is complete, you can save the final trained agent from the MATLAB workspace using the save function. For example, save the agent agent to the file finalAgent.mat in the folder specified by opt.SaveAgentDirectory.

save(opt.SaveAgentDirectory + "/finalAgent.mat","agent")

By default, when DDPG and DQN agents are saved, the experience buffer data is not saved. If you plan to further train your saved agent, you can start training with the previous experience buffer as a starting point. In this case, set the SaveExperienceBufferWithAgent option to true. For some agents, such as those with large experience buffers and image-based observations, the memory required for saving the experience buffer is large. In these cases, you must ensure that enough memory is available for the saved agents.
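For example, a sketch assuming agent is a DQN agent in a release whose agent options expose the SaveExperienceBufferWithAgent property:

agent.AgentOptions.SaveExperienceBufferWithAgent = true;
save("agentWithBuffer.mat","agent")    % the saved file now also includes the experience buffer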

Validate Trained Policy

To validate your trained agent, you can simulate the agent within the training environment using the sim function. To configure the simulation, use rlSimulationOptions.
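For example, a minimal sketch that simulates the trained agent for one episode and inspects the cumulative reward (the field layout of the returned experience structure is an assumption):

simOpts = rlSimulationOptions(MaxSteps=500);
experience = sim(env,agent,simOpts);
totalReward = sum(experience.Reward.Data)   % cumulative reward of the simulated episode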

When validating your agent, consider checking how your agent handles changes to the simulation initial conditions and mismatches between the training and simulation environment dynamics.

As with parallel training, if you have Parallel Computing Toolbox™ software, you can run multiple parallel simulations on multicore computers. If you have MATLAB Parallel Server™ software, you can run multiple parallel simulations on computer clusters or cloud resources. For more information on configuring your simulation to use parallel computing, see UseParallel and ParallelizationOptions in rlSimulationOptions.
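For example, a sketch that runs ten simulations in parallel by setting the NumSimulations and UseParallel options of rlSimulationOptions:

simOpts = rlSimulationOptions(NumSimulations=10,UseParallel=true);
experiences = sim(env,agent,simOpts);   % returns one experience set per simulation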

Environment Visualization

If your training environment implements the plot method, you can visualize the environment behavior during training and simulation. If you call plot(env) before training or simulation, where env is your environment object, then the visualization updates during training to allow you to visualize the progress of each episode or simulation.
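For example, a sketch assuming env implements the plot method and that agent and opt exist as in the earlier examples:

plot(env)                              % open the environment visualization
trainResults = train(agent,env,opt);   % the visualization updates as each episode runs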

Environment visualization is not supported when training or simulating your agent using parallel computing.

For custom environments, you must implement your own plot method. For more information on creating a custom environment with a plot function, see Create Custom Environment from Class Template.
