Proximal policy optimization (PPO) is a model-free, online, on-policy, policy gradient reinforcement learning method. The algorithm alternates between sampling data through interaction with the environment and optimizing a clipped surrogate objective function using stochastic gradient descent. The clipped surrogate objective improves training stability by limiting the size of the policy change at each step. [1]

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

PPO agents can be trained in environments with the following observation and action spaces.

| Observation Space | Action Space |
| --- | --- |
| Discrete or continuous | Discrete or continuous |

During training, a PPO agent:

- Estimates the probability of taking each action in the action space and randomly selects actions based on the probability distribution.
- Interacts with the environment for multiple steps using the current policy before using mini-batches to update the actor and critic properties over multiple epochs.

To estimate the policy and value function, a PPO agent maintains two function approximators:

- **Actor** *μ*(*S*) — The actor takes observation *S* and outputs the probabilities of taking each action in the action space when in state *S*.
- **Critic** *V*(*S*) — The critic takes observation *S* and outputs the corresponding expectation of the discounted long-term reward.

When training is complete, the trained optimal policy is stored in actor *μ*(*S*).

For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.

To create a PPO agent:

1. Create an actor using an `rlStochasticActorRepresentation` object.
2. Create a critic using an `rlValueRepresentation` object.
3. Specify agent options using an `rlPPOAgentOptions` object.
4. Create the agent using the `rlPPOAgent` function, as shown in the sketch below.
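The following is a minimal sketch of this workflow for a hypothetical environment with a four-dimensional continuous observation and two discrete actions. The environment specifications, network architectures, layer names, and option values are illustrative assumptions, not requirements.

```matlab
% Hypothetical environment interface: 4 continuous observations, 2 discrete actions.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 1]);

% Critic network: maps an observation to a scalar state-value estimate V(S).
criticNet = [
    featureInputLayer(4,'Normalization','none','Name','obs')
    fullyConnectedLayer(64,'Name','fc_c')
    reluLayer('Name','relu_c')
    fullyConnectedLayer(1,'Name','value')];
critic = rlValueRepresentation(criticNet,obsInfo,'Observation',{'obs'});

% Actor network: maps an observation to action probabilities mu(S).
actorNet = [
    featureInputLayer(4,'Normalization','none','Name','obs')
    fullyConnectedLayer(64,'Name','fc_a')
    reluLayer('Name','relu_a')
    fullyConnectedLayer(2,'Name','logits')
    softmaxLayer('Name','probs')];
actor = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo,'Observation',{'obs'});

% Agent options and agent (option values are placeholders).
opt   = rlPPOAgentOptions('ExperienceHorizon',512,'ClipFactor',0.2);
agent = rlPPOAgent(actor,critic,opt);
```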

PPO agents support actors and critics that use recurrent deep neural networks as function approximators.

PPO agents use the following training algorithm. To configure the training algorithm, specify options using an `rlPPOAgentOptions` object.
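For reference, the sketch below sets the options that the algorithm steps refer to. The values shown are illustrative only, not recommendations.

```matlab
% Illustrative option values; the option names are the ones used in the steps below.
opt = rlPPOAgentOptions(...
    'ExperienceHorizon',512, ...          % N, the number of steps collected before learning
    'ClipFactor',0.2, ...                 % epsilon in the clipped surrogate objective
    'EntropyLossWeight',0.01, ...         % weight of the optional entropy loss
    'MiniBatchSize',128, ...              % M, the mini-batch size
    'NumEpoch',3, ...                     % K, the number of learning epochs
    'AdvantageEstimateMethod','gae', ...  % "gae" or "finite-horizon"
    'GAEFactor',0.95, ...                 % lambda, the GAE smoothing factor
    'DiscountFactor',0.99);               % gamma, the discount factor
```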

1. Initialize the actor *μ*(*S*) with random parameter values *θ*_{μ}.

2. Initialize the critic *V*(*S*) with random parameter values *θ*_{V}.

3. Generate *N* experiences by following the current policy. The experience sequence is:

   $${S}_{ts},{A}_{ts},{R}_{ts+1},{S}_{ts+1},\dots ,{S}_{ts+N-1},{A}_{ts+N-1},{R}_{ts+N},{S}_{ts+N}$$

   Here, *S*_{t} is a state observation, *A*_{t} is an action taken from that state, *S*_{t+1} is the next state, and *R*_{t+1} is the reward received for moving from *S*_{t} to *S*_{t+1}.

   When in state *S*_{t}, the agent computes the probability of taking each action in the action space using *μ*(*S*_{t}) and randomly selects action *A*_{t} based on the probability distribution.

   *ts* is the starting time step of the current set of *N* experiences. At the beginning of the training episode, *ts* = 1. For each subsequent set of *N* experiences in the same training episode, *ts* ← *ts* + *N*.

   For each experience sequence that does not contain a terminal state, *N* is equal to the `ExperienceHorizon` option value. Otherwise, *N* is less than `ExperienceHorizon` and *S*_{ts+N} is the terminal state.

4. For each episode step *t* = *ts*+1, *ts*+2, …, *ts*+*N*, compute the return and advantage function using the method specified by the `AdvantageEstimateMethod` option. A numerical sketch of the finite-horizon estimator appears after the algorithm.

   **Finite Horizon** (`AdvantageEstimateMethod = "finite-horizon"`) — Compute the return *G*_{t}, which is the sum of the reward for that step and the discounted future reward. [2]

   $${G}_{t}={\displaystyle \sum _{k=t}^{ts+N}\left({\gamma}^{k-t}{R}_{k}\right)}+b{\gamma}^{N-t+1}V\left({S}_{ts+N}|{\theta}_{V}\right)$$

   Here, *b* is `0` if *S*_{ts+N} is a terminal state and `1` otherwise. That is, if *S*_{ts+N} is not a terminal state, the discounted future reward includes the discounted state value function, computed using the critic network *V*.

   Compute the advantage function *D*_{t}.

   $${D}_{t}={G}_{t}-V\left({S}_{t}|{\theta}_{V}\right)$$

   **Generalized Advantage Estimator** (`AdvantageEstimateMethod = "gae"`) — Compute the advantage function *D*_{t}, which is the discounted sum of temporal difference errors. [3]

   $$\begin{array}{c}{D}_{t}={\displaystyle \sum _{k=t}^{ts+N-1}{\left(\gamma \lambda \right)}^{k-t}{\delta}_{k}}\\ {\delta}_{k}={R}_{k+1}+b\gamma V\left({S}_{k+1}|{\theta}_{V}\right)-V\left({S}_{k}|{\theta}_{V}\right)\end{array}$$

   Here, *b* is `0` if *S*_{ts+N} is a terminal state and `1` otherwise. *λ* is a smoothing factor specified using the `GAEFactor` option.

   Compute the return *G*_{t}.

   $${G}_{t}={D}_{t}+V\left({S}_{t}|{\theta}_{V}\right)$$

   To specify the discount factor *γ* for either method, use the `DiscountFactor` option.

5. Learn from experience mini-batches over *K* epochs. To specify *K*, use the `NumEpoch` option. For each learning epoch:

   - Sample a random mini-batch data set of size *M* from the current set of experiences. To specify *M*, use the `MiniBatchSize` option. Each element of the mini-batch data set contains a current experience and the corresponding return and advantage function values.

   - Update the critic parameters by minimizing the loss *L*_{critic} across all sampled mini-batch data.

     $${L}_{critic}\left({\theta}_{V}\right)=\frac{1}{M}{\displaystyle \sum _{i=1}^{M}{\left({G}_{i}-V\left({S}_{i}|{\theta}_{V}\right)\right)}^{2}}$$

   - Update the actor parameters by minimizing the loss *L*_{actor} across all sampled mini-batch data. If the `EntropyLossWeight` option is greater than zero, then an additional entropy loss is added to *L*_{actor}, which encourages policy exploration. A numerical sketch of both losses appears after the algorithm.

     $$\begin{array}{c}{L}_{actor}\left({\theta}_{\mu}\right)=-\frac{1}{M}{\displaystyle \sum _{i=1}^{M}\mathrm{min}\left({r}_{i}\left({\theta}_{\mu}\right)\ast {D}_{i},{c}_{i}\left({\theta}_{\mu}\right)\ast {D}_{i}\right)}\\ {r}_{i}\left({\theta}_{\mu}\right)=\frac{{\mu}_{Ai}\left({S}_{i}|{\theta}_{\mu}\right)}{{\mu}_{Ai}\left({S}_{i}|{\theta}_{\mu ,old}\right)}\\ {c}_{i}\left({\theta}_{\mu}\right)=\mathrm{max}\left(\mathrm{min}\left({r}_{i}\left({\theta}_{\mu}\right),1+\epsilon \right),1-\epsilon \right)\end{array}$$

     Here:

     - *D*_{i} and *G*_{i} are the advantage function and return value for the *i*th element of the mini-batch, respectively.
     - *μ*_{Ai}(*S*_{i}|*θ*_{μ}) is the probability of taking action *A*_{i} when in state *S*_{i}, given the updated policy parameters *θ*_{μ}.
     - *μ*_{Ai}(*S*_{i}|*θ*_{μ,old}) is the probability of taking action *A*_{i} when in state *S*_{i}, given the previous policy parameters *θ*_{μ,old} from before the current learning epoch.
     - *ε* is the clip factor specified using the `ClipFactor` option.

6. Repeat steps 3 through 5 until the training episode reaches a terminal state.
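The following is a small numerical sketch of the finite-horizon estimator in step 4, written in plain MATLAB with made-up rewards and critic values. It assumes *ts* = 0 and a sequence that does not end in a terminal state, so *b* = 1.

```matlab
% Finite-horizon returns and advantages for N = 4 made-up experiences.
gamma = 0.99;               % DiscountFactor
N = 4;  b = 1;              % b = 1 because S_N is not a terminal state
R = [1.0 0.5 0.2 1.5];      % R(k)   holds the reward R_k,          k = 1..N
V = [2.0 1.8 1.6 1.9 2.1];  % V(k+1) holds the critic value V(S_k), k = 0..N

G = zeros(1,N);  D = zeros(1,N);
for t = 1:N
    G(t) = sum(gamma.^((t:N)-t).*R(t:N)) + b*gamma^(N-t+1)*V(N+1);  % return G_t
    D(t) = G(t) - V(t+1);                                           % advantage D_t = G_t - V(S_t)
end
```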
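Similarly, this sketch evaluates the critic loss and the clipped surrogate actor loss from step 5 for a made-up mini-batch of *M* = 3 elements; all numbers are placeholders.

```matlab
% Clipped surrogate actor loss and critic loss for a made-up mini-batch (M = 3).
epsilon = 0.2;                          % ClipFactor
muNew = [0.30 0.60 0.10];               % mu_Ai(S_i | theta_mu), current policy
muOld = [0.25 0.70 0.05];               % mu_Ai(S_i | theta_mu,old), pre-epoch policy
Dadv  = [ 1.2 -0.4  0.8];               % advantages D_i
G     = [ 3.1  1.7  2.4];               % returns G_i
Vest  = [ 2.5  2.0  1.9];               % critic estimates V(S_i | theta_V)

r = muNew./muOld;                       % probability ratios r_i
c = max(min(r,1+epsilon),1-epsilon);    % clipped ratios c_i
Lactor  = -mean(min(r.*Dadv, c.*Dadv)); % clipped surrogate loss (minimized by the actor update)
Lcritic = mean((G - Vest).^2);          % mean squared error (minimized by the critic update)
```

Clipping the probability ratio keeps each update close to the previous policy, which is how the clipped surrogate objective limits the size of the policy change at each step.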

[1] Schulman, J., et al. "Proximal Policy Optimization Algorithms." Technical Report, *arXiv*, 2017.

[2] Mnih, V., et al. "Asynchronous Methods for Deep Reinforcement Learning." *International Conference on Machine Learning*, 2016.

[3] Schulman, J., et al. "High-Dimensional Continuous Control Using Generalized Advantage Estimation." Technical Report, *arXiv*, 2018.

`rlPPOAgent` | `rlPPOAgentOptions`