REINFORCE Policy Gradient (PG) Agent

The REINFORCE policy gradient (PG) algorithm is an on-policy reinforcement learning method for environments with either a discrete or continuous action space. The REINFORCE policy gradient agent (sometimes also referred to as a Monte Carlo policy gradient or vanilla policy gradient agent) is a policy-based reinforcement learning agent that uses the REINFORCE algorithm to search for an optimal stochastic policy, that is, a stochastic policy that maximizes the expected discounted cumulative long-term reward. Because this algorithm belongs to the class of Monte Carlo methods, the agent does not learn during an episode but only after an episode is finished.

To reduce the variance of the parameter updates, you can use a baseline value function critic that estimates the expected discounted cumulative long-term reward. Note that such a baseline does not fully act as a critic because it is not used for bootstrapping (that is, it is not used to update the value estimate of a state based on the value estimates of subsequent states).

Note

The PG agent does not generally have any functional advantage with respect to more recent agents such as PPO and is provided mostly for educational purposes.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

In Reinforcement Learning Toolbox™, a REINFORCE policy gradient agent is implemented by an rlPGAgent object.

Policy gradient agents can be trained in environments with the following observation and action spaces.

Observation Space: Discrete or continuous
Action Space: Discrete or continuous

Policy gradient agents use the following actor and critic.

Critic (if a baseline is used): Value function critic V(S), which you create using rlValueFunction.

Actor: Stochastic policy actor π(S), which you create using rlDiscreteCategoricalActor (for a discrete action space) or rlContinuousGaussianActor (for a continuous action space).

During training, a PG agent:

  • Estimates probabilities of taking each action in the action space and randomly selects actions based on the probability distribution.

  • Completes a full training episode using the current policy before learning from the experience and updating the policy parameters.

If the UseExplorationPolicy option of the agent is set to false, the action with maximum likelihood is always used in sim and generatePolicyFunction. As a result, the simulated agent and generated policy behave deterministically.

If the UseExplorationPolicy option is set to true, the agent selects its actions by sampling its probability distribution. As a result, the policy is stochastic and the agent explores its observation space.

Note

The UseExplorationPolicy option affects only simulation and deployment; it does not affect training. When you train an agent using train, the agent always uses its exploration policy independently of the value of this property.
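
For example, the following sketch simulates the same default agent in a predefined environment with each setting. The environment choice and the untrained default agent are illustrative only.

    env = rlPredefinedEnv("CartPole-Discrete");                      % example environment
    agent = rlPGAgent(getObservationInfo(env),getActionInfo(env));   % default PG agent

    agent.UseExplorationPolicy = true;     % sim samples actions from the policy
    expStochastic = sim(env,agent);

    agent.UseExplorationPolicy = false;    % sim always takes the maximum-likelihood action
    expDeterministic = sim(env,agent);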

Actor and Critic Used by the PG Agent

Policy gradient agents represent the policy using an actor function approximator π(A|S;θ) with parameters θ. The actor outputs the conditional probability of taking each action A when in state S as one of the following:

  • Discrete action space — The probability of taking each discrete action. The sum of these probabilities across all actions is 1.

  • Continuous action space — The mean and standard deviation of the Gaussian probability distribution for each continuous action.

During training, the actor tunes the parameter values in θ to improve the policy. Similarly, during training, the critic (if used) tunes the parameter values in ϕ to improve its value function estimation. After training, the parameters remain at their tuned values in the actor and critic internal to the trained agent.

To reduce the variance of the parameter updates during gradient estimation, REINFORCE policy gradient agents can use a baseline value function, which is estimated using a critic function approximator, V(S;ϕ) with parameters ϕ. The critic computes the value function for a given observation state.
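
As a sketch of the discrete case, the following code builds a small actor for a hypothetical environment with a four-element observation and two actions, and evaluates it to show that the output is one probability per action. The network architecture and sizes are illustrative only.

    obsInfo = rlNumericSpec([4 1]);     % hypothetical continuous observation space
    actInfo = rlFiniteSetSpec([-1 1]);  % hypothetical discrete action space with two actions

    actorNet = dlnetwork([
        featureInputLayer(prod(obsInfo.Dimension))
        fullyConnectedLayer(16)
        reluLayer
        fullyConnectedLayer(numel(actInfo.Elements))
        softmaxLayer
        ]);
    actor = rlDiscreteCategoricalActor(actorNet,obsInfo,actInfo);

    prb = evaluate(actor,{rand(obsInfo.Dimension)});
    prb{1}                              % one probability per action, summing to 1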

For more information on actors and critics, see Create Policies and Value Functions.

PG Agent Creation

You can create a REINFORCE policy gradient agent with default actor and critic based on the observation and action specifications from the environment. To do so, perform the following steps.

  1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.

  3. If needed, specify the number of neurons in each learnable layer of the default network or whether to use an LSTM layer. To do so, create an agent initialization option object using rlAgentInitializationOptions.

  4. If needed, specify agent options using an rlPGAgentOptions object. Alternatively, you can skip this step and modify the agent options later using dot notation.

  5. Create the agent using rlPGAgent.
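
The following is a minimal sketch of this default-creation workflow. The predefined environment, the number of hidden units, and the option value are illustrative only.

    env = rlPredefinedEnv("CartPole-Discrete");                      % example environment
    obsInfo = getObservationInfo(env);                               % step 1
    actInfo = getActionInfo(env);                                    % step 2
    initOpts = rlAgentInitializationOptions("NumHiddenUnit",64);     % step 3 (optional)
    agent = rlPGAgent(obsInfo,actInfo,initOpts);                     % step 5
    agent.AgentOptions.EntropyLossWeight = 0.005;                    % step 4 via dot notation (optional)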

Alternatively, you can create the actor and critic yourself and use these objects to create your agent. In this case, ensure that the input and output dimensions of the actor and critic match the corresponding observation and action specifications of the environment.

  1. Create observation specifications for your environment. If you already have an environment object, you can obtain these specifications using getObservationInfo.

  2. Create action specifications for your environment. If you already have an environment object, you can obtain these specifications using getActionInfo.

  3. Create an approximation model for your actor. For continuous action spaces, this model must be a neural network object. For discrete action spaces, you also have the option of using a custom basis function with initial parameter values.

  4. Create an actor using rlDiscreteCategoricalActor (for discrete action spaces) or rlContinuousGaussianActor (for continuous action spaces). Use the model you created in the previous step as a first input argument.

  5. If you are using a baseline function, create an approximation model for your critic. For continuous observation spaces, you must use either a custom basis function with initial parameter values or a neural network object. For discrete observation spaces, you also have the option of using an rlTable object.

  6. If you are using a baseline function, create a critic using rlValueFunction. Use the model you created in the previous step as a first input argument.

  7. Specify agent options using the rlPGAgentOptions object. Alternatively, you can skip this step and modify the agent options later using dot notation.

  8. Create the agent using rlPGAgent.
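
The following is a minimal sketch of this workflow for a hypothetical environment with a four-element continuous observation and two discrete actions. The network sizes and option values are illustrative only.

    obsInfo = rlNumericSpec([4 1]);                    % step 1
    actInfo = rlFiniteSetSpec([-10 10]);               % step 2

    % Steps 3-4: actor model (one output element per discrete action) and actor
    actorNet = dlnetwork([
        featureInputLayer(prod(obsInfo.Dimension))
        fullyConnectedLayer(32)
        reluLayer
        fullyConnectedLayer(numel(actInfo.Elements))
        ]);
    actor = rlDiscreteCategoricalActor(actorNet,obsInfo,actInfo);

    % Steps 5-6: baseline model (single output, the value V(S)) and critic
    criticNet = dlnetwork([
        featureInputLayer(prod(obsInfo.Dimension))
        fullyConnectedLayer(32)
        reluLayer
        fullyConnectedLayer(1)
        ]);
    baseline = rlValueFunction(criticNet,obsInfo);

    % Steps 7-8: agent options and agent
    agentOpts = rlPGAgentOptions("UseBaseline",true,"EntropyLossWeight",0.01);
    agent = rlPGAgent(actor,baseline,agentOpts);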

For more information on creating actors and critics for function approximation, see Create Policies and Value Functions.

REINFORCE Training Algorithm

PG agents use the REINFORCE (also known as Monte Carlo policy gradient) algorithm either with or without a baseline. To configure the training algorithm, specify options using an rlPGAgentOptions object.
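
For example, the following sketch (with illustrative option values) configures the REINFORCE with baseline variant with entropy regularization and sets the actor and critic learning rates.

    agentOpts = rlPGAgentOptions( ...
        "UseBaseline",true, ...             % use the REINFORCE with baseline variant
        "DiscountFactor",0.99, ...          % discount factor used to compute the return Gt
        "EntropyLossWeight",0.01);          % weight w of the entropy loss term
    agentOpts.ActorOptimizerOptions.LearnRate  = 1e-3;   % actor learning rate (alpha)
    agentOpts.CriticOptimizerOptions.LearnRate = 1e-2;   % critic learning rate (beta)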

REINFORCE Algorithm

  1. Initialize the actor π(S;θ) with random parameter values in θ.

  2. For each training episode, generate the episode experience by following the current policy π(S):

    1. At the beginning of each episode, get the initial observation S0 from the environment.

    2. For the current observation St, select the action At using the policy π(A|S;θ).

    3. Execute action At. Observe the reward Rt+1 and the next observation St+1.

    4. Store the experience (St,At,Rt+1,St+1).

    The agent takes actions until it reaches the terminal state corresponding to observation ST. The episode experience consists of the sequence

    S0, A0, R1, S1, …, ST−1, AT−1, RT, ST

  3. For each state in the episode sequence, that is, for t = 1, 2, …, T−1, calculate the return Gt, which is the discounted future reward (see the sketch following this algorithm).

    Gt = Σ_{k=t}^{T} γ^(k−t) Rk

  4. Accumulate the gradients for the actor network by following the gradient of the policy to maximize the expected discounted cumulative long-term reward. If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to maximize the entropy loss function.

    dθ = Σ_{t=1}^{T−1} Gt ∇θ ( ln π(St;θ) + wℋt(θ,St) )

    Here, ℋt(θ,St) is the entropy loss and w is the entropy loss weight factor, specified using the EntropyLossWeight option. For more information on entropy loss, see Entropy Loss.

  5. Update the actor parameters by applying the gradients.

    θ = θ + α·dθ

    Here, α is the learning rate of the actor. Specify the learning rate by setting the LearnRate option of the rlOptimizerOptions object assigned to the ActorOptimizerOptions property of the agent options object. For simplicity, this step shows a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizer you specify in that rlOptimizerOptions object.

  6. Repeat steps 2 through 5 for each training episode until training is complete.
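
The following sketch (plain MATLAB, no toolbox calls, illustrative reward values) shows the backward recursion that step 3 describes: each return satisfies Gt = Rt + γ·Gt+1, which evaluates the discounted sum in a single backward pass.

    R = [1 1 1 1 0.5];            % episode rewards R1,...,RT (illustrative)
    gamma = 0.99;                 % discount factor
    T = numel(R);
    G = zeros(1,T);
    G(T) = R(T);
    for t = T-1:-1:1
        G(t) = R(t) + gamma*G(t+1);   % Gt = sum over k = t,...,T of gamma^(k-t)*Rk
    end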

REINFORCE with Baseline Algorithm

  1. Initialize the actor π(A|S;θ) with random parameter values in θ.

  2. Initialize the baseline V(S;ϕ) with random parameter values in ϕ.

  3. For each training episode, generate the episode experience by following the current policy π(A|S;θ), as described in the REINFORCE Algorithm section, until a terminal state is reached. The episode experience consists of the sequence

    S0, A0, R1, S1, …, ST−1, AT−1, RT, ST

  4. For t = 1, 2, …, T:

    • Calculate the return Gt, which is the discounted future reward.

      Gt = Σ_{k=t}^{T} γ^(k−t) Rk

    • Compute the advantage function δt using the baseline value function estimate from the critic (see the sketch following this algorithm).

      δt = Gt − V(St;ϕ)

  5. Accumulate the gradients for the critic network.

    dϕ = Σ_{t=1}^{T−1} δt ∇ϕ V(St;ϕ)

  6. Accumulate the gradients for the actor network. If the EntropyLossWeight option is greater than zero, then additional gradients are accumulated to maximize the entropy loss function.

    dθ = Σ_{t=1}^{T−1} δt ∇θ ( ln π(St;θ) + wℋt(θ,St) )

    Here, ℋt(θ,St) is the entropy loss and w is the entropy loss weight factor, specified using the EntropyLossWeight option. For more information on entropy loss, see Entropy Loss.

  7. Update the critic parameters ϕ.

    ϕ = ϕ + β·dϕ

    Here, β is the learning rate of the critic. Specify the learning rate by setting the LearnRate option of the rlOptimizerOptions object assigned to the CriticOptimizerOptions property of the agent options object.

  8. Update the actor parameters θ.

    θ = θ + α·dθ

  9. Repeat steps 3 through 8 for each training episode until training is complete.

For simplicity, the actor and critic updates in this algorithm show gradient updates using basic stochastic gradient descent. The actual gradient update methods depend on the optimizers you specify in the rlOptimizerOptions objects assigned to the ActorOptimizerOptions and CriticOptimizerOptions properties.
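
The sketch below shows how the advantages δt = Gt − V(St;ϕ) used in steps 5 and 6 are obtained from the returns and the baseline estimates. The returns, observations, and the untrained one-layer baseline critic are illustrative only.

    obsInfo = rlNumericSpec([4 1]);                    % hypothetical observation space
    criticNet = dlnetwork([
        featureInputLayer(prod(obsInfo.Dimension))
        fullyConnectedLayer(1)
        ]);
    baseline = rlValueFunction(criticNet,obsInfo);     % baseline V(S;phi)

    G = [3.9 3.0 2.0 1.0];                             % returns G1,...,GT (illustrative)
    obs = {rand(4,1),rand(4,1),rand(4,1),rand(4,1)};   % episode observations (illustrative)
    delta = zeros(size(G));
    for t = 1:numel(G)
        delta(t) = G(t) - getValue(baseline,obs(t));   % delta_t = Gt - V(St;phi)
    end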

Entropy Loss

To promote agent exploration, you can subtract an entropy loss term wℋi(θ,Si) from the actor loss function, where w is the entropy loss weight and ℋi(θ,Si) is the entropy.

The entropy value is higher when the agent is more uncertain about which action to take next. Therefore, maximizing the entropy loss term (minimizing the negative entropy loss) increases the agent uncertainty, thus encouraging exploration. To promote additional exploration, which can help the agent move out of local optima, you can specify a larger entropy loss weight.

For a discrete action space, the agent uses the following entropy value. In this case, the actor outputs the probability of taking each possible discrete action.

ℋi(θ,Si) = −Σ_{k=1}^{P} π(Ak|Si;θ) ln π(Ak|Si;θ)

Here:

  • P is the number of possible discrete actions.

  • π(Ak|Si;θ) is the probability of taking action Ak when in state Si following the current policy.

For a continuous action space, the agent uses the following entropy value. In this case, the actor outputs the mean and standard deviation of the Gaussian distribution for each continuous action.

ℋi(θ,Si) = (1/2) Σ_{k=1}^{C} ln(2πeσk,i²)

Here:

  • C is the number of continuous actions output by the actor.

  • σk,i is the standard deviation for action k when in state Si following the current policy.
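
The following sketch evaluates both entropy expressions for illustrative actor outputs: a probability vector for the discrete case and a vector of standard deviations for the continuous case.

    % Discrete action space: probabilities of P possible actions
    p = [0.7 0.2 0.1];
    Hdiscrete = -sum(p.*log(p));

    % Continuous action space: standard deviations of C Gaussian actions
    sigma = [0.5 1.2];
    Hcontinuous = 0.5*sum(log(2*pi*exp(1)*sigma.^2));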

References

[1] Williams, Ronald J. “Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning.” Machine Learning 8, no. 3–4 (May 1992): 229–56. https://doi.org/10.1007/BF00992696.

[2] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. Second edition. Adaptive Computation and Machine Learning. Cambridge, Mass: The MIT Press, 2018.
