
rlRepresentationOptions

(Not recommended) Options set for reinforcement learning agent representations (critics and actors)

Since R2019a

rlRepresentationOptions is not recommended. Use an rlOptimizerOptions object within an agent options object instead. For more information, see rlRepresentationOptions is not recommended.
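For example, optimizer settings that you previously passed to an actor or critic through rlRepresentationOptions are now typically carried by rlOptimizerOptions objects stored in the agent options. The following is a minimal migration sketch, assuming an agent options object (here rlDDPGAgentOptions) that exposes CriticOptimizerOptions and ActorOptimizerOptions properties:

% Configure optimizer options inside the agent options object instead
% of passing rlRepresentationOptions when creating the actor or critic.
agentOpts = rlDDPGAgentOptions;
agentOpts.CriticOptimizerOptions = rlOptimizerOptions( ...
    LearnRate=1e-3,GradientThreshold=1);
agentOpts.ActorOptimizerOptions = rlOptimizerOptions( ...
    LearnRate=1e-4,GradientThreshold=1);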

Creation

Description

repOpts = rlRepresentationOptions creates a default option set to use as a last argument when creating a reinforcement learning actor or critic. You can modify the object properties using dot notation.

repOpts = rlRepresentationOptions(Name,Value) creates an options set with the specified properties using one or more name-value arguments.

Properties

LearnRate

Learning rate for the representation, specified as a positive scalar. If the learning rate is too low, then training takes a long time. If the learning rate is too high, then training might reach a suboptimal result or diverge.

Example: 'LearnRate',0.025

Optimizer

Optimizer for training the network of the representation, specified as one of the following values.

  • "adam" — Use the Adam optimizer. You can specify the decay rates of the gradient and squared gradient moving averages using the GradientDecayFactor and SquaredGradientDecayFactor fields of the OptimizerParameters option.

  • "sgdm" — Use the stochastic gradient descent with momentum (SGDM) optimizer. You can specify the momentum value using the Momentum field of the OptimizerParameters option.

  • "rmsprop" — Use the RMSProp optimizer. You can specify the decay rate of the squared gradient moving average using the SquaredGradientDecayFactor field of the OptimizerParameters option.

For more information about these optimizers, see the Algorithms section of trainingOptions in Deep Learning Toolbox™.

Example: 'Optimizer',"sgdm"
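For example, the following sketch selects the SGDM optimizer and then adjusts its momentum through the OptimizerParameters option described next:

% Select the SGDM optimizer when creating the options set.
repOpts = rlRepresentationOptions(Optimizer="sgdm");
% Increase the contribution of the previous step (default 0.9 for "sgdm").
repOpts.OptimizerParameters.Momentum = 0.95;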

OptimizerParameters

Applicable parameters for the optimizer, specified as an OptimizerParameters object with the following parameters.

Momentum

Contribution of previous step, specified as a scalar from 0 to 1. A value of 0 means no contribution from the previous step. A value of 1 means maximal contribution.

This parameter applies only when Optimizer is "sgdm". In that case, the default value is 0.9. This default value works well for most problems.

Epsilon

Denominator offset, specified as a positive scalar. The optimizer adds this offset to the denominator in the network parameter updates to avoid division by zero.

This parameter applies only when Optimizer is "adam" or "rmsprop". In that case, the default value is 1e-8. This default value works well for most problems.

GradientDecayFactor

Decay rate of gradient moving average, specified as a positive scalar from 0 to 1.

This parameter applies only when Optimizer is "adam". In that case, the default value is 0.9. This default value works well for most problems.

SquaredGradientDecayFactor

Decay rate of squared gradient moving average, specified as a positive scalar from 0 to 1.

This parameter applies only when Optimizer is "adam" or "rmsprop". In that case, the default value is 0.999. This default value works well for most problems.

When a particular property of OptimizerParameters is not applicable to the optimizer type specified in the Optimizer option, that property is set to "Not applicable".
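For example, selecting the SGDM optimizer leaves the Adam-specific decay factors unused; a short sketch (the exact display can vary by release):

repOpts = rlRepresentationOptions(Optimizer="sgdm");
% Adam-only fields such as GradientDecayFactor read "Not applicable".
repOpts.OptimizerParameters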

To change the default values, create an rlRepresentationOptions set and use dot notation to access and change the properties of OptimizerParameters.

% Create a default options set.
repOpts = rlRepresentationOptions;
% Change the decay rate of the gradient moving average (Adam only).
repOpts.OptimizerParameters.GradientDecayFactor = 0.95;

GradientThreshold

Threshold value for the representation gradient, specified as Inf or a positive scalar. If the gradient exceeds this value, the gradient is clipped as specified by the GradientThresholdMethod option. Clipping the gradient limits how much the network parameters change in a training iteration.

Example: 'GradientThreshold',1

GradientThresholdMethod

Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as one of the following values.

  • "l2norm" — If the L2 norm of the gradient of a learnable parameter is larger than GradientThreshold, then scale the gradient so that the L2 norm equals GradientThreshold.

  • "global-l2norm" — If the global L2 norm, L, is larger than GradientThreshold, then scale all gradients by a factor of GradientThreshold/L. The global L2 norm considers all learnable parameters.

  • "absolute-value" — If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger than GradientThreshold, then scale the partial derivative to have magnitude equal to GradientThreshold and retain the sign of the partial derivative.

For more information, see Gradient Clipping in the Algorithms section of trainingOptions in Deep Learning Toolbox.

Example: 'GradientThresholdMethod',"absolute-value"
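For example, a minimal sketch that combines a gradient threshold with per-element clipping:

% Clip each partial derivative to magnitude at most 1, keeping its sign.
repOpts = rlRepresentationOptions( ...
    GradientThreshold=1, ...
    GradientThresholdMethod="absolute-value");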

L2RegularizationFactor

Factor for L2 regularization (weight decay), specified as a nonnegative scalar. For more information, see L2 Regularization in the Algorithms section of trainingOptions in Deep Learning Toolbox.

To avoid overfitting when using a representation with many parameters, consider increasing the L2RegularizationFactor option.

Example: 'L2RegularizationFactor',0.0005

UseDevice

Computation device used to perform deep neural network operations such as gradient computation, parameter updates, and prediction during training, specified as either "cpu" or "gpu".

The "gpu" option requires both Parallel Computing Toolbox™ software and a CUDA®-enabled NVIDIA® GPU. For more information on supported GPUs, see GPU Computing Requirements (Parallel Computing Toolbox).

You can use gpuDevice (Parallel Computing Toolbox) to query or select a local GPU device to be used with MATLAB®.

Note

Training or simulating an agent on a GPU involves device-specific numerical round-off errors. These errors can produce different results compared to performing the same operations on a CPU.

Note that if you want to use parallel processing to speed up training, you do not need to set UseDevice. Instead, when training your agent, use an rlTrainingOptions object in which the UseParallel option is set to true. For more information about training using multicore processors and GPUs, see Train Agents Using Parallel Computing and GPUs.

Example: 'UseDevice',"gpu"
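As a sketch, you can also select the device conditionally; this assumes the canUseGPU function, which returns true when a supported GPU is available:

% Run representation computations on the GPU when one is available.
if canUseGPU
    repOpts = rlRepresentationOptions(UseDevice="gpu");
else
    repOpts = rlRepresentationOptions(UseDevice="cpu");
end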

Object Functions

rlValueRepresentation (Not recommended) Value function critic representation for reinforcement learning agents
rlQValueRepresentation (Not recommended) Q-Value function critic representation for reinforcement learning agents
rlDeterministicActorRepresentation (Not recommended) Deterministic actor representation for reinforcement learning agents
rlStochasticActorRepresentation (Not recommended) Stochastic actor representation for reinforcement learning agents

Examples

Create an options set for creating a critic or actor representation for a reinforcement learning agent. Set the learning rate for the representation to 0.05, and set the gradient threshold to 1. You can set the options using name-value arguments when you create the options set. Any options that you do not explicitly set have their default values.

repOpts = rlRepresentationOptions(LearnRate=5e-2,...
                                  GradientThreshold=1)
repOpts = 
  rlRepresentationOptions with properties:

                  LearnRate: 0.0500
          GradientThreshold: 1
    GradientThresholdMethod: "l2norm"
     L2RegularizationFactor: 1.0000e-04
                  UseDevice: "cpu"
                  Optimizer: "adam"
        OptimizerParameters: [1x1 rl.option.OptimizerParameters]

Alternatively, create a default options set and use dot notation to change some of the values.

repOpts = rlRepresentationOptions;
repOpts.LearnRate = 5e-2;
repOpts.GradientThreshold = 1
repOpts = 
  rlRepresentationOptions with properties:

                  LearnRate: 0.0500
          GradientThreshold: 1
    GradientThresholdMethod: "l2norm"
     L2RegularizationFactor: 1.0000e-04
                  UseDevice: "cpu"
                  Optimizer: "adam"
        OptimizerParameters: [1x1 rl.option.OptimizerParameters]

If you want to change the properties of the OptimizerParameters option, use dot notation to access them.

repOpts.OptimizerParameters.Epsilon = 1e-7;
repOpts.OptimizerParameters
ans = 
  OptimizerParameters with properties:

                      Momentum: "Not applicable"
                       Epsilon: 1.0000e-07
           GradientDecayFactor: 0.9000
    SquaredGradientDecayFactor: 0.9990

Version History

Introduced in R2019a
