Create options for PG agent
opt = rlPGAgentOptions
opt = rlPGAgentOptions(Name,Value)

opt = rlPGAgentOptions creates an rlPGAgentOptions object for use as an argument when creating a PG agent using all default settings. You can modify the object properties using dot notation.

opt = rlPGAgentOptions(Name,Value) creates an options object and sets its properties using one or more name-value pair arguments, as described below.
Create a PG agent options object, specifying the discount factor.
opt = rlPGAgentOptions('DiscountFactor',0.9)
opt = 

  rlPGAgentOptions with properties:

          UseBaseline: 1
    EntropyLossWeight: 0
           SampleTime: 1
       DiscountFactor: 0.9000
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
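For example, a minimal sketch that sets two of the options described below in one call (the values are illustrative, not recommendations):

opt = rlPGAgentOptions('DiscountFactor',0.95,'EntropyLossWeight',0.01);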
'UseBaseline' — Use baseline for learning
true (default) | false

Instruction to use a baseline for learning, specified as the comma-separated pair consisting of 'UseBaseline' and logical true or false. When UseBaseline is true, you must specify a critic network as the baseline function approximator.

In general, for simpler problems with smaller actor networks, PG agents work better without a baseline.
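For instance, a minimal sketch of disabling the baseline for such a problem (the actor variable and the rlPGAgent call stand in for the usual agent-creation workflow and are assumed to exist in your session):

opt = rlPGAgentOptions('UseBaseline',false);
agent = rlPGAgent(actor,opt);   % with UseBaseline false, no baseline critic is passed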
'SampleTime' — Sample time of agent
1 (default) | numeric value

Sample time of agent, specified as the comma-separated pair consisting of 'SampleTime' and a numeric value.
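For example, a short sketch of setting a nondefault sample time (the value 0.1 is illustrative):

opt = rlPGAgentOptions('SampleTime',0.1);   % agent executes every 0.1 s of simulation time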
'DiscountFactor' — Discount factor
0.99 (default) | numeric value

Discount factor applied to future rewards during training, specified as the comma-separated pair consisting of 'DiscountFactor' and a positive numeric value less than or equal to 1.
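As a quick numeric illustration, a reward received k steps in the future is weighted by DiscountFactor^k, so smaller values make the agent more short-sighted:

opt = rlPGAgentOptions('DiscountFactor',0.9);
0.9^10    % ans = 0.3487, the weight applied to a reward 10 steps ahead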
'EntropyLossWeight' — Entropy loss weight
0 (default) | scalar value between 0 and 1

Entropy loss weight, specified as the comma-separated pair consisting of 'EntropyLossWeight' and a scalar value between 0 and 1. A higher loss weight value promotes agent exploration by applying a penalty for being too certain about which action to take. Doing so can help the agent move out of local optima.
The entropy loss function for episode step t is:

$H_t = E \sum_{k=1}^{M} \mu_k(S_t) \ln \mu_k(S_t)$

where:

H_t is the entropy loss for step t.
E is the entropy loss weight.
M is the number of possible actions.
μ_k(S_t) is the probability of taking action A_k when in state S_t, following the current policy.
During training, an additional gradient component is computed for minimizing this loss function.
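As a numeric sketch of this loss (the E and mu values below are illustrative, not toolbox output):

E = 0.1;                 % entropy loss weight
mu = [0.7 0.2 0.1];      % action probabilities under the current policy
H = E*sum(mu.*log(mu))   % entropy loss; a more uniform mu gives a lower (more negative) H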