setLearnableParameterValues

Set learnable parameter values of policy or value function representation

Description

newRep = setLearnableParameterValues(oldRep,val) returns a new policy or value function representation, newRep, with the same structure as the original representation, oldRep, and the learnable parameter values specified in val.
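
For example, here is a minimal sketch of the typical workflow, assuming oldRep is an existing representation (such as a critic obtained with getCritic); the scaling factor is only an illustrative modification:

params = getLearnableParameterValues(oldRep);               % current values as a cell array
scaled = cellfun(@(p) 0.5*p,params,'UniformOutput',false);  % illustrative change: halve every parameter
newRep = setLearnableParameterValues(oldRep,scaled);        % same structure as oldRep, new values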

Examples

Modify Critic Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

load('DoubleIntegDDPG.mat','agent') 

Obtain the critic representation from the agent.

critic = getCritic(agent);

Obtain the learnable parameters from the critic.

params = getLearnableParameterValues(critic);

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the critic to the new modified values.

critic = setLearnableParameterValues(critic,modifiedParams);

Set the critic in the agent to the new modified critic.

agent = setCritic(agent,critic);

Modify Actor Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

load('DoubleIntegDDPG.mat','agent') 

Obtain the actor representation from the agent.

actor = getActor(agent);

Obtain the learnable parameters from the actor.

params = getLearnableParameterValues(actor);

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the actor to the new modified values.

actor = setLearnableParameterValues(actor,modifiedParams);

Set the actor in the agent to the new modified actor.

agent = setActor(agent,actor);

Input Arguments

oldRep - Original policy or value function representation

Original policy or value function representation, specified as one of the following:

  • rlLayerRepresentation object for deep neural network representations

  • rlTableRepresentation object for value table or Q table representations

To create a policy or value function representation, use one of the following methods (a brief sketch follows this list):

  • Create a representation using rlRepresentation.

  • Obtain the existing value function representation from an agent using getCritic.

  • Obtain the existing policy representation from an agent using getActor.
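
As a quick illustration of the last two options, assuming agent is an existing agent object (as in the examples above):

critic = getCritic(agent);   % value function (critic) representation
actor = getActor(agent);     % policy (actor) representation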

val - Learnable parameter values

Learnable parameter values for the representation object, specified as a cell array. The parameters in val must be compatible with the structure and parameterization of oldRep.

To obtain a cell array of learnable parameter values from an existing representation, which you can then modify, use the getLearnableParameterValues function.
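
For instance, the following sketch (the representation variable rep and the perturbation are illustrative assumptions, not part of this reference page) obtains the current values, inspects the size of each parameter array, and applies replacement values that keep those sizes:

params = getLearnableParameterValues(rep);                     % original parameter values as a cell array
cellfun(@(p) mat2str(size(p)),params,'UniformOutput',false)    % display the size of each parameter array
newVals = cellfun(@(p) p + 0.01*randn(size(p),'like',p), ...   % small perturbation, same size as each original
    params,'UniformOutput',false);
rep = setLearnableParameterValues(rep,newVals);                % apply the modified values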

Output Arguments

newRep - New policy or value function representation

New policy or value function representation, returned as a representation object of the same type as oldRep. newRep has the same structure as oldRep but with parameter values from val.

Introduced in R2019a