getLearnableParameterValues

Obtain learnable parameter values from policy or value function representation

Description


val = getLearnableParameterValues(rep) returns the values of the learnable parameters from the reinforcement learning policy or value function representation rep.
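For instance, a minimal sketch of calling the function on a critic extracted from a trained agent (the agent variable is assumed to already exist in the workspace, as in the examples below):

critic = getCritic(agent);                     % value function representation
val = getLearnableParameterValues(critic);     % cell array of parameter arrays

% Inspect the size of each learnable parameter array (sketch only).
cellfun(@size,val,'UniformOutput',false)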

Examples


Modify Critic Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

load('DoubleIntegDDPG.mat','agent') 

Obtain the critic representation from the agent.

critic = getCritic(agent);

Obtain the learnable parameters from the critic.

params = getLearnableParameterValues(critic);

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the critic to the new modified values.

critic = setLearnableParameterValues(critic,modifiedParams);

Set the critic in the agent to the new modified critic.

agent = setCritic(agent,critic);

Modify Actor Parameter Values

Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Train DDPG Agent to Control Double Integrator System.

load('DoubleIntegDDPG.mat','agent') 

Obtain the actor representation from the agent.

actor = getActor(agent);

Obtain the learnable parameters from the actor.

params = getLearnableParameterValues(actor);

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the actor to the new modified values.

actor = setLearnableParameterValues(actor,modifiedParams);

Set the actor in the agent to the new modified actor.

agent = setActor(agent,actor);

Input Arguments


rep - Policy or value function representation, specified as one of the following:

  • rlLayerRepresentation object for deep neural network representations

  • rlTableRepresentation object for value table or Q table representations

To create a policy or value function representation, use one of the following methods:

  • Create a representation using rlRepresentation, as shown in the sketch after this list.

  • Obtain the existing value function representation from an agent using getCritic.

  • Obtain the existing policy representation from an agent using getActor.
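As a rough sketch of the rlRepresentation route, the following assumes a hypothetical environment env with finite observation and action specifications; the rlTable and rlRepresentation syntax shown here is an assumption based on the R2019a workflow, so check those reference pages for the supported signatures.

obsInfo = getObservationInfo(env);    % env is a hypothetical discrete environment
actInfo = getActionInfo(env);
qTable = rlTable(obsInfo,actInfo);    % Q table over the finite observation/action spaces
rep = rlRepresentation(qTable);       % table-based value function representation
val = getLearnableParameterValues(rep);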

Output Arguments


val - Learnable parameter values for the representation object, returned as a cell array. You can modify these parameter values and set them in the original agent or a different agent using the setLearnableParameterValues function.
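For example, the following sketch transfers critic parameters between two hypothetical agents, agentA and agentB, assuming their critics have identical network architectures:

criticA = getCritic(agentA);                      % source critic
params = getLearnableParameterValues(criticA);    % cell array of parameter values

criticB = getCritic(agentB);                      % destination critic (same architecture assumed)
criticB = setLearnableParameterValues(criticB,params);
agentB = setCritic(agentB,criticB);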

Introduced in R2019a