Create Agent Using Deep Network Designer and Train Using Image Observations

This example shows how to create a deep Q-learning network (DQN) agent using the Deep Network Designer app to swing up and balance a pendulum modeled in MATLAB®. For more information on DQN agents, see Deep Q-Network Agents.

Pendulum Swing-Up with Image MATLAB Environment

The reinforcement learning environment for this example is a simple frictionless pendulum that is initially hanging in a downward position. The training goal is to make the pendulum stand upright without falling over using minimal control effort.

For this environment:

  • The upward balanced pendulum position is 0 radians, and the downward hanging position is pi radians.

  • The torque action signal from the agent to the environment is from -2 to 2 Nm.

  • The observations from the environment are the simplified grayscale image of the pendulum, and the pendulum angle derivative.

  • The reward r_t, provided at every time step, is:

r_t = -(θ_t² + 0.1·θ̇_t² + 0.001·u_{t-1}²)

where:

  • θ_t is the angle of displacement from the upright position

  • θ̇_t is the derivative of the displacement angle

  • u_{t-1} is the control effort from the previous time step.

For more information on this model, see Train DDPG Agent to Swing Up and Balance Pendulum with Image Observation.
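
Although the predefined environment computes this reward internally, the formula is easy to check with a quick sketch. The anonymous function below is purely illustrative and is not part of the environment.

% Illustrative only: the predefined environment implements this reward internally.
reward = @(theta,thetaDot,uPrev) -(theta.^2 + 0.1*thetaDot.^2 + 0.001*uPrev.^2);
reward(pi,0,0)   % hanging straight down with no effort: about -9.8696
reward(0,0,0)    % balanced upright with no effort: 0 (the maximum reward)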

Create Environment Interface

Create a predefined environment interface for the pendulum.

env = rlPredefinedEnv('SimplePendulumWithImage-Discrete');

The interface has two observations. The first observation, named "pendImage", is a 50-by-50 grayscale image.

obsInfo = getObservationInfo(env);
obsInfo(1)
ans = 
  rlNumericSpec with properties:

     LowerLimit: 0
     UpperLimit: 1
           Name: "pendImage"
    Description: [0×0 string]
      Dimension: [50 50 1]
       DataType: "double"

The second observation, named "angularRate", is the angular velocity of the pendulum.

obsInfo(2)
ans = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: "angularRate"
    Description: [0×0 string]
      Dimension: [1 1]
       DataType: "double"

The interface has a discrete action space where the agent can apply one of five possible torque values to the pendulum: -2, -1, 0, 1, or 2 Nm.

actInfo = getActionInfo(env)
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [-2 -1 0 1 2]
           Name: "torque"
    Description: [0×0 string]
      Dimension: [1 1]
       DataType: "double"

Fix the random generator seed for reproducibility.

rng(0)

Construct Critic Network Using Deep Network Designer

A DQN agent approximates the long-term reward, given observations and actions, using a critic value function representation. For this environment, the critic is a deep neural network with three inputs (the two observations and the action) and one output. For more information on creating a deep neural network value function representation, see Create Policy and Value Function Representations.
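
Concretely, the critic is trained so that its output Q(s,a) tracks a bootstrapped target of the standard DQN form

y = r + γ·max over a' of Q'(s',a')

where Q' is the target critic and γ is the discount factor. See the Deep Q-Network Agents page for the exact update rule the toolbox uses.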

You can interactively construct the critic network using the Deep Network Designer app. To do so, you first create separate input paths for each observation and action. These paths learn lower-level features from their respective inputs. You then create a common output path which combines the outputs from the input paths.

Create Image Observation Path

To create the image observation path, first drag an ImageInputLayer from the Layer Library pane to the canvas. Set the layer InputSize to 50,50,1 for the image observation, and set Normalization to none.

Second, drag a Convolution2DLayer to the canvas and connect the input of this layer to the output of the ImageInputLayer. Create a convolution layer with 2 filters (NumFilters property) that have a height and width of 10 (FilterSize property), and use a stride of 5 in the horizontal and vertical directions (Stride property).

Finally, complete the image path with two pairs of ReLULayer and FullyConnectedLayer layers. Set the OutputSize of the two FullyConnectedLayer layers to 400 and 300, respectively.

Create All Input Paths and Output Path

Construct the other input paths and output path in similar fashion. For this example, use the following options:

Angular Velocity Path (scalar input):

  • ImageInputLayer: InputSize = 1,1 and Normalization = none

  • FullyConnectedLayer: OutputSize = 400

  • ReLULayer

  • FullyConnectedLayer: OutputSize = 300

Action Path (scalar input):

  • ImageInputLayer: InputSize = 1,1 and Normalization = none

  • FullyConnectedLayer: OutputSize = 300

Output Path:

  • AdditionLayer: Connect the outputs of all input paths to the inputs of this layer.

  • ReLULayer

  • FullyConnectedLayer: OutputSize = 1 for the scalar value function.

Export Network from Deep Network Designer

To export the network to the MATLAB workspace, in the Deep Network Designer, click Export. The Deep Network Designer exports the network to a new variable containing the network layers. You can create the critic representation using this layer network variable.

Alternatively, to generate equivalent MATLAB code for the network, click Export > Generate Code.

The generated code is:

lgraph = layerGraph();
layers = [
    imageInputLayer([1 1 1],"Name","torque","Normalization","none")
    fullyConnectedLayer(300,"Name","torque_fc1")];
lgraph = addLayers(lgraph,layers);
layers = [
    imageInputLayer([1 1 1],"Name","angularRate","Normalization","none")
    fullyConnectedLayer(400,"Name","dtheta_fc1")
    reluLayer("Name","dtheta_relu1")
    fullyConnectedLayer(300,"Name","dtheta_fc2")];
lgraph = addLayers(lgraph,layers);
layers = [
    imageInputLayer([50 50 1],"Name","pendImage","Normalization","none")
    convolution2dLayer([10 10],2,"Name","img_conv1","Stride",[5 5])
    reluLayer("Name","img_relu")
    fullyConnectedLayer(400,"Name","theta_fc1")
    reluLayer("Name","theta_relu1")
    fullyConnectedLayer(300,"Name","theta_fc2")];
lgraph = addLayers(lgraph,layers);
layers = [
    additionLayer(3,"Name","addition")
    reluLayer("Name","relu")
    fullyConnectedLayer(1,"Name","stateValue")];
lgraph = addLayers(lgraph,layers);
lgraph = connectLayers(lgraph,"torque_fc1","addition/in3");
lgraph = connectLayers(lgraph,"theta_fc2","addition/in1");
lgraph = connectLayers(lgraph,"dtheta_fc2","addition/in2");

View the critic network configuration.

figure
plot(lgraph)
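
Optionally, you can check the layer graph for disconnected or mismatched layers before using it. The analyzeNetwork function (Deep Learning Toolbox) opens the Network Analyzer on the layer graph.

analyzeNetwork(lgraph)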

Specify options for the critic representation using rlRepresentationOptions.

criticOpts = rlRepresentationOptions('LearnRate',1e-03,'GradientThreshold',1);

Create the critic representation using the specified deep neural network lgraph and options. You must also specify the action and observation info for the critic, which you obtain from the environment interface. For more information, see rlRepresentation.

critic = rlRepresentation(lgraph,obsInfo,actInfo,'Observation',{'pendImage','angularRate'},'Action',{'torque'},criticOpts);

To create the DQN agent, first specify the DQN agent options using rlDQNAgentOptions.

agentOpts = rlDQNAgentOptions(...
    'UseDoubleDQN',false, ...    
    'TargetUpdateMethod',"smoothing", ...
    'TargetSmoothFactor',1e-3, ... 
    'ExperienceBufferLength',1e6,... 
    'DiscountFactor',0.99, ...
    'SampleTime',env.Ts, ...
    'MiniBatchSize',64);
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-5;
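
The agent explores with an epsilon-greedy policy, selecting a random action with probability ε. Assuming the documented decay schedule, ε shrinks after each training step approximately as ε ← ε·(1 − EpsilonDecay) until it reaches the EpsilonMin option, so a decay rate of 1e-5 anneals exploration gradually over many thousands of steps.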

Then, create the DQN agent using the specified critic representation and agent options. For more information, see rlDQNAgent.

agent = rlDQNAgent(critic,agentOpts);
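
As a quick sanity check (not part of the original example), you can query the agent for an action given arbitrary observations. getAction takes the observations as a cell array ordered as in obsInfo; the exact return type can vary by release.

% Hypothetical check: the returned torque is one of the five elements in actInfo.
obs = {rand(50,50,1),rand(1,1)};  % random image and angular rate
act = getAction(agent,obs)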

Train Agent

To train the agent, first specify the training options. For this example, use the following options:

  • Run each training session for at most 5000 episodes, with each episode lasting at most 500 time steps.

  • Display the training progress in the Episode Manager dialog box (set the Plots option) and disable the command line display (set the Verbose option).

  • Stop training when the agent receives an average cumulative reward greater than -1000 over five consecutive episodes. At this point, the agent can quickly balance the pendulum in the upright position using minimal control effort.

For more information, see rlTrainingOptions.

trainOpts = rlTrainingOptions(...
    'MaxEpisodes',5000,...
    'MaxStepsPerEpisode',500,...
    'Verbose',false,...
    'Plots','training-progress',...
    'StopTrainingCriteria','AverageReward',...
    'StopTrainingValue',-1000);

You can visualize the pendulum system during training or simulation by using the plot function.

plot(env)

Train the agent using the train function. This is a computationally intensive process that takes several hours to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true.

doTraining = false;

if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load pretrained agent for the example.
    load('MATLABPendImageDQN.mat','agent');
end
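
If you train the agent yourself, you can save the result for later reuse. The file name below is simply the one this example loads.

% Save the trained agent (optional).
% save('MATLABPendImageDQN.mat','agent')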

Simulate DQN Agent

To validate the performance of the trained agent, simulate it within the pendulum environment. For more information on agent simulation, see rlSimulationOptions and sim.

simOptions = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOptions);

totalReward = sum(experience.Reward)
totalReward = -888.9802
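
To see how the reward evolves over the simulation (assuming experience.Reward is returned as a timeseries, as in this example), you can plot it directly.

% Plot the reward received at each time step of the simulation.
figure
plot(experience.Reward)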

