Train DQN Agent for Lane Keeping Assist Using Parallel Computing

This example extends the example Train DQN Agent for Lane Keeping Assist to demonstrate parallel training for a deep Q-learning network (DQN) agent for lane-keeping assist (LKA) in Simulink®.

For more information on DQN agents, see Deep Q-Network Agents. For an example that trains a DQN agent in MATLAB®, see Train DQN Agent to Balance Cart-Pole System.

DQN Parallel Training Overview

For DQN parallel training, each worker generates new experiences from its copy of the agent and the environment. After every N steps, the worker sends the experiences to the host agent. The host agent updates its parameters as follows:

  • For asynchronous training, the host agent learns from the received experiences and sends the updated parameters back to the worker that provided the experiences. Then, the worker continues to generate experiences from its environment using the updated parameters.

  • For synchronous training, the host agent waits to receive experiences from all of the workers and learns from these experiences. The host then sends updated parameters to all the workers at the same time. Then, all workers continue to generate experiences using the updated parameters.

Simulink Model for Ego Car

The reinforcement learning environment for this example is a simple bicycle model for ego vehicle dynamics. The training goal is to keep the ego vehicle traveling along the centerline of the lanes by adjusting the front steering angle. This example uses the same vehicle model as in Train DQN Agent for Lane Keeping Assist.

m = 1575;   % total vehicle mass (kg)
Iz = 2875;  % yaw moment of inertia (kg*m^2)
lf = 1.2;   % longitudinal distance from center of gravity to front tires (m)
lr = 1.6;   % longitudinal distance from center of gravity to rear tires (m)
Cf = 19000; % cornering stiffness of front tires (N/rad)
Cr = 33000; % cornering stiffness of rear tires (N/rad)
Vx = 15;    % longitudinal velocity (m/s)

Define the sample time, Ts, and simulation duration, T, in seconds.

Ts = 0.1;
T = 15;

The output of the LKA system is the front steering angle of the ego car. Considering the physical limitations of the ego car, the steering angle is constrained to the range [-0.5,0.5] rad.

u_min = -0.5;
u_max = 0.5;

The curvature of the road is defined by a constant, 0.001 m^-1. The initial value of the lateral deviation is 0.2 m, and the initial value of the relative yaw angle is -0.1 rad.

rho = 0.001;
e1_initial = 0.2;
e2_initial = -0.1;

Open the model.

mdl = 'rlLKAMdl';
open_system(mdl)
agentblk = [mdl '/RL Agent'];

For this model:

  • The steering-angle action signal from the agent to the environment ranges from -15 degrees to 15 degrees.

  • The observations from the environment are the lateral deviation e1, the relative yaw angle e2, their derivatives ė1 and ė2, and their integrals ∫e1 and ∫e2.

  • The simulation terminates when the lateral deviation satisfies |e1| > 1 m.

  • The reward r_t, provided at every time step t, is:

r_t = -(10*e1^2 + 5*e2^2 + 2*u^2 + 5*ė1^2 + 5*ė2^2)

where u is the control input from the previous time step t-1.
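
As a concrete illustration, the following sketch evaluates this reward in MATLAB. The variable names e1, e1dot, e2, e2dot, and u are placeholders for the corresponding Simulink signals, not names used by the model.

% Illustrative reward computation (placeholder variable names).
e1 = 0.2; e1dot = 0;   % lateral deviation (m) and its derivative
e2 = -0.1; e2dot = 0;  % relative yaw angle (rad) and its derivative
u = 0;                 % control input from the previous time step (rad)
r = -(10*e1^2 + 5*e2^2 + 2*u^2 + 5*e1dot^2 + 5*e2dot^2)

For the initial conditions used in this example (zero derivatives and zero control input), this expression evaluates to -0.45.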

Create Environment Interface

Create a reinforcement learning environment interface for the ego vehicle.

% create observation info
observationInfo = rlNumericSpec([6 1],'LowerLimit',-inf*ones(6,1),'UpperLimit',inf*ones(6,1));
observationInfo.Name = 'observations';
observationInfo.Description = 'information on lateral deviation and relative yaw angle';
% create action Info
actionInfo = rlFiniteSetSpec((-15:15)*pi/180);
actionInfo.Name = 'steering';
% define environment
env = rlSimulinkEnv(mdl,agentblk,observationInfo,actionInfo);

The interface has a discrete action space where the agent can apply one of 31 possible steering angles from -15 degrees to 15 degrees.
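
As a quick optional check, you can confirm the size of the discrete action set from the action specification.

% Optional check: the action specification contains 31 steering angles.
numel(actionInfo.Elements)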

To define the initial condition for lateral deviation and relative yaw angle, specify an environment reset function using an anonymous function handle.

% randomize initial values for lateral deviation and relative yaw angle
env.ResetFcn = @(in)localResetFcn(in);

Fix the random generator seed for reproducibility.

rng(0)

Create DQN Agent

A DQN agent approximates the long-term reward given observations and actions using a critic value function representation. To create the critic, first create a deep neural network with two inputs, the state and action, and one output. For more information on creating a deep neural network value function representation, see Create Policy and Value Function Representations.

L = 24; % number of neurons
statePath = [
    imageInputLayer([6 1 1],'Normalization','none','Name','state')
    fullyConnectedLayer(L,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(L,'Name','fc2')
    additionLayer(2,'Name','add')
    reluLayer('Name','relu2')
    fullyConnectedLayer(L,'Name','fc3')
    reluLayer('Name','relu3')
    fullyConnectedLayer(1,'Name','fc4')];

actionPath = [
    imageInputLayer([1 1 1],'Normalization','none','Name','action')
    fullyConnectedLayer(L, 'Name', 'fc5')];

criticNetwork = layerGraph(statePath);
criticNetwork = addLayers(criticNetwork,actionPath);    
criticNetwork = connectLayers(criticNetwork,'fc5','add/in2');
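
Optionally, view the critic network structure to verify that the action path joins the state path at the addition layer.

% Optional: visualize the critic network layer graph.
figure
plot(criticNetwork)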

Specify options for the critic representation using rlRepresentationOptions.

criticOpts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);

Create the critic representation using the specified deep neural network and options. You must also specify the action and observation info for the critic, which you obtain from the environment interface. For more information, see rlRepresentation.

critic = rlRepresentation(criticNetwork,observationInfo,actionInfo,'Observation',{'state'},'Action',{'action'},criticOpts);

To create the DQN agent, first specify the DQN agent options using rlDQNAgentOptions.

agentOpts = rlDQNAgentOptions(...
    'SampleTime',Ts,...
    'UseDoubleDQN',true,...
    'TargetSmoothFactor',1e-3,...
    'DiscountFactor',0.99,...
    'ExperienceBufferLength',1e6,...
    'MiniBatchSize',64);

Then, create the DQN agent using the specified critic representation and agent options. For more information, see rlDQNAgent.

agent = rlDQNAgent(critic,agentOpts);
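
As an optional sanity check, you can query the agent for an action given a random observation (the observation values here are arbitrary). The returned action is one of the 31 steering angles defined in the action specification.

% Optional check: return an action for a random 6-by-1 observation.
getAction(agent,{rand(6,1)})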

Parallel Training Options

To train the agent, first specify the training options. For this example, use the following options:

  • Run each training for at most 5000 episodes, with each episode lasting at most 150 time steps.

  • Display the training progress in the Episode Manager dialog box.

  • Stop training when the episode reward reaches -1.

  • Save a copy of the agent for each episode where the cumulative reward is greater than -2.5.

For more information, see rlTrainingOptions.

maxepisodes = 5000;
maxsteps = ceil(T/Ts);
trainOpts = rlTrainingOptions(...
    'MaxEpisodes',maxepisodes, ...
    'MaxStepsPerEpisode',maxsteps, ...
    'Verbose',false,...
    'Plots','training-progress',...
    'StopTrainingCriteria','EpisodeReward',...
    'StopTrainingValue', -1,...
    'SaveAgentCriteria','EpisodeReward',...
    'SaveAgentValue',-2.5);

To train the agent in parallel, specify the following training options.

  • Set the UseParallel option to true.

  • Train the agent in parallel asynchronously by setting the ParallelizationOptions.Mode option to "async".

  • Have each worker send its experiences to the host after every 30 steps by setting the StepsUntilDataIsSent option to 30.

  • Because DQN agents require workers to send experiences to the host, set the DataToSendFromWorkers option to "experiences".

  • Initialize the random seed of each worker with a unique value by setting the WorkerRandomSeeds option to -1.

trainOpts.UseParallel = true;
trainOpts.ParallelizationOptions.Mode = "async";
trainOpts.ParallelizationOptions.DataToSendFromWorkers = "experiences";
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = 30;
trainOpts.ParallelizationOptions.WorkerRandomSeeds = -1;

For more information, see rlTrainingOptions.
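
Parallel training requires Parallel Computing Toolbox™ software. If you want to control the number of workers yourself (this example was trained with four), you can start a parallel pool explicitly before calling train; otherwise, train uses an available pool. This step is optional.

% Optional: start a parallel pool with four workers before training.
% parpool(4);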

Train Agent

Train the agent using the train function. Training is a computationally intensive process that takes several minutes to complete. To save time while running this example, load a pretrained agent by setting doTraining to false. To train the agent yourself, set doTraining to true. Due to the randomness of parallel training, you can expect different training results each time you train. The pretrained agent was trained with four workers.

doTraining = false;

if doTraining
    % Train the agent.
    trainingStats = train(agent,env,trainOpts);
else
    % Load pretrained agent for the example.
    load('SimulinkLKADQNParallel.mat','agent')
end
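
If you train the agent yourself, train returns the training statistics in trainingStats. For example, you can plot the per-episode reward after training; the field names below assume the standard training statistics output and are not used elsewhere in this example.

% Plot the episode reward history (uncomment after training with doTraining = true).
% figure
% plot(trainingStats.EpisodeIndex,trainingStats.EpisodeReward)
% xlabel('Episode')
% ylabel('Episode Reward')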

Simulate DQN Agent

To validate the performance of the trained agent, uncomment the following two lines and simulate the agent within the environment. For more information on agent simulation, see rlSimulationOptions and sim.

% simOptions = rlSimulationOptions('MaxSteps',maxsteps);
% experience = sim(env,agent,simOptions);

To demonstrate the trained agent on deterministic initial conditions, simulate the model in Simulink.

e1_initial = -0.4;
e2_initial = 0.2;
sim(mdl)

In the simulation results, the lateral deviation (middle plot) and the relative yaw angle (bottom plot) are both driven to zero. The vehicle starts off the centerline (-0.4 m) with a nonzero yaw angle error (0.2 rad). The lane keeping assist brings the ego car back to traveling along the centerline within approximately 2.5 seconds. The steering angle (top plot) shows that the controller reaches steady state after approximately 2 seconds.

Local Function

function in = localResetFcn(in)
% Randomize the initial lateral deviation and relative yaw angle for each training episode.
in = setVariable(in,'e1_initial', 0.5*(-1+2*rand)); % random lateral deviation in [-0.5, 0.5] m
in = setVariable(in,'e2_initial', 0.1*(-1+2*rand)); % random relative yaw angle in [-0.1, 0.1] rad
end
