Deep Q-Network Agents

The deep Q-network (DQN) algorithm is a model-free, online, off-policy reinforcement learning method. A DQN agent is a value-based reinforcement learning agent that trains a critic to estimate the return or future rewards. DQN is a variant of Q-learning. For more information on Q-learning, see Q-Learning Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

DQN agents can be trained in environments with the following observation and action spaces.

Observation Space: Continuous or discrete
Action Space: Discrete

During training, the agent:

  • Updates the critic parameters at each time step during learning.

  • Explores the action space using epsilon-greedy exploration. During each control interval, the agent selects a random action with probability ϵ or selects the action greedily with respect to the value function with probability 1-ϵ. The greedy action is the action for which the value function is greatest.

  • Stores past experiences using a circular experience buffer. The agent updates the critic based on a mini-batch of experiences randomly sampled from the buffer (see the sketch after this list).
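
The following is a minimal MATLAB sketch of these two mechanisms, shown only for illustration; it is not the toolbox implementation, and names such as qValues, buffer, and numStored are hypothetical.

    % Epsilon-greedy action selection for a discrete action space
    numActions = 4;                      % illustrative action space size
    qValues    = rand(1,numActions);     % critic estimates Q(S,a) for each action
    epsilon    = 0.1;                    % exploration probability
    if rand < epsilon
        action = randi(numActions);      % explore: pick a random action
    else
        [~,action] = max(qValues);       % exploit: pick the greedy action
    end

    % Circular experience buffer: overwrite the oldest entry when full
    bufferLength = 1e4;
    buffer   = cell(bufferLength,1);
    writeIdx = 0;  numStored = 0;
    experience = {rand(4,1), action, 1.0, rand(4,1)};   % (S,A,R,S')
    writeIdx   = mod(writeIdx,bufferLength) + 1;
    buffer{writeIdx} = experience;
    numStored  = min(numStored+1,bufferLength);

    % Update the critic from a mini-batch of M randomly sampled experiences
    M = 64;
    miniBatch = buffer(randi(numStored,M,1));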

Critic Function

To estimate the value function, a DQN agent maintains two function approximators:

  • Critic Q(S,A) — The critic takes observation S and action A as inputs and outputs the corresponding expectation of the long-term reward.

  • Target critic Q'(S,A) — To improve the stability of the optimization, the agent periodically updates the target critic based on the latest critic parameter values.

Both Q(S,A) and Q'(S,A) have the same structure and parameterization.
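
As a schematic illustration of this shared parameterization (plain MATLAB with a hypothetical linear critic, not the toolbox representation objects):

    % Both critics use the same structure; the target critic starts as an
    % exact copy of the critic parameters and is only updated occasionally.
    Q = @(S,A,theta) theta(:,A)'*S;   % illustrative linear critic Q(S,A|θ)
    thetaQ       = randn(4,3);        % critic parameters (4 observations, 3 actions)
    thetaQTarget = thetaQ;            % target critic Q' initialized to the same values
    s  = randn(4,1);                  % an example observation
    q  = Q(s,2,thetaQ);               % critic estimate of the long-term reward
    qt = Q(s,2,thetaQTarget);         % target critic evaluates with identical structure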

For more information on creating critics for value function approximation, see Create Policy and Value Function Representations.

When training is complete, the trained value function approximator is stored in critic Q(S,A).

Agent Creation

To create a DQN agent:

  1. Create a critic representation object.

  2. Specify agent options using the rlDQNAgentOptions function.

  3. Create the agent using the rlDQNAgent function.

For more information, see rlDQNAgent and rlDQNAgentOptions.
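
For example, a minimal sketch, assuming a critic object named critic has already been created (as described in Create Policy and Value Function Representations); the option values shown are illustrative choices rather than defaults:

    % Configure agent options (see Training Algorithm for what each option controls)
    opt = rlDQNAgentOptions;
    opt.UseDoubleDQN       = true;    % use the double DQN target
    opt.MiniBatchSize      = 64;      % mini-batch size M
    opt.DiscountFactor     = 0.99;    % discount factor gamma
    opt.TargetSmoothFactor = 1e-3;    % smoothing factor tau for target updates
    opt.EpsilonGreedyExploration.Epsilon = 0.1;   % initial exploration probability

    % Create the DQN agent from the critic and the options
    agent = rlDQNAgent(critic,opt);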

Training Algorithm

DQN agents use the following training algorithm, in which they update their critic model at each time step. To configure the training algorithm, specify options using rlDQNAgentOptions.

  • Initialize the critic Q(S,A) with random parameter values θ_Q, and initialize the target critic with the same values: θ_Q' = θ_Q.

  • For each training time step:

    1. For the current observation S, select a random action A with probability ϵ. Otherwise, select the action for which the critic value function is greatest.

      A = \arg\max_{A} Q(S,A \,|\, \theta_Q)

      To specify ϵ and its decay rate, use the EpsilonGreedyExploration option.

    2. Execute action A. Observe the reward R and next observation S'.

    3. Store the experience (S,A,R,S') in the experience buffer.

    4. Sample a random mini-batch of M experiences (Si,Ai,Ri,S'i) from the experience buffer. To specify M, use the MiniBatchSize option.

    5. If S_i' is a terminal state, set the value function target y_i to R_i. Otherwise set it to:

      A_{max} = \arg\max_{A'} Q(S_i', A' \,|\, \theta_Q), \quad y_i = R_i + \gamma \, Q'(S_i', A_{max} \,|\, \theta_{Q'})        (double DQN)
      y_i = R_i + \gamma \, \max_{A'} Q'(S_i', A' \,|\, \theta_{Q'})        (DQN)

      To set the discount factor γ, use the DiscountFactor option. To use double DQN, set the UseDoubleDQN option to true.

    6. Update the critic parameters by one-step minimization of the loss L across all sampled experiences.

      L = \frac{1}{M} \sum_{i=1}^{M} \left( y_i - Q(S_i, A_i \,|\, \theta_Q) \right)^2

    7. Update the target critic depending on the target update method (smoothing or periodic). To select the update method, use the TargetUpdateMethod option.

      \theta_{Q'} = \tau \, \theta_Q + (1 - \tau) \, \theta_{Q'}        (smoothing)
      \theta_{Q'} = \theta_Q        (periodic)

      By default the agent uses target smoothing and updates the target critic at every time step using smoothing factor τ. To specify the smoothing factor, use the TargetSmoothFactor option. Alternatively, you can update the target critic periodically. To specify the number of episodes between target critic updates, use the TargetUpdateFrequency option.
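
The following plain-MATLAB sketch puts steps 4 through 7 together for a single mini-batch, using a simple linear critic as a stand-in for the critic network; every name, size, and value here is illustrative, and this is not the toolbox implementation.

    % Illustrative linear critic: Qfun(theta,S) returns a row of action values
    % per sampled observation (columns of S are observations).
    nObs = 4;  nAct = 3;  M = 32;
    gamma = 0.99;  tau = 1e-3;  useDoubleDQN = true;  learnRate = 1e-3;

    thetaQ       = randn(nObs,nAct);   % critic parameters θ_Q
    thetaQTarget = thetaQ;             % target critic parameters θ_Q'
    Qfun = @(theta,S) S'*theta;        % M-by-nAct matrix of Q-values

    % Step 4: a sampled mini-batch (random data stands in for the buffer here)
    S  = randn(nObs,M);                % observations S_i
    A  = randi(nAct,M,1);              % actions A_i
    R  = randn(M,1);                   % rewards R_i
    Sp = randn(nObs,M);                % next observations S_i'
    isTerminal = false(M,1);           % whether S_i' is terminal

    % Step 5: value function targets y_i
    if useDoubleDQN
        [~,Amax] = max(Qfun(thetaQ,Sp),[],2);        % argmax taken with the critic
        Qp    = Qfun(thetaQTarget,Sp);
        nextQ = Qp(sub2ind(size(Qp),(1:M)',Amax));   % evaluated with the target critic
    else
        nextQ = max(Qfun(thetaQTarget,Sp),[],2);     % max over the target critic
    end
    y = R + gamma*nextQ;
    y(isTerminal) = R(isTerminal);                   % terminal states: y_i = R_i

    % Step 6: one gradient step on the mean squared loss L
    Qall = Qfun(thetaQ,S);
    Qsa  = Qall(sub2ind(size(Qall),(1:M)',A));
    loss = mean((y - Qsa).^2);
    grad = zeros(size(thetaQ));                      % dL/dθ_Q for the linear critic
    for i = 1:M
        grad(:,A(i)) = grad(:,A(i)) - 2*(y(i) - Qsa(i))*S(:,i)/M;
    end
    thetaQ = thetaQ - learnRate*grad;

    % Step 7: target critic update (smoothing shown; periodic would copy θ_Q)
    thetaQTarget = tau*thetaQ + (1-tau)*thetaQTarget;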

References

[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing Atari With Deep Reinforcement Learning,” NIPS Deep Learning Workshop, 2013.
