rlTable

Value table or Q table


You can create value tables and Q tables to represent critics for reinforcement learning. Value tables store rewards for a finite set of observations. Q tables store rewards for corresponding finite observation-action pairs.

To create a value function representation using an rlTable object, use the rlRepresentation function.



Syntax

T = rlTable(obsinfo)
T = rlTable(obsinfo,actinfo)


Description

T = rlTable(obsinfo) creates a value table for the given discrete observations.

T = rlTable(obsinfo,actinfo) creates a Q table for the given discrete observations and actions.
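As a sketch of both syntaxes (the observation and action values below are illustrative, not taken from this page):

```matlab
% Finite observation space with four possible values
% (the specific values here are hypothetical).
obsinfo = rlFiniteSetSpec([1 2 3 4]);

% Value table: one entry per observation value.
vTable = rlTable(obsinfo);

% Finite action space with two possible actions (also hypothetical).
actinfo = rlFiniteSetSpec([-1 1]);

% Q table: one entry per observation-action pair.
qTable = rlTable(obsinfo,actinfo);
```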

Input Arguments


obsinfo — Observation specification, specified as an rlFiniteSetSpec object.

actinfo — Action specification, specified as an rlFiniteSetSpec object.


Output Arguments

Table — Reward table, returned as an array. When Table is a:

  • Value table, it contains N_O rows, where N_O is the number of finite observation values.

  • Q table, it contains N_O rows and N_A columns, where N_A is the number of possible finite actions.
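For instance, with four finite observation values and two finite actions (illustrative values), a Q table's Table property is a 4-by-2 array; the entries are assumed here to initialize to zero:

```matlab
obsinfo = rlFiniteSetSpec([1 2 3 4]);   % hypothetical observation values
actinfo = rlFiniteSetSpec([-1 1]);      % hypothetical action values
qTable = rlTable(obsinfo,actinfo);

% Table is an N_O-by-N_A array: 4 rows (observations) by 2 columns (actions).
size(qTable.Table)

% Entries can be read or assigned directly, e.g. to seed initial rewards.
qTable.Table(1,2) = 10;
```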

Object Functions

rlRepresentation — Model representation for reinforcement learning agents
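A minimal sketch of wrapping a table in a critic representation, assuming rlRepresentation accepts an rlTable directly (see the rlRepresentation reference page for the exact calling syntax):

```matlab
obsinfo = rlFiniteSetSpec([1 2 3 4]);   % hypothetical observation values
T = rlTable(obsinfo);

% Create a value function representation from the table
% (call pattern assumed from the R2019a interface).
critic = rlRepresentation(T);
```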

Introduced in R2019a