In a reinforcement learning scenario, where you train an agent to complete a task, the environment models the dynamics with which the agent interacts. As shown in the following figure, the environment:
Receives actions from the agent
Outputs observations in response to the actions
Generates a reward measuring how well the action contributes to achieving the task
Creating an environment model includes defining the following:
Action and observation signals that the agent uses to interact with the environment.
Reward signal that the agent uses to measure its success. For more information, see Define Reward Signals.
Environment dynamic behavior.
When you create an environment object, you must specify the action and observation signals that the agent uses to interact with the environment. You can create both discrete and continuous action spaces. For more information, see rlFiniteSetSpec and rlNumericSpec.
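For example, the following sketch creates one continuous observation specification and one discrete action specification. The dimensions, names, and action values shown here are illustrative assumptions, not requirements of your application.
% Continuous observation channel: a column vector of four measurements
% (the dimension [4 1] is an assumed example).
obsInfo = rlNumericSpec([4 1]);
obsInfo.Name = "observations";
% Discrete action channel with three allowed values (assumed example).
actInfo = rlFiniteSetSpec([-1 0 1]);
actInfo.Name = "action";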
What signals you select as actions and observations depends on your application. For example, for control system applications, the integrals (and sometimes derivatives) of error signals are often useful observations. Also, for reference-tracking applications, having a time-varying reference signal as an observation is helpful.
When you define your observation signals, ensure that all the system states are observable through the observations. For example, an image observation of a swinging pendulum has position information but does not have enough information to determine the pendulum velocity. In this case, you can specify the pendulum velocity as a separate observation.
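For instance, a sketch of an observation specification for such a pendulum might include the angular velocity explicitly alongside the angle terms. The three-element observation vector assumed here, [sin(theta); cos(theta); thetadot], is illustrative.
% Assumed observation vector: [sin(theta); cos(theta); thetadot].
% The angular velocity is its own observation because it cannot be
% recovered from a single image of the pendulum.
obsInfo = rlNumericSpec([3 1], ...
    'LowerLimit',[-1; -1; -inf], ...
    'UpperLimit',[ 1;  1;  inf]);
obsInfo.Name = "pendulum observations";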
Reinforcement Learning Toolbox™ software provides predefined MATLAB® environments for which the actions, observations, rewards, and dynamics are already defined. You can use these environments to:
Learn reinforcement learning concepts
Gain familiarity with Reinforcement Learning Toolbox software features
Test your own reinforcement learning agents
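For example, you can create one of the predefined environments, such as the cart-pole system with a discrete action space, with a single call and then query its specifications.
% Create a predefined cart-pole environment with a discrete action space.
env = rlPredefinedEnv("CartPole-Discrete");
% Retrieve its observation and action specifications.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);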
You can create the following types of custom MATLAB environments for your own applications:
Grid worlds with specified size, rewards, and obstacles
Environments with dynamics specified using custom functions
Environments specified by creating and modifying a template environment object
Once you create a custom environment object, you can train an agent in the same manner as in a predefined environment. For more information on training agents, see Train Reinforcement Learning Agents.
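For example, once env holds an environment object and agent holds an agent, a minimal sketch of a training call looks like the following. The option values are illustrative assumptions.
% Assumed: env is an environment object and agent is an agent object.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',500, ...
    'MaxStepsPerEpisode',200, ...
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',480);
% Train the agent against the environment.
trainingStats = train(agent,env,trainOpts);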
You can create custom grid worlds of any size with your own custom reward, state transition, and obstacle configurations. To create a custom grid world environment:
Create a grid world model using the createGridWorld function. For example, create a grid world with ten rows and nine columns.
gw = createGridWorld(10,9);
Configure the grid world by modifying the properties of the model. For example, specify the terminal state as location [7,9].
gw.TerminalStates = "[7,9]";
Create an MDP environment for this grid world, which the agent uses to interact with the grid world model.
env = rlMDPEnv(gw);
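Putting these steps together, a minimal sketch of a configured grid world environment might look like the following. The obstacle location is an assumed example.
% Create a 10-by-9 grid world model.
gw = createGridWorld(10,9);
% Mark a terminal state and, as an assumed example, one obstacle.
gw.TerminalStates = "[7,9]";
gw.ObstacleStates = "[5,5]";
% Wrap the grid world model in an MDP environment for training.
env = rlMDPEnv(gw);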
For simple environments, you can define a custom environment object by creating an rlFunctionEnv object and specifying your own custom reset and step functions.
At the beginning of each training episode, the training process uses the reset function to set the initial conditions of the environment. For example, you can specify known initial state values or place the environment into a random initial state.
The step function defines the dynamics of the environment; that is, how the state changes in response to agent actions. At each training time step, the state of the model is updated using the step function.
For more information, see Create MATLAB Environment using Custom Functions.
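As a minimal sketch, assuming a hypothetical scalar state and the reset and step signatures expected by rlFunctionEnv, the two functions might look like this. The dynamics, reward, and termination condition shown are illustrative assumptions.
function [initialObs,loggedSignals] = myResetFunction()
    % Place the environment into a random initial state.
    loggedSignals.State = 2*rand - 1;
    initialObs = loggedSignals.State;
end
function [nextObs,reward,isDone,loggedSignals] = myStepFunction(action,loggedSignals)
    % Assumed dynamics: the action nudges the scalar state.
    loggedSignals.State = loggedSignals.State + 0.1*action;
    nextObs = loggedSignals.State;
    % Assumed reward: penalize distance from zero.
    reward = -abs(nextObs);
    % End the episode if the state leaves an assumed bound.
    isDone = abs(nextObs) > 10;
end
You then pass these functions, along with the observation and action specifications, to rlFunctionEnv, for example env = rlFunctionEnv(obsInfo,actInfo,@myStepFunction,@myResetFunction).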
For more complex environments, you can define a custom environment by creating and modifying a template environment. To create a custom environment:
Create an environment template class using the rlCreateEnvTemplate function.
Modify the template environment, specifying environment properties, required environment functions, and optional environment functions.
Validate your custom environment using the validateEnvironment function.
For more information, see Create Custom MATLAB Environment from Template.
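A brief sketch of this workflow, assuming an illustrative class name MyEnvironment:
% Generate a template class file MyEnvironment.m in the current folder
% ("MyEnvironment" is an assumed, illustrative name).
rlCreateEnvTemplate("MyEnvironment");
% After editing the template properties, reset function, and step
% function, create an instance of the environment and validate it.
env = MyEnvironment;
validateEnvironment(env)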