
How to get continuous action data and store it in reinforcement learning

I am working on reinforcement learning and need to see the action-space data. I can see the reward, episodic Q0 value, and average reward value for each episode; in the same way, I would like to see the action data for each episode.

Answers (1)

Alan on 31 May 2024 at 6:29
Hi Raja,
I’m assuming that you are observing the episode reward, episode Q0, and average reward from the Reinforcement Learning Designer app. Unfortunately, there are no options to plot any other custom data (in your case, action data) within the app. So, you will have to create a custom training loop that logs and plots the data you wish to see.
To start off, you can export the training code by choosing the “Generate MATLAB function for training” option from the drop-down in the app. After saving the training function, you can export your agent from a drop-down in a similar way.
The modifications required to plot action data go in the generated training function. You can use a MonitorLogger object along with a custom callback that logs the required data. The logger can use different callbacks to collect data at different points in training. In your use case, you want to plot action data after each episode, so we can assign a callback to the EpisodeFinishedFcn property of the logger that collects action data once each episode finishes. The following snippet demonstrates this:
monitor = trainingProgressMonitor();              % training progress monitor window
logger = rlDataLogger(monitor);                   % MonitorLogger that plots to the monitor
logger.EpisodeFinishedFcn = @episodeActionLogger; % called after each episode
You can then define the custom callback (I named it episodeActionLogger) as follows:
function dataToLog = episodeActionLogger(data)
    % The episode-finished callback receives a structure whose Experience
    % field contains the experiences collected in the episode, including
    % the actions taken at each step.
    if mod(data.EpisodeCount, 2) == 0   % log every other episode
        % MonitorLogger plots scalar values, so log a per-episode summary.
        actions = cell2mat([data.Experience.Action]);
        dataToLog.EpisodeMeanAction = mean(actions, "all");
    else
        dataToLog = [];                 % return empty to skip logging
    end
end
After defining the logger, pass it on to the training function in the following manner:
info = train(agent,slEnv,opts,Logger=logger);
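Since your question also mentions storing the action data, here is a minimal sketch (untested) of the file-based alternative: calling rlDataLogger with no arguments returns a FileLogger, which writes the logged data to MAT-files on disk instead of plotting it. The callback name episodeActionLogger and the directory name "actionLogs" are assumptions for illustration.

```matlab
% Sketch: store per-episode action data to disk with a FileLogger.
fileLogger = rlDataLogger();   % no-argument call returns a FileLogger
fileLogger.LoggingOptions.LoggingDirectory = "actionLogs";
fileLogger.EpisodeFinishedFcn = @episodeActionLogger;  % same callback as above

% Train with the file logger instead of the monitor logger.
info = train(agent, slEnv, opts, Logger=fileLogger);

% After training, the logged data is saved as MAT-files in "actionLogs"
% (file names follow the logger's FileNameRule); load them for analysis, e.g.:
% logged = load(fullfile("actionLogs", "loggedData001.mat"));
```

This way the action data survives the training session and can be inspected or plotted later, rather than only being visible in the live monitor.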
More details on MonitorLogger and the logging technique described above can be found on the following documentation page: https://www.mathworks.com/help/reinforcement-learning/ref/rl.logging.monitorlogger.html
The following documentation might also be useful for customizing your call to the train function: https://www.mathworks.com/help/reinforcement-learning/ug/train-reinforcement-learning-agents.html
Do make sure you are using release R2022b or later to use MonitorLogger.
