MATLAB Answers


Problem running GPU Reinforcement Learning

Asked by Patrick Doran on 14 Nov 2019 at 6:49
Latest activity: Edited by Walter Roberson on 14 Nov 2019 at 7:54
I'm running a custom MATLAB reinforcement learning script that successfully trains a DDPG agent on my CPU laptop.
Now I'm trying to run the same script on a GPU machine, but I'm getting some dimension errors.
I added these parameters to the training options:
trainOpts.ParallelizationOptions.Mode = "async";                        % workers run asynchronously
trainOpts.ParallelizationOptions.DataToSendFromWorkers = "experiences"; % workers send experiences, not gradients
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = 30;             % send data every 30 steps
trainOpts.ParallelizationOptions.WorkerRandomSeeds = -1;                % give each worker a unique random seed
Then I see these errors:
Error using rl.agent.AbstractPolicy/getInitialAction (line 133)
Invalid observation type or size.
Error in rl.env.MATLABEnvironment/simLoop (line 235)
action = getInitialAction(policy,observation);
Error in rl.env.MATLABEnvironment/simWithPolicy (line 113)
[expcell{simCount},epinfo,siminfos{simCount}] = simLoop(env,policy,opts,simCount,usePCT);
Error in rl.train.parforTrain (line 62)
parfor i = 1:activeSims
Error in rl.train.TrainingManager/train (line 264)
Error in rl.train.TrainingManager/run (line 155)
Error in rl.agent.AbstractAgent/train (line 54)
TrainingStatistics = run(trainMgr);
Error in train_SSS_agent (line 25)
trainingStats = train(agent,env,trainingOptions);
Error in SSS_learning (line 9)
Caused by:
Error using rl.agent.AbstractPolicy/getAction (line 119)
Invalid observation type or size.
Error using rl.util.rlAbstractRepresentation/evaluate (line 242)
The dimensions of input data are not compatible with those of observation and action info respectively.
I don't have good (any) knowledge of GPU computing, so there might be some fundamental setup steps I don't know about.

  1 Comment

UseParallel does not use a GPU: it uses additional processes, with all the problems that causes. For example, global variables are not transferred.
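To actually put the agent's network computations on the GPU, the device is selected when the critic/actor representations are created, not through the parallel training options. A minimal sketch, assuming an R2019-era Reinforcement Learning Toolbox setup similar to the one in the question (`repOpts` is an illustrative name):

```matlab
% Sketch: request GPU execution for the representation's network
% computations via the UseDevice option, then pass these options in
% when constructing the DDPG agent's critic and actor representations.
repOpts = rlRepresentationOptions('UseDevice', "gpu");
% ...use repOpts as the options argument when creating the critic and
% actor representations for the DDPG agent.
```

This is separate from `UseParallel`, which distributes simulation across worker processes; the two can be combined, but neither implies the other.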


0 Answers