Facing size error while using Reinforcement learning

This is my code:
% open the rcam_reinforcement_test1.slx Simulink model
mdl = 'rcam_reinforcement_test1';
open_system(mdl)
% create the environment interface
%open_system([mdl '/env+plant'])
obsInfo = rlNumericSpec([12 1]);
obsInfo.Name = "observations";
obsInfo.Description = 'velx, vely, velz, rollrate, pitchrate, yawrate, bankangle, pitchangle, yawangle';
numObservations = obsInfo.Dimension(1);   % 12
actInfo = rlNumericSpec([3 1], ...
    'LowerLimit',[-25*(pi/180) -25*(pi/180) -30*(pi/180)]', ...
    'UpperLimit',[ 25*(pi/180)  25*(pi/180)  30*(pi/180)]');
actInfo.Name = "controldeflection";
actInfo.Description = 'aileron, tail, rudder';
numActions = actInfo.Dimension(1);   % 3
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],obsInfo,actInfo);
%env.ResetFcn = @(in)localResetFcn(in);
Ts = 1.0;     % agent sample time (s)
Tf = 20000;   % simulation stop time (s)
rng(0)
% create the critic network: a state path and an action path merged by an addition layer
actionLayers = [
    featureInputLayer(numActions,"Name","Action")
    fullyConnectedLayer(15,"Name","hidden_act_1")
    reluLayer("Name","relu_3")
    fullyConnectedLayer(1,"Name","hidden_act_2")];
stateLayers = [
    featureInputLayer(numObservations,"Name","State")
    fullyConnectedLayer(60,"Name","hidden_first")
    reluLayer("Name","relu_1")
    fullyConnectedLayer(60,"Name","hidden_second")
    reluLayer("Name","relu_2")
    fullyConnectedLayer(1,"Name","hidden_third")];
commonLayers = [
    additionLayer(2,"Name","addition")
    reluLayer("Name","relu_4")
    fullyConnectedLayer(1,"Name","out_critic")];
criticNetwork = layerGraph();
criticNetwork = addLayers(criticNetwork,actionLayers);
criticNetwork = addLayers(criticNetwork,stateLayers);
criticNetwork = addLayers(criticNetwork,commonLayers);
criticNetwork = connectLayers(criticNetwork,"hidden_act_2","addition/in2");
criticNetwork = connectLayers(criticNetwork,"hidden_third","addition/in1");
% plot(criticNetwork);
% critic options
criticOpts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
% critic representation
critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo, ...
    'Observation',{'State'},'Action',{'Action'},criticOpts);
% create the actor network (the input layer takes the observation vector,
% so it is named "State" here to avoid confusion with the critic's action path)
actNetwork = [
    featureInputLayer(numObservations,"Name","State")
    fullyConnectedLayer(60,"Name","hidden_layer_first")
    reluLayer("Name","relu1")
    fullyConnectedLayer(36,"Name","hidden_layer_second")
    reluLayer("Name","relu2")
    fullyConnectedLayer(numActions,"Name","out_action")];
% actor options
actorOpts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
% actor representation
actor = rlDeterministicActorRepresentation(actNetwork,obsInfo,actInfo, ...
    'Observation',{'State'},'Action',{'out_action'},actorOpts);
agentOpts = rlDDPGAgentOptions( ...
    'SampleTime',Ts, ...
    'TargetSmoothFactor',1e-3, ...
    'DiscountFactor',1.0, ...
    'MiniBatchSize',64, ...
    'ExperienceBufferLength',1e6);
%agentOpts.ExplorationModel.StandardDeviation = [0.3 0.3 0.3];
%agentOpts.ExplorationModel.StandardDeviationDecayRate = 1e-5;
agent = rlDDPGAgent(actor,critic,agentOpts);
% training configuration
maxepisodes = 5000;
maxsteps = ceil(Tf/Ts);
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',maxepisodes, ...
    'MaxStepsPerEpisode',maxsteps, ...
    'ScoreAveragingWindowLength',20, ...
    'Verbose',false, ...
    'Plots','training-progress', ...
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',9.3123e+09);
doTraining = true;
if doTraining
    % train the agent
    trainingStats = train(agent,env,trainOpts);
else
    % load a pretrained agent instead of training
    %load('WaterTankDDPG.mat','agent')
end
% validate the trained agent by simulating it in the environment
simOpts = rlSimulationOptions('MaxSteps',maxsteps,'StopOnError','on');
experiences = sim(env,agent,simOpts);
######################################################################################################################
This is the error that I am getting:
Error using rl.env.AbstractEnv/simWithPolicy (line 83)
Unable to simulate model 'rcam_reinforcement_test1' with the agent 'agent'.
Error in rl.task.SeriesTrainTask/runImpl (line 33)
[varargout{1},varargout{2}] = simWithPolicy(this.Env,this.Agent,simOpts);
Error in rl.task.Task/run (line 21)
[varargout{1:nargout}] = runImpl(this);
Error in rl.task.TaskSpec/internal_run (line 166)
[varargout{1:nargout}] = run(task);
Error in rl.task.TaskSpec/runDirect (line 170)
[this.Outputs{1:getNumOutputs(this)}] = internal_run(this);
Error in rl.task.TaskSpec/runScalarTask (line 194)
runDirect(this);
Error in rl.task.TaskSpec/run (line 69)
runScalarTask(task);
Error in rl.train.SeriesTrainer/run (line 24)
run(seriestaskspec);
Error in rl.train.TrainingManager/train (line 424)
run(trainer);
Error in rl.train.TrainingManager/run (line 215)
train(this);
Error in rl.agent.AbstractAgent/train (line 77)
TrainingStatistics = run(trainMgr);
Error in rlagent_glider (line 111)
trainingStats = train(agent,env,trainOpts);
Caused by:
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Invalid input argument type or size such as observation, reward, isdone or loggedSignals.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Unable to compute gradient from representation.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 667)
Number of elements must not change. Use [] as one of the size inputs to automatically calculate the appropriate size
for that dimension.
#############################################################################################################
I am not able to figure out what is causing the error.
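The first caused-by message suggests a size mismatch between the specs and the signals the model actually feeds the RL Agent block, so a check along these lines seems like a sensible first step. validateEnvironment is from the Reinforcement Learning Toolbox; it runs a short simulation and compares the model's observation, action, reward, and isdone signals against obsInfo and actInfo:

% verify the model's signals against the specs before a long training run
validateEnvironment(env)
% the declared sizes, for comparison with the dimensions shown on the
% model's signal lines when signal dimension display is turned on
disp(obsInfo.Dimension)   % expected [12 1]
disp(actInfo.Dimension)   % expected [3 1]

Another thing worth checking: the actor ends in a plain fullyConnectedLayer, so nothing bounds its output to the limits in actInfo. The toolbox's DDPG examples usually finish the actor with a tanhLayer followed by a scalingLayer, roughly like this (only a sketch; "scale_out" would then be the name passed as the 'Action' argument of rlDeterministicActorRepresentation):

fullyConnectedLayer(numActions,"Name","fc_out")
tanhLayer("Name","tanh_out")                                   % squash output to [-1, 1]
scalingLayer("Name","scale_out","Scale",[25 25 30]'*(pi/180))  % rescale to the deflection limits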
