Formulation of a Reward Function for Controlling an Asynchronous Motor

I would like to simulate field-oriented control for an asynchronous motor with reinforcement learning. For this I am following the steps explained in these links: https://de.mathworks.com/help/mcb/gs/foc-of-pmsm-using-reinforcement-learning.html and https://de.mathworks.com/videos/reinforcement-learning-for-field-oriented-control-of-a-permanent-magnet-synchronous-motor-1587727861081.html. For the asynchronous motor, is it okay to use the same reward function, critic, and actor network, or do I need to change them?

Answers (1)

Prasanna on 3 Oct 2024
Hi Christy,
The general approach for using RL with FOC is similar to that for a PMSM; however, there are some key differences between asynchronous motors and PMSMs that require modifications to the RL components. Some considerations for using reinforcement learning with asynchronous motors include:
  • Motor Dynamics: the dynamics of an asynchronous motor are different. For example, the rotor flux in an induction motor is not directly controllable, which affects how the control strategy is implemented.
  • Reward Function: the new reward function should include terms that address slip and rotor flux linkage, minimize torque ripple, maintain the desired speed, and so on (a minimal sketch follows after this list).
  • Critic and Actor Networks: while the architecture of the critic and actor networks can remain similar, the input features and output actions may need adjustment to better suit the control objectives and dynamics of an asynchronous motor (see the network sketch further below).
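
As a concrete starting point, here is a minimal per-step reward sketch for an induction machine, assuming a rotor-flux-oriented frame. The function name imFocRewardStep, the parameter struct p, and all weights are illustrative assumptions rather than values from the linked PMSM example. It uses the standard current-model flux dynamics Tr*dpsi/dt = Lm*id - psi and the indirect-FOC slip relation wSlip = Lm*iq/(Tr*psi):

function [r, psiNew, wSlip] = imFocRewardStep(id, iq, idRef, iqRef, psiRef, psiPrev, du, p)
% Hypothetical per-step reward for RL-based FOC of an induction motor.
% p is a struct with fields Ts, Tr, Lm, Qd, Qq, Qf, Ru (all assumed).

    % Current-model rotor-flux estimate: Tr*dpsi/dt = Lm*id - psi
    psiNew = psiPrev + p.Ts/p.Tr*(p.Lm*id - psiPrev);

    % Slip frequency required for indirect field orientation (rad/s)
    wSlip = p.Lm*iq/(p.Tr*max(psiNew, 1e-3));  % guard against psi near 0

    % Penalize current tracking error, flux tracking error, and
    % changes in the voltage commands du (discourages torque ripple)
    r = -(p.Qd*(idRef - id)^2 + p.Qq*(iqRef - iq)^2 ...
        + p.Qf*(psiRef - psiNew)^2 + p.Ru*sum(du.^2));
end

In a Simulink environment these signals would typically live in the observation/reward subsystem feeding the RL Agent block, with psiPrev held in a unit delay. A parameter set such as p = struct('Ts',1e-4,'Tr',0.15,'Lm',0.2,'Qd',2,'Qq',2,'Qf',5,'Ru',0.1) is purely a placeholder to be tuned against your machine data.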
Given these differences, you can start with the existing setup and then iteratively adjust the reward function, critic, and actor networks based on the performance and specific requirements of the asynchronous motor. For more information, you can refer to the relevant MathWorks documentation.
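
For the network side, a minimal construction sketch with Reinforcement Learning Toolbox is shown below. It assumes a DDPG agent with continuous, normalized d/q voltage actions; the observation vector, layer sizes, and the extra induction-motor inputs (flux error, slip) are illustrative assumptions, and the agent type used in the linked example may differ:

obsDim = 6;   % e.g. [idErr iqErr psiErr wSlip speedErr intErr] (assumed)
actDim = 2;   % normalized d/q voltage commands (assumed)

obsInfo = rlNumericSpec([obsDim 1]);
actInfo = rlNumericSpec([actDim 1], LowerLimit=-1, UpperLimit=1);

% Critic Q(s,a): observation and action paths merged by an addition layer
obsPath = [featureInputLayer(obsDim, Name="obs")
           fullyConnectedLayer(64)
           reluLayer
           fullyConnectedLayer(64, Name="obsOut")];
actPath = [featureInputLayer(actDim, Name="act")
           fullyConnectedLayer(64, Name="actOut")];
common  = [additionLayer(2, Name="add")
           reluLayer
           fullyConnectedLayer(1)];
cGraph = layerGraph(obsPath);
cGraph = addLayers(cGraph, actPath);
cGraph = addLayers(cGraph, common);
cGraph = connectLayers(cGraph, "obsOut", "add/in1");
cGraph = connectLayers(cGraph, "actOut", "add/in2");
critic = rlQValueFunction(dlnetwork(cGraph), obsInfo, actInfo, ...
             ObservationInputNames="obs", ActionInputNames="act");

% Actor: deterministic policy; tanh keeps actions within [-1, 1]
aNet  = [featureInputLayer(obsDim)
         fullyConnectedLayer(64)
         reluLayer
         fullyConnectedLayer(64)
         reluLayer
         fullyConnectedLayer(actDim)
         tanhLayer];
actor = rlContinuousDeterministicActor(dlnetwork(aNet), obsInfo, actInfo);

agent = rlDDPGAgent(actor, critic);

With the agent in hand, the training workflow from the PMSM example carries over largely unchanged; the iteration then happens mainly in the reward weights and the observation set.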
Hope this helps!
