Safe stabilization control for interconnected virtual-real systems via model-based reinforcement learning

Published in 2024 14th Asian Control Conference (ASCC), 2024

In this paper, a safe-guarding controller is designed for an interconnected virtual-real system within a reinforcement learning framework to achieve stabilization control. We establish the mathematical formulation of the interconnected virtual-real system and the safety-guaranteed stabilization optimization problem. An online reinforcement learning method is employed to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with the formulated optimal control problem, and a safe-guarding term is introduced to enforce safety for the real part of the system. A single critic network is used to approximate the value function, and concurrent learning is adopted to train the network without requiring a persistent excitation condition. We prove that the estimation error of the designed critic network is uniformly ultimately bounded. Finally, a numerical simulation example illustrates the effectiveness of the proposed control method.
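The sketch below is not the paper's implementation; it is a minimal illustration, under assumed toy dynamics and features, of the two ingredients named in the abstract: a single critic of the form V(x) ≈ Wᵀφ(x) updated on the HJB (Bellman) residual, and a concurrent-learning term that replays recorded samples so the update does not rely on persistent excitation. All symbols (`phi`, `W`, `alpha`, the memory stack) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): single-critic HJB residual update
# with a concurrent-learning replay term, on an assumed stable 2-state system.
import numpy as np

def f(x):                     # assumed nominal drift dynamics
    return np.array([-x[0] + x[1], -0.5 * (x[0] + x[1])])

def g(x):                     # assumed input channel
    return np.array([0.0, 1.0])

def phi(x):                   # quadratic value features: V(x) ~= W^T phi(x)
    return np.array([x[0]**2, x[0] * x[1], x[1]**2])

def dphi(x):                  # Jacobian of phi w.r.t. x, shape (3, 2)
    return np.array([[2 * x[0], 0.0],
                     [x[1],     x[0]],
                     [0.0,      2 * x[1]]])

Q, R = np.eye(2), 1.0         # stage-cost weights (assumed)
alpha, dt = 2.0, 0.01         # critic learning rate and step size (assumed)
W = np.zeros(3)               # critic weights
memory = []                   # recorded (x, u) samples for concurrent learning

def policy(x, W):
    # Greedy control from the current value estimate: u = -1/2 R^{-1} g^T dphi^T W
    return -0.5 / R * g(x) @ (dphi(x).T @ W)

def bellman_error(x, u, W):
    sigma = dphi(x) @ (f(x) + g(x) * u)      # regressor along the dynamics
    e = W @ sigma + x @ Q @ x + R * u**2     # HJB residual at this sample
    return e, sigma

x = np.array([1.0, -1.0])
for k in range(3000):
    u = policy(x, W)
    e, sigma = bellman_error(x, u, W)
    # Normalized gradient step on the current sample
    dW = -alpha * sigma * e / (1.0 + sigma @ sigma)**2
    # Concurrent-learning term: replay stored samples so weight convergence
    # does not depend on persistent excitation of the live trajectory
    for xr, ur in memory:
        er, sr = bellman_error(xr, ur, W)
        dW += -alpha * sr * er / (1.0 + sr @ sr)**2
    W += dt * dW
    if k % 300 == 0:
        memory.append((x.copy(), u))         # record a sample for replay
    x = x + dt * (f(x) + g(x) * u)           # simulate one step

print("learned critic weights:", W)
```

In this sketch the safe-guarding term and the virtual-real interconnection are omitted; it only shows how a single critic trained with a concurrent-learning data stack relaxes the excitation requirement described in the abstract.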

Download paper here

Recommended citation: Tan, Jun Kai and Xue, Shuang Si and Li, Huan and Cao, Hui and Li, Dong Yu (2024). Safe stabilization control for interconnected virtual-real systems via model-based reinforcement learning. 2024 14th Asian Control Conference (ASCC).