Hierarchical safe reinforcement learning control for leader-follower systems with prescribed performance

Published in IEEE Transactions on Automation Science and Engineering, 2025

This paper proposes a hierarchical safe reinforcement learning with prescribed performance control (HSRL-PPC) scheme to address the challenges of interconnected leader-follower systems operating in complex environments. The framework consists of two levels: at the higher level, the leader agent detects and avoids moving obstacles while planning optimal paths; at the lower level, the follower agent tracks the leader within strict prescribed performance bounds. We formulate the optimal prescribed performance safe control problem and solve it via the Hamilton-Jacobi-Bellman (HJB) equation. Due to system nonlinearity and obstacle complexity, we approximate the leader's optimal value function with a state-following neural network that efficiently extrapolates learned values to neighboring states, while a standard critic neural network approximates the follower's value function. A Lyapunov stability analysis establishes theoretical guarantees for the closed-loop system. Experimental results from two simulation examples and hardware tests with a quadcopter-vehicle system validate the effectiveness of the proposed approach in achieving safe navigation and precise tracking in dynamic environments.
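As a rough illustration of the prescribed-performance idea from the abstract, the sketch below shows a common funnel construction: the tracking error must stay inside an exponentially shrinking bound, and an artanh transformation maps the constrained error into an unconstrained coordinate on which a controller (or critic) can operate. The specific bound shape and all parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rho(t, rho0=1.0, rho_inf=0.1, decay=1.0):
    """Exponentially shrinking performance bound (hypothetical parameters):
    rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transform(e, t):
    """Map a bounded tracking error e, with |e| < rho(t), into an
    unconstrained coordinate via the inverse hyperbolic tangent."""
    z = e / rho(t)  # normalized error; must remain in (-1, 1)
    return np.arctanh(np.clip(z, -0.999, 0.999))

# As long as |e(t)| < rho(t), the transformed error stays finite, so driving
# the transformed error to zero keeps the raw error inside the funnel.
t = np.linspace(0.0, 5.0, 6)
e = 0.5 * np.exp(-2.0 * t)  # an example error trajectory decaying inside the funnel
print(np.all(np.abs(e) < rho(t)))  # True: the trajectory respects the bound
```

In the paper's setting, the follower's controller is designed on the transformed error, which is what enforces the "strict prescribed performance bounds" mentioned above.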

Download paper here DOI

Recommended citation: Tan, J., Xue, S., Li, H., Guo, Z., Cao, H., & Chen, B. (2025). Hierarchical safe reinforcement learning control for leader-follower systems with prescribed performance. IEEE Transactions on Automation Science and Engineering.