Q-Learning-Based Robust Control for Nonlinear Systems With Mismatched Perturbations.


Authors: Qian Cui, Gang Feng, Xuesong Xu

Language: English

Classification: 133.594 Types or schools of astrology originating in or associated with a

Publication: United States : IEEE Transactions on Neural Networks and Learning Systems, 2025

Physical description:

Collection: NCBI

ID: 707167

This brief presents a novel optimal control (OC) approach based on Q-learning to address robust control problems for uncertain nonlinear systems subject to mismatched perturbations. Unlike conventional methods that solve the robust control problem directly, the approach reformulates it as the minimization of a value function that incorporates the perturbation information. The Q-function is then constructed by coupling the optimal value function with the Hamiltonian function. To estimate the parameters of the Q-function, an integral reinforcement learning (IRL) technique is employed to develop a critic neural network (NN). Leveraging this parameterized Q-function, a model-free OC solution is derived that generalizes the model-based formulation. Furthermore, using Lyapunov's direct method, the resulting closed-loop system is guaranteed to be uniformly ultimately bounded. A case study demonstrates the effectiveness and applicability of the proposed approach.
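The abstract's core mechanic (parameterize a Q-function, estimate its parameters from data with a critic, then improve the policy from the Q-function alone, without a plant model) can be sketched in miniature. The sketch below is NOT the paper's method: it uses a hypothetical discrete-time linear-quadratic problem (matrices `A`, `B`, `Qc`, `Rc` are invented for illustration), where the Q-function is exactly quadratic and a least-squares critic plays the role of the NN, rather than the paper's continuous-time IRL for nonlinear systems with mismatched perturbations.

```python
import numpy as np

# Hypothetical stable plant (assumption: NOT from the paper), so the initial
# policy K = 0 is admissible. The Q-function here is Q(x,u) = [x;u]^T H [x;u].
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc, Rc = np.eye(2), np.eye(1)        # stage cost x^T Qc x + u^T Rc u
n, m = 2, 1
p = n + m                            # dimension of z = [x; u]

def phi(z):
    """Quadratic basis: vech(z z^T) with off-diagonal terms doubled."""
    return np.array([z[i] * z[j] * (1.0 if i == j else 2.0)
                     for i in range(p) for j in range(i, p)])

def H_from_theta(theta):
    """Rebuild the symmetric Q-function kernel H from the critic parameters."""
    H, k = np.zeros((p, p)), 0
    for i in range(p):
        for j in range(i, p):
            H[i, j] = H[j, i] = theta[k]
            k += 1
    return H

rng = np.random.default_rng(0)
K = np.zeros((m, n))                 # initial admissible policy u = K x

for _ in range(15):                  # policy iteration
    rows, targets = [], []
    x = rng.normal(size=n)
    for t in range(200):             # model-free data collection
        u = K @ x + 0.3 * rng.normal(size=m)   # exploration noise
        r = x @ Qc @ x + u @ Rc @ u            # measured stage cost
        x_next = A @ x + B @ u                 # plant acts as a black box
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, K @ x_next])
        rows.append(phi(z) - phi(z_next))      # Bellman-residual regressor
        targets.append(r)
        x = rng.normal(size=n) if t % 50 == 49 else x_next  # re-excite
    # Critic: least-squares fit of theta so that Q(z) = r + Q(z') on the data.
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    H = H_from_theta(theta)
    # Policy improvement from the Q-function alone (no model needed):
    K = -np.linalg.solve(H[n:, n:], H[n:, :n])

print(K)
```

In this linear-quadratic special case the learned gain coincides with the Riccati-optimal one, which is the sense in which the model-free solution recovers the model-based answer; the paper's contribution is extending this idea to nonlinear dynamics with mismatched perturbations folded into the value function.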

LIBRARY - HUTECH (Ho Chi Minh City University of Technology)
