Improving Nonlinear Model Predictive Control Laws via Implicit Q-Learning

Khalid Alhazmi*, S. Mani Sarathy*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

This paper presents an implicit Q-Learning scheme to improve the performance of nonlinear model predictive control laws while providing a stability guarantee. The control space of this learning-based method is restricted to the admissible control set of a Lyapunov-based nonlinear model predictive controller. The effectiveness of this method is demonstrated on a highly nonlinear chemical process system with practical significance. It is shown that the learning-based controller derived with this method improves the performance of a sub-optimal baseline controller beyond what is possible with supervised-learning approximation approaches. This scheme offers a promising new paradigm for improving model-based controllers that deteriorate due to the dynamic process changes typically encountered in real-world systems.
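The core idea described in the abstract, a Q-learning update in which the agent may only select inputs from the admissible set of a Lyapunov-based controller, can be illustrated with a minimal sketch. The following is a hypothetical toy example, not the paper's implementation: the scalar dynamics, the quadratic Lyapunov candidate `V`, and all function names are assumptions made for illustration. Admissibility is approximated here as strict decrease of `V` along the closed-loop step.

```python
# Illustrative sketch (NOT the paper's method): tabular Q-learning where both
# action selection and the TD target are restricted to inputs that decrease a
# Lyapunov candidate V, mimicking a Lyapunov-based admissible control set.
import numpy as np


def V(x):
    # Quadratic Lyapunov candidate V(x) = x^2 (illustrative choice).
    return x ** 2


def step(x, u):
    # Toy scalar dynamics x_{k+1} = 0.9 x_k + u_k (assumed for illustration).
    return 0.9 * x + u


def admissible_idx(x, actions):
    # Indices of actions that strictly decrease V, i.e. V(x+) < V(x).
    return [i for i, u in enumerate(actions) if V(step(x, u)) < V(x)]


def train(episodes=200, steps=30, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    actions = np.linspace(-1.0, 1.0, 11)
    bins = np.linspace(-2.0, 2.0, 21)          # state discretization
    Q = np.zeros((len(bins) + 1, len(actions)))
    for _ in range(episodes):
        x = rng.uniform(-2.0, 2.0)
        for _ in range(steps):
            s = np.digitize(x, bins)
            adm = admissible_idx(x, actions)
            if not adm:                        # e.g. at the origin: stop
                break
            # epsilon-greedy, but only over the admissible set
            if rng.random() < eps:
                a = rng.choice(adm)
            else:
                a = max(adm, key=lambda i: Q[s, i])
            x_next = step(x, actions[a])
            r = -(x_next ** 2 + 0.1 * actions[a] ** 2)   # negated stage cost
            s_next = np.digitize(x_next, bins)
            adm_next = admissible_idx(x_next, actions)
            # TD target maximizes only over admissible successor actions
            target = r + gamma * (Q[s_next, adm_next].max() if adm_next else 0.0)
            Q[s, a] += alpha * (target - Q[s, a])
            x = x_next
    return Q, actions, bins


Q, actions, bins = train()
```

Because every executed input satisfies the decrease condition on `V`, any policy the sketch can represent inherits the stability property of the restricted set, which is the mechanism the abstract attributes to the Lyapunov-based restriction; the learning then searches for performance improvements inside that safe set.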

Original language: English (US)
Title of host publication: IFAC-PapersOnLine
Editors: Hideaki Ishii, Yoshio Ebihara, Jun-ichi Imura, Masaki Yamakita
Publisher: Elsevier B.V.
Pages: 10027-10032
Number of pages: 6
Edition: 2
ISBN (Electronic): 9781713872344
DOIs
State: Published - Jul 1 2023
Event: 22nd IFAC World Congress - Yokohama, Japan
Duration: Jul 9 2023 - Jul 14 2023

Publication series

Name: IFAC-PapersOnLine
Number: 2
Volume: 56
ISSN (Electronic): 2405-8963

Conference

Conference: 22nd IFAC World Congress
Country/Territory: Japan
City: Yokohama
Period: 07/9/23 - 07/14/23

Keywords

  • chemical reactions
  • deep learning
  • model predictive control
  • process control
  • reinforcement learning

ASJC Scopus subject areas

  • Control and Systems Engineering
