TY - GEN
T1 - Logarithmic regret bound in partially observable linear dynamical systems
AU - Lale, Sahin
AU - Azizzadenesheli, Kamyar
AU - Hassibi, Babak
AU - Anandkumar, Anima
N1 - KAUST Repository Item: Exported on 2022-07-01
Acknowledgements: S. Lale is supported in part by DARPA PAI. K. Azizzadenesheli gratefully acknowledge the financial support of Raytheon and Amazon Web Services. B. Hassibi is supported in part by the National Science Foundation under grants CNS-0932428, CCF-1018927, CCF-1423663 and CCF-1409204, by a grant from Qualcomm Inc., by NASA’s Jet Propulsion Laboratory through the President and Director’s Fund, and by King Abdullah University of Science and Technology. A. Anandkumar is supported in part by Bren endowed chair, DARPA PAIHR00111890035 and LwLL grants, Raytheon, Microsoft, Google, and Adobe faculty fellowships.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - We study the problem of system identification and adaptive control in partially observable linear dynamical systems. Adaptive and closed-loop system identification is a challenging problem due to correlations introduced in data collection. In this paper, we present the first model estimation method with finite-time guarantees in both open- and closed-loop system identification. Deploying this estimation method, we propose adaptive control online learning (ADAPTON), an efficient reinforcement learning algorithm that adaptively learns the system dynamics and continuously updates its controller through online learning steps. ADAPTON estimates the model dynamics by occasionally solving a linear regression problem through interactions with the environment. Using policy re-parameterization and the estimated model, ADAPTON constructs counterfactual loss functions to be used for updating the controller through online gradient descent. Over time, ADAPTON improves its model estimates and obtains more accurate gradient updates to improve the controller. We show that ADAPTON achieves a regret upper bound of polylog(T) after T time steps of agent-environment interaction. To the best of our knowledge, ADAPTON is the first algorithm that achieves polylog(T) regret in adaptive control of unknown partially observable linear dynamical systems, which includes linear quadratic Gaussian (LQG) control.
UR - http://hdl.handle.net/10754/662418
UR - https://proceedings.neurips.cc/paper/2020/file/ef8b5fcc338e003145ac9c134754db71-Paper.pdf
UR - http://www.scopus.com/inward/record.url?scp=85108428647&partnerID=8YFLogxK
M3 - Conference contribution
BT - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
PB - Neural Information Processing Systems Foundation
ER -