TY - JOUR
T1 - Data Assimilation in Chaotic Systems Using Deep Reinforcement Learning
AU - Hammoud, Mohamad Abed El Rahman
AU - Raboudi, Naila
AU - Titi, Edriss S.
AU - Knio, Omar
AU - Hoteit, Ibrahim
N1 - Publisher Copyright:
© 2024 The Author(s). Journal of Advances in Modeling Earth Systems published by Wiley Periodicals LLC on behalf of the American Geophysical Union.
PY - 2024/8
Y1 - 2024/8
N2 - Data assimilation (DA) plays a pivotal role in diverse applications, ranging from weather forecasting to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on the Kalman filter's linear update equation to correct the state of each ensemble forecast member using incoming observations. Recent advances have seen the emergence of deep learning approaches in this domain, primarily within a supervised learning framework; however, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a new DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. We demonstrate this approach on the chaotic Lorenz 63 and 96 systems, where the agent's objective is to maximize a discounted cumulative reward, a geometric series whose terms are proportional to the negative root-mean-square error (RMSE) between the observations and the corresponding forecast states. The agent thereby develops a correction strategy that enhances model forecasts based on the available observations. Our strategy employs a stochastic action policy, enabling a Monte Carlo-based DA framework that randomly samples the policy to generate an ensemble of assimilated realizations. Numerical results demonstrate that the developed RL algorithm performs favorably compared to the EnKF. Additionally, we illustrate the agent's capability to assimilate non-Gaussian observations, addressing one of the limitations of the EnKF.
AB - Data assimilation (DA) plays a pivotal role in diverse applications, ranging from weather forecasting to trajectory planning for autonomous vehicles. A prime example is the widely used ensemble Kalman filter (EnKF), which relies on the Kalman filter's linear update equation to correct the state of each ensemble forecast member using incoming observations. Recent advances have seen the emergence of deep learning approaches in this domain, primarily within a supervised learning framework; however, the adaptability of such models to untrained scenarios remains a challenge. In this study, we introduce a new DA strategy that utilizes reinforcement learning (RL) to apply state corrections using full or partial observations of the state variables. We demonstrate this approach on the chaotic Lorenz 63 and 96 systems, where the agent's objective is to maximize a discounted cumulative reward, a geometric series whose terms are proportional to the negative root-mean-square error (RMSE) between the observations and the corresponding forecast states. The agent thereby develops a correction strategy that enhances model forecasts based on the available observations. Our strategy employs a stochastic action policy, enabling a Monte Carlo-based DA framework that randomly samples the policy to generate an ensemble of assimilated realizations. Numerical results demonstrate that the developed RL algorithm performs favorably compared to the EnKF. Additionally, we illustrate the agent's capability to assimilate non-Gaussian observations, addressing one of the limitations of the EnKF.
KW - artificial intelligence
KW - chaos
KW - control
KW - data assimilation
KW - deep reinforcement learning
KW - Lorenz
UR - http://www.scopus.com/inward/record.url?scp=85200915814&partnerID=8YFLogxK
U2 - 10.1029/2023MS004178
DO - 10.1029/2023MS004178
M3 - Article
AN - SCOPUS:85200915814
SN - 1942-2466
VL - 16
JO - Journal of Advances in Modeling Earth Systems
JF - Journal of Advances in Modeling Earth Systems
IS - 8
M1 - e2023MS004178
ER -