TY - JOUR

T1 - A Lagged Particle Filter for Stable Filtering of Certain High-Dimensional State-Space Models

AU - Ruzayqat, Hamza Mahmoud

AU - Er-raiy, Aimad

AU - Beskos, Alexandros

AU - Crisan, Dan

AU - Jasra, Ajay

AU - Kantas, Nikolas

N1 - KAUST Repository Item: Exported on 2022-09-30
Acknowledgements: The work of the first and fifth authors was supported by KAUST baseline funding. The work of the fourth author was partially supported by European Research Council (ERC) Synergy grant STUOD-DLV-8564. The work of the sixth author was supported by a J.P. Morgan A.I. Research Award. We thank two referees and the editor for their comments which have greatly improved the article.

PY - 2022/9/28

Y1 - 2022/9/28

N2 - We consider the problem of high-dimensional filtering of state-space models (SSMs) at discrete times. This problem is particularly challenging as analytical solutions are typically not available and many numerical approximation methods can have a cost that scales exponentially with the dimension of the hidden state. Inspired by lag-approximation methods for the smoothing problem [G. Kitagawa and S. Sato, Monte Carlo smoothing and self-organising state-space model, in Sequential Monte Carlo Methods in Practice, Springer, New York, 2001, pp. 178–195; J. Olsson et al., Bernoulli, 14 (2008), pp. 155–179], we introduce a lagged approximation of the smoothing distribution that is necessarily biased. For certain classes of SSMs, particularly those that forget the initial condition exponentially fast in time, the bias of our approximation is shown to be uniformly controlled in the dimension and exponentially small in time. We develop a sequential Monte Carlo (SMC) method to recursively estimate expectations with respect to our biased filtering distributions. Moreover, we prove, for a class of SSMs that can contain dependencies amongst coordinates, that as the dimension d→∞ the cost to achieve a stable mean square error in estimation, for classes of expectations, is O(Nd^2) per unit time, where N is the number of simulated samples in the SMC algorithm. Our methodology is implemented on several challenging high-dimensional examples including the conservative shallow-water model.

UR - http://hdl.handle.net/10754/672163

UR - https://epubs.siam.org/doi/10.1137/21M1450392

U2 - 10.1137/21m1450392

DO - 10.1137/21m1450392

M3 - Article

SN - 2166-2525

VL - 10

SP - 1130

EP - 1161

JO - SIAM/ASA Journal on Uncertainty Quantification

JF - SIAM/ASA Journal on Uncertainty Quantification

IS - 3

ER -