TY - JOUR
T1 - Unbiased estimation of the gradient of the log-likelihood in inverse problems
AU - Jasra, Ajay
AU - Law, Kody J H
AU - Lu, Deng
N1 - Acknowledgements: AJ was supported by KAUST baseline funding. Some of this research was supported by Singapore MOE Tier 1 grant R-155-000-182-114. KJHL was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. We thank two referees for their comments, which have greatly improved the article.
PY - 2021/3/3
Y1 - 2021/3/3
N2 - We consider the problem of estimating a parameter θ ∈ Θ ⊆ R^{d_θ} associated with a Bayesian inverse problem. Typically, one must resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e., the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation under the original stochastic model of interest, but can also be used in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm is highly amenable to parallel computation.
AB - We consider the problem of estimating a parameter θ ∈ Θ ⊆ R^{d_θ} associated with a Bayesian inverse problem. Typically, one must resort to a numerical approximation of the gradient of the log-likelihood and also adopt a discretization of the problem in space and/or time. We develop a new methodology to unbiasedly estimate the gradient of the log-likelihood with respect to the unknown parameter, i.e., the expectation of the estimate has no discretization bias. Such a property is not only useful for estimation under the original stochastic model of interest, but can also be used in stochastic gradient algorithms, which benefit from unbiased estimates. Under appropriate assumptions, we prove that our estimator is not only unbiased but also of finite variance. In addition, when implemented on a single processor, we show that the cost to achieve a given level of error is comparable to that of multilevel Monte Carlo methods, both practically and theoretically. However, the new algorithm is highly amenable to parallel computation.
UR - http://hdl.handle.net/10754/662314
UR - http://link.springer.com/10.1007/s11222-021-09994-6
UR - http://www.scopus.com/inward/record.url?scp=85102143323&partnerID=8YFLogxK
U2 - 10.1007/s11222-021-09994-6
DO - 10.1007/s11222-021-09994-6
M3 - Article
SN - 1573-1375
VL - 31
JO - Statistics and Computing
JF - Statistics and Computing
IS - 3
ER -