Gradient Free Parameter Estimation for Hidden Markov Models with Intractable Likelihoods

Elena Ehrlich, Ajay Jasra, Nikolas Kantas

Research output: Contribution to journal › Article › peer-review

Abstract

In this article we focus on maximum likelihood estimation (MLE) for the static model parameters of hidden Markov models (HMMs). We consider the case where one cannot, or does not want to, compute the conditional likelihood density of the observation given the hidden state, because of its computational cost or analytical intractability. Instead, we assume that one may obtain samples from this conditional likelihood and hence use approximate Bayesian computation (ABC) approximations of the original HMM. Although these ABC approximations induce a bias, it can be controlled to arbitrary precision via a positive parameter $\epsilon$: the bias decreases as $\epsilon$ decreases. We first establish that, when an ABC approximation of the HMM is used for a fixed batch of data, the bias of the resulting log-marginal likelihood and its gradient is no worse than $\mathcal{O}(n\epsilon)$, where n is the total number of data points. Therefore, when using gradient methods to perform MLE for the ABC approximation of the HMM, one may expect parameter estimates of reasonable accuracy. To compute an estimate of the unknown and fixed model parameters, we propose a gradient approach based on simultaneous perturbation stochastic approximation (SPSA) and sequential Monte Carlo (SMC) for the ABC approximation of the HMM. The performance of this method is illustrated using two numerical examples.
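The approach described in the abstract combines two off-the-shelf ingredients: an ABC version of the bootstrap particle filter, in which the intractable observation density is replaced by an indicator that a simulated pseudo-observation lands within $\epsilon$ of the actual observation, and SPSA, which estimates the gradient of the log-likelihood from just two noisy function evaluations per iteration, regardless of parameter dimension. The sketch below is a minimal illustration of this pairing on a toy scalar linear-Gaussian HMM; the model, tuning constants (a, c, eps, particle count), and function names are assumptions for illustration and do not reproduce the paper's examples or implementation.

```python
import numpy as np

def abc_smc_loglik(theta, y, n_particles=500, eps=0.5, rng=None):
    """ABC bootstrap particle filter estimate of the log-marginal likelihood.

    Toy scalar HMM (illustrative only, not the paper's examples):
      X_n = theta * X_{n-1} + V_n,   V_n ~ N(0, 1)
      Y_n | X_n is treated as sample-only; pseudo-observations are drawn
      as U_n = X_n + W_n, W_n ~ N(0, 1).
    The intractable density g(y | x) is replaced by the hard-threshold
    ABC indicator 1{|U_n - y_n| < eps}.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(size=n_particles)                  # initial particles
    loglik = 0.0
    for y_n in y:
        x = theta * x + rng.normal(size=n_particles)  # propagate hidden state
        u = x + rng.normal(size=n_particles)          # simulate pseudo-observations
        w = (np.abs(u - y_n) < eps).astype(float)     # ABC acceptance weights
        if w.sum() == 0.0:                            # every particle rejected
            return -np.inf
        loglik += np.log(w.mean())                    # p_hat(y_n | y_{1:n-1})
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                    # multinomial resampling
    return loglik

def spsa_mle(y, theta0, n_iter=200, a=0.005, c=0.1, seed=0):
    """Gradient-free MLE: SPSA ascent on the ABC log-likelihood estimate."""
    rng = np.random.default_rng(seed)
    theta = theta0
    for k in range(1, n_iter + 1):
        a_k = a / k ** 0.602                          # standard SPSA gain decay
        c_k = c / k ** 0.101
        delta = rng.choice([-1.0, 1.0])               # Rademacher perturbation
        seed_k = int(rng.integers(1 << 31))           # common random numbers below
        lp = abc_smc_loglik(theta + c_k * delta, y,
                            rng=np.random.default_rng(seed_k))
        lm = abc_smc_loglik(theta - c_k * delta, y,
                            rng=np.random.default_rng(seed_k))
        if not (np.isfinite(lp) and np.isfinite(lm)):
            continue                                  # skip step if a filter died
        g_hat = (lp - lm) / (2.0 * c_k * delta)       # two-sided SPSA gradient estimate
        theta = float(np.clip(theta + a_k * g_hat, -0.99, 0.99))  # keep AR(1) stable
    return theta

# Usage: simulate 100 observations with theta = 0.7, then recover it.
rng = np.random.default_rng(1)
x, ys = 0.0, []
for _ in range(100):
    x = 0.7 * x + rng.normal()
    ys.append(x + rng.normal())
print(spsa_mle(np.array(ys), theta0=0.2))
```

Reusing the same random seed for the two perturbed likelihood evaluations keeps the SPSA difference from being swamped by the Monte Carlo noise of the particle filter; this is a standard variance-reduction device for stochastic finite differences, not something specified by the abstract.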
Original language: English (US)
Journal: Methodology and Computing in Applied Probability
Volume: 17
Issue number: 2
DOIs
State: Published - Jun 1 2015
Externally published: Yes
