TY - JOUR
T1 - Semisupervised learning of Hidden Markov models via a homotopy method
AU - Ji, Shihao
AU - Watson, Layne T.
AU - Carin, Lawrence
N1 - Generated from Scopus record by KAUST IRTS on 2021-02-09
PY - 2009/1/1
Y1 - 2009/1/1
N2 - Hidden Markov model (HMM) classifier design is considered for the analysis of sequential data, incorporating both labeled and unlabeled data for training; the balance between the use of labeled and unlabeled data is controlled by an allocation parameter λ ∈ [0, 1], where λ = 0 corresponds to purely supervised HMM learning (based only on the labeled data) and λ = 1 corresponds to unsupervised HMM-based clustering (based only on the unlabeled data). The associated estimation problem can typically be reduced to solving a set of fixed-point equations in the form of a "natural-parameter homotopy." This paper applies a homotopy method to track a continuous path of solutions, starting from a local supervised solution (λ = 0) to a local unsupervised solution (λ = 1). The homotopy method is guaranteed to track with probability one from λ = 0 to λ = 1 if the λ = 0 solution is unique; this condition is not satisfied for the HMM since the maximum likelihood supervised solution (λ = 0) is characterized by many local optima. A modified form of the homotopy map for HMMs assures a track from λ = 0 to λ = 1. Following this track leads to a formulation for selecting λ ∈ (0, 1) for a semisupervised solution, and it also provides a tool for selection from among multiple locally optimal supervised solutions. The results of applying the proposed method to measured and synthetic sequential data verify its robustness and feasibility compared to the conventional EM approach for semisupervised HMM training. © 2009 IEEE.
AB - Hidden Markov model (HMM) classifier design is considered for the analysis of sequential data, incorporating both labeled and unlabeled data for training; the balance between the use of labeled and unlabeled data is controlled by an allocation parameter λ ∈ [0, 1], where λ = 0 corresponds to purely supervised HMM learning (based only on the labeled data) and λ = 1 corresponds to unsupervised HMM-based clustering (based only on the unlabeled data). The associated estimation problem can typically be reduced to solving a set of fixed-point equations in the form of a "natural-parameter homotopy." This paper applies a homotopy method to track a continuous path of solutions, starting from a local supervised solution (λ = 0) to a local unsupervised solution (λ = 1). The homotopy method is guaranteed to track with probability one from λ = 0 to λ = 1 if the λ = 0 solution is unique; this condition is not satisfied for the HMM since the maximum likelihood supervised solution (λ = 0) is characterized by many local optima. A modified form of the homotopy map for HMMs assures a track from λ = 0 to λ = 1. Following this track leads to a formulation for selecting λ ∈ (0, 1) for a semisupervised solution, and it also provides a tool for selection from among multiple locally optimal supervised solutions. The results of applying the proposed method to measured and synthetic sequential data verify its robustness and feasibility compared to the conventional EM approach for semisupervised HMM training. © 2009 IEEE.
UR - http://ieeexplore.ieee.org/document/4479478/
UR - http://www.scopus.com/inward/record.url?scp=62249193672&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2008.71
DO - 10.1109/TPAMI.2008.71
M3 - Article
SN - 0162-8828
VL - 31
SP - 275
EP - 287
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 2
ER -