TY - JOUR
T1 - Learning Bayesian networks for discrete data
AU - Liang, Faming
AU - Zhang, Jian
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): KUS-C1-016-04
Acknowledgements: Liang's research was supported in part by the grant (DMS-0607755) of the National Science Foundation and the award (KUS-C1-016-04) given by King Abdullah University of Science and Technology (KAUST). The authors thank Professor S.P. Azen, the associate editor, and the referee for their comments which have led to significant improvement of this paper.
This publication acknowledges KAUST support, but has no KAUST-affiliated authors.
PY - 2009/2
Y1 - 2009/2
N2 - Bayesian networks have received much attention in the recent literature. In this article, we propose an approach to learning Bayesian networks using the stochastic approximation Monte Carlo (SAMC) algorithm. Our approach has two nice features. First, it possesses a self-adjusting mechanism and thus essentially avoids the local-trap problem suffered by conventional MCMC simulation-based approaches to learning Bayesian networks. Second, it falls into the class of dynamic importance sampling algorithms: network features can be inferred by dynamically weighted averaging of the samples generated during the learning process, and the resulting estimates can have much lower variation than single-model-based estimates. The numerical results indicate that our approach mixes much faster over the space of Bayesian networks than conventional MCMC simulation-based approaches. © 2008 Elsevier B.V. All rights reserved.
UR - http://hdl.handle.net/10754/598713
UR - https://linkinghub.elsevier.com/retrieve/pii/S0167947308004805
UR - http://www.scopus.com/inward/record.url?scp=58549098419&partnerID=8YFLogxK
U2 - 10.1016/j.csda.2008.10.007
DO - 10.1016/j.csda.2008.10.007
M3 - Article
SN - 0167-9473
VL - 53
SP - 865
EP - 876
JO - Computational Statistics & Data Analysis
JF - Computational Statistics & Data Analysis
IS - 4
ER -