Abstract
We propose algorithms for learning Markov boundaries from data without having to learn a Bayesian network first. We study their correctness, scalability, and data efficiency. The last two properties are important because we aim to apply the algorithms to identify the minimal set of features needed for probabilistic classification in databases with thousands of features but few instances, e.g. gene expression databases. We evaluate the algorithms on synthetic and real databases, including one with 139,351 features.
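To illustrate the general idea (this is a generic grow-shrink sketch of Markov-boundary discovery via conditional-independence tests, not necessarily the paper's exact algorithms), one can greedily add the feature most dependent on the target given the current boundary, then prune features that become conditionally independent. The `cond_mutual_info` estimator, the `iamb`-style search, and the 0.02-bit threshold below are all illustrative assumptions:

```python
import math
import random

def cond_mutual_info(data, x, y, z):
    # Estimate I(X; Y | Z) in bits from discrete samples.
    # data: list of dicts; x, y: column names; z: list of column names.
    n = len(data)
    strata = {}
    for row in data:
        strata.setdefault(tuple(row[c] for c in z), []).append((row[x], row[y]))
    cmi = 0.0
    for pairs in strata.values():
        m = len(pairs)
        pxy, px, py = {}, {}, {}
        for a, b in pairs:
            pxy[(a, b)] = pxy.get((a, b), 0) + 1
            px[a] = px.get(a, 0) + 1
            py[b] = py.get(b, 0) + 1
        for (a, b), c in pxy.items():
            # Weight each stratum's mutual information by its frequency m/n.
            cmi += (m / n) * (c / m) * math.log2(c * m / (px[a] * py[b]))
    return cmi

def markov_boundary(data, target, features, threshold=0.02):
    # Grow-shrink sketch: threshold (in bits) stands in for a CI test.
    mb = set()
    while True:  # grow: add the most dependent remaining feature
        best, best_cmi = None, threshold
        for f in features - mb:
            cmi = cond_mutual_info(data, f, target, sorted(mb))
            if cmi > best_cmi:
                best, best_cmi = f, cmi
        if best is None:
            break
        mb.add(best)
    for f in sorted(mb):  # shrink: drop false positives
        if cond_mutual_info(data, f, target, sorted(mb - {f})) <= threshold:
            mb.discard(f)
    return mb

# Toy check: T = X0 OR X1, with X2..X4 as irrelevant noise features.
random.seed(0)
data = []
for _ in range(2000):
    row = {f"X{j}": random.randint(0, 1) for j in range(5)}
    row["T"] = row["X0"] | row["X1"]
    data.append(row)
found = markov_boundary(data, "T", {f"X{j}" for j in range(5)})
```

The shrink phase matters because a feature added early in the grow phase can be rendered redundant by features added later; removing it keeps the returned set minimal.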
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 211-232 |
| Number of pages | 22 |
| Journal | International Journal of Approximate Reasoning |
| Volume | 45 |
| Issue number | 2 |
| DOIs | |
| State | Published - Jul 2007 |
| Externally published | Yes |
Keywords
- Bayesian networks
- Classification
- Feature subset selection
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Artificial Intelligence
- Applied Mathematics