Abstract
In recent work we have developed an online algorithm selection technique, in which a model of algorithm performance is learned incrementally while being used. The resulting exploration-exploitation trade-off is solved as a bandit problem. The candidate solvers are run in parallel on a single machine, as an algorithm portfolio, and computation time is shared among them according to their expected performances. In this paper, we extend our technique to the more interesting and practical case of multiple CPUs. © 2009 Springer-Verlag Berlin Heidelberg.
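The mechanism the abstract describes — incrementally learning each solver's expected performance and dividing computation time among the portfolio accordingly — can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the class name `BanditTimeAllocator`, the softmax weighting, and the incremental-mean update are all assumptions chosen for simplicity.

```python
import math

class BanditTimeAllocator:
    """Hypothetical sketch of bandit-style time sharing in an
    algorithm portfolio: each solver gets a fraction of the next
    time slice proportional to a softmax of its estimated
    performance, and estimates are updated online."""

    def __init__(self, n_solvers, temperature=1.0):
        self.means = [0.0] * n_solvers   # running performance estimates
        self.counts = [0] * n_solvers    # observations per solver
        self.temperature = temperature   # exploration level

    def shares(self):
        """Fraction of the next time slice assigned to each solver."""
        weights = [math.exp(m / self.temperature) for m in self.means]
        total = sum(weights)
        return [w / total for w in weights]

    def update(self, solver, reward):
        """Incorporate an observed reward via an incremental mean."""
        self.counts[solver] += 1
        self.means[solver] += (reward - self.means[solver]) / self.counts[solver]
```

With no observations, all solvers share time equally; as rewards arrive, the allocation shifts toward the better-performing solvers while still exploring the rest. Extending this scheme to multiple CPUs (the contribution of the paper) would amount to distributing these shares over several machines rather than one.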
| Original language | English (US) |
| --- | --- |
| Title of host publication | Advances in Soft Computing |
| Pages | 634-643 |
| Number of pages | 10 |
| DOIs | |
| State | Published - Jan 9 2009 |
| Externally published | Yes |
ASJC Scopus subject areas
- Computational Mechanics
- Computer Science (miscellaneous)
- Computer Science Applications