Abstract
Algorithm selection, algorithm portfolios, and randomized restarts can all profit from a probabilistic model of algorithm runtime, estimated from data gathered by running a set of training experiments. Censored sampling offers a principled way of reducing this initial training time. We study the trade-off between training time and model precision by varying the censoring threshold and analyzing the consequent impact on the performance of an optimal restart strategy based on the estimated runtime distribution. We present experiments with a SAT solver on a graph-coloring benchmark. Due to the "heavy-tailed" runtime distribution, even modest censoring can reduce training time by a few orders of magnitude. The nature of the optimization process underlying the restart strategy renders its performance surprisingly robust, even under more aggressive censoring. © Springer-Verlag Berlin Heidelberg 2006.
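To make the idea concrete, the sketch below (illustrative Python, not the paper's code) fits a lognormal runtime model to right-censored solver runs by maximum likelihood, then searches for the fixed restart cutoff c minimizing the standard expected-runtime expression E[T_c] = E[min(T, c)] / F(c) for restarts with i.i.d. run lengths. The lognormal family, the search grid, and all function names are assumptions for illustration; the paper's actual model and optimization may differ.

```python
# Sketch: censored maximum-likelihood fit of a runtime distribution,
# followed by optimization of a fixed-cutoff restart strategy.
# The lognormal model and all names here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def fit_censored_lognormal(times, censored):
    """MLE of a lognormal runtime model with right-censored observations.

    times    -- observed runtimes; for censored runs, the censoring threshold
    censored -- boolean mask, True where the run was stopped at the threshold
    """
    def neg_log_lik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                 # keep sigma positive
        dist = lognorm(s=sigma, scale=np.exp(mu))
        ll = dist.logpdf(times[~censored]).sum()  # completed runs: density
        ll += dist.logsf(times[censored]).sum()   # censored runs: P(T > threshold)
        return -ll
    res = minimize(neg_log_lik, x0=[np.log(np.median(times)), 0.0])
    mu, log_sigma = res.x
    return lognorm(s=np.exp(log_sigma), scale=np.exp(mu))

def optimal_fixed_cutoff(dist, grid):
    """Expected total runtime of a fixed-cutoff restart strategy,
    E[T_c] = E[min(T, c)] / F(c), minimized over a grid of cutoffs c."""
    best_c, best_e = None, np.inf
    for c in grid:
        F_c = dist.cdf(c)
        if F_c <= 0.0:
            continue                              # cutoff too small to ever succeed
        t = np.linspace(0.0, c, 1000)
        e_min = np.mean(dist.sf(t)) * c           # ~ integral of survival = E[min(T, c)]
        e = e_min / F_c
        if e < best_e:
            best_c, best_e = c, e
    return best_c, best_e

# Usage: simulate heavy-tailed runtimes, censor at a threshold, fit, optimize.
rng = np.random.default_rng(0)
true_dist = lognorm(s=2.0, scale=10.0)            # heavy right tail
raw = true_dist.rvs(size=500, random_state=rng)
threshold = 50.0                                  # censoring threshold
censored = raw > threshold
times = np.minimum(raw, threshold)
model = fit_censored_lognormal(times, censored)
c_star, e_star = optimal_fixed_cutoff(model, np.logspace(-1, 3, 200))
print(f"optimal cutoff ~ {c_star:.1f}, expected total runtime ~ {e_star:.1f}")
```

The censored runs contribute only a survival term to the likelihood, which is why training time drops sharply under heavy-tailed runtimes: the longest runs are truncated at the threshold yet still inform the fit.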
Original language | English (US) |
---|---|
Title of host publication | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
Publisher | Springer Verlag |
Pages | 167-181 |
Number of pages | 15 |
ISBN (Print) | 3540462678 |
DOIs | |
State | Published - Jan 1 2006 |
Externally published | Yes |