Algorithm selection is an area that increasingly attracts attention from researchers and practitioners from a variety of different backgrounds. After years of fruitful applications in a number of domains, there is a lot of data, but no standard format or repository for this data. This is in contrast to more established fields and makes it difficult to effectively share and compare different approaches. It also creates unnecessary obstacles for researchers new to algorithm selection to begin work in this area. We present a standardised format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. Leveraging this standardisation, we have run a set of example experiments that build and evaluate algorithm selection models through a common interface.
- Algorithm Selection Scenarios
- Data set description, Automated Exploratory Data Analysis (EDA), experimental results
- Format Specification
- Packages for ASlib:
  - Scenario Check Tool (Python)
We have a new algorithm selection challenge this year (2017).
Since ASlib was designed to enable fair comparisons of different algorithm selectors, we run an ongoing evaluation.
The results can be found here.
To be included in the evaluation, please send us the source code of your selector (used only to verify your results; we will not publish it) and, for each scenario, the results of the given cross-validation as a CSV file with two columns: problem instance and performance.
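The submission format above is just a plain two-column CSV. As a minimal sketch (instance names and performance values below are made up for illustration), such a file could be produced like this:

```python
import csv

# Hypothetical predictions: for each problem instance, the performance
# achieved by the algorithm our selector chose on that instance.
predictions = {
    "instance_001": 12.7,
    "instance_002": 3.4,
    "instance_003": 900.0,  # e.g. a run that hit the cutoff time
}

# Write one row per instance: problem instance, performance.
with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for instance, performance in predictions.items():
        writer.writerow([instance, performance])
```

One such file would be submitted per scenario, covering the instances of the given cross-validation splits.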
A challenge on algorithm selection using this data was also held in 2015.
If you want to submit a new algorithm selection scenario, please open a pull request in the repository. Please use the Scenario Check Tool (Python) to check that your files have the correct format (Format Specification).
We also provide a very simple starter script to generate an ASlib scenario from two CSV files containing runtime data and instance features.
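To illustrate the idea behind such a conversion (this is a hedged sketch, not the actual starter script: the assumed CSV layouts, file names, and the fixed `ok` run status are simplifying assumptions, and a full scenario needs further files such as `description.txt` and `cv.arff`), two plain CSVs could be turned into ARFF-style scenario files roughly as follows:

```python
import csv
import os

def build_scenario(runtime_csv, feature_csv, out_dir):
    """Sketch: convert two plain CSV files into minimal ASlib-style ARFF files.

    Assumed (hypothetical) input layouts:
      runtime_csv: header 'instance,algorithm,runtime', one row per run
      feature_csv: header 'instance,<feature name>,...', one row per instance
    """
    os.makedirs(out_dir, exist_ok=True)

    with open(runtime_csv, newline="") as f:
        runs = list(csv.DictReader(f))
    with open(feature_csv, newline="") as f:
        reader = csv.reader(f)
        feat_header = next(reader)
        feats = list(reader)

    # algorithm_runs.arff: one row per (instance, algorithm) pair.
    # The run status is hard-coded to 'ok' here for simplicity.
    with open(os.path.join(out_dir, "algorithm_runs.arff"), "w") as f:
        f.write("@RELATION ALGORITHM_RUNS\n")
        f.write("@ATTRIBUTE instance_id STRING\n")
        f.write("@ATTRIBUTE repetition NUMERIC\n")
        f.write("@ATTRIBUTE algorithm STRING\n")
        f.write("@ATTRIBUTE runtime NUMERIC\n")
        f.write("@ATTRIBUTE runstatus {ok,timeout}\n")
        f.write("@DATA\n")
        for r in runs:
            f.write(f"{r['instance']},1,{r['algorithm']},{r['runtime']},ok\n")

    # feature_values.arff: one row of feature values per instance.
    with open(os.path.join(out_dir, "feature_values.arff"), "w") as f:
        f.write("@RELATION FEATURE_VALUES\n")
        f.write("@ATTRIBUTE instance_id STRING\n")
        f.write("@ATTRIBUTE repetition NUMERIC\n")
        for name in feat_header[1:]:
            f.write(f"@ATTRIBUTE {name} NUMERIC\n")
        f.write("@DATA\n")
        for row in feats:
            f.write(row[0] + ",1," + ",".join(row[1:]) + "\n")
```

Please rely on the actual starter script and validate any generated scenario with the Scenario Check Tool before submitting.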
If you use ASlib in your projects, please cite our AIJ paper:
Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Fréchette, A., Hoos, H., Hutter, F., Leyton-Brown, K., Tierney, K., and Vanschoren, J.
ASlib: A Benchmark Library for Algorithm Selection
In: Artificial Intelligence Journal (AIJ) (2016), pages 41–58