Research & Papers

On the Influence of the Feature Computation Budget on Per-Instance Algorithm Selection for Black-Box Optimization

Spending up to 25% of your budget on features can still make algorithm selection worthwhile.

Deep Dive

Per-instance algorithm selection (PIAS) aims to choose the best optimizer for a given black-box problem based on features computed from that problem. However, computing those features consumes part of the optimization budget—raising two key questions: when does the trade-off become worthwhile, and what fraction of the budget maximizes performance? To answer them, van der Blom and Vermetten ran a comprehensive study spanning 2 portfolio sizes, 3 problem sets, 4 dimensionalities, and 10 target budgets. In each scenario, they compared PIAS with varying feature-sampling budgets against the single best algorithm.

The results show PIAS remains viable even when up to a quarter of the total budget is consumed by feature computation. The optimal fraction of budget allocated to features is highly scenario-dependent, but on average, 20% of PIAS's performance loss relative to the virtual best solver is attributable to the feature budget itself. This underscores the importance of properly accounting for feature costs in any real-world PIAS deployment. The paper is available on arXiv (2605.04954) and targets both evolutionary computing and machine learning audiences.
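The budget trade-off at the heart of the study can be sketched as a toy simulation. Everything here is illustrative rather than taken from the paper: the sphere problem, the random-search portfolio, the single summary "feature," and the trivial selector are all stand-ins for real landscape features and trained selection models.

```python
import random

def sphere(x):
    """Toy black-box problem: sum of squares."""
    return sum(v * v for v in x)

def random_search(problem, dim, budget, rng):
    """Toy optimizer: return the best of `budget` uniform random evaluations."""
    return min(problem([rng.uniform(-5, 5) for _ in range(dim)])
               for _ in range(budget))

def run_pias(problem, dim, portfolio, selector, total_budget,
             feature_fraction, rng):
    """Spend `feature_fraction` of the budget on feature computation;
    hand the remainder to the optimizer chosen by `selector`."""
    feature_budget = int(total_budget * feature_fraction)
    # Feature computation consumes real problem evaluations.
    samples = [problem([rng.uniform(-5, 5) for _ in range(dim)])
               for _ in range(feature_budget)]
    features = {"mean": sum(samples) / len(samples)}  # stand-in for landscape features
    chosen = selector(features)
    return portfolio[chosen](problem, dim, total_budget - feature_budget, rng)

rng = random.Random(0)
portfolio = {"random_search": random_search}
selector = lambda feats: "random_search"  # trivial selector for illustration
best = run_pias(sphere, dim=5, portfolio=portfolio, selector=selector,
                total_budget=200, feature_fraction=0.25, rng=rng)
```

With `feature_fraction=0.25`, a quarter of the 200 evaluations goes to feature sampling and 150 remain for the chosen optimizer, which is exactly the accounting the study argues any real PIAS deployment must make explicit.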

Key Points
  • PIAS remains viable even when 25% of total optimization budget is spent on feature computation.
  • Optimal feature budget fraction is highly scenario-dependent, varying with portfolio size, problem set, dimensionality, and target budget.
  • On average, 20% of PIAS performance loss relative to the virtual best solver is explained by the feature computation budget.

Why It Matters

Practitioners can now budget feature computation intelligently, making algorithm selection practical for real-world black-box problems.