Developer Tools

A Comparative Analysis of Backbone Algorithms for Configurable Software Systems

A new study identifies the best-performing backbone algorithms for analyzing configurable software systems, cutting runtime by over 50%.

Deep Dive

A research team from Universidad Politécnica de Madrid has conducted the first comprehensive evaluation of backbone algorithms specifically for real-world software variability models. The study, which analyzed 2,371 formulas from configurable systems with up to 186,059 variables and 527,240 clauses, reveals that these formulas are structurally distinct from typical SAT competition benchmarks: they exhibit higher clause density, but their individual clauses are simpler. This structural distinction explains why previous studies based on competition formulas reached inconsistent performance conclusions.

The research provides practical guidelines for developers and tool builders working with configurable systems such as product lines. For formulas with 1,000 or fewer variables, the study recommends Algorithm 2/3 (iterative testing with solution filtering), the approach current product-line tools typically implement. For larger formulas, Algorithm 5 (chunked core-based) with adaptive chunk-size selection offers the best performance, potentially reducing runtime by over 50% compared to Algorithm 2/3. However, the study identifies a key research gap: although Algorithm 5 can be highly efficient, its optimal chunk size varies unpredictably across formulas and cannot be reliably estimated in advance.
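The backbone of a satisfiable formula is the set of literals fixed to the same value in every solution. The two algorithm families compared above can be sketched as follows; this is an illustrative sketch, not the paper's code. A toy brute-force `solve` stands in for a real SAT solver, the function names are invented, and the chunked variant is simplified to a single flip-clause per chunk (the paper's Algorithm 5 additionally refines chunks using unsat cores):

```python
from itertools import product

def solve(clauses, num_vars, assumptions=()):
    """Brute-force satisfiability check (stand-in for a real SAT solver).
    Returns one model as a set of literals, or None if unsatisfiable."""
    fixed = {abs(l): l > 0 for l in assumptions}
    free = [v for v in range(1, num_vars + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = {**fixed, **dict(zip(free, bits))}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return {v if val else -v for v, val in assign.items()}
    return None

def backbone_iterative(clauses, num_vars):
    """Algorithm 2/3 style: test one candidate literal per solver call;
    every satisfying model found along the way prunes the candidate set."""
    model = solve(clauses, num_vars)
    if model is None:
        return None  # an unsatisfiable formula has no backbone
    candidates, backbone = set(model), set()
    while candidates:
        lit = candidates.pop()
        if solve(clauses, num_vars, assumptions=[-lit]) is None:
            backbone.add(lit)  # lit cannot be flipped: it is backbone
        else:
            m = solve(clauses, num_vars, assumptions=[-lit])
            candidates &= m    # drop every candidate the new model flips
    return backbone

def backbone_chunked(clauses, num_vars, chunk_size=2):
    """Algorithm 5 idea, simplified: ask the solver to flip any literal of
    a whole chunk at once via the clause (-l1 OR ... OR -lk); an UNSAT
    answer confirms the entire chunk as backbone in one call."""
    model = solve(clauses, num_vars)
    if model is None:
        return None
    candidates, backbone = set(model), set()
    while candidates:
        chunk = [candidates.pop() for _ in range(min(chunk_size, len(candidates)))]
        m = solve(clauses + [[-l for l in chunk]], num_vars)
        if m is None:
            backbone |= set(chunk)      # no model can flip any chunk literal
        else:
            candidates &= m             # filter survivors by the new model
            candidates |= set(chunk) & m  # re-queue chunk literals m kept
    return backbone
```

For the toy CNF `[[1], [1, 2], [-2, 3]]` (variable 1 forced true, variables 2 and 3 free up to one implication), both variants return the backbone `{1}`. The trade-off the study measures is in the number of solver calls: the chunked variant can confirm many backbone literals per UNSAT answer, but picking the chunk size well is exactly the open problem the authors highlight.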

Finally, the findings challenge common assumptions about filtering heuristics, showing they have negligible or even negative effects on performance for variability models. This work establishes a new empirical foundation for algorithm selection in software engineering tools that handle feature models, configuration propagation, and dead code detection, moving beyond artificial benchmarks to real-world system complexity.

Key Points
  • First large-scale evaluation on 2,371 real-world variability model formulas, not artificial SAT benchmarks
  • Algorithm 5 with optimal chunking reduces runtime by over 50% for large formulas (>1,000 variables)
  • Identifies unpredictable optimal chunk size as key research gap for future optimization

Why It Matters

Provides evidence-based algorithm selection for software tools that manage complex configurable systems, improving efficiency in feature modeling and code analysis.