Benchmarking Stopping Criteria for Evolutionary Multi-objective Optimization
New benchmarking method tackles the neglected issue of when to stop EMO algorithms...
Kenji Kitamura and Ryoji Tanabe have introduced a comprehensive benchmarking methodology for stopping criteria in evolutionary multi-objective optimization (EMO). Stopping criteria automatically determine when to halt an evolutionary algorithm to avoid wasting function evaluations on stagnant populations. Despite their importance in real-world applications, stopping criteria have received little attention in the EMO community, with few new developments in recent years. The authors identify the lack of effective benchmarking methodologies as a key reason for this stagnation.
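To make the idea concrete, a common family of stopping criteria halts the run when a population-quality indicator (for example, hypervolume) stops improving. The following is a minimal illustrative sketch of such a stagnation check; the function name and the `window`/`tol` parameters are hypothetical and do not correspond to any specific criterion from the paper.

```python
def stagnation_stop(history, window=10, tol=1e-4):
    """Hypothetical stagnation-based stopping criterion.

    history: per-generation scalar quality values (e.g., hypervolume).
    Returns True when the improvement over the last `window`
    generations falls below `tol`, i.e., the population has stagnated.
    """
    if len(history) < window + 1:
        return False  # not enough generations observed yet
    improvement = history[-1] - history[-1 - window]
    return improvement < tol
```

In practice, such a check would be queried once per generation inside the EMO algorithm's main loop, trading a small indicator-computation cost for potentially large savings in function evaluations.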
To address this, the paper proposes three main contributions: (i) a performance measure that condenses the effectiveness of a stopping criterion into a single scalar value, simplifying comparisons; (ii) a file-based benchmarking approach that enhances reproducibility and streamlines experimentation; and (iii) a data representation method that stores population states efficiently in text files, mitigating file size issues. The team demonstrated their methodology by benchmarking five representative stopping criteria for EMO, showing the practical utility of their approach. This work, accepted at GECCO 2026, provides a foundational tool for advancing research in this critical area.
- Proposes a scalar performance measure for easy comparison of stopping criteria in EMO.
- Introduces a file-based benchmarking framework to enhance reproducibility and simplify testing.
- Validates the methodology by benchmarking five representative stopping criteria for EMO.
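The file-based approach implies serializing a population's state per generation to text, where naive formats can grow large. One plausible compact representation, shown purely as an illustration (the paper's actual format is not reproduced here), is a single line per generation with objective vectors separated by delimiters and values written with limited precision:

```python
import io

def dump_generation(fh, gen, objectives, ndigits=6):
    """Write one generation's objective vectors as one compact text line.

    Format (hypothetical): "<gen> v1,v2;v1,v2;..." with values printed
    to `ndigits` significant digits to limit file size.
    """
    vals = ";".join(
        ",".join(f"{v:.{ndigits}g}" for v in ind) for ind in objectives
    )
    fh.write(f"{gen} {vals}\n")

def load_generations(fh):
    """Parse the text format back into {generation: [[obj, ...], ...]}."""
    out = {}
    for line in fh:
        gen, vals = line.split(" ", 1)
        out[int(gen)] = [
            [float(v) for v in ind.split(",")]
            for ind in vals.strip().split(";")
        ]
    return out
```

A round trip through such a format lets a stopping criterion be benchmarked offline against pre-recorded runs, which is what makes the file-based setup reproducible: every criterion sees exactly the same population trajectory.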
Why It Matters
This benchmarking framework could revitalize research on stopping criteria, improving efficiency in real-world EMO applications.