Generalized Heavy-tailed Mutation for Evolutionary Algorithms
A generalized mutation operator lets the (1+(λ,λ)) Genetic Algorithm reach O(n) expected optimization time on the OneMax benchmark, asymptotically beating any static mutation rate.
A team of researchers including Anton Eremeev, Dmitri Silaev, and Valentin Topchii has published a significant theoretical advance for evolutionary algorithms (EAs) titled 'Generalized Heavy-tailed Mutation for Evolutionary Algorithms.' The work builds on the established 'heavy-tailed mutation' operator introduced in 2017, which samples the mutation rate from a power-law distribution rather than fixing it in advance. The new contribution generalizes this concept by replacing the strict power-law assumption with the broader condition that the distribution be 'regularly varying': informally, its tail must behave like a power law up to a slowly varying correction, so that P(X > tx)/P(X > x) tends to t^(-β) for every t > 0 as x grows. This relaxation provides a more flexible theoretical framework for analyzing and designing mutation operators.
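For intuition, here is a minimal Python sketch of the 2017-style operator in its commonly cited form: a rate parameter alpha is drawn from a power law on {1, ..., n/2}, and standard bit mutation is then applied with rate alpha/n. The function name, the exponent beta=1.5, and the support are illustrative assumptions rather than the authors' code; the paper's generalized operator would allow any regularly varying distribution in place of the power-law weights.

```python
import random

def heavy_tailed_mutation(x, beta=1.5):
    """Sketch of 2017-style heavy-tailed ('fast') mutation: draw alpha
    with P(alpha = i) proportional to i**(-beta) on {1, ..., n/2}, then
    flip each bit independently with probability alpha / n. The paper's
    generalization would only require alpha's distribution to be
    regularly varying (assumed formulation, not the authors' code)."""
    n = len(x)
    support = list(range(1, n // 2 + 1))
    weights = [i ** (-beta) for i in support]  # heavy power-law tail
    alpha = random.choices(support, weights=weights)[0]
    return [1 - b if random.random() < alpha / n else b for b in x]
```

Because large values of alpha retain non-negligible probability under a heavy tail, the operator mixes frequent small steps with occasional large jumps, avoiding the commitment to a single step size that a static rate forces.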
The core finding is that the generalized operator, when applied in the (1+(λ,λ)) Genetic Algorithm, still guarantees an expected optimization time of O(n) on the classic OneMax benchmark problem. This linear-time performance is asymptotically superior to what the same algorithm can achieve with any static, non-adaptive mutation rate. The researchers not only proved these generalized theoretical bounds; they also proposed a concrete new mutation operator that satisfies the conditions and presented promising preliminary computational experiments. The paper, which originated from a 2024 Russian publication and was presented at the MathAI-2026 conference, formalizes a pathway to more robust and theoretically sound adaptive mutation strategies in evolutionary computation.
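To make the algorithmic setting concrete, below is a compact Python sketch of the (1+(λ,λ)) GA on OneMax with a heavy-tailed choice of λ, following the standard textbook formulation: a mutation phase in which all λ offspring flip the same random number ℓ ~ Bin(n, λ/n) of bits, then a crossover phase biased toward the mutation winner with probability 1/λ per bit. All names and parameter values here are illustrative assumptions, not the authors' implementation.

```python
import random

def onemax(x):
    return sum(x)

def sample_lambda(n, beta=2.5):
    # Heavy-tailed (power-law) choice of lambda; the generalized theory
    # admits any regularly varying distribution here (assumed form).
    support = list(range(1, n // 2 + 1))
    weights = [i ** (-beta) for i in support]
    return random.choices(support, weights=weights)[0]

def one_plus_ll_ga(n, f=onemax, max_iters=100_000):
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for _ in range(max_iters):
        if fx == n:  # OneMax optimum: the all-ones string
            break
        lam = sample_lambda(n)
        # Mutation phase: one shared flip count ell ~ Bin(n, lam/n);
        # each of the lam offspring flips exactly ell random bits.
        ell = sum(random.random() < lam / n for _ in range(n))
        best_mut, best_mut_f = None, -1
        for _ in range(lam):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] = 1 - y[i]
            fy = f(y)
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy
        # Crossover phase: take each bit from the mutation winner with
        # probability 1/lam, otherwise keep the parent's bit.
        best, best_f = best_mut, best_mut_f
        for _ in range(lam):
            z = [yb if random.random() < 1 / lam else xb
                 for xb, yb in zip(x, best_mut)]
            fz = f(z)
            if fz > best_f:
                best, best_f = z, fz
        if best_f >= fx:  # elitist acceptance
            x, fx = best, best_f
    return x, fx
```

In this setting, 'optimization time' counts fitness evaluations (roughly 2λ per iteration here), which is the cost measure under which the O(n) bound on OneMax is stated.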
- Generalizes the 2017 'heavy-tailed mutation' concept, requiring only a 'regularly varying' distribution instead of a strict power law.
- Proves the (1+(λ,λ)) GA with the new operator achieves O(n) expected runtime on OneMax, asymptotically beating any static mutation rate.
- Provides a concrete new operator and shows promising experimental results, offering a more flexible design framework for EAs.
Why It Matters
Provides a stronger theoretical foundation for designing faster, more adaptive AI optimization algorithms used in engineering and design.