A Consistency-Centric Approach to Set-Based Optimization with Multiple Models of Unranked Fidelity
A new method finds robust solutions by maximizing consistency across models rather than trusting a single "best" one.
Traditional optimization often assumes one model is the most accurate, but real-world systems are typically described by multiple models of unknown relative fidelity. Morey et al. introduce S-BOMM (Set-Based Optimization with Multiple Models), which forgoes ranking models by accuracy. Instead, it finds a set of solutions that are consistent across all models, reducing reliance on a potentially flawed single reference. This is particularly useful in engineering, finance, and AI, where multiple simulators or data sources coexist.
The authors provide a probabilistic analysis bounding the risk of incorrect solutions, and empirical results on test problems show S-BOMM effectively balances exploration and exploitation. By treating models as equally valid and focusing on their agreement, S-BOMM opens the door to more robust optimization pipelines in machine learning and control, especially when ground truth is unavailable.
Key Points
- S-BOMM works with multiple models without assuming a single highest-fidelity model.
- Uses consistency between models to identify a set of robust solutions.
- Provides probabilistic bounds on the risk of selecting incorrect solutions.
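The consistency criterion can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' formulation: three toy "models" of the same objective, a tolerance `tol`, and a filtering rule that keeps candidates near-optimal under every model, with no model ranked above another.

```python
import numpy as np

# Hypothetical models: slightly different approximations of one
# unknown ground-truth objective (all three are assumptions).
models = [
    lambda x: (x - 1.0) ** 2,
    lambda x: (x - 1.2) ** 2 + 0.05 * x,
    lambda x: (x - 0.9) ** 2 - 0.02 * x,
]

def consistent_set(candidates, models, tol=0.1):
    """Keep candidates that are near-optimal under EVERY model.

    A candidate x survives if, for each model m, m(x) is within
    `tol` of that model's best value over the candidate set.
    No model is treated as more accurate than any other.
    """
    # Best achievable value per model over the candidate set.
    bests = [min(m(c) for c in candidates) for m in models]
    return [x for x in candidates
            if all(m(x) <= b + tol for m, b in zip(models, bests))]

candidates = np.linspace(-1.0, 3.0, 81)
solutions = consistent_set(candidates, models)
```

Each model alone would pick a different optimum (roughly x = 1.0, 1.18, and 0.91 here), but the consistent set is their region of agreement around x ≈ 1, which is exactly the kind of robust solution set the method targets.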
Why It Matters
Enables robust optimization in complex systems where model accuracy is unknown, reducing risk from model misspecification.