Interpretable and Explainable Surrogate Modeling for Simulations: A State-of-the-Art Survey and Perspectives on Explainable AI for Decision-Making
Researchers propose a framework to bridge the gap between fast, black-box surrogate models and explainable AI for engineering decisions.
A new academic survey provides a crucial roadmap for merging two powerful but disconnected fields: surrogate modeling and Explainable AI (XAI). Surrogate models are lightweight AI approximations of complex, computationally expensive simulations used in engineering and science (like computational fluid dynamics or agent-based models). While they speed up analysis, they act as 'black boxes,' obscuring how input variables affect outputs. Conversely, XAI offers tools to unpack AI decisions but often fails to meet engineering-specific needs like handling highly correlated inputs or rigorous reliability standards. This survey, accepted for publication in Archives of Computational Methods in Engineering, systematically maps a broad spectrum of XAI techniques—such as feature importance and interaction detection—onto the various stages of a surrogate modeling workflow, from construction to exploration.
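To make that mapping concrete, here is a minimal, self-contained sketch (illustrative only, not taken from the survey) of the two workflow stages it targets: fitting a Gaussian-process surrogate to a toy stand-in for an expensive simulation, then applying one common XAI technique, permutation feature importance, to rank the inputs. The `expensive_simulation` function, input bounds, and sample sizes are assumptions made for the example.

```python
# Illustrative sketch, assuming a cheap stand-in for a costly solver (e.g., one CFD run per sample).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def expensive_simulation(X):
    """Hypothetical simulator: y = f(x1, x2, x3)."""
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * X[:, 2]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))   # design of experiments over the input space
y = expensive_simulation(X)             # one simulator call per design point

# Surrogate construction stage: train a cheap approximation of the simulator.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
surrogate.fit(X_tr, y_tr)

# Exploration/explanation stage: which inputs drive the surrogate's predictions?
result = permutation_importance(surrogate, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["x1", "x2", "x3"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In this toy setup the importance ranking simply reflects which inputs the stand-in simulator actually uses; in practice the same pattern (fit surrogate, then interrogate it) is what the survey organizes across the workflow.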
The authors ground their synthesis in applications spanning equation-based simulations and agent-based modeling, highlighting techniques that reveal variable interactions and support human comprehension. They identify pressing open challenges, including explainability for dynamical systems and for mixed-variable systems (those combining categorical and numerical inputs). The paper concludes by proposing a research agenda to embed explainability as a core element of simulation-driven workflows. The ultimate goal is to empower practitioners in fields like aerospace, materials science, and climate modeling to move beyond simply accelerating simulations toward extracting trustworthy, actionable insights that inform critical design and policy decisions.
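As one way to see the mixed-variable challenge, the sketch below (an assumed example, not the paper's method; all variable names and data are hypothetical) builds a surrogate over one categorical and two numerical inputs. Because the categorical variable must be one-hot encoded before modeling, explanations computed on the encoded columns have to be aggregated back to the original variable, which is one reason off-the-shelf XAI tooling is awkward for mixed-variable surrogates.

```python
# Illustrative sketch of a mixed-variable surrogate (hypothetical inputs and response).
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(1)
n = 300
X = pd.DataFrame({
    "material": rng.choice(["steel", "aluminium", "composite"], size=n),  # categorical
    "thickness": rng.uniform(1.0, 5.0, size=n),                           # numerical
    "load": rng.uniform(0.0, 10.0, size=n),                               # numerical
})
offset = X["material"].map({"steel": 0.0, "aluminium": 1.5, "composite": 3.0})
y = offset + 0.8 * X["thickness"] - 0.2 * X["load"] + rng.normal(0, 0.1, n)

surrogate = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(), ["material"])], remainder="passthrough")),
    ("model", RandomForestRegressor(random_state=0)),
])
surrogate.fit(X, y)

# Permuting the raw DataFrame columns keeps each categorical variable intact,
# so importance is reported per original (mixed-type) input rather than per encoded column.
result = permutation_importance(surrogate, X, y, n_repeats=10, random_state=0)
for name, imp in zip(X.columns, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```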
- Bridges the gap between fast, opaque surrogate models and Explainable AI (XAI) techniques for engineering and scientific simulations.
- Identifies key challenges, such as handling dynamical systems and correlated inputs, where standard XAI methods often fail (see the sketch after this list).
- Proposes a research agenda to make explainability a core, embedded feature of simulation workflows, from model construction to decision-making.
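The following sketch illustrates the correlated-inputs failure mode with toy data (an assumed example, not drawn from the survey): when one input is a near-copy of another, permutation importance splits the credit between them and evaluates the surrogate on decorrelated, physically implausible input combinations.

```python
# Illustrative sketch: permutation importance under highly correlated inputs (hypothetical data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)
n = 500
x1 = rng.uniform(-1, 1, n)
x2 = x1 + rng.normal(0, 0.01, n)   # near-perfect copy of x1
x3 = rng.uniform(-1, 1, n)         # independent, irrelevant input
X = np.column_stack([x1, x2, x3])
y = np.sin(3 * x1)                 # the response depends on x1 only

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, imp in zip(["x1", "x2", "x3"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
# Expected pattern: x1 and x2 share the credit, so each looks far less important
# than the pair combined, even though the underlying function uses only x1.
```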
Why It Matters
Enables engineers and scientists to trust and understand AI-driven simulation results, leading to better, data-informed design and policy decisions.