Accurate and Reliable Uncertainty Estimates for Deterministic Predictions: Extensions to Under- and Overpredictions
New neural network method captures skewed errors and heavy tails, outperforming restrictive Gaussian assumptions.
A team of researchers including Rileigh Bandy, Enrico Camporeale, and Andong Hu has published a significant extension to the ACCRUE framework for quantifying uncertainty in deterministic computational models. These models inform high-stakes decisions in fields like engineering and finance, where understanding prediction confidence is as important as the prediction itself. The new work addresses key limitations of existing methods: sampling-based approaches are too slow for real-time use, while many uncertainty representations either ignore how uncertainty varies with the inputs or rely on overly simplistic Gaussian (normal) distributions that fail to capture the skewed or heavy-tailed errors seen in practice.
The proposed solution trains a neural network to learn flexible, input-dependent uncertainty distributions, specifically the two-piece Gaussian and asymmetric Laplace forms. It uses a novel loss function that balances predictive accuracy with statistical reliability. In experiments on both synthetic and real-world data, this extended ACCRUE framework successfully captured complex, input-dependent uncertainty structures and delivered improved probabilistic forecasts compared to prior techniques. Crucially, it maintains the computational efficiency needed for practical applications while providing a more honest and complete picture of potential error, including when a model is likely to under-predict or over-predict.
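The paper's training code is not reproduced here, but the core idea of fitting an input-dependent, asymmetric error distribution can be sketched. The snippet below (a minimal illustration; the function name, the rate/asymmetry parametrization, and the constant `kappa` are assumptions, not the authors' implementation) shows the asymmetric Laplace negative log-likelihood that a neural network's output heads for location, scale, and asymmetry could be trained to minimize. A `kappa` far from 1 makes the distribution lopsided, which is what lets the model express whether it is more likely to under-predict or over-predict.

```python
import numpy as np

def asym_laplace_nll(y, loc, scale, kappa):
    """Per-sample negative log-likelihood of an asymmetric Laplace
    distribution with location `loc`, scale `scale` (> 0), and asymmetry
    `kappa` (> 0). kappa == 1 recovers the ordinary symmetric Laplace;
    kappa != 1 penalizes residuals on one side of `loc` more heavily,
    skewing the implied error distribution toward the other side.
    In a trained model, loc/scale/kappa would be input-dependent
    network outputs rather than fixed numbers.
    """
    resid = np.asarray(y) - loc
    lam = 1.0 / scale  # rate parametrization of the scale
    # Residuals above the location are weighted by kappa,
    # residuals below it by 1/kappa.
    weight = np.where(resid >= 0, kappa, 1.0 / kappa)
    return -np.log(lam / (kappa + 1.0 / kappa)) + lam * weight * np.abs(resid)

# Symmetric case at the mode: NLL reduces to log(2 * scale).
print(asym_laplace_nll(0.0, loc=0.0, scale=1.0, kappa=1.0))  # ~log(2) ≈ 0.693
```

Averaging this loss over a training set fits the three distribution parameters jointly; the paper additionally balances such a fit against a reliability term, which this sketch omits.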
- Extends the ACCRUE framework to model non-Gaussian uncertainty using two-piece Gaussian and asymmetric Laplace distributions via a neural network.
- Solves the computational bottleneck of sampling methods and the inflexibility of Gaussian assumptions for real-time, high-stakes decision models.
- Demonstrates improved probabilistic forecast accuracy in experiments by capturing input-dependent, skewed, and heavy-tailed error behavior.
Why It Matters
Enables safer deployment of AI in critical systems like engineering and finance by providing more trustworthy and complete uncertainty estimates.