InverseNet: Benchmarking Operator Mismatch and Calibration Across Compressive Imaging Modalities
Deep learning models for compressive sensing lose all advantage when hardware assumptions are wrong.
Researchers Chengshuai Yang and Xin Yuan have published a critical new benchmark, InverseNet, that exposes a fundamental weakness in AI-powered compressive imaging systems. The study shows that state-of-the-art models such as EfficientSCI can fail catastrophically in real-world deployment, losing up to 20.58 dB in reconstruction quality when the mathematically assumed 'forward operator' (the model of how the hardware captures light) deviates from the physical device. This 'operator mismatch' is the default condition in deployed systems such as CASSI, CACTI, and single-pixel cameras used in medical and scientific imaging. Until now, no benchmark quantified the problem, leaving a dangerous gap between lab results and field performance.
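The core failure mode can be sketched in a few lines. The following is a deliberately simplified toy (an elementwise coded mask, not the paper's actual CASSI/CACTI operators, which involve spectral or temporal multiplexing): the hardware measures through one mask, the solver inverts with a slightly drifted copy, and reconstruction quality drops by tens of dB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy coded-mask measurement (illustrative only; real compressive-imaging
# forward operators are far more structured than an elementwise mask).
n = 4096
x = rng.random(n)                     # ground-truth scene, values in [0, 1)
mask_true = rng.uniform(0.5, 1.0, n)  # physical mask transmittance
y = mask_true * x + 1e-3 * rng.standard_normal(n)  # measurement + sensor noise

# Operator mismatch: the solver's assumed mask has drifted from the physical one.
mask_assumed = mask_true * (1 + 0.1 * rng.standard_normal(n))

def snr_db(ref, est):
    """Reconstruction SNR in dB (higher is better)."""
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est) ** 2))

snr_oracle = snr_db(x, y / mask_true)       # invert with the true operator
snr_mismatch = snr_db(x, y / mask_assumed)  # invert with the assumed operator
print(f"oracle: {snr_oracle:.1f} dB, mismatched: {snr_mismatch:.1f} dB")
```

Even in this linear, trivially invertible setting, a 10% calibration drift costs tens of dB; a learned reconstruction network trained on the wrong operator inherits the same gap.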
The InverseNet benchmark evaluates 12 methods across four scenarios (ideal, mismatched, oracle-corrected, and blind calibration) and yields several stark findings. First, the performance advantage of deep learning methods over classical baselines disappears entirely under mismatch. Second, architectures that ignore the physical mask ('mask-oblivious') recover 0% of the lost performance, while 'operator-conditioned' methods recover 41-90%. Most promisingly, blind calibration via grid search can recover 85-100% of the ideal 'oracle' performance without any ground-truth data. These results, confirmed on real hardware, provide a roadmap for building robust, deployable AI imaging systems that work outside the controlled lab.
- Deep learning models for compressive imaging lose 10-21 dB of performance under real-world 'operator mismatch', erasing their advantage.
- The InverseNet benchmark spans 3 modalities (CASSI, CACTI, single-pixel) and tests 12 methods across 27 simulated and 9 real scenes.
- Blind calibration can recover 85-100% of ideal performance, offering a practical path to robust deployment without ground truth data.
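The blind-calibration idea in the last bullet can be illustrated with the same kind of toy model. This sketch assumes the mismatch is a single unknown parameter (here, an integer shift of the calibration mask, a hypothetical stand-in for the paper's actual operator parameters) and uses total variation of the candidate reconstruction as a no-ground-truth quality score:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy blind calibration via grid search (illustrative; the benchmark searches
# over its real operator parameters, not a 1-D mask shift).
n = 512
x = 0.5 + 0.5 * np.sin(np.linspace(0, 8 * np.pi, n))  # smooth ground-truth scene
mask = rng.uniform(0.5, 1.0, n)                        # nominal calibration mask

k_true = 7                               # unknown shift of the physical mask
y = np.roll(mask, k_true) * x            # measurement from the real device

def tv(v):
    """Total variation: smooth (plausible) reconstructions score low."""
    return np.abs(np.diff(v)).sum()

# Grid search: reconstruct under each candidate shift, keep the smoothest result.
candidates = range(-16, 17)
scores = {k: tv(y / np.roll(mask, k)) for k in candidates}
k_hat = min(scores, key=scores.get)
print(k_hat)  # recovers the true shift on this toy problem
```

The design choice mirrors the benchmark's finding: no ground truth is needed, only a search over candidate operators plus a proxy score that separates correct from incorrect calibrations.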
Why It Matters
This exposes a critical reliability gap for AI in medical/scientific imaging, forcing a shift from lab benchmarks to real-world robustness.