Fairboard: a quantitative framework for equity assessment of healthcare models
Study finds patient identity explains more performance variance than model choice in 11,664 AI inferences.
A research team from University College London has developed Fairboard, a quantitative framework for systematically assessing equity in healthcare AI models. The study, published on arXiv, analyzed 18 open-source brain tumor segmentation models across 648 glioma patients from two independent datasets, yielding 11,664 model inferences. The researchers evaluated equity along four dimensions: univariate, Bayesian multivariate, spatial, and representational analysis. Their findings show that patient identity consistently explains more performance variance than model choice, with clinical factors such as molecular diagnosis, tumor grade, and extent of resection predicting segmentation accuracy more strongly than model architecture.
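The core finding, that patients explain more score variance than models, can be illustrated with a toy variance partition. The sketch below is hypothetical: it simulates a 648-patient-by-18-model table of Dice-like scores (the study's actual scores and method are not reproduced here) and computes an eta-squared-style share of variance for each factor.

```python
# Hypothetical sketch: partitioning segmentation-score variance between
# patient and model factors. Data are simulated; only the grid shape
# (648 patients x 18 models) matches the study.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_models = 648, 18

# Simulate scores where patient effects dominate model effects.
patient_effect = rng.normal(0.0, 0.10, size=(n_patients, 1))
model_effect = rng.normal(0.0, 0.02, size=(1, n_models))
noise = rng.normal(0.0, 0.03, size=(n_patients, n_models))
dice = 0.80 + patient_effect + model_effect + noise  # shape (648, 18)

grand_mean = dice.mean()
total_ss = ((dice - grand_mean) ** 2).sum()

# Eta-squared: each factor's share of the total sum of squares,
# computed from its group (row/column) means.
patient_ss = n_models * ((dice.mean(axis=1) - grand_mean) ** 2).sum()
model_ss = n_patients * ((dice.mean(axis=0) - grand_mean) ** 2).sum()

eta_patient = patient_ss / total_ss
eta_model = model_ss / total_ss
print(f"patient eta^2 = {eta_patient:.2f}, model eta^2 = {eta_model:.2f}")
```

With patient effects simulated larger than model effects, the patient factor absorbs most of the variance, mirroring the qualitative pattern the study reports.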
A voxel-wise spatial meta-analysis identified neuroanatomically localized biases that are compartment-specific yet often consistent across different models. Within a high-dimensional latent space of lesion masks and clinico-demographic features, model performance clusters significantly, indicating that the patient feature space contains specific axes of algorithmic vulnerability. While newer models tended toward greater equity, none provided formal fairness guarantees. The team has released Fairboard as an open-source, no-code dashboard that lowers the technical barrier to equitable model monitoring in medical imaging, addressing a critical gap: formal equity assessments remain rare despite more than 1,000 FDA-authorized AI medical devices.
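A voxel-wise meta-analysis of this kind can be sketched in a few lines. The example below is illustrative only (toy 8×8×8 volumes, synthetic error maps, and an injected bias; none of it comes from the study): it averages per-voxel error differences between two patient groups across models and flags voxels where every model agrees on the bias direction.

```python
# Hypothetical sketch of a voxel-wise bias map: average the per-voxel
# error difference between two patient groups across all models, then
# flag voxels where the bias direction is consistent across models.
import numpy as np

rng = np.random.default_rng(1)
n_models, shape = 18, (8, 8, 8)  # toy volume, not real MRI dimensions

# err_group_*[m] = mean voxel-wise error map for model m on that group.
err_group_a = rng.normal(0.10, 0.02, size=(n_models, *shape))
err_group_b = rng.normal(0.10, 0.02, size=(n_models, *shape))
# Inject a localized bias shared by all models in one "compartment".
err_group_b[:, :2, :2, :2] += 0.05

diff = err_group_b - err_group_a          # per-model bias maps
mean_bias = diff.mean(axis=0)             # meta-analytic average map
# Voxels where all 18 models agree on the sign of the bias.
consistent = np.abs(np.sign(diff).sum(axis=0)) == n_models

print("voxels with model-consistent bias:", int(consistent.sum()))
```

The injected corner shows up with a clearly elevated mean bias, which is the sense in which a localized bias can be "consistent across models" even when individual models differ elsewhere.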
- Analyzed 18 brain tumor segmentation models across 648 patients (11,664 inferences), finding that patient factors outweigh model architecture in explaining performance variance
- Identified neuroanatomically localized biases consistent across models through voxel-wise spatial meta-analysis
- Released Fairboard as open-source, no-code dashboard for accessible equity monitoring in medical AI
Why It Matters
Provides concrete tools to audit healthcare AI for dangerous biases before clinical deployment, addressing a critical regulatory gap.