Using predictive multiplicity to measure individual performance within the AI Act
A provision of the EU AI Act could force AI providers to report a metric most have never measured.
A new paper argues that the EU AI Act's requirement to report model performance for individuals, not just in aggregate, exposes 'predictive multiplicity': the phenomenon where many equally accurate models make conflicting predictions for the same person. In high-risk systems such as hiring or lending, this arbitrariness may put providers in breach of the Act. The authors propose metrics such as individual conflict ratios to quantify the disagreement and argue that providers must report it to deployers, fundamentally changing how model performance is evaluated and disclosed under the regulation.
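To make the idea concrete, here is a minimal sketch of how a per-individual disagreement measure could be computed. The 'individual conflict ratio' is the paper's term; the specific formulation below (the share of near-equally-accurate models that disagree with the majority prediction for each person) and the toy dataset are assumptions for illustration, not the authors' exact definition.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for a high-risk task (e.g., screening applicants).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a set of competing models that differ only in random seed: a crude
# proxy for the set of equally accurate models the paper considers.
models = [
    RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_train, y_train)
    for seed in range(20)
]

# Per-model predictions, shape (n_models, n_individuals).
preds = np.array([m.predict(X_test) for m in models])

# Hypothetical conflict ratio per individual: fraction of models that
# disagree with the majority vote. 0.0 = unanimous; near 0.5 = maximal
# disagreement, i.e., the prediction for that person is arbitrary.
majority = (preds.mean(axis=0) >= 0.5).astype(int)
conflict_ratio = (preds != majority).mean(axis=0)

print(f"Individuals with any conflict: {(conflict_ratio > 0).mean():.1%}")
print(f"Maximum conflict ratio: {conflict_ratio.max():.2f}")
```

The key point the sketch illustrates is that aggregate accuracy can be nearly identical across all twenty models while specific individuals still receive conflicting predictions, which is exactly the gap a per-individual reporting requirement would surface.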
Why It Matters
This could force AI companies to audit and report model uncertainty at the level of individuals rather than whole datasets, a substantial new compliance burden for providers of high-risk systems.