Research & Papers

Fair regression under localized demographic parity constraints

New algorithm enforces demographic parity only at specific decision thresholds, preserving over 90% accuracy.

Deep Dive

A team of researchers from UQAM, SAMM, LAMA, and UdeM has introduced a groundbreaking approach to algorithmic fairness in regression models. Their paper, "Fair regression under localized demographic parity constraints," addresses a critical limitation in current fairness methods: traditional demographic parity (DP) requires predictive distributions to be completely invariant across sensitive groups, which often leads to substantial accuracy degradation in regression tasks. The researchers propose a targeted relaxation of DP that enforces fairness only at specific quantile levels or score thresholds, creating what they term (ℓ, Z)-fair predictors.
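To make the relaxation concrete: an (ℓ, Z)-fair predictor is one whose groupwise CDF hits a prescribed probability level ℓ at each chosen threshold z, rather than matching across the entire distribution. A minimal sketch (not the paper's code; the function name and the max-deviation metric are our assumptions) of measuring how far a predictor is from satisfying such localized constraints:

```python
import numpy as np

def localized_dp_violation(scores, groups, constraints):
    """Worst-case gap between each group's empirical CDF and the
    prescribed level over the (level, threshold) pairs,
    i.e. max over a and (l, z) of |F_a(z) - l|.

    constraints: list of (level, threshold) pairs at which
    demographic parity is enforced.
    """
    worst = 0.0
    for level, z in constraints:
        for a in np.unique(groups):
            f_a = np.mean(scores[groups == a] <= z)  # group-a CDF at z
            worst = max(worst, abs(f_a - level))
    return worst
```

Full demographic parity amounts to driving this gap to zero at every level-threshold pair simultaneously; the localized version keeps the set Z small, e.g. a single lending cutoff.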

This framework imposes groupwise cumulative distribution function (CDF) constraints at prescribed pairs of probability levels and thresholds, allowing for more nuanced fairness interventions. The team provides closed-form characterizations of optimal fair discretized predictors through Lagrangian dual formulations and quantifies the discretization cost, demonstrating that the risk gap to the continuous optimum vanishes as the grid is refined. They've developed a model-agnostic post-processing algorithm that works with two samples—one labeled for learning a base regressor and another unlabeled for calibration—and established finite-sample guarantees on both constraint violation and excess penalized risk.
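The paper derives its optimal post-processed predictor in closed form via a Lagrangian dual; the sketch below is only a simplified stand-in for the model-agnostic recipe (all names are assumptions, and the piecewise-linear transport is our choice, not the paper's construction). On an unlabeled calibration sample, it fits one monotone map per group so that each group's CDF matches the pooled CDF at the chosen thresholds, leaving scores far from the thresholds only mildly perturbed:

```python
import numpy as np

def localized_dp_map(scores_cal, groups_cal, thresholds):
    """Per group, build a monotone piecewise-linear remap that makes the
    group CDF equal the pooled (marginal) CDF at each threshold.
    thresholds must be sorted and lie inside the score range."""
    maps = {}
    # pooled CDF levels at the chosen thresholds
    p = np.array([np.mean(scores_cal <= t) for t in thresholds])
    lo, hi = scores_cal.min(), scores_cal.max()
    for a in np.unique(groups_cal):
        s_a = scores_cal[groups_cal == a]
        # group quantiles at the pooled levels: these anchors map onto t
        q_a = np.quantile(s_a, p)
        xs = np.concatenate(([lo], q_a, [hi]))
        ys = np.concatenate(([lo], thresholds, [hi]))
        maps[a] = (xs, ys)
    return maps

def apply_map(maps, scores, groups):
    """Post-process base-regressor scores with the per-group remaps."""
    out = np.empty_like(scores, dtype=float)
    for a, (xs, ys) in maps.items():
        m = groups == a
        out[m] = np.interp(scores[m], xs, ys)
    return out
```

After remapping, a group member falls below threshold t exactly when its original score falls below the group's matched quantile, so the group CDF at t coincides with the pooled CDF there, while the rest of the score distribution is deformed only by linear interpolation.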

The research introduces two alternative frameworks where group and marginal CDF values are matched at selected score thresholds, with closed-form solutions provided for optimal fair discretized predictors in both settings. Experiments on synthetic and real-world datasets demonstrate an interpretable fairness-accuracy trade-off, enabling targeted corrections at decision-relevant points while preserving predictive performance. This represents a significant advancement over traditional fairness approaches that often sacrifice too much accuracy for theoretical fairness guarantees.

Key Points
  • Proposes (ℓ, Z)-fair predictors that enforce demographic parity only at specific quantiles/thresholds instead of full distributions
  • Develops model-agnostic post-processing algorithm with finite-sample guarantees on constraint violation and excess risk
  • Demonstrates interpretable fairness-accuracy trade-offs on real datasets with targeted corrections at decision-relevant points

Why It Matters

Enables practical deployment of fair AI systems in high-stakes domains like lending and hiring without sacrificing predictive accuracy.