Post-hoc Provider Fairness Adaptation via Hierarchical Exposure Alignment
Say goodbye to retraining: PFA adjusts provider exposure with minimal accuracy loss
Current approaches to provider exposure fairness in recommender systems either require expensive retraining whenever fairness objectives change or rely on rigid post-hoc reranking with fixed criteria. Researchers from multiple institutions introduce Post-hoc Fairness Adaptation (PFA), a lightweight framework that equips a frozen recommender with a fairness adapter. The adapter learns personalized additive score adjustments from user-item embeddings and adds them to the original ranking scores, steering exposure toward a target fair distribution. To train the adapter, PFA minimizes the KL divergence between the actual and target exposure distributions. However, such a global objective ignores structural disparities like imbalanced group sizes and heterogeneous exposure within groups.
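The core mechanism can be illustrated with a minimal sketch: provider exposure under a ranking is computed with a position-discount weighting, and a KL divergence measures its gap to a target distribution. All names, the DCG-style discount, the toy scores, and the hand-set `delta` standing in for the learned adapter output are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def provider_exposure(scores, providers, n_providers):
    """Exposure each provider receives under the ranking induced by `scores`.
    Uses the standard DCG-style position discount 1/log2(rank + 2) as an
    assumed exposure weight, normalized to a probability distribution."""
    order = np.argsort(-scores)                       # best item first
    weights = 1.0 / np.log2(np.arange(2, len(scores) + 2))
    exposure = np.zeros(n_providers)
    for rank, item in enumerate(order):
        exposure[providers[item]] += weights[rank]
    return exposure / exposure.sum()

def kl_to_target(actual, target, eps=1e-12):
    """KL(actual || target) between two exposure distributions."""
    return float(np.sum(actual * np.log((actual + eps) / (target + eps))))

# Frozen backbone scores plus an additive adjustment (hypothetical adapter output).
base_scores = np.array([0.9, 0.8, 0.7, 0.2, 0.1])
providers   = np.array([0, 0, 0, 1, 1])              # items 0-2 from provider 0
delta       = np.array([0.0, 0.0, -0.6, 0.3, 0.3])   # stands in for the adapter
target      = np.array([0.5, 0.5])                   # e.g. equal-exposure target

before = kl_to_target(provider_exposure(base_scores, providers, 2), target)
after  = kl_to_target(provider_exposure(base_scores + delta, providers, 2), target)
```

Here the adjustment promotes the under-exposed provider's items, so `after` is closer to the target than `before`; in PFA the adapter would learn such adjustments by gradient descent on the KL term rather than by hand.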
To address this, PFA incorporates Hierarchical Exposure Fairness Alignment (HEFA), which explicitly balances inter- and intra-group provider exposure disparities, enabling flexible adaptation to diverse fairness requirements. To preserve ranking quality, PFA jointly optimizes HEFA with a differentiable NDCG loss, allowing end-to-end fairness optimization without sacrificing accuracy. Extensive experiments on three public datasets demonstrate that PFA achieves substantial fairness gains with negligible accuracy loss, consistently outperforming strong baselines. The method is particularly valuable for production systems where retraining is costly and fairness goals evolve over time.
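For the ranking-quality term, the summary only says the NDCG loss is differentiable; one common way to achieve that (an assumption here, not necessarily the paper's choice) is an ApproxNDCG-style surrogate that replaces hard ranks with soft ranks built from pairwise sigmoids:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def approx_ndcg(scores, rels, tau=0.1):
    """ApproxNDCG-style smooth surrogate (assumed, not the paper's exact loss):
    the rank of item i is approximated by 1 + sum_j sigmoid((s_j - s_i) / tau),
    which is differentiable in `scores`, so 1 - approx_ndcg can be minimized
    jointly with the fairness objective by gradient descent."""
    diff = (scores[None, :] - scores[:, None]) / tau   # diff[i, j] = s_j - s_i
    soft_rank = 1.0 + sigmoid(diff).sum(axis=1) - 0.5  # drop the self-comparison
    dcg = np.sum((2.0 ** rels - 1.0) / np.log2(1.0 + soft_rank))
    ideal = np.sort(rels)[::-1]
    idcg = np.sum((2.0 ** ideal - 1.0) / np.log2(2.0 + np.arange(len(rels))))
    return dcg / idcg

# A joint objective in the spirit of PFA (lam is a hypothetical trade-off weight):
#   loss = kl_fairness_term + lam * (1.0 - approx_ndcg(base_scores + delta, rels))
```

With small `tau` the soft ranks approach the true ranks, so a perfectly ordered list scores near 1.0 while a reversed one scores lower; the temperature trades off gradient smoothness against fidelity to the hard metric.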
- No retraining: PFA works with frozen backbone models, adding a lightweight adapter for flexible fairness control
- Hierarchical Exposure Fairness Alignment (HEFA) explicitly balances both inter-group and intra-group provider exposure disparities
- Joint optimization with differentiable NDCG loss preserves ranking quality while achieving significant fairness gains on three datasets
Why It Matters
Enables dynamic fairness adaptation in production recommenders without costly retraining or rigid reranking.