Aligning Recommendations with User Popularity Preferences
New AI technique adapts to each user's taste for popular or niche content, improving alignment by up to 40%.
A team of researchers, including Mona Schirmer and six others, has published a paper at FAccT 2026 tackling the pervasive issue of popularity bias in AI recommender systems. Their work introduces a two-part solution. The first part is a measurement framework, Popularity Quantile Calibration, which quantifies the misalignment between a user's historical preference for popular or niche items and what the system actually recommends. The second is a novel mitigation method named SPREE.
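The core idea of measuring misalignment can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact metric: it maps each item to its popularity quantile (by interaction count) and compares the quantiles of a user's history with those of their recommendations. The function names and the choice of mean-quantile gap as the score are assumptions for illustration.

```python
import numpy as np

def popularity_quantiles(item_counts):
    """Map each item id to its popularity quantile in [0, 1] (illustrative)."""
    items = sorted(item_counts, key=item_counts.get)  # least to most popular
    return {item: rank / (len(items) - 1) for rank, item in enumerate(items)}

def quantile_miscalibration(history, recs, quantiles):
    """One plausible misalignment score: the absolute gap between the mean
    popularity quantile of a user's history and of their recommendations."""
    h = np.mean([quantiles[i] for i in history])
    r = np.mean([quantiles[i] for i in recs])
    return abs(h - r)

# Toy catalogue: interaction counts per item id.
counts = {"a": 500, "b": 120, "c": 40, "d": 5}
q = popularity_quantiles(counts)

# A niche-leaning user whose recommendations skew popular scores high.
print(quantile_miscalibration(["c", "d"], ["a", "b"], q))  # → ~0.667
```

A score near 0 would indicate a well-calibrated user, while a score near 1 flags a user whose feed is maximally misaligned with their taste for popular or niche content.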
SPREE (Sequential Popularity Recommendation via Activation Steering) operates at inference time for sequential recommenders. It identifies a "popularity direction" within the model's representation space and then uses a technique called activation steering to dynamically adjust recommendations. Crucially, SPREE personalizes this intervention, varying both the direction (toward more or less popular items) and the magnitude of the steering based on an estimate of each individual user's personal popularity bias. This user-level approach aims for precise alignment rather than applying a uniform, global reduction in popularity, which can harm overall recommendation quality.
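The steering mechanism described above can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the popularity direction is derived as a difference of mean hidden states between popular and niche items (one common way to build a steering vector), and the per-user sign and magnitude come from a hypothetical `user_bias_gap` estimate of how much the user's recommendations over- or under-shoot their popularity preference.

```python
import numpy as np

rng = np.random.default_rng(0)

def popularity_direction(pop_states, niche_states):
    """Unit 'popularity direction': difference of mean hidden states of
    popular vs. niche items (an assumed construction, for illustration)."""
    d = pop_states.mean(axis=0) - niche_states.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden, direction, user_bias_gap):
    """Shift a user's hidden state along the popularity direction.
    user_bias_gap > 0 means recommendations skew more popular than the
    user's taste, so we steer toward niche (negative sign), and vice versa."""
    alpha = -user_bias_gap  # personalized sign and magnitude
    return hidden + alpha * direction

# Toy hidden states for popular vs. niche items (hidden dimension 8).
pop = rng.normal(1.0, 0.1, size=(50, 8))
niche = rng.normal(-1.0, 0.1, size=(50, 8))
d = popularity_direction(pop, niche)

h = rng.normal(size=8)
h_steered = steer(h, d, user_bias_gap=0.5)  # this user's recs skew popular
# The projection onto the popularity direction decreases after steering.
print(h_steered @ d < h @ d)  # → True
```

Because the intervention is a vector addition at inference time, no retraining is needed, and setting the magnitude per user is what distinguishes this from a uniform global debiasing shift.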
Experiments conducted across multiple datasets demonstrate that SPREE consistently improves user-level popularity alignment. The method successfully shifts recommendations to better match individual preferences without degrading standard quality metrics, offering a more nuanced and effective tool than previous debiasing techniques. This research provides a concrete framework and tool for developers aiming to build fairer, more personalized, and less homogenized recommendation engines.
- Introduces Popularity Quantile Calibration, a framework to measure misalignment between user preferences and recommendation popularity.
- Proposes SPREE, a personalized inference-time method using activation steering to adapt recommendations per user.
- Shown to improve alignment while preserving recommendation quality across multiple datasets, in contrast to blunt global debiasing.
Why It Matters
Enables platforms to move beyond a homogenized 'rich-get-richer' feed, delivering truly personalized and diverse content to users.