Sequence-aware Large Language Models for Explainable Recommendation
New framework integrates user behavior sequences and utility-aware evaluation to generate better explanations.
A team of researchers has introduced a new AI framework designed to make recommendation systems not only more accurate but also genuinely understandable. The paper, titled "Sequence-aware Large Language Models for Explainable Recommendation," proposes SELLER (SEquence-aware LLM-based framework for Explainable Recommendation). The core innovation addresses a key weakness in current methods: most LLM-based recommenders generate explanations in a vacuum, ignoring the crucial sequential nature of user behavior—like how watching one movie leads to another. SELLER tackles this by integrating a dual-path encoder that captures both the chronological sequence of user actions and the semantic meaning of items.
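The dual-path idea can be illustrated with a minimal sketch: one path summarizes the chronological order of interactions, the other pools item semantics. All shapes, weighting schemes, and function names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def dual_path_encode(item_ids, item_text_embs, emb_dim=8, seed=0):
    """Hedged sketch of a dual-path encoder: a behavioral path over
    ID embeddings weighted by recency, plus a semantic path that
    mean-pools precomputed item text embeddings."""
    rng = np.random.default_rng(seed)
    # Behavioral path: random stand-ins for learned ID embeddings,
    # with later interactions weighted more heavily (recency bias)
    id_table = rng.normal(size=(max(item_ids) + 1, emb_dim))
    recency = np.linspace(0.5, 1.0, num=len(item_ids))
    behavior_vec = (id_table[item_ids] * recency[:, None]).sum(axis=0)
    # Semantic path: average the items' text embeddings
    semantic_vec = item_text_embs.mean(axis=0)
    # Fuse both paths into a single user representation
    return np.concatenate([behavior_vec, semantic_vec])

# toy usage: a user watched 3 movies, each with an 8-dim text embedding
seq = [2, 0, 1]
text_embs = np.ones((3, 8))
user_repr = dual_path_encode(seq, text_embs)
print(user_repr.shape)  # (16,)
```

The key design point is that order matters: permuting `seq` changes the behavioral half of the output even though the semantic half stays the same.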
To bridge this complex data with an LLM's language capabilities, the researchers employ a Mixture-of-Experts (MoE) adapter. This component acts as a translator, aligning the encoded behavioral and semantic signals into a form the model can process. SELLER also introduces a unified evaluation framework that goes beyond standard text-quality metrics: it assesses explanations on both their linguistic coherence and, more importantly, their tangible effect on the final recommendation's utility and user satisfaction. According to the authors, experiments on public benchmarks show that SELLER consistently outperforms previous state-of-the-art methods, marking a significant step toward AI systems that can clearly articulate the 'why' behind their suggestions.
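A generic MoE adapter of the kind described can be sketched as follows: a gating network scores each expert, and the experts' projections of the user representation are combined by those weights. The expert count, dimensions, and random projections are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_adapter(user_repr, num_experts=4, llm_dim=12, seed=0):
    """Illustrative Mixture-of-Experts adapter: route the encoded user
    representation through several expert projections into the LLM's
    input space, weighted by a learned-style gate (random here)."""
    rng = np.random.default_rng(seed)
    d = user_repr.shape[0]
    gate_w = rng.normal(size=(num_experts, d))            # gating network
    experts = rng.normal(size=(num_experts, llm_dim, d))  # expert projections
    weights = softmax(gate_w @ user_repr)                 # routing weights
    projected = experts @ user_repr                       # (num_experts, llm_dim)
    aligned = weights @ projected                         # weighted expert mix
    return aligned, weights

# toy usage: adapt a 16-dim user representation for a 12-dim LLM input slot
aligned, weights = moe_adapter(np.ones(16))
print(aligned.shape)  # (12,)
```

The gate lets different experts specialize, e.g. on behavior-heavy versus semantics-heavy users, while the softmax keeps the combination a convex mixture.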
- Proposes SELLER framework with a dual-path encoder capturing user behavior sequences and item semantics.
- Uses a Mixture-of-Experts adapter to align behavioral data with LLMs for coherent explanation generation.
- Introduces a unified utility-aware evaluation assessing both text quality and impact on recommendation outcomes.
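The unified evaluation described in the last bullet might be expressed, in its simplest form, as a blend of a text-quality metric with the lift in recommendation utility when the explanation is shown. The additive form and the weight `alpha` are assumptions for illustration; the paper's actual formula may differ.

```python
def utility_aware_score(text_quality, util_with_expl, util_without_expl,
                        alpha=0.5):
    """Hedged sketch of a unified score: combine a precomputed
    text-quality metric (e.g., BLEU or BERTScore) with the change in
    recommendation utility attributable to the explanation."""
    utility_lift = util_with_expl - util_without_expl
    return alpha * text_quality + (1 - alpha) * utility_lift

# toy usage: coherent text, and the explanation raises click-through utility
score = utility_aware_score(text_quality=0.72,
                            util_with_expl=0.31,
                            util_without_expl=0.25)
print(round(score, 3))  # 0.39
```

The point of such a score is that a fluent but useless explanation (zero utility lift) is penalized relative to one that actually improves outcomes.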
Why It Matters
Moves AI recommendations from opaque black boxes to transparent systems users can understand and trust, improving adoption.