EviSnap: Faithful Evidence-Cited Explanations for Cold-Start Cross-Domain Recommendation
New framework uses LLMs to distill reviews into 'facet cards' for transparent, auditable recommendations.
Researchers Yingjun Dai and Ahmed El-Roby have introduced EviSnap, a novel framework designed to solve the 'black box' problem in cold-start cross-domain recommendation (CDR) systems. These systems aim to predict a user's preferences in a new domain (like Movies) based solely on their behavior in a source domain (like Books). Existing models either rely on opaque embedding transfers or generate post-hoc rationales that are hard to verify. EviSnap tackles this by building explanations directly into its architecture, ensuring every prediction is backed by citable evidence from user reviews.
EviSnap's process is a two-stage pipeline. First, it uses a large language model (LLM) offline to distill thousands of noisy product reviews into compact, structured 'facet cards.' Each card represents a specific product aspect (like 'plot complexity' or 'character development') and is paired with verbatim sentences from reviews that support it. These facets are then clustered to form a domain-agnostic 'concept bank.' The system calculates user preferences and item traits based on evidence-weighted activations of these concepts.
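The evidence-weighting idea can be sketched in a few lines. Everything below is illustrative: the facet names, concept IDs, and the choice to weight a concept by its count of supporting evidence sentences are assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

# Hypothetical facet cards: (facet name, concept ID in the shared
# concept bank, verbatim evidence sentences quoted from reviews).
facet_cards = [
    ("plot complexity", "narrative_depth", ["The plot twists kept me guessing."]),
    ("character development", "character_arcs", ["Every character grows over the trilogy."]),
    ("plot complexity", "narrative_depth", ["Layered subplots reward rereading."]),
]

def concept_activations(cards):
    """Toy evidence-weighted activation: each concept's weight is the
    share of verbatim evidence sentences that support it."""
    acts = defaultdict(float)
    total = 0
    for _facet, concept, evidence in cards:
        acts[concept] += len(evidence)
        total += len(evidence)
    # Normalize to a distribution over concepts.
    return {c: w / total for c, w in acts.items()}

user_profile = concept_activations(facet_cards)
# Two of three evidence sentences support 'narrative_depth'.
```

Because every activation traces back to quoted sentences, each number in `user_profile` is auditable against the original reviews.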
A key innovation is the simplicity of its cross-domain transfer. Instead of complex neural mappings, EviSnap uses a single linear layer to map a user's concept activations from a source domain (e.g., Books) to a target domain (e.g., Movies). A final linear scoring head produces a recommendation score that is inherently decomposable, showing exactly which concepts contributed and by how much. This enables 'what-if' counterfactual edits, where users or developers can see how a score would change if a cited piece of evidence were added or removed.
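The transfer-and-score step can be sketched as below. The concept-bank size, random weights, and elementwise user-item interaction are all assumptions for illustration; the point is that with a linear map and linear head, the score decomposes exactly into per-concept contributions, so zeroing one concept (a counterfactual edit) changes the score by precisely that concept's term.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4  # size of the shared concept bank (illustrative)

# Source-domain (Books) user activations and target-domain (Movies) item traits.
user_books = rng.random(K)
item_movies = rng.random(K)

# Single linear layer mapping source concept activations into the target domain.
W = rng.random((K, K))
user_movies = W @ user_books

# Linear scoring head: the score is an exact sum of per-concept terms.
w_score = rng.random(K)
contributions = w_score * user_movies * item_movies
score = contributions.sum()

# Counterfactual edit: zero out concept 0, as if its cited evidence
# were removed, and measure the exact change in the score.
edited = user_movies.copy()
edited[0] = 0.0
score_without = (w_score * edited * item_movies).sum()
delta = score - score_without  # equals contributions[0] exactly
```

The decomposition means no attribution method is needed after the fact: the per-concept terms *are* the explanation.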
The team validated EviSnap on the large-scale Amazon Reviews dataset, testing six transfer tasks between the Books, Movies, and Music domains. The framework not only consistently outperformed strong baseline models in accuracy but also passed rigorous deletion- and sufficiency-based tests for explanation faithfulness. This means its cited evidence is both necessary and sufficient for the predictions it makes, moving beyond plausible-sounding but ungrounded LLM-generated rationales.
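A deletion-style necessity check can be sketched as follows. The scorer, concept names, and weights are hypothetical; the idea is simply that ablating the concepts backed by cited evidence should erase most of the score, otherwise the citations were not actually driving the prediction.

```python
# Hypothetical linear scorer over named concept activations.
weights = {"narrative_depth": 0.8, "character_arcs": 0.5, "pacing": 0.3}

def score_fn(acts):
    return sum(weights[c] * v for c, v in acts.items())

def deletion_drop(acts, cited):
    """Deletion test: zero the cited concepts and measure the score drop.
    A faithful explanation's citations account for most of the score."""
    ablated = {c: (0.0 if c in cited else v) for c, v in acts.items()}
    return score_fn(acts) - score_fn(ablated)

acts = {"narrative_depth": 1.0, "character_arcs": 0.5, "pacing": 0.1}
drop = deletion_drop(acts, {"narrative_depth", "character_arcs"})
# drop = 0.8 * 1.0 + 0.5 * 0.5 = 1.05
```

The mirror-image sufficiency test keeps only the cited concepts and checks that the score stays close to the full score.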
- Uses an LLM offline to create 'facet cards' from reviews, each with verbatim supporting evidence sentences.
- Employs a simple linear map for cross-domain user transfer, enabling exact score decomposition and counterfactual edits.
- Outperformed baselines in six Amazon domain transfers (Books, Movies, Music) while passing tests for explanation faithfulness.
Why It Matters
Provides transparent, auditable AI recommendations for e-commerce and streaming, building user trust and enabling actionable feedback.