Tuning-Free LLM Can Build A Strong Recommender Under Sparse Connectivity And Knowledge Gap Via Extracting Intent
A new AI framework uses a tuning-free LLM to extract user intent, boosting recommendations for cold-start items by 15%.
A team of researchers has introduced the Intent Knowledge Graph Recommender (IKGR), a novel framework that addresses core weaknesses in modern AI-powered recommendation systems. Current methods often rely on broad commonsense knowledge or on existing, sparse knowledge graphs, and therefore struggle to capture precise user desires or to surface niche and new items. IKGR's key innovation is using a tuning-free large language model (LLM) guided by retrieval-augmented generation (RAG) to explicitly extract granular 'intent' concepts—like 'cozy winter getaway' or 'beginner-friendly yoga'—from both user profiles and item descriptions. These intents become first-class nodes in a newly constructed knowledge graph, directly linking users to what they seek and items to what they offer.
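The idea of turning extracted intents into first-class graph nodes can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function names (`extract_intents`, `build_intent_graph`) are invented, and the LLM+RAG call is stubbed with a canned lookup so the sketch runs offline.

```python
# Hypothetical sketch of IKGR-style intent extraction. All names are
# illustrative; the real system would prompt a frozen LLM with retrieved
# context rather than use the stub below.

def extract_intents(text: str) -> list[str]:
    """Placeholder for a tuning-free LLM + RAG call that returns short
    intent phrases for a user profile or item description."""
    # Stubbed lookup so the sketch is runnable without any model.
    canned = {
        "Loves quiet cabins and snow hikes": ["cozy winter getaway"],
        "Mountain lodge with fireplace and trails": ["cozy winter getaway"],
    }
    return canned.get(text, [])

def build_intent_graph(users: dict, items: dict) -> dict:
    """Build an undirected user-item-intent graph as an adjacency dict.
    Intent phrases become first-class nodes linked to users and items."""
    adj: dict[str, set[str]] = {}

    def link(a: str, b: str) -> None:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    for uid, profile in users.items():
        for intent in extract_intents(profile):
            link(uid, f"intent:{intent}")
    for iid, desc in items.items():
        for intent in extract_intents(desc):
            link(iid, f"intent:{intent}")
    return adj

users = {"u1": "Loves quiet cabins and snow hikes"}
items = {"i9": "Mountain lodge with fireplace and trails"}
graph = build_intent_graph(users, items)
# u1 and i9 now share the node "intent:cozy winter getaway", giving a
# two-hop semantic path even with no interaction history between them.
```

The point of the sketch is the shared intermediate node: a user and an item never seen together still become linked through the intent they have in common.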
To tackle data sparsity, the framework employs a mutual-intent connectivity strategy, which creates shorter semantic paths between users and hard-to-recommend 'long-tail' items without needing complex data fusion. Finally, a lightweight Graph Neural Network (GNN) layer operates on this intent-enhanced graph to produce fast, accurate recommendations. In extensive testing on public and private datasets, IKGR consistently outperformed strong baseline models. Its most significant gains were on cold-start scenarios (new users or items with little data) and long-tail item recommendations, all while maintaining efficiency through a fully offline LLM processing pipeline that avoids costly model fine-tuning.
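How those shorter semantic paths help a long-tail item can be illustrated with one round of message passing. This is a hypothetical sketch, not the paper's actual GNN layer: mean aggregation over an adjacency dict stands in for whatever propagation rule IKGR uses, and the toy graph and embeddings are invented.

```python
# Minimal, hypothetical sketch of a lightweight GNN step over the
# user-item-intent graph: one round of mean-aggregation message passing.
# The aggregation rule and all node names are illustrative assumptions.

def gnn_layer(adj: dict[str, set[str]],
              emb: dict[str, list[float]]) -> dict[str, list[float]]:
    """One propagation step: each node's new embedding is the average of
    its own embedding and its neighbors' embeddings."""
    out = {}
    for node, vec in emb.items():
        msgs = [emb[n] for n in adj.get(node, ()) if n in emb]
        pooled = [vec] + msgs
        out[node] = [sum(v[d] for v in pooled) / len(pooled)
                     for d in range(len(vec))]
    return out

# Toy graph: long-tail item i9 has no direct interactions, but shares an
# intent node with user u1, so signal reaches it in two propagation steps.
adj = {
    "u1": {"intent:cozy"},
    "i9": {"intent:cozy"},
    "intent:cozy": {"u1", "i9"},
}
emb = {"u1": [1.0, 0.0], "i9": [0.0, 0.0], "intent:cozy": [0.0, 0.0]}

h1 = gnn_layer(adj, emb)  # the intent node picks up u1's signal
h2 = gnn_layer(adj, h1)   # i9 receives it via the shared intent node
```

After the first step `i9` is still all zeros; after the second, the user's signal has crossed the shared intent node, which is the densification effect the mutual-intent strategy aims for.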
- Uses a tuning-free LLM with RAG to extract explicit 'intent' nodes, creating a novel user-item-intent knowledge graph.
- Introduces a mutual-intent connectivity strategy to densify the graph, improving recommendations for sparse, long-tail items by over 15%.
- Outperforms existing baselines on cold-start scenarios while remaining efficient via a lightweight GNN and fully offline processing.
Why It Matters
This enables more accurate, personalized recommendations for new users and niche products without the high cost of fine-tuning LLMs.