Research & Papers

Rethinking Convolutional Networks for Attribute-Aware Sequential Recommendation

Researchers propose ConvRec, a linear-complexity convolutional model that outperforms attention-based models on four datasets.

Deep Dive

A new paper from researchers at the University of Hildesheim proposes ConvRec, a convolutional approach to sequential recommendation that challenges the dominance of self-attention mechanisms. Existing attribute-aware sequential recommendation models typically use self-attention to aggregate user history into a unified representation, but suffer from quadratic computational and memory complexity that limits their ability to process long sequences. ConvRec addresses this by employing convolutional layers in a hierarchical, down-scaled fashion, generating compact yet expressive sequence representations with linear complexity. Each layer gradually aggregates neighboring items to build a comprehensive understanding of user preferences.
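The hierarchical, down-scaled aggregation described above can be sketched as a stack of strided convolutions, each halving the sequence length. The sketch below is my own illustration of that idea, not the paper's exact architecture: the kernel size, ReLU activation, final mean-pooling, and names like `conv_layer` and `hierarchical_encode` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_layer(x, w, stride=2):
    """Strided 1D convolution over a (seq_len, dim) item sequence.

    w has shape (kernel, d_in, d_out); stride 2 halves the sequence length,
    so the total work across all layers is n + n/2 + n/4 + ... = O(n).
    """
    k, d_in, d_out = w.shape
    out = []
    for start in range(0, x.shape[0] - k + 1, stride):
        window = x[start:start + k]                  # (k, d_in)
        out.append(np.einsum('kd,kde->e', window, w))
    return np.maximum(np.stack(out), 0.0)            # ReLU

def hierarchical_encode(x, weights):
    """Down-scale layer by layer, then pool into one compact user vector."""
    for w in weights:
        x = conv_layer(x, w)
    return x.mean(axis=0)

n, d = 16, 8                                         # toy: 16 items, dim-8 embeddings
x = rng.normal(size=(n, d))
weights = [rng.normal(scale=0.1, size=(2, d, d)) for _ in range(4)]  # 16→8→4→2→1
rep = hierarchical_encode(x, weights)
print(rep.shape)                                     # single dim-8 representation
```

Each layer merges neighboring items, so deeper layers summarize progressively larger spans of the user's history, which is the intuition behind "compact yet expressive" sequence representations.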

Extensive experiments on four real-world datasets (including MovieLens, Amazon, and Yelp) demonstrate that ConvRec outperforms state-of-the-art models such as SASRec, BERT4Rec, and S3-Rec. The model achieves higher accuracy while requiring significantly less memory and computation, making it practical for long user histories. Accepted at IJCAI-ECAI 2026, the work highlights the untapped potential of convolution-based architectures for efficient sequence modeling. The implementation code and datasets are publicly available, enabling further research and integration into production recommendation systems.

Key Points
  • ConvRec achieves linear O(n) complexity vs quadratic O(n²) in self-attention models, enabling long-sequence processing
  • Outperforms state-of-the-art models (SASRec, BERT4Rec, S3-Rec) on four real-world datasets
  • Accepted at IJCAI-ECAI 2026 with open-source code and datasets available
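As a quick sanity check on the first point (my own illustration, not the paper's analysis): if each layer halves the sequence, the total number of positions processed is a geometric series bounded by 2n, i.e. O(n).

```python
def total_work(n):
    """Positions processed by successive halving layers: n + n/2 + ... + 2."""
    total = 0
    while n > 1:
        total += n
        n //= 2
    return total

for n in (1024, 4096, 16384):
    print(n, total_work(n), total_work(n) / n)   # ratio stays just under 2
```

Self-attention, by contrast, compares every position with every other, so its cost grows as n², which is what makes long user histories expensive.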

Why It Matters

Linear-complexity recommendation models enable longer user histories, improving personalization at scale without exploding costs.