TimeMM: Time-as-Operator Spectral Filtering for Dynamic Multimodal Recommendation
New model adapts to evolving user tastes by weighting visual and text cues differently over time.
TimeMM, developed by Wei Yang and colleagues, introduces a novel approach to multimodal recommendation by integrating time as a core operator. Instead of relying on static interaction graphs or coarse temporal heuristics, it uses parametric temporal kernels to reweight edges on user-item graphs based on interaction recency. This allows the model to generate component-specific representations without explicit eigendecomposition, enabling it to capture the continuous evolution of user preferences.
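The edge-reweighting idea can be sketched with a toy kernel. The paper's temporal kernels are parametric and learned; the exponential decay below, and the `decay_rate` parameter, are illustrative assumptions, not TimeMM's actual kernel:

```python
import numpy as np

def temporal_edge_weights(interaction_ages, decay_rate=0.1):
    """Reweight user-item graph edges by interaction recency.

    Uses an exponential-decay kernel as a stand-in for the learned
    parametric kernels described in the paper: recent interactions
    keep weight near 1, stale ones decay toward 0.
    """
    ages = np.asarray(interaction_ages, dtype=float)
    return np.exp(-decay_rate * ages)

# Ages (e.g., days since each interaction) for three user-item edges.
weights = temporal_edge_weights([0.0, 7.0, 30.0])
# A fresh edge keeps full weight; a month-old edge is heavily discounted.
```

Because the reweighted adjacency is applied directly in message passing rather than diagonalized, no explicit eigendecomposition is needed.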
The framework further enhances performance through adaptive spectral filtering, which mixes different operator banks according to temporal context to produce prediction-specific spectral responses. It also introduces spectral-aware modality routing to calibrate the contributions of visual and textual features based on the same temporal context, addressing the challenge that different modalities dominate at different times. A spectral diversity regularization prevents filter-bank collapse, ensuring diverse expert behaviors. Tests on real-world benchmarks show TimeMM consistently outperforms existing state-of-the-art multimodal recommenders, all while maintaining linear-time scalability.
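The three mechanisms above can be sketched as follows. The softmax gate over banks, the sigmoid modality gate, and the cosine-similarity penalty are plausible instantiations chosen for illustration; the paper's exact gating and regularization forms may differ:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mix_operator_banks(bank_outputs, context_logits):
    """Adaptive spectral filtering: blend the responses of several
    operator banks with a gate driven by temporal context."""
    gate = softmax(context_logits)              # (num_banks,), sums to 1
    return np.tensordot(gate, bank_outputs, 1)  # weighted sum over banks

def route_modalities(visual, textual, route_logit):
    """Spectral-aware modality routing: calibrate visual vs. textual
    contributions from the same temporal-context signal."""
    alpha = 1.0 / (1.0 + np.exp(-route_logit))  # sigmoid gate in [0, 1]
    return alpha * visual + (1.0 - alpha) * textual

def diversity_penalty(bank_responses):
    """Spectral diversity regularization: penalize pairwise similarity
    between filter banks so the experts do not collapse onto one
    spectral response."""
    normed = bank_responses / np.linalg.norm(bank_responses, axis=1, keepdims=True)
    sim = normed @ normed.T
    off_diag = sim - np.diag(np.diag(sim))
    return (off_diag ** 2).sum() / 2.0

# Two banks, 2-dim responses; equal logits give an even 50/50 mix.
mixed = mix_operator_banks(np.array([[1.0, 1.0], [3.0, 3.0]]),
                           np.array([0.0, 0.0]))
# Orthogonal bank responses incur zero diversity penalty.
penalty = diversity_penalty(np.eye(2))
```

Every step is a dense gate or a matrix-vector product over a fixed number of banks, which is consistent with the linear-time scalability the authors report.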
- TimeMM uses parametric temporal kernels to reweight user-item graph edges based on interaction recency, avoiding explicit eigendecomposition.
- Adaptive spectral filtering mixes operator banks according to temporal context for prediction-specific spectral responses.
- Spectral-aware modality routing calibrates visual and textual contributions from the same temporal context; on real-world benchmarks, TimeMM outperforms state-of-the-art multimodal recommenders.
Why It Matters
TimeMM enables more accurate, dynamic recommendations by modeling how user preferences and modality importance shift over time.