Research & Papers

DyGnROLE: Modeling Asymmetry in Dynamic Graphs with Node-Role-Oriented Latent Encoding

New transformer architecture captures asymmetric node behavior in evolving networks such as social media platforms.

Deep Dive

Researchers Tyler Bonnet and Marek Rei have introduced DyGnROLE (Dynamic Graph Node-Role-Oriented Latent Encoding), a transformer architecture that addresses a fundamental limitation in dynamic graph learning. Most existing models treat source and destination nodes symmetrically, using shared parameters that fail to capture the inherent asymmetry in real-world networks such as social media interactions, financial transactions, or citation networks. DyGnROLE explicitly disentangles these representations through separate embedding vocabularies and role-semantic positional encodings, allowing the model to learn distinct structural and temporal contexts for each node role. The approach recognizes that nodes behave differently when initiating versus receiving connections, a nuance that most prior dynamic graph architectures overlook.
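
To make the role-disentangling idea concrete, here is a minimal PyTorch sketch of separate source/destination embedding tables combined with a learned role encoding. The class and parameter names (RoleAwareNodeEncoder, num_nodes, dim) are hypothetical, and the paper's actual architecture is not reproduced here; this only illustrates the general technique.

```python
import torch
import torch.nn as nn

class RoleAwareNodeEncoder(nn.Module):
    """Role-disentangled node encoding: the same node id maps to different
    vectors depending on whether it acts as a source (initiator) or a
    destination (receiver) of an interaction."""

    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        # Separate embedding vocabularies for each role.
        self.src_embed = nn.Embedding(num_nodes, dim)
        self.dst_embed = nn.Embedding(num_nodes, dim)
        # Learned role encodings added to every token, so downstream
        # attention layers can condition on the role itself.
        self.role_enc = nn.Embedding(2, dim)  # index 0 = source, 1 = destination

    def forward(self, src_ids: torch.Tensor, dst_ids: torch.Tensor) -> torch.Tensor:
        src = self.src_embed(src_ids) + self.role_enc.weight[0]
        dst = self.dst_embed(dst_ids) + self.role_enc.weight[1]
        # Stack into a (batch, 2, dim) token sequence for a transformer encoder.
        return torch.stack([src, dst], dim=1)

# Usage: node 3 appears once as a source and once as a destination,
# and receives a different vector in each role.
enc = RoleAwareNodeEncoder(num_nodes=10_000, dim=128)
tokens = enc(torch.tensor([3, 7]), torch.tensor([42, 3]))  # shape (2, 2, 128)
```

That per-role difference is exactly the asymmetry the architecture targets: a shared-parameter model would assign node 3 the same vector in both positions.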

The model's effectiveness is enhanced by a self-supervised pretraining objective called Temporal Contrastive Link Prediction (TCLP), which uses unlabeled interaction history to encode structural biases without requiring annotated data. This makes DyGnROLE particularly valuable in low-label regimes where obtaining ground truth is expensive or impractical. Evaluation on future edge classification tasks demonstrates that DyGnROLE substantially outperforms a diverse set of state-of-the-art baselines, establishing role-aware modeling as a superior strategy for dynamic graph learning. The research represents a significant step toward more realistic graph representations that can better predict evolving network behavior in applications ranging from recommendation systems to fraud detection and epidemiological modeling.
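
The paper's exact TCLP formulation is not reproduced here, but a common way to realize a temporal contrastive link prediction objective is an InfoNCE-style loss over a batch of observed interactions, where each source must identify its true destination among in-batch negatives. The sketch below assumes that formulation; the function name tclp_loss and its signature are hypothetical.

```python
import torch
import torch.nn.functional as F

def tclp_loss(src_repr: torch.Tensor, dst_repr: torch.Tensor,
              temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss over a batch of observed interactions:
    row i of src_repr should score its true destination (row i of dst_repr)
    above every other destination in the batch."""
    src = F.normalize(src_repr, dim=-1)
    dst = F.normalize(dst_repr, dim=-1)
    logits = src @ dst.t() / temperature          # (batch, batch) similarities
    targets = torch.arange(src.size(0), device=src.device)
    # The diagonal holds the positive (source, destination) pairs;
    # every off-diagonal entry serves as an in-batch negative.
    return F.cross_entropy(logits, targets)
```

Because the negatives come for free from the batch, an objective of this kind needs only timestamped interaction logs, which matches the low-label setting described above.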

Key Points
  • Uses separate embedding vocabularies for source/destination nodes to capture asymmetric behavior
  • Introduces Temporal Contrastive Link Prediction (TCLP) for self-supervised pretraining without labeled data
  • Substantially outperforms state-of-the-art baselines on future edge classification tasks

Why It Matters

Enables more accurate predictions in evolving networks such as social platforms and financial systems, where node roles matter.