Empowering Heterogeneous Graph Foundation Models via Decoupled Relation Alignment
Solves ‘Type Collapse’ and ‘Relation Confusion’ in multi-domain graph models.
Graph Foundation Models (GFMs) excel on homogeneous graphs but struggle with multi-domain heterogeneous graphs (MDHGs) due to cross-type feature shifts and intra-domain relation gaps. Existing global alignment methods such as PCA or SVD blindly enforce a shared feature space, distorting type-specific semantics and disrupting original topologies. This leads to two fundamental failures: ‘Type Collapse’ (loss of distinct node-type characteristics) and ‘Relation Confusion’ (broken relation structures). To address this, researchers led by Ziyu Zheng introduce Decoupled relation Subspace Alignment (DRSA), a novel plug-and-play preprocessing module.
DRSA shifts the paradigm by explicitly decoupling feature semantics from relation structures. It uses a dual-relation subspace projection mechanism to coordinate cross-type interactions in a shared low-rank relation subspace. Additionally, a feature-structure decoupled representation decomposes aligned features into a semantic projection component and a structural residual term, adaptively absorbing intra-domain variations. The method is optimized via a stable alternating minimization strategy based on Block Coordinate Descent. Extensive experiments on real-world benchmarks show DRSA seamlessly integrates with state-of-the-art GFMs, significantly enhancing cross-domain and few-shot knowledge transfer. The code is open-sourced.
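To make the feature-structure decoupled representation concrete, here is a minimal sketch of the general idea: each node type's feature matrix X is decomposed into a low-rank semantic projection component Z @ W plus a structural residual R, fitted by alternating (block coordinate) updates. The objective below (least-squares fit with a ridge penalty on R), the function name `drsa_sketch`, and the parameters `k` and `alpha` are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def drsa_sketch(X, k=8, alpha=1.0, iters=20, seed=0):
    """Illustrative block-coordinate descent for X ≈ Z @ W + R.

    Hypothetical objective (NOT the paper's):
        min_{Z,W,R} ||X - Z W - R||_F^2 + alpha * ||R||_F^2
    Z W plays the role of the semantic projection component,
    R the structural residual absorbing leftover variation.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((k, d))   # shared low-rank basis (random init)
    Z = np.zeros((n, k))              # per-node semantic coefficients
    R = np.zeros((n, d))              # structural residual term
    for _ in range(iters):
        # Block 1: update Z by least squares on (X - R) ≈ Z W
        Z = (X - R) @ W.T @ np.linalg.pinv(W @ W.T)
        # Block 2: update W by least squares with Z fixed
        W = np.linalg.pinv(Z.T @ Z) @ Z.T @ (X - R)
        # Block 3: closed-form ridge update for the residual
        R = (X - Z @ W) / (1.0 + alpha)
    return Z, W, R
```

Each block update has a closed form, so the objective decreases monotonically, which is the kind of stability the alternating minimization strategy is after. In the full method, the basis would additionally be coordinated across node types through the shared relation subspace, which this single-type sketch omits.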
- Prevents ‘Type Collapse’ and ‘Relation Confusion’ by decoupling features from relation structures.
- Uses dual-relation subspace projection and feature-structure decoupled representation for robust alignment.
- Boosts cross-domain and few-shot transfer on multiple benchmarks, outperforming PCA/SVD-based methods.
Why It Matters
Enables graph AI models to transfer knowledge across different domains without retraining, unlocking new applications in social networks and e-commerce.