Spectral Graph Sparsification Preserves Representation Geometry in Graph Neural Networks
New proof shows sparsification keeps embeddings stable within O(ε) perturbation bounds.
A new theoretical paper by Sanjukta Krishnagopal tackles a fundamental question in graph neural networks (GNNs): does spectral graph sparsification, commonly used to reduce graph complexity and speed up computation, distort the geometry of learned embeddings? The author proves that for polynomial-filter GNNs, any ε-spectral sparsifier induces only O(ε) perturbations in polynomial graph filters, multilayer hidden representations, and their Gram matrices. These guarantees translate to stability of squared pairwise distances, class means, and covariance structures in embedding space. Additionally, under smoothness and boundedness assumptions, gradient descent on dense and sparsified graphs produces weight trajectories whose separation grows at most proportionally to the sparsification distortion.
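For readers unfamiliar with the setup, the standard definition of an ε-spectral sparsifier, and the shape of the stability claim summarized above, can be written compactly. The notation below (Laplacians L and L̃, filter coefficients θ_k) is ours, a sketch of the standard formulation rather than the paper's exact statement:

```latex
% A reweighted subgraph \tilde{G} with Laplacian \tilde{L} is an
% \varepsilon-spectral sparsifier of G (Laplacian L) if, for all x \in \mathbb{R}^n,
(1-\varepsilon)\, x^{\top} L x \;\le\; x^{\top} \tilde{L} x \;\le\; (1+\varepsilon)\, x^{\top} L x .

% For a polynomial graph filter h(L) = \sum_{k=0}^{K} \theta_k L^{k},
% the stability guarantee summarized above takes the form
\| h(\tilde{L}) - h(L) \| \;=\; O(\varepsilon),

% which then propagates to hidden representations H = \sigma(h(L) X W)
% and their Gram matrices H H^{\top}.
```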
Empirically, the paper validates these theoretical predictions using effective-resistance sparsification on synthetic graphs and real-world datasets including FashionMNIST, Cora, and Paul15. Even under substantial sparsification, the Gram matrix and training dynamics diverge only slightly, consistent with the predicted stability, and preservation of the hidden-layer Gram matrix strongly predicts both neighborhood preservation and class-centroid stability. The work provides a rigorous foundation for using spectral sparsification to speed up GNN training and inference without losing the geometric fidelity essential for interpretability tasks. It bridges a gap between graph reduction and representation learning, offering practitioners confidence in applying sparsification for large-scale GNN deployments.
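The effective-resistance sampling used in the experiments follows the classic Spielman–Srivastava recipe: keep each edge with probability proportional to its effective resistance and reweight so the sparsified Laplacian is unbiased. A minimal NumPy sketch of that recipe, with a toy graph and a toy degree-2 polynomial filter (all names, graph sizes, and filter coefficients here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: a 12-node ring plus a few chords (hypothetical example).
n = 12
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 6), (2, 9), (4, 11)]

def laplacian_from(edge_weights):
    """Build a weighted graph Laplacian from ((u, v), w) pairs."""
    L = np.zeros((n, n))
    for (u, v), w in edge_weights:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

L = laplacian_from([(e, 1.0) for e in edges])

# Effective resistance of each edge via the Laplacian pseudoinverse.
Lp = np.linalg.pinv(L)
r = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
p = r / r.sum()

# Sample q edges with replacement, proportional to effective resistance,
# reweighting by 1/(q * p_e) so the sparse Laplacian is unbiased in expectation.
q = 10
idx = rng.choice(len(edges), size=q, p=p)
L_sparse = laplacian_from([(edges[i], 1.0 / (q * p[i])) for i in idx])

# Compare a degree-2 polynomial filter's hidden Gram matrices on both graphs.
X = rng.standard_normal((n, 4))           # toy node features

def poly_filter(M):
    # h(M) X with h(L) = I - 0.3 L + 0.05 L^2 (arbitrary toy coefficients)
    return X - 0.3 * (M @ X) + 0.05 * (M @ (M @ X))

H, H_s = poly_filter(L), poly_filter(L_sparse)
G, G_s = H @ H.T, H_s @ H_s.T
rel = np.linalg.norm(G - G_s) / np.linalg.norm(G)
print(f"relative Gram divergence: {rel:.3f}")
```

On larger graphs one would use sparse linear algebra and approximate resistances rather than a dense pseudoinverse; the point of the sketch is only the sampling-and-compare loop that the paper's experiments scale up.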
- Proves that ε-spectral sparsification induces O(ε) perturbations in polynomial GNN filters, hidden representations, and Gram matrices.
- Guarantees stability of squared pairwise distances, class means, and covariance structure in embedding space under sparsification.
- Empirically validated on FashionMNIST, Cora, and Paul15: Gram matrix preservation strongly predicts neighborhood and class-centroid stability.
Why It Matters
Enables faster GNN inference on large graphs without losing the geometric structure needed for interpretability and downstream tasks.