Research & Papers

Graph Normalization: Fast Binarizing Dynamics for Differentiable MWIS

New method converges to near-optimal independent sets on graphs with 1M edges in seconds.

Deep Dive

Laurent Guigues has introduced Graph Normalization (GN), a novel dynamical system that provides a differentiable approximation engine for the NP-hard Maximum Weight Independent Set (MWIS) problem. Unlike traditional approaches such as Belief Propagation, GN is proven to always converge to a binary indicator of a maximum independent set. It achieves this through an exact Majorization-Minimization step, systematically improving the MWIS primal objective via quasi-Newton descent. The paper also establishes an equivalence between GN and the Replicator Dynamics of a nonlinear evolutionary game in which vertices compete for inclusion, and shows that GN obeys Fisher's Fundamental Theorem of Natural Selection, with average population fitness equal to the MWIS objective.
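To make the evolutionary-game view concrete, here is a minimal sketch of classic replicator dynamics for the (unweighted) independent set problem. This is a standard Motzkin-Straus-style baseline, not the paper's GN update: independent sets of G are cliques of its complement, and the multiplicative replicator update climbs the quadratic "average fitness" x^T B x on the probability simplex, where B is the complement's adjacency matrix plus I/2 (Bomze's regularization, under which strict local maxima correspond to maximal cliques).

```python
import numpy as np

def replicator_mis(adj, steps=200):
    """Replicator dynamics for maximum independent set (unweighted sketch).

    NOT the paper's Graph Normalization update; a classic baseline.
    adj: symmetric 0/1 adjacency matrix with zero diagonal.
    Returns a point on the simplex whose support is a maximal
    independent set of the input graph.
    """
    n = adj.shape[0]
    # Payoff matrix: complement graph's adjacency plus I/2 regularization.
    B = (1.0 - adj) - 0.5 * np.eye(n)
    x = np.full(n, 1.0 / n)          # start at the simplex barycenter
    for _ in range(steps):
        bx = B @ x
        x = x * bx / (x @ bx)        # multiplicative replicator update
    return x                          # mass concentrates on one maximal IS
```

Along this trajectory the average fitness x @ B @ x is non-decreasing, mirroring the Fisher-theorem behavior the paper proves for GN, where that quantity is instead the MWIS objective itself.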

GN's performance is striking: on real-world benchmarks with up to 1 million edges, it identifies solutions within 1% of the best known results in mere seconds on a CPU. For the Assignment Problem, GN acts as a variant of the Sinkhorn algorithm that naturally converges to a hard assignment while generalizing to arbitrary constraint graphs. The framework opens new avenues for deep learning architectures requiring differentiable, hard decisions under constraints, with applications in structured sparse attention, dynamic network pruning, and Mixture-of-Experts. Beyond core AI, GN enables end-to-end learning of constrained optimization in computer vision, computational biology, and resource allocation.
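The soft-to-hard assignment behavior attributed to GN can be illustrated with plain temperature-annealed Sinkhorn iterations. This is only an illustrative stand-in, not the GN mechanism: the cost matrix, annealing schedule, and iteration counts below are arbitrary choices, and GN reaches hard assignments by its own dynamics rather than by an external cooling schedule.

```python
import numpy as np

def annealed_sinkhorn(cost, outer=100, inner=50, t0=1.0, decay=0.95):
    """Temperature-annealed Sinkhorn: doubly stochastic -> near-permutation.

    Shown only to illustrate soft-to-hard assignment; NOT the GN update.
    cost: square matrix of assignment costs.
    """
    t = t0
    P = np.exp(-cost / t)
    for _ in range(outer):
        P = np.exp(-cost / t)                  # Gibbs kernel at temperature t
        for _ in range(inner):                 # Sinkhorn row/column balancing
            P /= P.sum(axis=1, keepdims=True)
            P /= P.sum(axis=0, keepdims=True)
        t *= decay                             # cool toward a hard permutation
    return P
```

As the temperature drops, the balanced matrix sharpens from a soft doubly stochastic coupling toward a permutation matrix, which is the hard-assignment limit the article describes GN reaching naturally.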

Key Points
  • GN always converges to a binary indicator of a Maximum Independent Set, unlike Belief Propagation.
  • On benchmarks with up to 1M edges, GN achieves solutions within 1% of the best known results in seconds on a CPU.
  • Provides differentiable, hard combinatorial decisions for deep learning: structured sparse attention, pruning, MoE.

Why It Matters

Graph Normalization makes NP-hard combinatorial optimization differentiable and fast, unlocking end-to-end constrained learning for real-world AI systems.