Research & Papers

Graph-Informed Adversarial Modeling: Infimal Subadditivity of Interpolative Divergences

New mathematical proof shows breaking monolithic GAN discriminators into smaller, graph-aligned components improves training.

Deep Dive

A team from Heriot-Watt University and the Maxwell Institute for Mathematical Sciences has published a paper presenting a significant theoretical advance for training Generative Adversarial Networks (GANs) on structured data. In 'Graph-Informed Adversarial Modeling: Infimal Subadditivity of Interpolative Divergences,' Panagiota Birmpa and Eric Joseph Hall prove a new 'infimal subadditivity' principle. This mathematical result shows that when the target data distribution factorizes according to a known Bayesian network (a directed graph representing variable dependencies), a global measure of discrepancy between the real and generated data can be controlled by averaging smaller, localized discrepancies aligned with the graph's individual 'families' of connected variables (each node together with its parents).
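Schematically, the bound described above takes the following shape. This is an illustrative sketch in assumed notation, not the paper's precise statement: μ is the real (target) distribution factorizing over a Bayesian network G = (V, E), ν is the generated distribution, F_v = {v} ∪ pa(v) is the family of node v, and D, D_v are the global and family-localized divergences.

```latex
% Illustrative sketch only; see the paper for the exact statement,
% the role of the infimum, and the class of interpolative divergences.
% Global discrepancy controlled by an average of family-local ones:
D(\mu \,\|\, \nu)
  \;\lesssim\;
  \frac{1}{|V|} \sum_{v \in V}
    D_v\!\left( \mu_{F_v} \,\|\, \nu_{F_v} \right),
\qquad F_v = \{v\} \cup \mathrm{pa}(v).
```

The practical upshot is that each term on the right only ever sees a low-dimensional marginal of the data, which is what licenses the smaller, localized discriminators discussed below.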

This theory provides a rigorous justification for moving away from standard 'graph-agnostic' GANs, which use one monolithic neural network as a discriminator to judge the entire data sample. Instead, it advocates for a 'graph-informed' architecture where separate, smaller discriminators are trained to evaluate specific, localized parts of the data structure. Crucially, the proof holds even if the generator network itself does not factorize according to the graph, making the approach more flexible. The authors extend their findings to other divergence measures like Integral Probability Metrics and present experiments demonstrating that this method leads to improved training stability and better recovery of the underlying data structure compared to baseline models.
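To make the architectural idea concrete, here is a minimal NumPy sketch of a graph-informed discriminator loss. Everything here is a hypothetical illustration, not the authors' implementation: the 3-variable network (A → B, A → C), the tiny logistic per-family discriminators, and the function names are all assumptions chosen to show one family-aligned discriminator per node being averaged into a single loss.

```python
import numpy as np

# Hypothetical 3-variable Bayesian network: 0 -> 1, 0 -> 2.
# A node's "family" is the node together with its parents.
parents = {0: [], 1: [0], 2: [0]}
families = {v: sorted([v] + ps) for v, ps in parents.items()}

def family_score(x_family, w, b):
    """Tiny per-family discriminator: a logistic score on just the
    columns belonging to this family (a stand-in for a small net)."""
    return 1.0 / (1.0 + np.exp(-(x_family @ w + b)))

def graph_informed_disc_loss(real, fake, params):
    """Average a standard discriminator loss over the graph's families,
    instead of scoring the full joint sample with one monolithic net."""
    losses = []
    for v, idx in families.items():
        w, b = params[v]
        d_real = family_score(real[:, idx], w, b)
        d_fake = family_score(fake[:, idx], w, b)
        # Binary-cross-entropy discriminator loss restricted to family F_v.
        losses.append(-np.mean(np.log(d_real + 1e-8)
                               + np.log(1.0 - d_fake + 1e-8)))
    return float(np.mean(losses))

rng = np.random.default_rng(0)
# One small parameter set per family discriminator.
params = {v: (rng.normal(size=len(idx)), 0.0)
          for v, idx in families.items()}
real = rng.normal(size=(64, 3))            # stand-in "real" samples
fake = rng.normal(loc=1.0, size=(64, 3))   # stand-in "generated" samples
loss = graph_informed_disc_loss(real, fake, params)
```

Note that the generator producing `fake` is never required to factorize over the graph here; only the discriminator side is decomposed, mirroring the flexibility the proof provides.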

Key Points
  • Proves 'infimal subadditivity' principle for interpolative divergences (e.g., f-divergences) when data follows a Bayesian network graph.
  • Provides theoretical foundation for replacing a single GAN discriminator with multiple localized discriminators aligned to the graph's structure.
  • Experiments show the graph-informed approach yields improved training stability and structural recovery versus graph-agnostic baselines.

Why It Matters

Enables more stable and interpretable generation of complex structured data like molecules, knowledge graphs, and causal models.