Research & Papers

Learning to Emulate Chaos: Adversarial Optimal Transport Regularization

New method matches statistical properties of chaotic systems like weather and power grids.

Deep Dive

Chaotic systems, from weather to power grids, are notoriously hard to model with data-driven emulators: sensitivity to initial conditions makes exact long-term forecasts infeasible. Traditional squared-error losses fail on noisy data, while prior regularization approaches relied on handcrafted summary features. In this paper, Gabriel Melo, Leonardo Santiago, and Peter Y. Lu introduce a family of adversarial optimal transport objectives that jointly learn high-quality summary statistics and a physically consistent emulator. They theoretically analyze and experimentally validate two formulations: a Sinkhorn divergence (2-Wasserstein) and a WGAN-style dual (1-Wasserstein), demonstrating significantly improved long-term statistical fidelity across a variety of chaotic systems, including high-dimensional attractors.
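To make the first formulation concrete, here is a minimal numpy sketch of a Sinkhorn divergence between two point clouds (e.g., samples from an emulator's trajectory versus the true attractor). This is an illustration of the general technique, not the paper's implementation; the function names and hyperparameters (`eps`, `n_iters`) are assumptions chosen for the example.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.5, n_iters=200):
    """Entropic-regularized OT cost between two uniform point clouds."""
    # Squared Euclidean cost matrix between samples.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                     # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))        # uniform weights on x
    b = np.full(len(y), 1.0 / len(y))        # uniform weights on y
    u = np.ones_like(a)
    for _ in range(n_iters):                 # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # approximate transport plan
    return (P * C).sum()

def sinkhorn_divergence(x, y, eps=0.5):
    """Debiased Sinkhorn divergence: nonnegative, ~0 when x == y."""
    return (sinkhorn_cost(x, y, eps)
            - 0.5 * sinkhorn_cost(x, x, eps)
            - 0.5 * sinkhorn_cost(y, y, eps))
```

In a training loop, a quantity like this (typically in a differentiable framework, and in the paper applied to learned summary statistics rather than raw states) would be minimized alongside the usual prediction loss, pulling the emulator's long-run distribution toward the data's.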

This work bridges optimal transport theory and machine learning for dynamical systems, offering a principled way to match the statistical properties of chaotic attractors. By avoiding handcrafted features, the method adapts to diverse datasets and system complexities. The results suggest that adversarial optimal transport regularization can make AI emulators viable for critical applications like climate modeling, grid stability, and other domains where chaos dominates and accurate long-term predictions are essential.

Key Points
  • Proposes adversarial optimal transport objectives (Sinkhorn divergence and WGAN dual) for emulating chaotic systems.
  • Jointly learns summary statistics and a physically consistent emulator, avoiding handcrafted features.
  • Experiments show improved long-term statistical fidelity across high-dimensional chaotic attractors.
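The second formulation in the list above, the WGAN-style 1-Wasserstein dual, estimates distance as the largest gap a Lipschitz "critic" can find between two distributions. The toy sketch below uses a linear critic with a unit-norm constraint (a stand-in for the neural critic and Lipschitz enforcement a real WGAN would use); all names and hyperparameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def wgan_dual_estimate(x, y, lr=0.1, n_steps=500):
    """1-Wasserstein lower bound via a linear critic f(z) = w @ z.

    The Lipschitz constraint is enforced by projecting w onto the
    unit ball (a toy stand-in for weight clipping / gradient penalty).
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1])
    w /= np.linalg.norm(w)
    # For a linear critic, the gradient of E[f(x)] - E[f(y)] w.r.t. w
    # is constant: the difference of the sample means.
    grad_dir = x.mean(0) - y.mean(0)
    for _ in range(n_steps):
        w = w + lr * grad_dir                # gradient ascent on the critic
        norm = np.linalg.norm(w)
        if norm > 1.0:
            w /= norm                        # project back onto ||w|| <= 1
    return w @ grad_dir                      # critic objective at the optimum
```

For a linear critic this converges to the norm of the mean difference; the paper's adversarial objective plays the same maximization against the emulator, which is trained to shrink the gap the critic finds.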

Why It Matters

Enables AI emulators to faithfully reproduce the long-term statistical behavior of chaotic systems like weather and power grids, where exact long-range forecasts are impossible.