PACIFIER: Pacing Opinion Depolarization via a Unified Graph Learning Framework
New framework trains on synthetic graphs but generalizes to 155K-node networks
PACIFIER, developed by Mingkai Liao, is the first graph-learning and graph reinforcement learning framework to tackle opinion polarization under the Friedkin-Johnsen model. Traditional approaches treat polarization moderation as an analytical optimization problem, relying on linear steady-state analysis and repeated equilibrium recomputation, which scales poorly as networks grow. PACIFIER instead reformulates the problem as a graph-based sequential planning task, using a reinforcement learning agent to decide interventions step by step. It supports cost-aware moderation, continuous opinions, and topology-altering node removal, making it considerably more flexible than prior methods.
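To make the underlying dynamics concrete, here is a minimal sketch of the Friedkin-Johnsen model on a toy graph. The adjacency matrix, innate opinions, and the variance-based polarization measure are illustrative choices, not taken from the paper; the closed-form equilibrium z* = (I + L)^-1 s is the standard linear steady-state result that analytical approaches repeatedly recompute.

```python
import numpy as np

# Friedkin-Johnsen (FJ) dynamics: each node i holds a fixed innate opinion
# s_i and an expressed opinion z_i that it repeatedly averages with its
# neighbors:  z_i(t+1) = (s_i + sum_j A_ij z_j(t)) / (1 + deg_i).
# The fixed point satisfies (I + L) z* = s, with L the graph Laplacian.

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # toy 4-node adjacency
s = np.array([1.0, 0.9, 0.1, 0.0])          # innate opinions: two camps

deg = A.sum(axis=1)
L = np.diag(deg) - A                        # graph Laplacian

# Closed-form equilibrium (the linear steady-state analysis prior work uses)
z_star = np.linalg.solve(np.eye(len(s)) + L, s)

# Iterative fixed-point updates converge to the same equilibrium
z = s.copy()
for _ in range(200):
    z = (s + A @ z) / (1.0 + deg)

print("equilibrium opinions:", np.round(z_star, 3))
print("polarization (opinion variance):", round(float(z_star.var()), 4))
```

Because every candidate intervention changes L, analytical solvers must re-solve this linear system per intervention, which is what makes equilibrium recomputation expensive at scale.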
The core innovation is PACIFIER's ability to train on small synthetic graphs (fewer than 50 nodes) and generalize to massive real-world networks. To achieve this, the framework integrates four scale-compatible designs: a two-echo-chamber training distribution, anchor-and-mark history encoding, normalized global features, and residual-polarization rewards. In experiments on 15 real Twitter networks (up to 155,599 nodes), PACIFIER matched analytical solvers on minimal-intervention tasks and consistently outperformed baselines in cost-aware and node-removal settings. The RL variant proved especially effective when long-horizon costs or structural consequences outweighed immediate gains.
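The sequential-planning formulation can be sketched as a single environment step. All names here (`step`, `polarization`, the cost constant) are hypothetical stand-ins, not the paper's API: the agent takes a topology-altering action (removing a node), and a residual-polarization-style reward credits the resulting drop in equilibrium polarization minus an intervention cost.

```python
import numpy as np

def equilibrium(A, s):
    """FJ equilibrium z* = (I + L)^{-1} s for adjacency A, innate opinions s."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.solve(np.eye(len(s)) + L, s)

def polarization(z):
    """Sum of squared deviations from the mean opinion (one common metric;
    an assumption here, the paper's exact measure may differ)."""
    return float(((z - z.mean()) ** 2).sum())

def remove_node(A, s, i):
    """Topology-altering intervention: delete node i and its edges."""
    keep = [j for j in range(len(s)) if j != i]
    return A[np.ix_(keep, keep)], s[keep]

def step(A, s, action, cost=0.05):
    """One planning step: reward = polarization reduction minus a fixed
    intervention cost (illustrative cost model)."""
    pol_before = polarization(equilibrium(A, s))
    A2, s2 = remove_node(A, s, action)
    pol_after = polarization(equilibrium(A2, s2))
    return (A2, s2), pol_before - pol_after - cost

# Toy two-camp graph in which node 2 bridges the camps
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
s = np.array([1.0, 0.9, 0.1, 0.0])

(_, _), r = step(A, s, action=2)
print("reward for removing the bridge node:", round(r, 4))
```

An RL agent trained on many small graphs like this one learns which actions pay off over a multi-step horizon, rather than greedily picking the single intervention with the best one-shot equilibrium change.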
- PACIFIER reformulates polarization moderation as a graph-based sequential planning problem using graph reinforcement learning
- Trained on synthetic graphs under 50 nodes, it generalizes to real Twitter networks with up to 155,599 nodes
- Outperforms baselines on cost-aware moderation, continuous opinions, and topology-altering node removal scenarios
Why It Matters
Graph AI can now scale polarization moderation from toy models to real social networks, enabling practical tools for reducing online echo chambers.