Toward Practical Equilibrium Propagation: Brain-inspired Recurrent Neural Network with Feedback Regulation and Residual Connections
Equilibrium Propagation achieves BP-level performance with 100x less compute
A new paper from researchers Zhuo Liu and Tao Chen introduces FRE-RNN (Feedback-regulated Residual Recurrent Neural Network), a biologically plausible architecture that makes Equilibrium Propagation (EP) a viable alternative to backpropagation. EP has long been prized for its brain-like learning signals: it extracts gradients from the network's own relaxation dynamics instead of a separate backward pass. But severe training instability and prohibitive computational costs have kept it confined to toy problems. The authors address both issues with feedback regulation that dynamically reduces the spectral radius of the recurrent weight matrices, yielding orders-of-magnitude faster convergence to equilibrium. Residual connections further prevent vanishing gradients in deep recurrent networks, allowing EP to scale to complex tasks.
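For readers new to EP, a minimal sketch of the classic two-phase update (after Scellier and Bengio's original formulation, not FRE-RNN itself) may help: the network relaxes to equilibrium twice, once freely and once with its units weakly nudged toward the target, and the weight update is the difference of local co-activations at the two equilibria. The sketch uses a single fully recurrent layer whose every unit has a target, a simplification of the usual output-only nudging; all sizes, names, and hyperparameters are illustrative.

```python
import numpy as np

def settle(W, x, target=None, beta=0.0, steps=300, dt=0.5):
    """Relax the state s toward equilibrium under leaky dynamics;
    when a target is given, units are weakly nudged toward it."""
    s = np.zeros(W.shape[0])
    for _ in range(steps):
        ds = -s + np.tanh(W @ s + x)       # free relaxation
        if target is not None:
            ds += beta * (target - s)      # weak nudging toward target
        s = s + dt * ds
    return s

rng = np.random.default_rng(0)
n, beta, lr = 32, 0.1, 0.05
W = 0.1 * rng.standard_normal((n, n))      # recurrent weights
x = rng.standard_normal(n)                 # input drive
target = np.tanh(rng.standard_normal(n))   # desired output state

s_free = settle(W, x)                       # phase 1: free equilibrium
s_nudged = settle(W, x, target, beta=beta)  # phase 2: nudged equilibrium

# Contrastive, purely local update: the difference of co-activations at
# the two equilibria approximates the loss gradient as beta -> 0.
W += lr * (np.outer(s_nudged, s_nudged) - np.outer(s_free, s_free)) / beta
```

The update is purely local, an outer product of unit activities, which is exactly what makes EP attractive for neuromorphic and physical hardware.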
The results are striking: on standard benchmarks, FRE-RNN matches backpropagation's accuracy while cutting EP's computational overhead by multiple orders of magnitude, taking training times from impractical to competitive without sacrificing the biological plausibility that makes EP attractive for neuromorphic hardware. The authors note that the feedback regulation technique also offers guidance for implementing in-situ learning in physical neural networks, a critical step toward energy-efficient AI chips that learn on the fly. By removing the key barriers to EP adoption, this work could bridge the gap between brain-inspired learning algorithms and practical large-scale AI systems.
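Where do the savings come from? EP spends most of its compute iterating the dynamics to equilibrium, and settling speed is governed by the spectral radius of the recurrent matrix. Below is a toy illustration of that relationship using a simple static rescaling; the paper's feedback regulation is a dynamic mechanism, so treat this purely as intuition.

```python
import numpy as np

def steps_to_settle(W, x, tol=1e-6, max_steps=5000):
    """Iterate h <- tanh(W h + x) and count steps until the state
    stops changing (i.e., an equilibrium is reached)."""
    h = np.zeros(W.shape[0])
    for step in range(1, max_steps + 1):
        h_new = np.tanh(W @ h + x)
        if np.linalg.norm(h_new - h) < tol:
            return step
        h = h_new
    return max_steps

rng = np.random.default_rng(0)
n = 64
W = 1.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # spectral radius ~1.5
x = rng.standard_normal(n)

rho = np.max(np.abs(np.linalg.eigvals(W)))
# Hypothetical regulation: rescale so the radius is 0.5, making the
# settling map a contraction. The paper's feedback rule is more dynamic.
W_regulated = W * (0.5 / rho)

print("steps without regulation:", steps_to_settle(W, x))
print("steps with regulation:   ", steps_to_settle(W_regulated, x))
```

With the radius below 1 the fixed-point iteration converges geometrically, so equilibrium arrives in tens of steps rather than thousands.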
- Feedback regulation reduces the spectral radius, enabling orders-of-magnitude faster convergence for Equilibrium Propagation
- FRE-RNN matches backpropagation accuracy on benchmark tasks while slashing EP's computational cost
- Residual connections with brain-inspired topologies eliminate vanishing gradients in deep recurrent networks (a toy illustration follows this list)
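On that last point, the usual argument is that the backward error signal in a deep stack is a product of per-layer Jacobians, which shrinks geometrically when each layer is even mildly contractive; the identity path contributed by a residual connection keeps the product near unit scale. A NumPy toy (depth and weight scales are made up, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth = 32, 40
# Random per-layer Jacobians with spectral radius ~0.7, so each plain
# layer slightly shrinks the backward signal.
Js = [0.7 * rng.standard_normal((n, n)) / np.sqrt(n) for _ in range(depth)]

g_plain = np.eye(n)   # accumulated backward Jacobian, plain stack
g_resid = np.eye(n)   # accumulated backward Jacobian, residual stack
for J in Js:
    g_plain = J @ g_plain                 # h_{l+1} = f(h_l)
    g_resid = (np.eye(n) + J) @ g_resid   # h_{l+1} = h_l + f(h_l)

print("plain stack gradient norm:   ", np.linalg.norm(g_plain))
print("residual stack gradient norm:", np.linalg.norm(g_resid))
# The plain product collapses toward zero (vanishing gradients), while
# the residual product stays near unit scale.
```

The same reasoning applies through time in a recurrent network, where the Jacobian product runs over unrolled steps rather than layers.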
Why It Matters
Makes biologically plausible learning scalable and competitive with backpropagation, opening the door to in-situ training on neuromorphic hardware.