Research & Papers

Benchmarking local Hebbian learning rules for memory storage and prototype extraction

A new benchmark of seven Hebbian rules shows that Bayesian-Hebbian approaches dominate memory storage and recall

Deep Dive

Associative memory, or content-addressable memory, is a core function in both computer science and cognitive neuroscience. This paper by Lansner et al. (arXiv:2605.01074) systematically benchmarks seven Hebbian learning rules in non-modular and modular recurrent networks with winner-take-all (WTA) dynamics. The study focuses on two key capabilities: pattern storage (memory capacity and information stored in the weights) and prototype extraction, where the network must recover the original prototype from a distorted instance. The rules are tested on moderately sparse binary patterns, and the benchmark also evaluates sensitivity to correlations in the training data.
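The rule families differ most visibly at the weight level. As a rough illustration only (not the paper's exact formulation; the function names, the probability estimates, and the smoothing parameter `alpha` are all assumptions), a minimal numpy sketch of additive Hebb, covariance, and a Bayesian-Hebbian log-odds estimate on binary patterns might look like this:

```python
import numpy as np

def additive_hebb(patterns):
    # Plain additive Hebb: accumulate outer products of the binary patterns.
    return sum(np.outer(x, x) for x in patterns).astype(float)

def covariance_rule(patterns):
    # Covariance rule: subtract the mean activity before the outer product,
    # which removes the bias toward the background activity level.
    X = np.asarray(patterns, dtype=float)
    C = X - X.mean(axis=0)
    return C.T @ C

def bayesian_hebb(patterns, alpha=1.0):
    # Bayesian-Hebbian weights: log-odds of joint versus independent
    # activation, estimated from counts with a small Dirichlet prior
    # (alpha, an assumed hyperparameter) so unseen coactivations get
    # finite negative weights instead of log(0).
    X = np.asarray(patterns, dtype=float)
    n = X.shape[0]
    p_i = (X.sum(axis=0) + alpha) / (n + 2 * alpha)   # marginal P(x_i = 1)
    p_ij = (X.T @ X + alpha) / (n + 4 * alpha)        # joint P(x_i=1, x_j=1)
    W = np.log(p_ij / np.outer(p_i, p_i))             # pointwise mutual information
    b = np.log(p_i)                                   # per-unit bias term
    return W, b
```

The log-odds form is, roughly, what lets a Bayesian-Hebbian rule exploit pattern statistics: weights between units that are often coactive by chance shrink automatically rather than accumulating without bound.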

Results show a clear hierarchy: the classic additive Hebb rule has the worst capacity, covariance learning is robust but offers only moderate performance, and Bayesian-Hebbian rules consistently deliver the highest capacity across almost all test conditions. The Bayesian-Hebbian approach, which incorporates prior knowledge about pattern statistics, emerges as the most effective for both storage and prototype extraction. These findings provide a rigorous reference for researchers designing neuromorphic memory systems and reinforcement learning architectures that rely on local, biologically plausible learning rules.
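For prototype extraction, recall proceeds by iterating the network under WTA dynamics until the state stops changing. A minimal sketch of modular WTA recall, assuming the `W` and `b` produced by the hypothetical `bayesian_hebb` above and a list of per-module index arrays, could be:

```python
import numpy as np

def wta_recall(W, b, x0, modules, steps=20):
    # Iterative recall: at every step each module (hypercolumn) activates
    # only its unit with the highest support, i.e. hard winner-take-all.
    x = x0.astype(float)
    for _ in range(steps):
        s = W @ x + b                      # support for each unit
        x_new = np.zeros_like(x)
        for m in modules:                  # m: index array of one module
            x_new[m[np.argmax(s[m])]] = 1.0
        if np.array_equal(x_new, x):       # fixed point reached
            break
        x = x_new
    return x

# Example layout: N = H * M units in H hypercolumns of M units each:
#   modules = [np.arange(h * M, (h + 1) * M) for h in range(H)]
# A distorted cue flips the winner in a few modules; successful prototype
# extraction means wta_recall returns the stored prototype.
```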

Key Points
  • Seven Hebbian learning rules benchmarked: additive, covariance, and Bayesian-Hebbian variants in WTA networks
  • Bayesian-Hebbian rules achieved highest memory storage and prototype extraction capacity under most conditions
  • Additive Hebb rule performed worst; covariance learning was robust but offered only moderate capacity

Why It Matters

Better Hebbian learning rules could enable more efficient neuromorphic memory systems and biologically inspired AI models