Attractor FCM
A new FCM type that avoids heuristic learning rules and trains with backpropagation through time.
Alexis Kafantaris has published a new paper on arXiv introducing Attractor FCM, a novel type of Fuzzy Cognitive Map (FCM) that departs from traditional Hebbian, agentic, or hybrid approaches. Instead, it uses gradient descent with physics constraints, implemented through a Jacobian-based architecture. The model incorporates residual memory and backpropagation through time, along with a fixed-point anchor that recursively updates weights while preserving system memory. This setup guarantees convergence to a fixed point, and backpropagation through time then unrolls the system's iterations to compute accurate gradients for error minimization.
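As a rough sketch of this training setup (not the paper's exact architecture), the snippet below unrolls a generic FCM update a_{t+1} = sigmoid(W a_t) and lets PyTorch autograd backpropagate through every step; the sigmoid activation, unroll depth, loss, and optimizer are all illustrative assumptions.

```python
import torch

def fcm_step(W, a):
    # One FCM iteration: concepts aggregate weighted influences, squashed by a sigmoid.
    return torch.sigmoid(W @ a)

def unrolled_loss(W, a0, target, steps=50):
    # Unroll the dynamics toward the attractor; autograd then backpropagates
    # through every step (backpropagation through time).
    a = a0
    for _ in range(steps):
        a = fcm_step(W, a)
    return torch.mean((a - target) ** 2)

n = 4
W = (0.5 * torch.randn(n, n)).requires_grad_()  # learnable causal weights
a0 = torch.rand(n)                              # initial concept activations
target = torch.tensor([0.8, 0.2, 0.5, 0.6])     # desired steady-state values (made up)

opt = torch.optim.SGD([W], lr=0.5)
for _ in range(200):
    opt.zero_grad()
    loss = unrolled_loss(W, a0, target)
    loss.backward()
    opt.step()
```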
A key innovation is the use of Newton's method to find the system's fixed-point attractor, after which an adaptive gradient-descent term reshapes the loss landscape. This adaptive term manipulates the weights directly through the attractor dynamics, scaling updates according to sigmoid saturation to prevent premature convergence to local minima. Additionally, a causal mask filters weight updates to enforce physics-based constraints and respect the initial expert opinions. The result is efficient error reduction toward target values, making Attractor FCM a promising tool for modeling complex dynamical systems in scientific computing, control, and AI.
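A minimal sketch of these two mechanisms, assuming the dynamics a = sigmoid(W a): `newton_fixed_point` solves for the attractor via the Jacobian, and `masked_update` is a hypothetical masked, saturation-compensated gradient step. The mask layout, the 1/(s(1-s)) rescaling, and the function names are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def newton_fixed_point(W, a0, tol=1e-8, max_iter=50):
    # Newton's method on g(a) = sigmoid(W a) - a = 0 (assumed dynamics).
    a = a0.clone()
    I = torch.eye(len(a0))
    for _ in range(max_iter):
        s = torch.sigmoid(W @ a)
        g = s - a
        if g.norm() < tol:
            break
        # Jacobian of g at a: diag(s * (1 - s)) @ W - I
        J = (s * (1 - s)).unsqueeze(1) * W - I
        a = a - torch.linalg.solve(J, g)
    return a

def masked_update(W, grad, mask, s, lr=0.1, eps=1e-3):
    # Hypothetical masked, saturation-aware step: mask[i, j] = 0 freezes edges
    # the experts ruled out; the 1 / (s (1 - s) + eps) factor (an assumption)
    # boosts updates where the sigmoid saturates and raw gradients vanish.
    scale = 1.0 / (s * (1 - s) + eps)
    return W - lr * mask * (scale.unsqueeze(1) * grad)
```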
- Uses gradient descent and backpropagation through time instead of Hebbian or agentic learning.
- Employs Newton's method to locate the fixed-point attractor and adaptive gradient descent to avoid local minima.
- Integrates a causal mask to enforce physics constraints and expert knowledge during training.
Why It Matters
Provides a principled, physics-aware learning framework for FCMs, improving accuracy in scientific and engineering models.