Hierarchically Modular Dynamical Neural Network Relaxing in a Warped Space: Basic Model and its Characteristics
New neural architecture uses 'warped space' and biological learning principles for more efficient AI training.
Researchers Kazuyoshi Tsutsumi and Ernst Niebur have introduced a novel neural network architecture called the Hierarchically Modular Dynamical Neural Network (HMDNN) that operates in what they term a 'warped space.' The model features distinct internal and external spaces connected through layered internetworks consisting of paired forward and backward subnets. These subnets contain static neurons with instantaneous responses, while the internal space houses dynamic neurons with large time constants that determine the system's overall temporal characteristics. The architecture is designed to minimize a prescribed energy function, so that the state variables relax toward its minima in this warped space through cooperation between the dynamic and static neuron types.
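To make the relaxation dynamics concrete, here is a minimal numerical sketch, not the authors' implementation: a slow internal state u (standing in for the dynamic neurons, with time constant tau) is mapped instantaneously into the external space by a static forward subnet, and relaxes by gradient descent on an external-space energy. The tanh subnet, the quadratic energy, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the relaxation dynamics:
# a slow internal state u is mapped instantaneously into the external
# space by a static forward subnet, and u relaxes by gradient descent
# on an external-space energy. Map, energy, and parameters are assumed.

def forward_subnet(u, W):
    """Static forward subnet: instantaneous nonlinear map, internal -> external."""
    return np.tanh(W @ u)

def energy(u, W, x_ext):
    """Quadratic energy measured in the external space."""
    return 0.5 * np.sum((forward_subnet(u, W) - x_ext) ** 2)

def relax(u0, W, x_ext, tau=10.0, dt=0.1, steps=2000):
    """Euler-integrate du/dt = -(1/tau) * dE/du; tau sets the slow time scale."""
    u = u0.copy()
    for _ in range(steps):
        y = forward_subnet(u, W)
        # Chain rule through tanh: dE/du = W^T [(y - x_ext) * (1 - y**2)]
        u -= (dt / tau) * (W.T @ ((y - x_ext) * (1.0 - y ** 2)))
    return u

W = np.array([[1.0, 0.3],
              [-0.2, 0.8]])            # illustrative 2-D internetwork weights
x_ext = np.array([0.3, -0.2])          # external-space target to relax toward
u_final = relax(np.zeros(2), W, x_ext)
print("relaxed internal state:", u_final)
print("residual energy:", energy(u_final, W, x_ext))
```

Because the energy lives in the external space while the descent acts on the internal state, the subnet's Jacobian warps the effective geometry of the descent, which is one way to read the 'warped space' in which the state variables relax.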
The HMDNN operates in two distinct modes: learning mode and association mode. In learning mode, synaptic weights in the internetwork are modified by strong inputs corresponding to repetitive neuronal bursting, patterns resembling sinusoidal or quasi-sinusoidal waves in nerve impulse density or membrane potential. This biological learning mechanism enables the formation of two-dimensional mapping relationships using signals of different frequencies, on a principle analogous to Lissajous figures: two sinusoids of different frequencies jointly sweep out a two-dimensional pattern, exposing the network to the full range of input combinations. In association mode, the system exhibits distinctive convergence behavior in which speed varies markedly depending on the previously trained mapping relationships, producing curved rather than straight convergence trajectories in two-dimensional models with nonlinear mapping internetworks.
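The Lissajous analogy can be illustrated with a hedged toy model: driving two channels with sinusoidal 'bursts' of different frequencies sweeps the two-dimensional space, and a simple Hebbian outer-product rule then recovers a linear mapping from the paired activity. The Hebbian rule, the frequencies, and the ground-truth matrix A are assumptions for illustration; the paper's actual burst-gated update rule may differ.

```python
import numpy as np

# Hedged toy model of learning mode: sinusoids of different frequencies
# sweep the 2-D space (a Lissajous-style trajectory), and a Hebbian
# outer-product rule imprints the paired activity into the weights.
A = np.array([[0.8, -0.3],
              [0.2,  0.9]])        # assumed ground-truth mapping to recover
W = np.zeros((2, 2))               # internetwork weights, learned from scratch
eta, dt = 0.01, 0.01               # learning rate and time step (illustrative)
f1, f2 = 3.0, 5.0                  # distinct burst frequencies (Hz)

for step in range(20000):          # 200 s of simulated burst driving
    t = step * dt
    u = np.array([np.sin(2 * np.pi * f1 * t),
                  np.sin(2 * np.pi * f2 * t)])   # internal-side burst signals
    x = A @ u                                    # paired external-side signals
    W += eta * dt * np.outer(x, u)               # Hebbian co-activation update

print("learned W:\n", W)
print("target  A:\n", A)           # W converges to A as cross-terms average out
```

Sinusoids of different frequencies are orthogonal over full periods, so the cross-channel Hebbian terms average to zero and W converges to a copy of A; this is the sense in which a Lissajous-style sweep lets a single pair of signals imprint a full two-dimensional mapping.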
The researchers further explore a constrained association mode with predetermined target trajectories, showing that the trajectories traced in the internal space are generated by inverse mapping from the external space, as determined by the forward subnet's learned mapping relationship. This 44-page technical paper, originally submitted in November 2022 and revised in April 2026, presents 22 figures illustrating the model's characteristics and represents a significant departure from conventional neural network architectures by incorporating biological learning principles and spatial warping concepts that could influence future AI system design.
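Under the same illustrative assumptions as the earlier sketch, the constrained mode can be rendered as follows: the external state is clamped to a predetermined target curve (here a circle), and the internal state re-relaxes at each waypoint, so the internal trajectory it traces is the inverse image of the target under the forward subnet. The weights and the target curve are invented for demonstration, not taken from the paper.

```python
import numpy as np

# Hedged sketch of the constrained association mode: the external state
# is clamped to a target trajectory and the internal state relaxes
# toward each clamped point, tracing the inverse image of the curve.

def forward_subnet(u, W):
    """Static forward subnet: instantaneous map from internal to external space."""
    return np.tanh(W @ u)

def relax_to(u, W, x_target, tau=5.0, dt=0.1, steps=300):
    """Gradient relaxation of the internal state against one clamped point."""
    for _ in range(steps):
        y = forward_subnet(u, W)
        u = u - (dt / tau) * (W.T @ ((y - x_target) * (1.0 - y ** 2)))
    return u

W = np.array([[1.2, 0.4],
              [-0.3, 1.0]])                       # illustrative trained weights
ts = np.linspace(0.0, 2 * np.pi, 50)
target = 0.5 * np.column_stack([np.cos(ts), np.sin(ts)])  # external target circle

u, internal_traj = np.zeros(2), []
for x_t in target:                 # walk the clamped external trajectory
    u = relax_to(u, W, x_t)        # warm-start from the previous waypoint
    internal_traj.append(u.copy())
internal_traj = np.asarray(internal_traj)

# The forward images of the relaxed states reproduce the external circle:
err = np.abs(forward_subnet(internal_traj.T, W).T - target).max()
print("max reconstruction error:", err)
```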
- Architecture uses 'warped space' with separate internal/external spaces connected via forward/backward subnets of static neurons, while dynamic neurons reside in the internal space
- Employs biological learning mechanisms: synaptic weight modification through neuronal bursting resembling sinusoidal waves in nerve impulse density
- Operates in two modes: learning mode for weight training using Lissajous curve principles, and association mode with curved convergence trajectories
Why It Matters
Could lead to more biologically plausible AI systems that learn more efficiently and solve problems with human-like reasoning patterns.