Research & Papers

Attractor Patch Networks: Reducing Catastrophic Forgetting with Routed Low-Rank Patch Experts

Scientists create a smarter AI brain that adapts without losing its memory.

Deep Dive

Researchers have developed a new component called the Attractor Patch Network (APN) for AI language models. It replaces a standard, dense block of the model with a bank of specialized low-rank 'patches'. A router selects only a few relevant patches for each token, making the model more efficient and adaptable. In tests, this design dramatically reduced 'catastrophic forgetting': models could learn new tasks while retaining 2.6 times more of their original knowledge than with standard fine-tuning methods.
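The paper's exact formulation isn't given here, but the routing-plus-patches idea can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration assuming a mixture-of-experts-style design: each 'patch' is a low-rank residual update, and a learned router picks the top-K patches per token. All names, dimensions, and the gating scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # hidden dimension of the token vector (assumed)
R = 2  # rank of each low-rank patch (assumed)
P = 4  # number of patches in the bank (assumed)
K = 2  # patches activated per token, i.e. top-K routing (assumed)

# Each patch p applies a low-rank residual update: x -> x + B[p] @ (A[p] @ x)
A = rng.standard_normal((P, R, D)) * 0.1  # down-projections (D -> R)
B = rng.standard_normal((P, D, R)) * 0.1  # up-projections (R -> D)
W_route = rng.standard_normal((P, D)) * 0.1  # router: one score per patch

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def apn_layer(x):
    """Route one token vector x through its top-K patches and mix the outputs."""
    scores = W_route @ x              # (P,) routing logits, one per patch
    topk = np.argsort(scores)[-K:]    # indices of the K highest-scoring patches
    gates = softmax(scores[topk])     # renormalize gates over the selected patches
    out = x.copy()                    # residual connection keeps the base signal
    for g, p in zip(gates, topk):
        out += g * (B[p] @ (A[p] @ x))  # sparse, gated low-rank update
    return out

x = rng.standard_normal(D)
y = apn_layer(x)
```

Because only K of the P patches are touched per token, an update for a new task can be confined to a few patches, which is one plausible mechanism for the forgetting reduction the summary describes.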

Why It Matters

This could enable more flexible and durable AI systems that continuously learn from new data streams.