Visualising the Attractor Landscape of Neural Cellular Automata
A new paper applies manifold learning and topological data analysis to pry open the NCA black box.
A team of researchers including James Stovold, Mia-Katrin Kvalsund, Harald Michael Ludwig, and Varun Sharma has published a paper on arXiv titled 'Visualising the Attractor Landscape of Neural Cellular Automata'. The work addresses a growing challenge in AI research: as Neural Cellular Automata (NCAs) are applied to complex tasks beyond simple artificial life models, their lack of interpretability becomes a serious obstacle. The paper aims to pry open this 'black box' and reveal what these systems have actually learned.
To achieve this, the researchers applied a suite of analytical techniques from two fields. From manifold learning, they used principal component analysis (PCA) along with dense and sparse autoencoders; from topological data analysis (TDA), they employed persistent homology. Their goal was to capture and visualize the NCA's underlying 'attractor landscape': the behavioral manifold that governs its state transitions.

The results revealed a striking dichotomy. When the entire NCA state is treated as a single macroscopic data point, the underlying manifold is often quite simple and can be captured effectively. When the analysis zooms in to the microscopic level of individual cells, however, the manifold becomes highly complex and requires more sophisticated techniques to interpret meaningfully.
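As a rough illustration of the macroscopic side of this analysis, the sketch below flattens each full grid state of an NCA rollout into a single vector and projects the resulting trajectory with PCA. The array shapes, the random stand-in data, and all variable names here are assumptions for illustration only; the paper's actual models, rollouts, and preprocessing are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical NCA rollout: T time steps of an H x W grid with C channels.
# In practice these states would come from running a trained NCA model.
T, H, W, C = 200, 32, 32, 16
rng = np.random.default_rng(0)
states = rng.standard_normal((T, H, W, C))  # stand-in for real NCA states

# Macroscopic view: treat each full grid state as one point in R^(H*W*C).
macro = states.reshape(T, -1)

# Project the trajectory of full states onto its top two principal axes.
pca = PCA(n_components=2)
trajectory_2d = pca.fit_transform(macro)    # shape (T, 2)

print("explained variance ratio:", pca.explained_variance_ratio_)
```

If the paper's macroscopic finding holds for a given model, a low-dimensional projection like this one would retain most of the variance, which is the sense in which the full-state manifold is "simple".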
This research, submitted to the ALIFE 2026 conference, represents a crucial step toward making NCAs more transparent and trustworthy. By providing concrete methods to visualize and analyze their learned behaviors, it enables engineers and scientists to better understand, debug, and ultimately design more reliable and effective NCA-based systems for future applications in graphics, materials science, and complex system simulation.
- Applies manifold learning (PCA, autoencoders) and topological data analysis (persistent homology) to analyze Neural Cellular Automata (NCAs).
- Finds a simple behavioral manifold at the macroscopic (full-state) level, but high complexity at the microscopic (per-cell) level (see the sketch after this list).
- Provides new, concrete methods for interpreting the 'black box' of NCAs as they move beyond toy models into practical applications.
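For the microscopic, per-cell view, persistent homology summarizes the shape of the cloud of per-cell state vectors by tracking connected components and loops across scales. Below is a minimal sketch using the open-source ripser package on random stand-in data; the grid dimensions, channel count, and subsampling strategy are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from ripser import ripser  # pip install ripser

# Microscopic view: each cell's channel vector is one point in R^C.
# Stand-in data; in practice, collect per-cell states from an NCA rollout.
H, W, C = 32, 32, 16
rng = np.random.default_rng(0)
cells = rng.standard_normal((H * W, C))

# Subsample to keep the Vietoris-Rips computation tractable.
idx = rng.choice(len(cells), size=256, replace=False)
diagrams = ripser(cells[idx], maxdim=1)["dgms"]

# diagrams[0]: H0 (connected components), diagrams[1]: H1 (loops).
for dim, dgm in enumerate(diagrams):
    print(f"H{dim}: {len(dgm)} persistence pairs")
```

A complex per-cell manifold, in the paper's sense, would show up here as many persistent features rather than a handful of dominant ones.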
Why It Matters
This work provides essential tools for interpreting complex AI systems, making them more transparent and reliable for real-world engineering tasks.