Fuzzy Encoding-Decoding to Improve Spiking Q-Learning Performance in Autonomous Driving
New architecture closes the performance gap between spiking and traditional neural networks on driving benchmarks.
A research team including Aref Ghoreishee, Abhishek Mishra, and Lifeng Zhou has published a paper introducing a novel 'fuzzy encoder-decoder' architecture designed to overcome key limitations of spiking neural networks (SNNs) in autonomous driving. The core problem they address is the performance gap between spiking and traditional deep Q-networks (DQNs). When dense visual data from cameras is converted into the sparse, binary spike trains that SNNs operate on, significant information is lost. Furthermore, the Q-value estimates produced by spiking networks are often weakly discriminative, making it hard for the learning agent to distinguish the best action from its alternatives.
Their solution employs a two-part system. The encoder uses trainable fuzzy membership functions to create richer, population-based spike representations from raw sensor data. The decoder then uses a lightweight neural network to reconstruct continuous, precise Q-values from those spiking outputs. This end-to-end approach preserves critical information through the spiking process. In experiments on the standard HighwayEnv autonomous driving benchmark, their architecture substantially improved decision-making accuracy, effectively closing the performance gap with conventional, more power-hungry non-spiking DQNs. This demonstrates a clear path toward leveraging the ultra-low-power potential of neuromorphic hardware for real-time autonomous systems.
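To make the encoding idea concrete, here is a minimal sketch of fuzzy population encoding: each input value is scored against a bank of Gaussian membership functions, and the resulting membership degrees drive per-timestep spike probabilities. The class name, parameter values, and Gaussian shape are illustrative assumptions, not the paper's exact (trainable) implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class FuzzyPopulationEncoder:
    """Illustrative sketch (not the authors' code): encode a scalar
    observation into a population of spike trains using Gaussian fuzzy
    membership functions. In the paper these membership parameters are
    trainable; here they are fixed for simplicity."""

    def __init__(self, n_neurons=8, lo=0.0, hi=1.0, width=0.15):
        # Evenly spaced membership centers over the observation range
        self.centers = np.linspace(lo, hi, n_neurons)
        self.width = width

    def membership(self, x):
        # Degree of membership of x in each fuzzy set, in [0, 1]
        return np.exp(-0.5 * ((x - self.centers) / self.width) ** 2)

    def encode(self, x, timesteps=16):
        # Use membership degrees as per-timestep Bernoulli spike
        # probabilities: a graded, population-based representation
        # rather than a single hard threshold.
        p = self.membership(x)
        return (rng.random((timesteps, p.size)) < p).astype(np.uint8)

enc = FuzzyPopulationEncoder()
spikes = enc.encode(0.42)
print(spikes.shape)  # (16, 8)
```

Because nearby membership functions overlap, a single input activates several neurons with graded intensity, which is what lets the spike train retain more information than one-hot or threshold encodings.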
- Architecture uses a fuzzy encoder to create rich spike representations from visual data, reducing information loss.
- A lightweight neural decoder reconstructs precise Q-values, solving the problem of weakly discriminative spike-based estimates.
- Tested on HighwayEnv, it closes the performance gap between spiking and traditional deep Q-networks for driving tasks.
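The decoding side can be sketched just as compactly: average the output population's spikes over the simulation window and apply a small learned readout to recover continuous Q-values. The layer sizes, rate-decoding choice, and random weights below are assumptions for illustration, standing in for the paper's lightweight neural decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: 32 output neurons, 5 driving actions
# (e.g. lane changes, speed up/down, idle), 16 simulation timesteps.
n_out_neurons, n_actions, timesteps = 32, 5, 16

# Trainable linear readout (randomly initialized here for the sketch)
W = rng.normal(0.0, 0.1, size=(n_actions, n_out_neurons))
b = np.zeros(n_actions)

# Stand-in for the SNN's binary output spikes over the window
out_spikes = (rng.random((timesteps, n_out_neurons)) < 0.3).astype(float)

# Rate decoding: average firing per neuron, then a linear map yields
# continuous Q-value estimates instead of coarse spike counts.
rates = out_spikes.mean(axis=0)
q_values = W @ rates + b
action = int(np.argmax(q_values))  # greedy action selection
print(q_values.shape)  # (5,)
```

The key point this illustrates is that the decoder's output is real-valued, so small differences between actions' Q-values survive, addressing the weak discriminability of raw spike-based estimates.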
Why It Matters
It makes ultra-low-power neuromorphic chips a more viable hardware platform for real-time, vision-based autonomous driving AI.