Research & Papers

Receding-Horizon Maximum-Likelihood Estimation of Neural-ODE Dynamics and Thresholds from Event Cameras

A new algorithm trains continuous-time dynamics models online from sparse, asynchronous event-camera data.

Deep Dive

A team of researchers has introduced a method for training neural ordinary differential equations (Neural ODEs) online using data from event cameras. Event cameras are bio-inspired sensors that output asynchronous, pixel-level brightness changes instead of full frames, offering ultra-low latency and high dynamic range. The core challenge is performing online maximum-likelihood estimation of both the continuous-time dynamics (modeled by a Neural ODE) and the sensor's unknown contrast threshold from this sparse, event-based stream. The proposed algorithm treats the events as a history-dependent marked point process and formulates a log-likelihood consisting of an event term and a compensator integral.
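The event-term-plus-compensator structure described above is standard for point-process likelihoods and can be sketched in a few lines. This is a minimal illustration under assumed conditions, not the paper's implementation: the rate function, quadrature grid, and toy numbers are all hypothetical stand-ins.

```python
import numpy as np

def point_process_loglik(event_times, rate_fn, t0, t1, n_quad=200):
    """Log-likelihood of a temporal point process on [t0, t1]:
    the sum of log-rates at the observed event times minus the
    compensator integral of the rate over the whole window
    (approximated here by the trapezoid rule)."""
    event_term = np.sum(np.log(rate_fn(np.asarray(event_times))))
    grid = np.linspace(t0, t1, n_quad)
    vals = rate_fn(grid)
    # Trapezoid-rule approximation of the compensator integral.
    compensator = np.sum((vals[:-1] + vals[1:]) / 2.0) * (grid[1] - grid[0])
    return event_term - compensator

# Toy check: constant rate lambda = 2 on [0, 1] with events at
# t = 0.25 and t = 0.75 gives 2*log(2) - 2 ~ -0.614.
ll = point_process_loglik([0.25, 0.75],
                          lambda t: 2.0 * np.ones_like(t),
                          0.0, 1.0)
```

Maximizing this quantity over the dynamics parameters and the contrast threshold is what the paper's estimator does, with the integral made cheap by the subsampling trick described next.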

The technical innovation is a receding-horizon estimator that performs a few gradient steps per update on a sliding window of recent events. For efficient streaming, it maintains only two scalars per pixel, the last event time and the estimated log-intensity at that time, and approximates the otherwise costly compensator integral by Monte Carlo subsampling of pixels. Synthetic experiments validate the method's ability to jointly recover the dynamics parameters and the contrast threshold, and characterize the accuracy-latency trade-off as a function of window length. The work bridges modern continuous-time AI models (Neural ODEs) and next-generation, efficient sensing hardware, paving the way for adaptive perception in robotics and autonomous vehicles.
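The two-scalars-per-pixel state and the Monte Carlo compensator estimate might look roughly like the following. This is a hedged sketch, not the authors' code: the sensor size, the constant-rate stand-in for the true per-pixel rate, and the sample count are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 4, 4  # hypothetical tiny sensor; real arrays are far larger
# The only per-pixel streaming state: last event time and the
# estimated log-intensity at that time (two scalars per pixel).
last_t = np.zeros((H, W))
last_logI = np.zeros((H, W))

def on_event(t, x, y, polarity, threshold):
    """Streaming per-pixel update: each event signals that the
    log-intensity moved by +/- threshold since this pixel's last
    event, so only the two stored scalars change."""
    last_logI[y, x] += polarity * threshold
    last_t[y, x] = t

def mc_compensator(rate_per_pixel, t_window, n_samples=8):
    """Monte Carlo estimate of the compensator integral: sample a
    random subset of pixels, average their rates, and scale by the
    pixel count and window length (constant-in-time rates assumed
    here purely to keep the sketch short)."""
    idx = rng.choice(H * W, size=n_samples, replace=False)
    sampled = rate_per_pixel.ravel()[idx]
    return sampled.mean() * H * W * t_window

# Example: one ON event at pixel (x=1, y=2) at t = 0.5, threshold 0.2.
on_event(0.5, 1, 2, +1, 0.2)
# For a uniform unit rate the subsampled estimate is exact: 16 * 2.0.
comp = mc_compensator(np.ones((H, W)), t_window=2.0)
```

The design point is that memory stays O(pixels) with a tiny constant, and the cost of each gradient step scales with the sample count rather than the full sensor resolution.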

Key Points
  • Enables online training of Neural ODEs from asynchronous event-camera streams, a longstanding challenge.
  • Uses a receding-horizon window and Monte Carlo subsampling, requiring storage of only 2 scalars per pixel for real-time operation.
  • Jointly estimates both the continuous system dynamics and the sensor's unknown contrast threshold via maximum-likelihood.

Why It Matters

Unlocks real-time, adaptive AI for robotics and autonomous systems using efficient, next-generation event-based vision sensors.