Compressive single-pixel imaging via a wavelength-multiplexed spatially incoherent diffractive optical processor
Researchers pair wavelength-multiplexed diffractive optics with a shallow two-hidden-layer neural network to dramatically accelerate single-pixel imaging.
A research team from UCLA, led by Professor Aydogan Ozcan, has published a breakthrough in computational imaging that dramatically accelerates single-pixel imaging (SPI). Their system, detailed in a new arXiv paper, tackles SPI's fundamental bottleneck: low measurement efficiency and long data-acquisition times. The innovation lies in a hybrid design that offloads complex linear transformations to a passive, static optical component: a wavelength-multiplexed, spatially incoherent diffractive processor. Configured via data-free optimization, this processor acts as a pre-programmed transformation matrix, simultaneously encoding the spatial information of an illuminated object across multiple wavelengths of light.
The encoded light is then captured by a single-pixel detector, and the resulting spectral data is fed into a compact, jointly trained digital artificial neural network (ANN) with just two hidden nonlinear layers. This shallow ANN rapidly decodes the spectral information to reconstruct the original high-resolution image. The team validated their concept both numerically and with a proof-of-concept experiment using an array of LEDs. By moving the computationally heavy linear transformation into the optical domain and keeping the digital neural network small, the system achieves what the authors term 'compressive SPI,' significantly boosting speed and efficiency.
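The encode-then-decode pipeline described above can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions, not the paper's implementation: a fixed random matrix stands in for the optimized diffractive processor's linear transform, the measurement and layer sizes are arbitrary, and the decoder weights are random placeholders (in the actual system, the optical transform and the ANN are jointly optimized).

```python
import numpy as np

# Toy sketch of compressive single-pixel imaging (SPI). Assumptions:
#  - the diffractive processor acts as a fixed linear encoder A (a random
#    matrix here stands in for the optimized optical transform),
#  - K spectral channels compress an N-pixel scene (K << N),
#  - a shallow MLP with two hidden nonlinear layers decodes the readings.
# All names and sizes are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

N = 28 * 28          # number of scene pixels
K = 64               # number of wavelength channels (compressive: K << N)

# Optical encoder: one single-pixel detector reading per wavelength channel.
A = rng.standard_normal((K, N)) / np.sqrt(N)

def encode(scene):
    """Simulate the single-pixel detector's K spectral measurements."""
    return A @ scene.ravel()

def decode(y, weights):
    """Two-hidden-layer MLP decoder (weights would be jointly trained)."""
    W1, b1, W2, b2, W3, b3 = weights
    h1 = np.maximum(0.0, W1 @ y + b1)   # hidden layer 1 (ReLU)
    h2 = np.maximum(0.0, W2 @ h1 + b2)  # hidden layer 2 (ReLU)
    return (W3 @ h2 + b3).reshape(28, 28)

# Randomly initialized decoder weights (untrained placeholders).
H = 256
weights = (rng.standard_normal((H, K)) * 0.1, np.zeros(H),
           rng.standard_normal((H, H)) * 0.1, np.zeros(H),
           rng.standard_normal((N, H)) * 0.1, np.zeros(N))

scene = rng.random((28, 28))
y = encode(scene)          # K measurements instead of N pixel readings
img = decode(y, weights)
print(y.shape, img.shape, f"compression ratio {N // K}x")
```

The key point the sketch captures is that only K spectral readings (here 64) are acquired instead of N per-pixel measurements (here 784), which is where the speedup over conventional raster-scanned SPI comes from.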
This framework represents a major shift in computational imaging architecture. It demonstrates that deep learning can be used not just for processing data, but for co-designing the physical optical hardware that collects it. The static nature of the diffractive processor after optimization means the system is efficient and potentially low-cost for deployment. The researchers highlight its potential utility in fields where traditional cameras are impractical, bulky, or too slow, such as biomedical imaging inside scattering tissues, lightweight autonomous devices, and remote sensing applications.
- Hybrid optical-electronic system uses a static diffractive processor for wavelength-multiplexed encoding, paired with a shallow two-hidden-layer ANN for decoding.
- Overcomes the traditional speed limit of single-pixel imaging by performing compressive measurements, dramatically reducing data-acquisition time.
- Proof-of-concept was experimentally validated using an LED array, moving from simulation to a tangible hardware prototype.
Why It Matters
Enables high-speed, efficient imaging in scenarios where conventional cameras fail, like medical endoscopy or drone-based remote sensing.