Abstract:
As a neuromorphic vision sensor with ultra-high temporal resolution, the spike camera shows great potential for high-speed imaging. To capture color information of dynamic scenes, the color spike camera (CSC) has been developed with a Bayer-pattern color filter array (CFA) on the sensor. Some spike camera reconstruction methods train end-to-end models on massive pairs of synthetic data. However, there are gaps between synthetic and real-world captured data, and the distribution of the training data affects model generalizability. In this paper, we propose a zero-shot learning-based method for CSC reconstruction that restores color images from a Bayer-pattern spike stream without pre-training. Since the Bayer-pattern spike stream consists of binary signal arrays with missing pixels, we propose to leverage temporally neighboring spike signals at the frame, pixel, and interval levels to restore the color channels. In particular, we employ a zero-shot learning-based scheme that iteratively refines the output using temporally neighboring spike stream clips. To generate high-quality pseudo-labels, we propose to exploit temporally neighboring pixels along the motion direction to estimate the missing pixels. In addition, a temporally neighboring spike interval-based representation is developed to extract temporal and color features from the binary Bayer-pattern spike stream. Experimental results on real-world captured data demonstrate that our method restores color images with better visual quality than the compared methods.
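The abstract refers to an interval-based representation of the binary Bayer-pattern spike stream. The sketch below is not the authors' implementation; it is a minimal illustration, under assumed shapes and an assumed RGGB layout, of the generic idea that per-pixel intensity can be estimated from the inter-spike interval around a reference frame and then separated into sparse color planes according to the CFA.

```python
# Minimal sketch (not the paper's method): inter-spike-interval intensity
# estimation from a binary spike stream, followed by a Bayer split.
# The (T, H, W) layout, RGGB pattern, and constant C are assumptions.
import numpy as np

def isi_intensity(spikes: np.ndarray, t: int, C: float = 1.0) -> np.ndarray:
    """spikes: (T, H, W) binary array; returns an (H, W) intensity estimate
    at frame t, proportional to 1 / (interval between surrounding spikes)."""
    T, H, W = spikes.shape
    prev = np.full((H, W), -1, dtype=np.int64)   # last spike at or before t
    nxt = np.full((H, W), T, dtype=np.int64)     # first spike after t
    for k in range(t, -1, -1):
        mask = (prev < 0) & (spikes[k] > 0)
        prev[mask] = k
        if not (prev < 0).any():
            break
    for k in range(t + 1, T):
        mask = (nxt == T) & (spikes[k] > 0)
        nxt[mask] = k
        if not (nxt == T).any():
            break
    interval = np.clip(nxt - prev, 1, None).astype(np.float64)
    return C / interval  # shorter interval -> brighter pixel

def bayer_planes(img: np.ndarray) -> dict:
    """Split an (H, W) mosaiced estimate into sparse R/G/B planes,
    assuming an RGGB Bayer layout (missing pixels left as zeros)."""
    r = np.zeros_like(img); g = np.zeros_like(img); b = np.zeros_like(img)
    r[0::2, 0::2] = img[0::2, 0::2]
    g[0::2, 1::2] = img[0::2, 1::2]
    g[1::2, 0::2] = img[1::2, 0::2]
    b[1::2, 1::2] = img[1::2, 1::2]
    return {"R": r, "G": g, "B": b}

# Usage with a synthetic spike stream for a quick check.
rng = np.random.default_rng(0)
spikes = (rng.random((200, 64, 64)) < 0.1).astype(np.uint8)
planes = bayer_planes(isi_intensity(spikes, t=100))
```

In the paper, the missing pixels of each color plane are further filled using temporally neighboring pixels along the motion direction and refined by the zero-shot iterative scheme; the sketch only covers the interval-to-intensity step.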
Source:
IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING
ISSN: 2573-0436
Year: 2025
Volume: 11
Page: 129-141