Deep learning for video compressive sensing

Mu Qiao, Ziyi Meng, Jiawei Ma, Xin Yuan

Research output: Contribution to journal › Article › peer-review

114 Scopus citations


We investigate deep learning for video compressive sensing within the scope of snapshot compressive imaging (SCI). In video SCI, multiple high-speed frames are modulated by different coding patterns, and a low-speed detector then captures the integration of these modulated frames. In this manner, each captured measurement frame incorporates the information of all the coded frames, and reconstruction algorithms are then employed to recover the high-speed video. In this paper, we build a video SCI system using a digital micromirror device and develop both an end-to-end convolutional neural network (E2E-CNN) and a Plug-and-Play (PnP) framework with deep denoising priors to solve the inverse problem. We compare them with the iterative baseline algorithm GAP-TV and the state-of-the-art DeSCI on real data. For a fixed optical setup, a well-trained E2E-CNN can provide video-rate, high-quality reconstruction. The PnP deep denoising method can generate decent results without task-specific pre-training and is faster than conventional iterative algorithms. Considering speed, accuracy, and flexibility, the PnP deep denoising method may serve as a baseline in video SCI reconstruction. To conduct quantitative analysis of these reconstruction algorithms, we further perform a simulation comparison on synthetic data. We hope that this study contributes to the application of SCI cameras in daily life.
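The forward model and PnP reconstruction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the coding patterns are random binary masks, the PnP loop is a generic GAP-style alternation between a measurement-consistency projection and a denoiser, and the identity "denoiser" below is a placeholder for the deep denoising prior used in the paper (function names here are hypothetical).

```python
import numpy as np

def sci_forward(frames, masks):
    """Video SCI forward model: each high-speed frame is modulated
    element-wise by its coding pattern (mask), and the modulated
    frames are summed into a single 2-D snapshot measurement."""
    return np.sum(frames * masks, axis=0)

def pnp_gap(y, masks, denoise, n_iter=10):
    """Generic GAP-style Plug-and-Play loop (a sketch, not the paper's
    exact algorithm): alternate a projection onto the measurement
    constraint with an off-the-shelf denoising step."""
    norm = np.sum(masks ** 2, axis=0) + 1e-9   # per-pixel diagonal of C C^T
    x = masks * (y / norm)                     # simple back-projection init
    for _ in range(n_iter):
        theta = denoise(x)                     # prior step (deep denoiser in the paper)
        resid = (y - sci_forward(theta, masks)) / norm
        x = theta + masks * resid              # data-fidelity projection
    return x

rng = np.random.default_rng(0)
B, H, W = 8, 16, 16                            # 8 coded frames per snapshot
frames = rng.random((B, H, W))                 # stand-in high-speed video
masks = rng.integers(0, 2, size=(B, H, W)).astype(float)
y = sci_forward(frames, masks)                 # one captured measurement

identity = lambda v: v                         # placeholder denoiser
x_rec = pnp_gap(y, masks, identity)            # recovered frame stack, shape (B, H, W)
```

The projection step makes the estimate exactly consistent with the measurement at each iteration; swapping `identity` for a trained deep denoiser is what turns this skeleton into the PnP method compared in the paper.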

Original language: English (US)
Article number: 030801
Journal: APL Photonics
Issue number: 3
State: Published - Mar 1, 2020

All Science Journal Classification (ASJC) codes

  • Atomic and Molecular Physics, and Optics
  • Computer Networks and Communications


