A 3D atrous convolutional long short-term memory network for background subtraction

Zhihang Hu, Turki Turki, Nhathai Phan, Jason T.L. Wang

Research output: Contribution to journal › Article › peer-review


Abstract

Background subtraction, or foreground detection, is a challenging problem in video processing. The problem amounts to a binary classification task that labels each pixel in a video sequence as belonging to either the background or the foreground scene. Traditional approaches lack the power to capture deep information in videos recorded in the dynamic environments encountered in real-world applications, and thus often achieve low accuracy and unsatisfactory performance. In this paper, we introduce a new 3-D atrous convolutional neural network, used as a deep visual feature extractor, and stack convolutional long short-term memory (ConvLSTM) networks on top of the feature extractor to capture long-term dependencies in video data. This novel architecture is named a 3-D atrous ConvLSTM network. The new network can capture not only deep spatial information but also long-term temporal information in the video data. We train the proposed 3-D atrous ConvLSTM network with focal loss to tackle the class imbalance problem commonly seen in background subtraction. Experimental results on a wide range of videos demonstrate the effectiveness of our approach and its superiority over existing methods.
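The abstract notes that the network is trained with focal loss to counter the foreground/background class imbalance. As a rough illustration of that objective (not the authors' exact implementation), the following NumPy sketch implements the binary focal loss of Lin et al.; the hyperparameter values `gamma=2.0` and `alpha=0.25` are the commonly used defaults and are assumptions here, not values taken from the paper.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss averaged over pixels.

    p : predicted foreground probabilities in [0, 1]
    y : ground-truth labels (1 = foreground, 0 = background)
    gamma : focusing parameter; larger values down-weight easy pixels
    alpha : class-balance weight applied to the foreground class
    (gamma/alpha defaults follow Lin et al., assumed, not from the paper)
    """
    p = np.clip(p, eps, 1.0 - eps)          # avoid log(0)
    pt = np.where(y == 1, p, 1.0 - p)       # probability of the true class
    at = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-at * (1.0 - pt) ** gamma * np.log(pt)))

# A well-classified ("easy") pixel contributes far less than a
# misclassified ("hard") one, which is the point of the (1 - pt)^gamma term.
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.10]), np.array([1]))
print(easy, hard)
```

With `gamma=0` and `alpha=0.5` the expression reduces to half the standard binary cross-entropy, which is a convenient sanity check.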

Original language: English (US)
Article number: 8423055
Pages (from-to): 43450-43459
Number of pages: 10
Journal: IEEE Access
Volume: 6
DOIs
State: Published - Jul 27 2018

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
  • Materials Science (all)
  • Engineering (all)

Keywords

  • 3D atrous convolution
  • Background subtraction
  • convolutional LSTM network
  • deep learning
  • foreground segmentation

