Abstract
Background subtraction, or foreground detection, is a challenging problem in video processing. It is essentially a binary classification task that assigns each pixel in a video sequence to either the background or the foreground. Traditional approaches cannot capture deep information in videos recorded in the dynamic environments encountered in real-world applications, and therefore often achieve low accuracy and unsatisfactory performance. In this paper, we introduce a new 3-D atrous convolutional neural network, used as a deep visual feature extractor, and stack convolutional long short-term memory (ConvLSTM) networks on top of it to capture long-term dependencies in video data. We name this novel architecture the 3-D atrous ConvLSTM network. It captures not only deep spatial information but also long-term temporal information in video data. We train the proposed 3-D atrous ConvLSTM network with focal loss to address the class imbalance commonly seen in background subtraction. Experimental results on a wide range of videos demonstrate the effectiveness of our approach and its superiority over existing methods.
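The focal loss mentioned in the abstract down-weights well-classified pixels so that training focuses on hard, minority-class pixels (typically the foreground, which occupies a small fraction of each frame). A minimal NumPy sketch of the binary focal loss is shown below; the hyperparameter values `gamma=2.0` and `alpha=0.25` are common defaults from the focal-loss literature, not values stated in this abstract, and the function name is illustrative:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p : predicted foreground probabilities in (0, 1)
    y : ground-truth labels, 1 = foreground, 0 = background
    gamma, alpha : focusing and class-balance hyperparameters
                   (assumed defaults; not specified in the abstract)
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

# The (1 - p_t)**gamma factor makes a confident correct prediction
# contribute far less loss than an uncertain one:
easy = focal_loss(np.array([0.95]), np.array([1]))
hard = focal_loss(np.array([0.55]), np.array([1]))
```

With `gamma=0` and `alpha=0.5` the expression reduces to half the ordinary cross-entropy, which is why focal loss is often described as a generalization of cross-entropy for imbalanced classes.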
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Article number | 8423055 |
| Pages (from-to) | 43450-43459 |
| Number of pages | 10 |
| Journal | IEEE Access |
| Volume | 6 |
| DOIs | |
| State | Published - Jul 27 2018 |
All Science Journal Classification (ASJC) codes
- General Computer Science
- General Materials Science
- General Engineering
Keywords
- 3D atrous convolution
- Background subtraction
- convolutional LSTM network
- deep learning
- foreground segmentation