TY - GEN
T1 - Reconstruction-free deep convolutional neural networks for partially observed images
AU - Nair, Arun
AU - Liu, Luoluo
AU - Rangamani, Akshay
AU - Chin, Peter
AU - Lediju Bell, Muyinatu A.
AU - Tran, Trac D.
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - Conventional image discrimination tasks are performed on fully observed images. In challenging real imaging scenarios, where sensing systems are energy demanding, must operate with limited bandwidth and exposure-time budgets, or contain defective pixels, the collected data often suffers from missing information, which makes these tasks extremely hard. In this paper, we leverage Convolutional Neural Networks (CNNs) to extract information from partially observed images. While pre-trained CNNs fail significantly even with a small percentage of the input missing, our proposed framework overcomes this degradation after training on fully observed and partially observed images at a few observation ratios. We demonstrate that our method is reconstruction-free, retraining-free, and generalizable to observation ratios not seen during training, and that it remains effective in two different visual tasks - image classification and object detection. Our framework performs well even for test images with only 10% of pixels available and outperforms the reconstruct-then-classify pipeline at these small observation fractions.
AB - Conventional image discrimination tasks are performed on fully observed images. In challenging real imaging scenarios, where sensing systems are energy demanding, must operate with limited bandwidth and exposure-time budgets, or contain defective pixels, the collected data often suffers from missing information, which makes these tasks extremely hard. In this paper, we leverage Convolutional Neural Networks (CNNs) to extract information from partially observed images. While pre-trained CNNs fail significantly even with a small percentage of the input missing, our proposed framework overcomes this degradation after training on fully observed and partially observed images at a few observation ratios. We demonstrate that our method is reconstruction-free, retraining-free, and generalizable to observation ratios not seen during training, and that it remains effective in two different visual tasks - image classification and object detection. Our framework performs well even for test images with only 10% of pixels available and outperforms the reconstruct-then-classify pipeline at these small observation fractions.
KW - Compressed Measurements
KW - Convolutional Neural Networks
KW - Deep Learning
KW - Image Classification
KW - Object Detection
UR - http://www.scopus.com/inward/record.url?scp=85063090504&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063090504&partnerID=8YFLogxK
U2 - 10.1109/GlobalSIP.2018.8646498
DO - 10.1109/GlobalSIP.2018.8646498
M3 - Conference contribution
AN - SCOPUS:85063090504
T3 - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
SP - 400
EP - 404
BT - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018
Y2 - 26 November 2018 through 29 November 2018
ER -