TY - GEN
T1 - BD-NET
T2 - 17th IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2018
AU - He, Zhezhi
AU - Angizi, Shaahin
AU - Rakin, Adnan Siraj
AU - Fan, Deliang
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/7
Y1 - 2018/8/7
N2 - In this work, we propose a multiplication-less deep convolutional neural network, called BD-NET. To the best of our knowledge, BD-NET is the first to use a binarized depthwise separable convolution block as a drop-in replacement for the conventional spatial convolution in a deep convolutional neural network (CNN). In BD-NET, the computation-expensive convolution operations (i.e., multiplication and accumulation) are converted into hardware-friendly addition/subtraction operations. We first investigate and analyze the performance of BD-NET in terms of accuracy, parameter size, and computation cost w.r.t. various network configurations. The experimental results then show that our proposed BD-NET with binarized depthwise separable convolution achieves even higher inference accuracy than its baseline CNN counterpart with full-precision conventional convolution layers on the CIFAR-10 dataset. From the perspective of hardware implementation, the convolution layer of BD-NET achieves up to 97.2%, 88.9%, and 99.4% reductions in computation energy, memory usage, and chip area, respectively.
AB - In this work, we propose a multiplication-less deep convolutional neural network, called BD-NET. To the best of our knowledge, BD-NET is the first to use a binarized depthwise separable convolution block as a drop-in replacement for the conventional spatial convolution in a deep convolutional neural network (CNN). In BD-NET, the computation-expensive convolution operations (i.e., multiplication and accumulation) are converted into hardware-friendly addition/subtraction operations. We first investigate and analyze the performance of BD-NET in terms of accuracy, parameter size, and computation cost w.r.t. various network configurations. The experimental results then show that our proposed BD-NET with binarized depthwise separable convolution achieves even higher inference accuracy than its baseline CNN counterpart with full-precision conventional convolution layers on the CIFAR-10 dataset. From the perspective of hardware implementation, the convolution layer of BD-NET achieves up to 97.2%, 88.9%, and 99.4% reductions in computation energy, memory usage, and chip area, respectively.
KW - Binarized neural network
KW - Multiplication-less
UR - http://www.scopus.com/inward/record.url?scp=85052128137&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052128137&partnerID=8YFLogxK
U2 - 10.1109/ISVLSI.2018.00033
DO - 10.1109/ISVLSI.2018.00033
M3 - Conference contribution
AN - SCOPUS:85052128137
SN - 9781538670996
T3 - Proceedings of IEEE Computer Society Annual Symposium on VLSI, ISVLSI
SP - 130
EP - 135
BT - Proceedings - 2018 IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2018
PB - IEEE Computer Society
Y2 - 9 July 2018 through 11 July 2018
ER -