Sparse BD-Net: A multiplication-less DNN with sparse binarized depth-wise separable convolution

Zhezhi He, Li Yang, Shaahin Angizi, Adnan Siraj Rakin, Deliang Fan

Research output: Contribution to journal › Article › peer-review

Abstract

In this work, we propose a multiplication-less binarized depthwise-separable convolutional neural network, called BD-Net. BD-Net uses a binarized depthwise-separable convolution block as a drop-in replacement for the conventional spatial convolution in deep convolutional neural networks (DNNs). In BD-Net, the computationally expensive convolution operations (i.e., multiply-and-accumulate) are converted into energy-efficient addition/subtraction operations. To further compress the model size while keeping addition/subtraction as the dominant computation, we propose a new sparse binarization method with a hardware-oriented structured sparsity pattern. To successfully train such a sparse BD-Net, we propose and leverage two techniques: (1) a modified group-lasso regularization whose group size matches the capacity of the basic computing core in the accelerator, and (2) a weight-penalty clipping technique that resolves the disharmony between weight binarization and lasso regularization. The experimental results show that the proposed sparse BD-Net achieves comparable or even better inference accuracy than the full-precision CNN baseline. Beyond that, we design a BD-Net-customized processing-in-memory accelerator using SOT-MRAM, which offers high flexibility in channel expansion and high computational parallelism. Through a detailed analysis from both software and hardware perspectives, we provide intuitive design guidance for software/hardware co-design of DNN acceleration on mobile embedded systems. Note that this journal submission is an extended version of our previously published paper at ISVLSI 2018 [24].
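To make the two ideas in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: (1) a depthwise-separable convolution block whose weights are binarized to {-1, +1} via a straight-through estimator, so the convolutions reduce to additions/subtractions of activations; and (2) a group-lasso penalty whose group size is chosen to match the capacity of a basic computing core in the accelerator. All class and function names (BinarizeSTE, BinarizedDWSeparableConv, group_lasso_penalty) and the group_size parameter are illustrative assumptions.

```python
# Hypothetical sketch of a binarized depthwise-separable block and a
# hardware-aligned group-lasso penalty (not the paper's reference code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator in the backward pass."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        # Map weights to {-1, +1}; zeros are sent to +1 for simplicity.
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Standard STE: pass gradients through only where |w| <= 1.
        return grad_out * (w.abs() <= 1).float()


class BinarizedDWSeparableConv(nn.Module):
    """Depthwise + pointwise convolution with {-1, +1} weights.

    With binary weights, each multiply-accumulate collapses into an
    addition or subtraction of the input activations.
    """

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.dw_weight = nn.Parameter(torch.randn(in_ch, 1, kernel_size, kernel_size) * 0.1)
        self.pw_weight = nn.Parameter(torch.randn(out_ch, in_ch, 1, 1) * 0.1)
        self.stride = stride
        self.padding = kernel_size // 2
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        dw_b = BinarizeSTE.apply(self.dw_weight)
        pw_b = BinarizeSTE.apply(self.pw_weight)
        x = F.conv2d(x, dw_b, stride=self.stride, padding=self.padding,
                     groups=x.shape[1])   # depthwise conv, binary weights
        x = F.conv2d(x, pw_b)             # pointwise (1x1) conv, binary weights
        return F.relu(self.bn(x))


def group_lasso_penalty(weight, group_size):
    """Group-lasso term with groups sized to the computing core's capacity.

    `group_size` stands in for the number of weights one basic computing core
    processes at a time; groups driven to zero can then be skipped wholesale.
    """
    flat = weight.reshape(-1)
    pad = (-flat.numel()) % group_size
    if pad:
        flat = F.pad(flat, (0, pad))
    groups = flat.reshape(-1, group_size)
    return groups.norm(p=2, dim=1).sum()
```

In a training loop one might add, say, `lambda_g * group_lasso_penalty(m.pw_weight, 16)` for each such block to the task loss, with 16 as an assumed core capacity. The paper's weight-penalty clipping technique, which prevents the lasso term from conflicting with weight binarization, is deliberately omitted from this sketch.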

Original language: English (US)
Article number: 15
Journal: ACM Journal on Emerging Technologies in Computing Systems
Volume: 16
Issue number: 2
DOIs
State: Published - Jan 29 2020
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Software
  • Hardware and Architecture
  • Electrical and Electronic Engineering

Keywords

  • Deep neural network
  • in-memory computing
  • model compression
