In this work, we propose a multiplication-less deep convolutional neural network, called BD-NET. To the best of our knowledge, BD-NET is the first to use a binarized depthwise separable convolution block as a drop-in replacement for conventional spatial convolution in deep convolutional neural networks (CNNs). In BD-NET, the computation-expensive convolution operations (i.e., multiply-and-accumulate) are converted into hardware-friendly addition/subtraction operations. We first investigate and analyze the performance of BD-NET in terms of accuracy, parameter size, and computation cost with respect to various network configurations. The experimental results then show that our proposed BD-NET with binarized depthwise separable convolution achieves even higher inference accuracy than its baseline CNN counterpart with full-precision conventional convolution layers on the CIFAR-10 dataset. From the perspective of hardware implementation, the convolution layers of BD-NET achieve up to 97.2%, 88.9%, and 99.4% reductions in computation energy, memory usage, and chip area, respectively.
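To illustrate the core idea, the sketch below shows how binarizing the weights of a depthwise separable convolution to ±1 reduces every multiply-and-accumulate to pure addition/subtraction. This is a minimal NumPy illustration assuming simple sign binarization, stride 1, and 'valid' padding; the paper's actual binarization scheme and layer details may differ.

```python
import numpy as np

def binarize(w):
    # Sign binarization: weights become +1/-1, so each "multiplication"
    # with an activation is just that activation with a sign flip,
    # i.e. an addition or subtraction in the accumulator.
    return np.where(w >= 0, 1.0, -1.0)

def binary_depthwise_separable_conv(x, w_dw, w_pw):
    """Binarized depthwise separable convolution (illustrative).

    x:    input feature map, shape (C, H, W)
    w_dw: depthwise filters, shape (C, k, k), one filter per channel
    w_pw: pointwise (1x1) filters, shape (C_out, C)
    """
    C, H, W = x.shape
    k = w_dw.shape[1]
    b_dw = binarize(w_dw)
    b_pw = binarize(w_pw)
    Ho, Wo = H - k + 1, W - k + 1  # 'valid' padding, stride 1
    dw_out = np.zeros((C, Ho, Wo))
    for c in range(C):          # depthwise stage: per-channel spatial conv
        for i in range(Ho):
            for j in range(Wo):
                patch = x[c, i:i + k, j:j + k]
                # With +-1 weights, each term is +patch or -patch:
                # the whole sum is additions/subtractions only.
                dw_out[c, i, j] = np.sum(patch * b_dw[c])
    # Pointwise stage: 1x1 cross-channel combination, again add/subtract only
    pw_out = np.tensordot(b_pw, dw_out, axes=([1], [0]))
    return pw_out
```

With full-precision weights each output pixel costs k*k multiplications plus accumulations; after binarization the same pixel costs only k*k sign-conditioned additions/subtractions, which is the source of the energy and area savings reported above.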