TY - JOUR
T1 - Mixed-Precision Deep Learning Based on Computational Memory
AU - Nandakumar, S. R.
AU - Le Gallo, Manuel
AU - Piveteau, Christophe
AU - Joshi, Vinay
AU - Mariani, Giovanni
AU - Boybat, Irem
AU - Karunaratne, Geethan
AU - Khaddam-Aljameh, Riduan
AU - Egger, Urs
AU - Petropoulos, Anastasios
AU - Antonakopoulos, Theodore
AU - Rajendran, Bipin
AU - Sebastian, Abu
AU - Eleftheriou, Evangelos
N1 - Publisher Copyright:
© 2020 Nandakumar, Le Gallo, Piveteau, Joshi, Mariani, Boybat, Karunaratne, Khaddam-Aljameh, Egger, Petropoulos, Antonakopoulos, Rajendran, Sebastian and Eleftheriou.
PY - 2020/5/12
Y1 - 2020/5/12
N2 - Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive, and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 172× improvement in energy efficiency of the architecture when used for training a multilayer perceptron compared with a dedicated fully digital 32-bit implementation.
KW - deep learning
KW - in-memory computing
KW - memristive devices
KW - mixed-signal design
KW - phase-change memory
UR - http://www.scopus.com/inward/record.url?scp=85085060927&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085060927&partnerID=8YFLogxK
DO - 10.3389/fnins.2020.00406
M3 - Article
AN - SCOPUS:85085060927
SN - 1662-4548
VL - 14
JO - Frontiers in Neuroscience
JF - Frontiers in Neuroscience
M1 - 406
ER -