TY - GEN
T1 - Stochastic deep learning in memristive networks
AU - Babu, Anakha V.
AU - Rajendran, Bipin
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/7/2
Y1 - 2017/7/2
N2 - We study the performance of stochastically trained deep neural networks (DNNs) whose synaptic weights are implemented using emerging memristive devices that exhibit limited dynamic range, limited resolution, and variability in their programming characteristics. We show that a key device parameter for optimizing the learning efficiency of DNNs is the variability in their programming characteristics. DNNs with such memristive synapses, even with a dynamic range as low as 15 and only 32 discrete levels, suffer less than 3% loss in accuracy compared to a floating-point software baseline when trained with stochastic updates. We also study the performance of stochastic memristive DNNs used as inference engines with noise-corrupted data and find that, if the device variability can be minimized, the relative degradation in performance of the stochastic DNN is smaller than that of the software baseline. Hence, our study presents a new optimization corner for memristive devices for building large, noise-immune deep learning systems.
AB - We study the performance of stochastically trained deep neural networks (DNNs) whose synaptic weights are implemented using emerging memristive devices that exhibit limited dynamic range, limited resolution, and variability in their programming characteristics. We show that a key device parameter for optimizing the learning efficiency of DNNs is the variability in their programming characteristics. DNNs with such memristive synapses, even with a dynamic range as low as 15 and only 32 discrete levels, suffer less than 3% loss in accuracy compared to a floating-point software baseline when trained with stochastic updates. We also study the performance of stochastic memristive DNNs used as inference engines with noise-corrupted data and find that, if the device variability can be minimized, the relative degradation in performance of the stochastic DNN is smaller than that of the software baseline. Hence, our study presents a new optimization corner for memristive devices for building large, noise-immune deep learning systems.
UR - http://www.scopus.com/inward/record.url?scp=85047238156&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85047238156&partnerID=8YFLogxK
U2 - 10.1109/ICECS.2017.8292067
DO - 10.1109/ICECS.2017.8292067
M3 - Conference contribution
AN - SCOPUS:85047238156
T3 - ICECS 2017 - 24th IEEE International Conference on Electronics, Circuits and Systems
SP - 214
EP - 217
BT - ICECS 2017 - 24th IEEE International Conference on Electronics, Circuits and Systems
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 24th IEEE International Conference on Electronics, Circuits and Systems, ICECS 2017
Y2 - 5 December 2017 through 8 December 2017
ER -