TY - GEN
T1 - A 45nm CMOS neuromorphic chip with a scalable architecture for learning in networks of spiking neurons
AU - Seo, Jae-sun
AU - Brezzo, Bernard
AU - Liu, Yong
AU - Parker, Benjamin D.
AU - Esser, Steven K.
AU - Montoye, Robert K.
AU - Rajendran, Bipin
AU - Tierno, José A.
AU - Chang, Leland
AU - Modha, Dharmendra S.
AU - Friedman, Daniel J.
PY - 2011
Y1 - 2011
N2 - Efforts to achieve the long-standing dream of realizing scalable learning algorithms for networks of spiking neurons in silicon have been hampered by (a) the limited scalability of analog neuron circuits; (b) the enormous area overhead of learning circuits, which grows with the number of synapses; and (c) the need to implement all inter-neuron communication via off-chip address-events. In this work, a new architecture is proposed to overcome these challenges by combining innovations in computation, memory, and communication, respectively, to leverage (a) robust digital neuron circuits; (b) novel transposable SRAM arrays that share learning circuits, which grow only with the number of neurons; and (c) crossbar fan-out for efficient on-chip inter-neuron communication. Through tight integration of memory (synapses) and computation (neurons), a highly configurable chip comprising 256 neurons and 64K binary synapses with on-chip learning based on spike-timing dependent plasticity is demonstrated in 45nm SOI-CMOS. Near-threshold, event-driven operation at 0.53V is demonstrated to maximize power efficiency for real-time pattern classification, recognition, and associative memory tasks. Future scalable systems built from the foundation provided by this work will open up possibilities for ubiquitous ultra-dense, ultra-low power brain-like cognitive computers.
AB - Efforts to achieve the long-standing dream of realizing scalable learning algorithms for networks of spiking neurons in silicon have been hampered by (a) the limited scalability of analog neuron circuits; (b) the enormous area overhead of learning circuits, which grows with the number of synapses; and (c) the need to implement all inter-neuron communication via off-chip address-events. In this work, a new architecture is proposed to overcome these challenges by combining innovations in computation, memory, and communication, respectively, to leverage (a) robust digital neuron circuits; (b) novel transposable SRAM arrays that share learning circuits, which grow only with the number of neurons; and (c) crossbar fan-out for efficient on-chip inter-neuron communication. Through tight integration of memory (synapses) and computation (neurons), a highly configurable chip comprising 256 neurons and 64K binary synapses with on-chip learning based on spike-timing dependent plasticity is demonstrated in 45nm SOI-CMOS. Near-threshold, event-driven operation at 0.53V is demonstrated to maximize power efficiency for real-time pattern classification, recognition, and associative memory tasks. Future scalable systems built from the foundation provided by this work will open up possibilities for ubiquitous ultra-dense, ultra-low power brain-like cognitive computers.
UR - http://www.scopus.com/inward/record.url?scp=80455156136&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80455156136&partnerID=8YFLogxK
U2 - 10.1109/CICC.2011.6055293
DO - 10.1109/CICC.2011.6055293
M3 - Conference contribution
AN - SCOPUS:80455156136
SN - 9781457702228
T3 - Proceedings of the Custom Integrated Circuits Conference
BT - 2011 IEEE Custom Integrated Circuits Conference, CICC 2011
T2 - 33rd Annual Custom Integrated Circuits Conference - The Showcase for Circuit Design in the Heart of Silicon Valley, CICC 2011
Y2 - 19 September 2011 through 21 September 2011
ER -