Abstract
Herein, we develop a programmable, energy-efficient hardware implementation for Recurrent Neural Networks (RNNs) with Resistive Random-Access Memory (ReRAM) synapses and ultra-low-power, area-efficient spin-based activation functions. To attain high energy efficiency while maintaining accuracy, we propose a novel Computing-in-Memory (CiM) architecture that leverages data-level parallelism during the evaluation phase. We employ an MRAM-based Adjustable Probabilistic Activation Function (APAF) built on a low-power tunable activation mechanism, which provides adjustable levels of accuracy in mimicking ideal sigmoid and tanh thresholding, together with a matching algorithm that regulates the neuron properties. Our hardware/software cross-layer simulation shows that the proposed design achieves up to 74.5× higher energy efficiency and a ∼11× area reduction compared with counterpart designs while maintaining comparable accuracy.
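The abstract's APAF builds on the binary-stochastic-neuron idea listed in the keywords: a probabilistic device fires with a probability that follows the sigmoid of its input, and averaging repeated firings approximates the ideal activation. The paper's implementation details are not reproduced here; the following minimal Python sketch (all function names and parameters are hypothetical, not from the paper) illustrates that principle, with the sample count acting as the adjustable accuracy knob:

```python
import numpy as np

def stochastic_sigmoid(x, n_samples=8, rng=None):
    """Approximate sigmoid(x) by averaging binary stochastic neuron firings.

    Each draw mimics a probabilistic device whose switching probability
    follows a sigmoid of the input; n_samples is the adjustable accuracy
    level (more samples -> closer to the ideal sigmoid). Illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = 1.0 / (1.0 + np.exp(-x))                         # ideal firing probability
    fires = rng.random((n_samples,) + np.shape(x)) < p   # Bernoulli firings
    return fires.mean(axis=0)                            # average ~ sigmoid(x)

def stochastic_tanh(x, n_samples=8, rng=None):
    """tanh(x) = 2*sigmoid(2x) - 1, so it reuses the same stochastic neuron."""
    return 2.0 * stochastic_sigmoid(2.0 * x, n_samples, rng) - 1.0

if __name__ == "__main__":
    x = np.linspace(-4.0, 4.0, 9)
    for n in (1, 8, 64):                                 # accuracy-level sweep
        err = np.abs(stochastic_sigmoid(x, n) - 1.0 / (1.0 + np.exp(-x))).max()
        print(f"n_samples={n:3d}  max |error| = {err:.3f}")
```

In the hardware described by the abstract, the Bernoulli draw would presumably come from the stochastic switching of an MRAM device rather than a software PRNG; the sketch only shows why averaging more firings trades energy for fidelity to the ideal sigmoid/tanh curve.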
Original language | English (US) |
---|---|
Pages (from-to) | 534-540 |
Number of pages | 7 |
Journal | IEEE Transactions on Emerging Topics in Computing |
Volume | 11 |
Issue number | 2 |
DOIs | |
State | Published - Apr 1 2023 |
All Science Journal Classification (ASJC) codes
- Computer Science (miscellaneous)
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
Keywords
- Binary stochastic neuron
- computing-in-memory
- recurrent neural networks
- spintronics