Abstract
Spiking neural networks (SNNs) are distributed trainable systems whose computing elements, or neurons, are characterized by internal analog dynamics and by sparse, digital synaptic communication. The sparsity of the synaptic spiking inputs, and the resulting event-driven nature of neural processing, can be leveraged in energy-efficient hardware implementations, which can offer significant energy reductions compared with conventional artificial neural networks (ANNs). The design of training algorithms for SNNs, however, lags behind hardware implementations: most existing training algorithms for SNNs have been designed either for biological plausibility or through conversion from pretrained ANNs via rate encoding.
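The contrast the abstract draws, analog internal dynamics paired with digital, event-driven spike outputs, can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron. This is an illustrative sketch only; the function name, parameters, and reset rule below are common textbook choices and are not taken from the paper itself.

```python
import numpy as np

def lif_simulate(spikes_in, w, tau=20.0, v_th=1.0, dt=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    spikes_in: (T, N) binary array of presynaptic spike trains
    w:         (N,) synaptic weights
    Returns a length-T binary output spike train.
    """
    T, N = spikes_in.shape
    decay = np.exp(-dt / tau)   # analog membrane leak per time step
    v = 0.0                     # membrane potential (analog state)
    out = np.zeros(T, dtype=int)
    for t in range(T):
        # membrane potential integrates weighted input spikes and leaks
        v = decay * v + np.dot(w, spikes_in[t])
        if v >= v_th:           # threshold crossing emits a digital spike
            out[t] = 1
            v = 0.0             # reset after spiking (one common convention)
    return out
```

Because the weighted-input term contributes only at time steps where presynaptic spikes occur, a sparse input train means the neuron does mostly cheap leak updates, which is the event-driven efficiency the abstract refers to.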
| Original language | English (US) |
|---|---|
| Article number | 8891810 |
| Pages (from-to) | 64-77 |
| Number of pages | 14 |
| Journal | IEEE Signal Processing Magazine |
| Volume | 36 |
| Issue number | 6 |
| DOIs | |
| State | Published - Nov 2019 |
All Science Journal Classification (ASJC) codes
- Signal Processing
- Electrical and Electronic Engineering
- Applied Mathematics