TY - JOUR
T1 - Multi-sample online learning for spiking neural networks based on generalized expectation maximization
AU - Jang, Hyeryung
AU - Simeone, Osvaldo
N1 - Funding Information:
This work was done when H. Jang was with King's College London. This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 725731).
Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Spiking Neural Networks (SNNs) offer a novel computational paradigm that captures some of the efficiency of biological brains by processing through binary neural dynamic activations. Probabilistic SNN models are typically trained to maximize the likelihood of the desired outputs by using unbiased estimates of the log-likelihood gradients. While prior work used single-sample estimators obtained from a single run of the network, this paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights. The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient. The approach is based on generalized expectation-maximization (GEM), which optimizes a tighter approximation of the log-likelihood using importance sampling. The derived online learning algorithm implements a three-factor rule with global per-compartment learning signals. Experimental results on a classification task on the neuromorphic MNIST-DVS data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when increasing the number of compartments used for training and inference.
AB - Spiking Neural Networks (SNNs) offer a novel computational paradigm that captures some of the efficiency of biological brains by processing through binary neural dynamic activations. Probabilistic SNN models are typically trained to maximize the likelihood of the desired outputs by using unbiased estimates of the log-likelihood gradients. While prior work used single-sample estimators obtained from a single run of the network, this paper proposes to leverage multiple compartments that sample independent spiking signals while sharing synaptic weights. The key idea is to use these signals to obtain more accurate statistical estimates of the log-likelihood training criterion, as well as of its gradient. The approach is based on generalized expectation-maximization (GEM), which optimizes a tighter approximation of the log-likelihood using importance sampling. The derived online learning algorithm implements a three-factor rule with global per-compartment learning signals. Experimental results on a classification task on the neuromorphic MNIST-DVS data set demonstrate significant improvements in terms of log-likelihood, accuracy, and calibration when increasing the number of compartments used for training and inference.
KW - Expectation maximization
KW - Neuromorphic computing
KW - Spiking neural networks
KW - Variational learning
UR - http://www.scopus.com/inward/record.url?scp=85115117018&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85115117018&partnerID=8YFLogxK
U2 - 10.1109/ICASSP39728.2021.9414804
DO - 10.1109/ICASSP39728.2021.9414804
M3 - Conference article
AN - SCOPUS:85115117018
SN - 1520-6149
VL - 2021-June
SP - 4080
EP - 4084
JO - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
JF - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
T2 - 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021
Y2 - 6 June 2021 through 11 June 2021
ER -