TY - GEN
T1 - Learning First-to-Spike Policies for Neuromorphic Control Using Policy Gradients
AU - Rosenfeld, Bleema
AU - Simeone, Osvaldo
AU - Rajendran, Bipin
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Artificial Neural Networks (ANNs) are currently used as function approximators in many state-of-the-art Reinforcement Learning (RL) algorithms. Spiking Neural Networks (SNNs) have been shown to drastically reduce the energy consumption of ANNs by encoding information in sparse temporal binary spike streams, hence emulating the communication mechanism of biological neurons. Due to their low energy consumption, SNNs are considered important candidates to serve as co-processors in mobile devices. In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons. A policy gradient-based algorithm is derived considering a Generalized Linear Model (GLM) for spiking neurons. Experimental results demonstrate the capability of online-trained SNNs acting as stochastic policies to gracefully trade off energy consumption, as measured by the number of spikes, against control performance. Significant gains are shown as compared to the standard approach of converting an offline-trained ANN into an SNN.
AB - Artificial Neural Networks (ANNs) are currently used as function approximators in many state-of-the-art Reinforcement Learning (RL) algorithms. Spiking Neural Networks (SNNs) have been shown to drastically reduce the energy consumption of ANNs by encoding information in sparse temporal binary spike streams, hence emulating the communication mechanism of biological neurons. Due to their low energy consumption, SNNs are considered important candidates to serve as co-processors in mobile devices. In this work, the use of SNNs as stochastic policies is explored under an energy-efficient first-to-spike action rule, whereby the action taken by the RL agent is determined by the occurrence of the first spike among the output neurons. A policy gradient-based algorithm is derived considering a Generalized Linear Model (GLM) for spiking neurons. Experimental results demonstrate the capability of online-trained SNNs acting as stochastic policies to gracefully trade off energy consumption, as measured by the number of spikes, against control performance. Significant gains are shown as compared to the standard approach of converting an offline-trained ANN into an SNN.
KW - Neuromorphic Computing
KW - Policy Gradient
KW - Reinforcement Learning
KW - Spiking Neural Network
UR - http://www.scopus.com/inward/record.url?scp=85072348835&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072348835&partnerID=8YFLogxK
U2 - 10.1109/SPAWC.2019.8815546
DO - 10.1109/SPAWC.2019.8815546
M3 - Conference contribution
AN - SCOPUS:85072348835
T3 - IEEE Workshop on Signal Processing Advances in Wireless Communications, SPAWC
BT - 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2019
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 20th IEEE International Workshop on Signal Processing Advances in Wireless Communications, SPAWC 2019
Y2 - 2 July 2019 through 5 July 2019
ER -