TY - GEN
T1 - Stochastic Spiking Attention
T2 - 6th IEEE International Conference on AI Circuits and Systems, AICAS 2024
AU - Song, Zihang
AU - Katti, Prabodh
AU - Simeone, Osvaldo
AU - Rajendran, Bipin
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Spiking Neural Networks (SNNs) have been recently integrated into Transformer architectures due to their potential to reduce computational demands and to improve power efficiency. Yet, the implementation of the attention mechanism using spiking signals on general-purpose computing platforms remains inefficient. In this paper, we propose a novel framework leveraging stochastic computing (SC) to effectively execute the dot-product attention for SNN-based Transformers. We demonstrate that our approach can achieve high classification accuracy (83.53%) on CIFAR-10 within 10 time steps, which is comparable to the performance of a baseline artificial neural network implementation (83.66%). We estimate that the proposed SC approach can lead to over 6.3× reduction in computing energy and 1.7× reduction in memory access costs for a digital CMOS-based ASIC design. We experimentally validate our stochastic attention block design through an FPGA implementation, which is shown to achieve 48× lower latency as compared to a GPU implementation, while consuming 15× less power.
AB - Spiking Neural Networks (SNNs) have been recently integrated into Transformer architectures due to their potential to reduce computational demands and to improve power efficiency. Yet, the implementation of the attention mechanism using spiking signals on general-purpose computing platforms remains inefficient. In this paper, we propose a novel framework leveraging stochastic computing (SC) to effectively execute the dot-product attention for SNN-based Transformers. We demonstrate that our approach can achieve high classification accuracy (83.53%) on CIFAR-10 within 10 time steps, which is comparable to the performance of a baseline artificial neural network implementation (83.66%). We estimate that the proposed SC approach can lead to over 6.3× reduction in computing energy and 1.7× reduction in memory access costs for a digital CMOS-based ASIC design. We experimentally validate our stochastic attention block design through an FPGA implementation, which is shown to achieve 48× lower latency as compared to a GPU implementation, while consuming 15× less power.
KW - attention
KW - hardware accelerator
KW - Spiking neural network
KW - stochastic computing
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85199909314&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85199909314&partnerID=8YFLogxK
U2 - 10.1109/AICAS59952.2024.10595893
DO - 10.1109/AICAS59952.2024.10595893
M3 - Conference contribution
AN - SCOPUS:85199909314
T3 - 2024 IEEE 6th International Conference on AI Circuits and Systems, AICAS 2024 - Proceedings
SP - 31
EP - 35
BT - 2024 IEEE 6th International Conference on AI Circuits and Systems, AICAS 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 22 April 2024 through 25 April 2024
ER -