TY - GEN
T1 - Neuro-inspired Enhancing Spiking Graph Convolutional Networks
AU - Buschmann, Fernando Vera
AU - Rotstein, Horacio
AU - Oria, Vincent
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - We introduce Enhanced GC-SNN, the first integration of Leaky Integrate-and-Fire (LIF) neurons with graph convolutional networks for link prediction. Our architecture incorporates temporal spike dynamics and novel Spatial-Temporal Feature Normalization (STFN) to enable event-driven graph processing. Unlike static GNN activations, LIF neurons provide membrane-based temporal integration, while STFN stabilizes spiking activity through homeostatic regulation. Evaluation on citation networks reveals mixed performance characteristics: Enhanced GC-SNN achieves superior results on large, sparse graphs (PubMed: 0.937 ROC-AUC vs. 0.917 GCN) but underperforms on smaller, denser networks (Cora: 0.884 vs. 0.932 GCN; Citeseer: 0.877 vs. 0.931 GCN). The temporal processing requires 5.86× more operations than static baselines due to 40-timestep recurrence, though 65.3% neuron sparsity suggests potential for neuromorphic acceleration. Our primary contribution establishes a technical framework bridging spiking neural computation with graph learning, introducing STFN normalization and spike-based pairwise decoding. While computational overhead limits immediate practical deployment, the architecture provides a foundation for future neuromorphic graph processing and offers research directions toward hardware-efficient, temporally-aware graph neural networks. The work demonstrates both the potential and current limitations of biologically-inspired approaches in graph machine learning.
AB - We introduce Enhanced GC-SNN, the first integration of Leaky Integrate-and-Fire (LIF) neurons with graph convolutional networks for link prediction. Our architecture incorporates temporal spike dynamics and novel Spatial-Temporal Feature Normalization (STFN) to enable event-driven graph processing. Unlike static GNN activations, LIF neurons provide membrane-based temporal integration, while STFN stabilizes spiking activity through homeostatic regulation. Evaluation on citation networks reveals mixed performance characteristics: Enhanced GC-SNN achieves superior results on large, sparse graphs (PubMed: 0.937 ROC-AUC vs. 0.917 GCN) but underperforms on smaller, denser networks (Cora: 0.884 vs. 0.932 GCN; Citeseer: 0.877 vs. 0.931 GCN). The temporal processing requires 5.86× more operations than static baselines due to 40-timestep recurrence, though 65.3% neuron sparsity suggests potential for neuromorphic acceleration. Our primary contribution establishes a technical framework bridging spiking neural computation with graph learning, introducing STFN normalization and spike-based pairwise decoding. While computational overhead limits immediate practical deployment, the architecture provides a foundation for future neuromorphic graph processing and offers research directions toward hardware-efficient, temporally-aware graph neural networks. The work demonstrates both the potential and current limitations of biologically-inspired approaches in graph machine learning.
KW - Biological Neural Networks
KW - Graph Neural Networks
KW - Link Prediction
KW - Spiking Neural Networks
UR - https://www.scopus.com/pages/publications/105021463506
U2 - 10.1109/HPEC67600.2025.11196167
DO - 10.1109/HPEC67600.2025.11196167
M3 - Conference contribution
AN - SCOPUS:105021463506
T3 - 2025 IEEE High Performance Extreme Computing Conference, HPEC 2025
BT - 2025 IEEE High Performance Extreme Computing Conference, HPEC 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE High Performance Extreme Computing Conference, HPEC 2025
Y2 - 15 September 2025 through 19 September 2025
ER -