TY - GEN
T1 - Enhanced Knowledge Graph Attention Networks for Efficient Graph Learning
AU - Buschmann, Fernando Vera
AU - Du, Zhihui
AU - Bader, David
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - This paper presents an innovative design for Enhanced Knowledge Graph Attention Networks (EKGAT), which focuses on improving representation learning to analyze complex relationships in graph-structured data. By integrating TransformerConv layers, the proposed EKGAT model captures complex node relationships more effectively than traditional KGAT models. Additionally, our EKGAT model incorporates disentanglement learning techniques to segment entity representations into independent components, thereby capturing distinct semantic aspects more effectively. Comprehensive experiments on the Cora, PubMed, and Amazon datasets reveal substantial improvements in node classification accuracy and convergence speed. The incorporation of TransformerConv layers significantly accelerates convergence of the training loss while maintaining or improving accuracy, which is particularly advantageous for large-scale, real-time applications. t-SNE and PCA analyses illustrate the superior embedding separability achieved by our model, underscoring its enhanced representation capabilities. These findings highlight the potential of EKGAT to advance graph analytics and network science, providing robust, scalable solutions for applications ranging from recommendation systems and social network analysis to biomedical data interpretation and real-time big data processing.
KW - Disentanglement Learning
KW - Knowledge Graph Attention Networks
KW - Representation Learning
KW - TransformerConv
UR - http://www.scopus.com/inward/record.url?scp=105002709899&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105002709899&partnerID=8YFLogxK
U2 - 10.1109/HPEC62836.2024.10938526
DO - 10.1109/HPEC62836.2024.10938526
M3 - Conference contribution
AN - SCOPUS:105002709899
T3 - 2024 IEEE High Performance Extreme Computing Conference, HPEC 2024
BT - 2024 IEEE High Performance Extreme Computing Conference, HPEC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE High Performance Extreme Computing Conference, HPEC 2024
Y2 - 23 September 2024 through 27 September 2024
ER -
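
Note: the abstract above describes combining TransformerConv message passing with a disentanglement step that splits entity embeddings into independent components. The sketch below is a minimal, hypothetical illustration of that idea using PyTorch Geometric's TransformerConv; it is not the authors' implementation. The class name EKGATSketch, the layer sizes, heads=4, num_components=4, and the per-component L2 normalization used as a stand-in for disentanglement are all assumptions made for illustration.

import torch
import torch.nn.functional as F
from torch_geometric.nn import TransformerConv

class EKGATSketch(torch.nn.Module):
    """Hypothetical sketch of the EKGAT idea: TransformerConv attention
    layers followed by a simple disentanglement-style embedding split.
    Not the authors' code; names and hyperparameters are illustrative."""

    def __init__(self, in_dim, hidden_dim, num_classes, heads=4, num_components=4):
        super().__init__()
        # TransformerConv layers capture attention-weighted node relationships.
        self.conv1 = TransformerConv(in_dim, hidden_dim, heads=heads)
        self.conv2 = TransformerConv(hidden_dim * heads, hidden_dim, heads=1)
        # The embedding is split into num_components equal-sized chunks.
        assert hidden_dim % num_components == 0
        self.num_components = num_components
        self.classifier = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = self.conv2(x, edge_index)
        # Normalize each chunk independently: a deliberately simplified
        # stand-in for the disentanglement step described in the abstract.
        chunks = x.chunk(self.num_components, dim=-1)
        x = torch.cat([F.normalize(c, dim=-1) for c in chunks], dim=-1)
        return self.classifier(x)  # logits for node classification

For a usage sketch on one of the cited benchmarks, one could load Cora via torch_geometric.datasets.Planetoid and train with F.cross_entropy on the returned logits; the paper's actual training objective and disentanglement loss may differ from this simplified version.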