TY - GEN
T1 - Graph Neural Networks with Adaptive Residual
AU - Liu, Xiaorui
AU - Ding, Jiayuan
AU - Jin, Wei
AU - Xu, Han
AU - Ma, Yao
AU - Liu, Zitao
AU - Tang, Jiliang
N1 - Publisher Copyright:
© 2021 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2021
Y1 - 2021
N2 - Graph neural networks (GNNs) have shown their power in graph representation learning for numerous tasks. In this work, we discover an interesting phenomenon: although residual connections in the message passing of GNNs help improve performance, they immensely amplify GNNs’ vulnerability to abnormal node features. This is undesirable because, in real-world applications, node features in graphs can often be abnormal, e.g., naturally noisy or adversarially manipulated. We analyze possible reasons for this phenomenon and aim to design GNNs with stronger resilience to abnormal features. Our understanding motivates us to propose and derive a simple, efficient, interpretable, and adaptive message passing scheme, leading to a novel GNN with adaptive residual, AirGNN. Extensive experiments under various abnormal feature scenarios demonstrate the effectiveness of the proposed algorithm.
AB - Graph neural networks (GNNs) have shown their power in graph representation learning for numerous tasks. In this work, we discover an interesting phenomenon: although residual connections in the message passing of GNNs help improve performance, they immensely amplify GNNs’ vulnerability to abnormal node features. This is undesirable because, in real-world applications, node features in graphs can often be abnormal, e.g., naturally noisy or adversarially manipulated. We analyze possible reasons for this phenomenon and aim to design GNNs with stronger resilience to abnormal features. Our understanding motivates us to propose and derive a simple, efficient, interpretable, and adaptive message passing scheme, leading to a novel GNN with adaptive residual, AirGNN. Extensive experiments under various abnormal feature scenarios demonstrate the effectiveness of the proposed algorithm.
UR - http://www.scopus.com/inward/record.url?scp=85125027645&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85125027645&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85125027645
T3 - Advances in Neural Information Processing Systems
SP - 9720
EP - 9733
BT - Advances in Neural Information Processing Systems 34 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
A2 - Ranzato, Marc'Aurelio
A2 - Beygelzimer, Alina
A2 - Dauphin, Yann
A2 - Liang, Percy S.
A2 - Wortman Vaughan, Jenn
PB - Neural Information Processing Systems Foundation
T2 - 35th Conference on Neural Information Processing Systems, NeurIPS 2021
Y2 - 6 December 2021 through 14 December 2021
ER -