Abstract
Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning. Although KGNNs effectively model the structural information of knowledge graphs, these frameworks can amplify the underlying data bias, leading to discrimination against certain groups or individuals in downstream applications. Additionally, as existing debiasing approaches mainly focus on entity-wise bias, eliminating the multi-hop relational bias that pervasively exists in knowledge graphs remains an open question. Eliminating relational bias is particularly challenging due to the sparsity of the paths that generate the bias and the non-linear proximity structure of knowledge graphs. To tackle these challenges, we propose Fair-KGNN, a KGNN framework that simultaneously alleviates multi-hop bias and preserves the entity-to-relation proximity information of knowledge graphs. The proposed framework generalizes to mitigating relational bias in all types of KGNNs. We incorporate Fair-KGNN into two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias. Experiments carried out on three benchmark knowledge graph datasets demonstrate that Fair-KGNN can effectively mitigate unfairness during representation learning while preserving the predictive performance of KGNN models. The source code of the proposed method is available at: https://github.com/ynchuang/Mitigating-Relational-Bias-on-Knowledge-Graphs.
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Article number | 46 |
| Journal | ACM Transactions on Knowledge Discovery from Data |
| Volume | 19 |
| Issue number | 2 |
| DOIs | |
| State | Published - Feb 14 2025 |
All Science Journal Classification (ASJC) codes
- General Computer Science
Keywords
- Knowledge Graph Debiasing
- Machine Learning Fairness
- Relational Bias