Homophily-aware Supervised Contrastive Counterfactual Augmented Fair Graph Neural Network
Novel two-phase training edits graph structure to reduce bias while improving classification performance.
A research team led by Mahdi Tavassoli Kejani has developed a novel Graph Neural Network (GNN) framework specifically designed to address fairness concerns in graph-based machine learning. Their model, called the Homophily-aware Supervised Contrastive Counterfactual Augmented Fair Graph Neural Network, tackles a critical problem: GNNs can perpetuate or amplify biases encoded not only in node features but in the graph structure itself. The core innovation is a two-phase training strategy whose first phase edits the graph, adding connections between nodes that share the same target label while removing connections driven by sensitive attributes such as gender or race.
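The phase-one editing idea can be sketched as a simple edge filter. This is an illustrative reconstruction, not the paper's actual algorithm: the function name `edit_graph` and the rule of dropping same-sensitive-group, different-label edges while adding cross-group, same-label candidate edges are assumptions made for clarity.

```python
def edit_graph(edges, labels, sensitive, candidates):
    """Hedged sketch of homophily-aware graph editing.

    edges: set of (u, v) pairs; labels / sensitive: dicts mapping
    node -> class label / sensitive attribute; candidates: proposed
    new edges to consider adding. All names are illustrative.
    """
    edited = set()
    for u, v in edges:
        # Drop edges that look driven by sensitive-attribute homophily:
        # same sensitive group but different task labels.
        if sensitive[u] == sensitive[v] and labels[u] != labels[v]:
            continue
        edited.add((u, v))
    # Add candidate edges between nodes that share a label but sit in
    # different sensitive groups, boosting label homophily over
    # sensitive-attribute homophily.
    for u, v in candidates:
        if labels[u] == labels[v] and sensitive[u] != sensitive[v]:
            edited.add((u, v))
    return edited
```

The intuition is that message passing then aggregates information along edges correlated with the prediction target rather than with the protected attribute.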
In the second phase, the model employs a modified supervised contrastive loss alongside an environmental loss during optimization. This dual approach lets the network learn accurate predictions for tasks like node classification while actively minimizing unfair outcomes across demographic groups. The researchers validated their approach on five real-world datasets, demonstrating that it outperforms the previous state-of-the-art Counterfactual Augmented Fair (CAF) framework and other graph learning methods. The results show measurable improvements in key fairness metrics without sacrificing overall classification accuracy, and in some cases even improving it, marking a significant step toward more trustworthy and equitable AI systems for social network analysis, recommendation engines, and fraud detection.
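To make the contrastive component concrete, here is a minimal sketch of a standard supervised contrastive loss (in the style of Khosla et al.), which pulls same-label embeddings together and pushes others apart. The paper's "modified" variant and its environmental loss are not specified here, so this shows only the baseline formulation; the function name and the temperature default are assumptions.

```python
import math

def sup_con_loss(z, labels, tau=0.5):
    """Baseline supervised contrastive loss over a list of embedding
    vectors `z` with integer `labels`. Illustrative sketch only."""
    # L2-normalize embeddings so similarity is cosine similarity.
    zn = []
    for v in z:
        norm = math.sqrt(sum(x * x for x in v))
        zn.append([x / norm for x in v])

    def sim(i, j):
        return sum(a * b for a, b in zip(zn[i], zn[j])) / tau

    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # log of the softmax denominator over all other samples
        denom = math.log(sum(math.exp(sim(i, a)) for a in range(n) if a != i))
        pos = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not pos:
            continue
        # -log p(positive) averaged over the positives of anchor i
        loss += sum(denom - sim(i, p) for p in pos) / len(pos)
        count += 1
    return loss / max(count, 1)
```

In a fair-GNN setting, a term like this would typically be added to the classification loss together with an invariance (environmental) penalty, with weighting coefficients tuned as hyperparameters.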
- Two-phase training first edits graph structure to reduce bias from homophily (the tendency for similar nodes to connect), then optimizes for both accuracy and fairness.
- Integrates a modified supervised contrastive loss and environmental loss, allowing the model to learn robust representations that are invariant to sensitive attributes.
- Outperformed the previous CAF framework and other methods on five real-world datasets, improving fairness metrics while maintaining high classification accuracy.
Why It Matters
Enables development of fairer AI for critical applications like loan approvals, hiring, and content moderation by reducing bias in graph-based predictions.