TY - GEN
T1 - Cross-View Graph Consistency Learning for Invariant Graph Representations
AU - Chen, Jie
AU - Mao, Hua
AU - Woo, Wai Lok
AU - Liu, Chuanbin
AU - Peng, Xi
PY - 2025/4/11
Y1 - 2025/4/11
AB - Graph representation learning is fundamental for analyzing graph-structured data. Exploring invariant graph representations remains a challenge for most existing graph representation learning methods. In this paper, we propose a cross-view graph consistency learning (CGCL) method that learns invariant graph representations for link prediction. First, two complementary augmented views are derived from an incomplete graph structure through a coupled graph structure augmentation scheme. This scheme mitigates the information loss commonly associated with data augmentation techniques applied to raw graph data, such as edge perturbation, node removal, and attribute masking. Second, we propose a CGCL model that learns invariant graph representations, together with a cross-view scheme for training it. This scheme maximizes the consistency between one augmented view and the graph structure reconstructed from the other augmented view. Furthermore, we provide a comprehensive theoretical analysis of CGCL. Extensive experiments demonstrate the effectiveness of the proposed CGCL method, which achieves competitive results on graph datasets in comparison with several state-of-the-art algorithms.
DO - 10.1609/aaai.v39i15.33734
M3 - Conference contribution
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 15795
EP - 15802
BT - Proceedings of the 39th Annual AAAI Conference on Artificial Intelligence
A2 - Walsh, Toby
A2 - Shah, Julie
A2 - Kolter, Zico
PB - Association for the Advancement of Artificial Intelligence (AAAI)
CY - Washington, DC, United States
ER -
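
The abstract above describes the core mechanism of CGCL: two complementary augmented views are derived from an incomplete graph structure, and a cross-view objective matches the graph reconstructed from one view against the structure observed in the other. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the toy graph, the one-layer mean-aggregation encoder, the inner-product decoder, and all hyperparameters are assumptions chosen only to make the example runnable.

```python
# Illustrative sketch of cross-view graph consistency (assumed form, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy incomplete graph: 6 nodes with random features and a handful of observed undirected edges.
num_nodes, feat_dim, hid_dim = 6, 8, 16
X = torch.randn(num_nodes, feat_dim)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [0, 5], [1, 4]])

# Coupled augmentation (assumed form): split the observed edges into two
# complementary views so that together they cover the original structure.
perm = torch.randperm(edges.size(0))
view_a_edges = edges[perm[: len(perm) // 2]]
view_b_edges = edges[perm[len(perm) // 2 :]]

def to_adj(edge_list):
    """Dense symmetric adjacency matrix from an undirected edge list."""
    A = torch.zeros(num_nodes, num_nodes)
    A[edge_list[:, 0], edge_list[:, 1]] = 1.0
    A[edge_list[:, 1], edge_list[:, 0]] = 1.0
    return A

A_a, A_b = to_adj(view_a_edges), to_adj(view_b_edges)

class Encoder(nn.Module):
    """One-layer graph encoder: mean-aggregate features over a view's adjacency."""
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(feat_dim, hid_dim)
    def forward(self, A, X):
        A_hat = A + torch.eye(num_nodes)                    # add self-loops
        deg = A_hat.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((A_hat / deg) @ X))      # row-normalized propagation

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    Z_a = encoder(A_a, X)                                   # embeddings from view A
    Z_b = encoder(A_b, X)                                   # embeddings from view B
    # Reconstruct a graph from each view's embeddings (inner-product decoder).
    logits_from_a = Z_a @ Z_a.t()
    logits_from_b = Z_b @ Z_b.t()
    # Cross-view consistency: the graph reconstructed from one view should
    # agree with the structure observed in the other view, and vice versa.
    loss = bce(logits_from_a, A_b) + bce(logits_from_b, A_a)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Link prediction: score node pairs with embeddings computed from all observed edges.
with torch.no_grad():
    Z = encoder(to_adj(edges), X)
    scores = torch.sigmoid(Z @ Z.t())
print(f"P(link 0-3) = {scores[0, 3].item():.3f}")
```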