TY - GEN
T1 - Recurrent attention walk for semi-supervised classification
AU - Akujuobi, Uchenna Thankgod
AU - Zhang, Qiannan
AU - Han, Yufei
AU - Zhang, Xiangliang
N1 - KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): FCC/1/1976-19-01
Acknowledgements: This work was partially supported and funded by King Abdullah University of Science and Technology (KAUST), under award number FCC/1/1976-19-01, and NSFC No. 61828302.
PY - 2020/1/22
Y1 - 2020/1/22
AB - In this paper, we study graph-based semi-supervised learning for classifying nodes in attributed networks, where the nodes and edges possess content information. Recent approaches such as graph convolutional networks and attention mechanisms have been proposed to aggregate the first-order neighbors and incorporate the relevant neighbors. However, it is costly (especially in memory) to consider all neighbors without prior differentiation. We propose to explore the neighborhood in a reinforcement learning setting and find a walk path well-tuned for classifying the unlabelled target nodes. We let an agent (of the node classification task) walk over the graph and decide where to move to maximize classification accuracy. We define the graph walk as a partially observable Markov decision process (POMDP). The proposed method is flexible for working in both transductive and inductive settings. Extensive experiments on four datasets demonstrate that our proposed method outperforms several state-of-the-art methods. Several case studies also illustrate the meaningful movement trajectories made by the agent.
UR - http://hdl.handle.net/10754/660639
UR - https://dl.acm.org/doi/10.1145/3336191.3371853
UR - http://www.scopus.com/inward/record.url?scp=85079547405&partnerID=8YFLogxK
U2 - 10.1145/3336191.3371853
DO - 10.1145/3336191.3371853
M3 - Conference contribution
SN - 9781450368223
SP - 16
EP - 24
BT - Proceedings of the 13th International Conference on Web Search and Data Mining
PB - ACM
ER -