Science, Technology, Engineering and Mathematics.
Open Access

GRAPH REPRESENTATION LEARNING BASED ON ONE-SHOT AGGREGATION AND SECOND-ORDER INFORMATION


Volume 6, Issue 2, pp. 46-57, 2024

DOI: 10.61784/jcsee3007

Author(s)

LiNing Yuan, ZhongYu Xing*, WanYan Huang

Affiliation(s)

School of Information Technology, Guangxi Police College, Nanning 530028, Guangxi, China.

Corresponding Author

ZhongYu Xing

ABSTRACT

The graph autoencoder has emerged as an effective model for graph representation learning, achieving strong results in tasks such as link prediction. Nevertheless, most graph autoencoders have shallow architectures, and their performance degrades as the number of hidden layers increases. Moreover, these approaches predominantly rely on graph convolutional networks to encode the adjacency and attribute matrices, thereby underutilizing higher-order structural characteristics such as second-order information. To address these issues, this paper introduces the variational graph autoencoder OS-SeVAE and the graph autoencoder OS-SeAE, which integrate One-Shot aggregation and second-order information. First, deep encoders are constructed by combining graph convolution with second-order graph convolution, together with One-Shot aggregation and the Exponential Linear Unit (ELU) activation. The decoder then reconstructs the graph's topological structure via inner-product decoding. To prevent overfitting during training, a regularization term is added to the autoencoder loss function. Experimental results show that One-Shot aggregation and the ELU activation effectively improve the performance of deep graph autoencoders and enhance the propagation of gradient information, while the introduction of second-order information strengthens the model's representation capability. In link prediction on three benchmark citation datasets, OS-SeVAE and OS-SeAE outperform current state-of-the-art baseline models.
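The components named in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the layer arrangement, weight shapes, and the decision to give the second-order branch its own weight matrix are assumptions made for illustration; only the building blocks (normalized graph convolution, a second-order propagation term, One-Shot concatenation of intermediate outputs, ELU, and inner-product decoding) come from the abstract.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def elu(x, alpha=1.0):
    # Exponential Linear Unit (Clevert et al., 2015)
    return np.where(x > 0, x, alpha * np.expm1(x))

def encode(A, X, layer_weights, W_second):
    # Stacked first-order graph convolutions; each layer's output is kept
    # and concatenated once at the end (One-Shot aggregation), instead of
    # cascading dense connections between every pair of layers.
    A_norm = normalize_adj(A)
    A2 = A_norm @ A_norm                      # second-order propagation matrix
    h, taps = X, []
    for W in layer_weights:
        h = elu(A_norm @ h @ W)               # first-order graph convolution
        taps.append(h)
    taps.append(elu(A2 @ X @ W_second))       # second-order branch (assumed form)
    return np.concatenate(taps, axis=1)       # One-Shot aggregation

def decode(Z):
    # Inner-product decoder: sigmoid(Z Z^T) gives edge probabilities
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Tiny demo on a 3-node path graph with identity features
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)
Ws = [rng.normal(scale=0.1, size=(3, 4)), rng.normal(scale=0.1, size=(4, 4))]
Z = encode(A, X, Ws, rng.normal(scale=0.1, size=(3, 4)))
A_rec = decode(Z)
```

Here `Z` has one row per node, with the columns of all layer outputs side by side; `A_rec` is the reconstructed (symmetric) edge-probability matrix used for link prediction.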

KEYWORDS

Graph representation learning; Graph convolutional network; One-shot aggregation; Second-order information

CITE THIS PAPER

LiNing Yuan, ZhongYu Xing, WanYan Huang. Graph representation learning based on one-shot aggregation and second-order information. Journal of Computer Science and Electrical Engineering. 2024, 6(2): 46-57. DOI: 10.61784/jcsee3007.

REFERENCES

[1] Khoshraftar Shima, An Aijun. A survey on graph representation learning methods. ACM Transactions on Intelligent Systems and Technology, 2024, 15(1): 1-55.

[2] LeCun Yann, Bengio Yoshua, Hinton Geoffrey. Deep learning. Nature, 2015, 521(7553): 436-444.

[3] Xia Feng, Liu Jiaying, Nie Hansong, et al. Random walks: A review of algorithms and applications. IEEE Transactions on Emerging Topics in Computational Intelligence, 2019, 4(2): 95-107.

[4] Zhang Changqing, Geng Yu, Han Zongbo, et al. Autoencoder in autoencoder networks. IEEE Transactions on Neural Networks and Learning Systems, 2022, 35(2): 2263-2275.

[5] Wang Yile, Cui Leyang, Zhang Yue. Improving skip-gram embeddings using BERT. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 1318-1328.

[6] Zhou Jingya, Liu Ling, Wei Wenqi, et al. Network representation learning: From preprocessing, feature extraction to node embedding. ACM Computing Surveys, 2022, 55(2): 1-35.

[7] Liu Meng, Gao Hongyang, Ji Shuiwang. Towards deeper graph neural networks. Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2020: 338-348.

[8] Li Guohao, Muller Matthias, Thabet Ali, et al. Deepgcns: Can gcns go as deep as cnns?. Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision. Piscataway: IEEE, 2019: 9267-9276.

[9] Li Qimai, Han Zhichao, Wu Xiaoming. Deeper insights into graph convolutional networks for semi-supervised learning. Proceedings of the 32nd AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2018: 3538-3545.

[10] Zhang Si, Tong Hanghang, Xu Jiejun, et al. Graph convolutional networks: a comprehensive review. Computational Social Networks, 2019, 6(1): 1-23.

[11] Lee Youngwan, Hwang Joongwon, Lee Sangrok, et al. An energy and gpu-computation efficient backbone network for real-time object detection. Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Piscataway: IEEE, 2019: 752–760.

[12] Clevert Djork-Arné, Unterthiner Thomas, Hochreiter Sepp. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv: 1511.07289, 2015.

[13] Kullback S., Leibler R. A. On information and sufficiency. Annals of Mathematical Statistics, 1951, 22(1):79-86.

[14] He Kaiming, Zhang Xiangyu, Ren Shaoqing, et al. Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2016: 770-778.

[15] Huang Gao, Liu Zhuang, van der Maaten Laurens, et al. Densely connected convolutional networks. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2017: 4700-4708.

[16] Glorot Xavier, Bordes Antoine, Bengio Yoshua. Deep sparse rectifier neural networks. Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. Boston: JMLR, 2011, 15: 315-323.

[17] Firth J R. A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis, 1957: 1-32.

[18] Defferrard Michaël, Bresson Xavier, Vandergheynst Pierre. Convolutional neural networks on graphs with fast localized spectral filtering. Proceedings of the 30th International Conference on Neural Information Processing Systems. New York: ACM, 2016: 3844-3852.

[19] Kipf Thomas N., Welling Max. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv: 1609.02907, 2016.

[20] Namata Galileo, London Ben, Getoor Lise, et al. Query-driven active surveying for collective classification. Proceedings of the 10th International Workshop on Mining and Learning with Graphs. New York: ACM, 2012.

[21] Ahn Seong Jin, Kim MyoungHo. Variational Graph Normalized Autoencoders. Proceedings of the 30th ACM International Conference on Information and Knowledge Management. New York: ACM, 2021: 2827-2831.

[22] Salha Guillaume, Hennequin Romain, Vazirgiannis Michalis. Simple and effective graph autoencoders with one-hop linear models. Proceedings of the 2020 Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Berlin: Springer, 2020: 319-334.

[23] Nallbani Indrit, Ayanzadeh Aydin, Keser Reyhan Kevser, et al. Representation Learning using Graph Autoencoders with Residual Connections. arXiv preprint arXiv: 2105.00695, 2021.

[24] Yuan Lining, Liu Zhao. Graph representation learning by autoencoder with one-shot aggregation. Journal of Computer Applications, 2023, 43(1): 8-14.

[25] Poggio Tomaso, Mhaskar Hrushikesh, Rosasco Lorenzo, et al. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing, 2017, 14(5): 503-519.

[26] Cao Shaosheng, Lu Wei, Xu Qiongkai. GraRep: learning graph representations with global structural information. Proceedings of the 24th ACM International Conference on Information and Knowledge Management. New York: ACM, 2015: 891-900.

All published work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright © 2017-2024 Science, Technology, Engineering and Mathematics. All Rights Reserved.