Survey of graph network hierarchical information mining for classification
Cite this article: Wei Wenchao, Lin Guangfeng, Liao Kaiyang, Kang Xiaobing, Zhao Fan. Survey of graph network hierarchical information mining for classification[J]. Journal of Image and Graphics, 2022, 27(10): 2916-2936.
Authors: Wei Wenchao, Lin Guangfeng, Liao Kaiyang, Kang Xiaobing, Zhao Fan
Affiliation: School of Printing, Packaging and Digital Media, Xi'an University of Technology, Xi'an 710048, China
Funding: National Natural Science Foundation of China (61771386); Key Research and Development Program of Shaanxi Province (2020SF-359); Natural Science Basic Research Program of Shaanxi Province (2021JM-340)
Abstract: Deep learning, a rapidly developing branch of artificial intelligence, has mainly been applied to speech, image and video data; such regularly structured data are usually represented in Euclidean space. However, many learning tasks must handle data generated in non-Euclidean spaces, whose features and relational structure can be defined by graphs. By applying the convolution theorem to graphs, graph convolutional neural networks propagate and aggregate information between nodes and have become an effective way to model graph data. Despite their great success, most existing models for node classification in graph tasks use shallow architectures of only two or three layers because of over-smoothing, the distinctive difficulty of optimizing deep graph structures. In theory, a deeper graph convolutional network can capture more node representation information, which motivates research on its layer-level (hierarchical) information; the core of transferring hierarchical-structure algorithms to graph data analysis lies in constructing layer-level graph convolution operators and fusing information across layers. This paper surveys graph network hierarchical information mining algorithms: it introduces the background and open problems of graph neural networks and the development of hierarchical graph convolutional network algorithms, and divides existing algorithms into regularization methods and architecture adjustment methods according to how layer-level information is processed. Regularization methods rebuild the graph convolution operator to aggregate neighborhood information better, whereas architecture adjustment methods fuse layer-level information to enrich node representations. Experiments on the hierarchical characteristics of graph convolutional networks show that graphs contain hierarchy-specific nodes, and that existing hierarchical information mining algorithms have not yet fully explored the graph information of such nodes. Finally, the main application fields of hierarchical information mining models are summarized, and further research directions are proposed in terms of computational efficiency, large-scale data, dynamic graphs and application scenarios.
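As a concrete illustration of the propagation-and-aggregation step mentioned in the abstract, the following minimal NumPy sketch implements one graph convolution layer in the common form H' = σ(ÂHW), where Â is the symmetrically normalized adjacency matrix with self-loops. The toy graph, feature dimensions and weights are made up for illustration; this is not code from the surveyed work.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, H, W):
    """One graph convolution layer: aggregate neighbor features, transform, apply ReLU."""
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy 4-node path graph 0-1-2-3 with 3-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H0 = np.random.randn(4, 3)        # input node features
W1 = np.random.randn(3, 2) * 0.1  # randomly initialized layer weights

A_norm = normalized_adjacency(A)
H1 = gcn_layer(A_norm, H0, W1)    # each node now mixes information from its 1-hop neighborhood
print(H1.shape)                   # (4, 2)
```

Stacking several such layers lets a node see progressively larger neighborhoods, which is exactly where the depth and over-smoothing trade-off discussed in this survey arises.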

Keywords: hierarchical structure; graph convolutional networks (GCN); attention mechanism; artificial intelligence; deep learning
Received: 2021-04-16
Revised: 2022-07-07

Survey of graph network hierarchical information mining for classification
Wei Wenchao, Lin Guangfeng, Liao Kaiyang, Kang Xiaobing, Zhao Fan. Survey of graph network hierarchical information mining for classification[J]. Journal of Image and Graphics, 2022, 27(10): 2916-2936.
Authors: Wei Wenchao, Lin Guangfeng, Liao Kaiyang, Kang Xiaobing, Zhao Fan
Affiliation: School of Printing, Packaging and Digital Media, Xi'an University of Technology, Xi'an 710048, China
Abstract: The data used in deep learning are mainly speech, images and video; such regularly structured data are usually represented in Euclidean space. In practice, however, many learning tasks involve data generated in non-Euclidean spaces, whose object features and inter-object relationships can be described simultaneously by graphs, so the application of graphs in deep learning has become an important research direction. Graph neural networks extend neural networks to data in the graph domain: each node updates its representation from its neighboring nodes and the edge structure, which established the framework for subsequent research. Building on this framework, graph convolutional neural networks define the update functions as convolutions; by applying the convolution theorem to the graph signal, information is propagated and aggregated between nodes, making graph convolutional networks an effective way to model graph data. Nevertheless, most existing models have shallow architectures of only two or three layers because of the over-smoothing phenomenon of deep graph structures. Over-smoothing arises because a graph convolutional network smooths node features through repeated applications of the graph Laplacian over different neighborhoods. This smoothing makes the features of vertices in the same cluster similar, which simplifies classification, but as the number of layers grows the representation of every node converges toward the same value and vertices in different clusters become indistinguishable. Existing research has shown that graph convolution acts like a local filter, that is, a linear combination of the feature vectors of neighboring nodes. With only a few labels, a shallow graph convolutional network cannot propagate the label information to the entire graph and therefore cannot exploit the global graph structure, while deep graph convolutional networks require a larger receptive field. A key advantage of deep architectures in computer vision is that they compose complex functions from simple building blocks. Inspired by convolutional neural networks, a deeper graph convolutional network can in theory obtain more node representation information, so many researchers have studied its hierarchical (layer-level) information in depth. Given the aggregation and transmission characteristics of graph convolutional algorithms, the core of transferring hierarchical-structure algorithms to graph data analysis lies in constructing layer-level convolution operators and fusing information between layers. We review graph network hierarchical information mining algorithms: we first discuss the current state and open issues of graph convolutional networks, then introduce the development of hierarchical graph convolutional algorithms, and propose a new categorization based on how layer-level information is processed in graph convolution.
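The over-smoothing behaviour described above can be made concrete with a small numerical sketch. The toy experiment below is an illustrative assumption rather than an experiment from the paper: it repeatedly applies the normalized propagation matrix Â to random node features on a small two-cluster graph and reports the mean pairwise distance between node representations, which shrinks as the effective depth grows.

```python
import numpy as np

def normalized_adjacency(A):
    """D^{-1/2}(A+I)D^{-1/2}, the propagation matrix of a vanilla GCN layer."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Two loosely connected triangles: nodes {0,1,2} and {3,4,5} joined by edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

A_norm = normalized_adjacency(A)
H = np.random.randn(6, 4)  # random initial node features

for depth in (1, 2, 4, 8, 16, 32):
    Hk = np.linalg.matrix_power(A_norm, depth) @ H
    # The mean pairwise distance shrinks rapidly with depth: node representations
    # collapse toward a degree-scaled version of the same vector and the two
    # clusters become hard to distinguish (over-smoothing).
    dists = [np.linalg.norm(Hk[i] - Hk[j]) for i in range(6) for j in range(i + 1, 6)]
    print(f"depth={depth:2d}  mean pairwise distance={np.mean(dists):.4f}")
```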
Existing algorithms are divided into regularization methods and architecture adjustment methods. Regularization methods focus on the construction of layer-level convolution operators: to deepen the graph neural network and slow down over-smoothing, graph structural relationships are used to constrain information transmission during convolution. Architecture adjustment methods, in contrast, fuse information between layers to enrich node representations, using various residual-style connections such as jumping knowledge or affine skip connections. We also demonstrate, through an experiment on hierarchical characteristics, that hierarchy-specific feature nodes exist in the graph structure; such nodes can only be classified correctly by a graph convolutional network of the corresponding depth. Graph network hierarchical information mining algorithms apply different mining strategies to classify nodes with different characteristics within a unified, end-to-end model. If a model could successfully classify the hierarchy-specific feature nodes at every level, graph node classification would achieve much better results. Finally, the main application fields of hierarchical information mining models for graph convolutional networks are summarized, and future research directions are discussed from four aspects: computational efficiency, large-scale data, dynamic graphs and application scenarios. Graph network hierarchical information mining is a deeper exploration of graph neural networks: through hierarchical information interaction, transfer and dynamic evolution, richer node information can be obtained from the shallow to the deep layers of the network. Although some hierarchical graph convolutional algorithms can slow down over-smoothing, the effectiveness of information mining between layers still needs further development, and the open problems of deep graph neural network structures deserve further clarification.
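One way to picture the architecture adjustment family is a jumping-knowledge-style forward pass: every layer's output is retained and fused (here simply by concatenation) so that a node's final representation combines shallow and deep neighborhood information. The sketch below is only a minimal illustration in that spirit, with hypothetical dimensions and a plain concatenation aggregator; it does not reproduce the exact architecture of any surveyed model.

```python
import numpy as np

def gcn_layer(A_norm, H, W):
    """Vanilla graph convolution layer: neighborhood aggregation + linear transform + ReLU."""
    return np.maximum(A_norm @ H @ W, 0.0)

def jumping_knowledge_forward(A_norm, X, weights, W_out):
    """Run several GCN layers, keep every intermediate representation, and fuse them
    by concatenation (one simple architecture-adjustment strategy) before classification."""
    layer_outputs = []
    H = X
    for W in weights:
        H = gcn_layer(A_norm, H, W)
        layer_outputs.append(H)                    # retain layer-level information
    H_cat = np.concatenate(layer_outputs, axis=1)  # fuse shallow and deep representations
    return H_cat @ W_out                           # per-node class scores

# Toy setup: 5 nodes, 6 input features, three 8-dimensional hidden layers, 3 classes.
rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                     # random undirected graph
A_hat = A + np.eye(5)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

X = rng.standard_normal((5, 6))
weights = [rng.standard_normal((6, 8)) * 0.1,
           rng.standard_normal((8, 8)) * 0.1,
           rng.standard_normal((8, 8)) * 0.1]
W_out = rng.standard_normal((24, 3)) * 0.1         # 3 layers x 8 dims concatenated

print(jumping_knowledge_forward(A_norm, X, weights, W_out).shape)  # (5, 3)
```

Because the classifier sees the concatenation of all layers, nodes that are best characterized at different depths can, in principle, all contribute useful features, which is the intuition behind fusing layer-level information.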
Keywords: hierarchical structure; graph convolutional networks (GCN); attention mechanism; artificial intelligence; deep learning