Incorporating BERT and Graph Attention Network for Multi-label Text Classification
Cite this article: HAO Chao, QIU Hang-Ping, SUN Yi. Incorporating BERT and Graph Attention Network for Multi-label Text Classification[J]. Computer Systems & Applications, 2022, 31(6): 167-174.
Authors: HAO Chao  QIU Hang-Ping  SUN Yi
Affiliation: Command & Control Engineering College, Army Engineering University of PLA, Nanjing 210007, China
Abstract: Multi-label text classification is an important branch of multi-label classification. Existing methods often ignore the relationships between labels and therefore fail to make effective use of label correlations, which hurts classification performance. To address this, this study proposes HBGA (hybrid BERT and graph attention), a model that fuses BERT with a graph attention network. First, BERT is used to obtain a contextual vector representation of the input text, and a Bi-LSTM and a capsule network extract the text's global and local features, respectively, which are combined into a single text feature vector by feature fusion. Meanwhile, the correlations between labels are modeled as a graph whose nodes represent the word embeddings of the labels, and a graph attention network maps these label vectors to a set of interdependent classifiers. Finally, the classifiers are applied to the text features produced by the feature extraction module for end-to-end training, and the classifier outputs and feature information are combined to obtain the final predictions. Comparative experiments on the Reuters-21578 and AAPD datasets show that the proposed model yields an effective improvement on multi-label text classification.
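To make the pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract describes. It is an illustrative reconstruction, not the authors' implementation: all layer sizes and dimensions are assumptions, the single-head graph attention layer is a simplified version of the standard GAT formulation, and the capsule-network branch is replaced by a plain convolutional stand-in for local features.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class GraphAttentionLayer(nn.Module):
    # Simplified single-head GAT layer; `adj` must contain self-loops.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, h, adj):                     # h: (L, in_dim), adj: (L, L)
        Wh = self.W(h)                             # (L, out_dim)
        L = Wh.size(0)
        # Attention logit e_ij = a([Wh_i || Wh_j]) for every label pair.
        pairs = torch.cat([Wh.unsqueeze(1).expand(L, L, -1),
                           Wh.unsqueeze(0).expand(L, L, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf")) # attend only along graph edges
        alpha = torch.softmax(e, dim=-1)
        return F.elu(alpha @ Wh)                   # (L, out_dim)

class HBGA(nn.Module):
    def __init__(self, label_emb_dim=300, feat_dim=512):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Global-feature branch: Bi-LSTM over BERT's token representations.
        self.bilstm = nn.LSTM(hidden, feat_dim // 2,
                              batch_first=True, bidirectional=True)
        # Local-feature branch: a CNN stand-in for the paper's capsule network.
        self.local = nn.Conv1d(hidden, feat_dim, kernel_size=3, padding=1)
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)
        # Maps label embeddings to a set of interdependent classifiers.
        self.gat = GraphAttentionLayer(label_emb_dim, feat_dim)

    def forward(self, input_ids, attention_mask, label_emb, adj):
        ctx = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        global_feat, _ = self.bilstm(ctx)          # (B, T, feat_dim)
        global_feat = global_feat.mean(dim=1)      # mean-pool over tokens
        local_feat = self.local(ctx.transpose(1, 2)).amax(dim=-1)  # max-pool
        text_feat = self.fuse(torch.cat([global_feat, local_feat], dim=-1))
        classifiers = self.gat(label_emb, adj)     # (L, feat_dim)
        return text_feat @ classifiers.t()         # logits, shape (B, L)

Since each document can carry several labels, a model of this shape would be trained end to end with a per-label binary loss such as torch.nn.BCEWithLogitsLoss on the (B, L) logits; the label adjacency matrix would typically be built from label co-occurrence statistics in the training set.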

Keywords: multi-label text classification  graph attention network  BERT  deep learning
Received: August 13, 2021
Revised: September 13, 2021

This article is indexed in databases including Wanfang Data.