Similar Documents
1.
肖琳  陈博理  黄鑫  刘华锋  景丽萍  于剑 《软件学报》2020,31(4):1079-1089
Since the rise of big data, multi-label classification has been a widely studied problem with many practical applications, such as text classification, image recognition, video annotation, and multimedia information retrieval. Traditional multi-label text classification algorithms treat labels as symbols without semantic information; in many cases, however, labels carry specific semantics, and the semantic information of a label corresponds to the content of the document. To establish and exploit this correspondence, a LAbel Semantic Attention Multi-label classification (LASA) method is proposed, which relies on the document text and its associated labels and shares word representations between documents and labels. For document embedding, a bi-directional long short-term memory network (Bi-LSTM) is used to obtain a hidden representation of each word, and a label semantic attention mechanism assigns a weight to each word in the document, capturing each word's importance to the current label. In addition, since labels tend to be correlated in the semantic space, using label semantic information also takes label correlations into account. Experimental results on standard multi-label text classification datasets show that the proposed method effectively captures important words and outperforms state-of-the-art multi-label text classification algorithms.
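The label semantic attention step described above, which scores each word's hidden state against a label embedding and pools the states with the resulting weights, can be sketched in plain Python. This is a toy illustration with hypothetical vectors, not the authors' LASA implementation:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def label_attention(word_states, label_vec):
    """Score each word's hidden state against the label embedding,
    normalize the scores, and pool the states into a document vector."""
    scores = [sum(h_d * l_d for h_d, l_d in zip(h, label_vec)) for h in word_states]
    weights = softmax(scores)
    dim = len(word_states[0])
    doc_vec = [sum(w * h[d] for w, h in zip(weights, word_states)) for d in range(dim)]
    return weights, doc_vec

# hypothetical Bi-LSTM hidden states for three words, and one label embedding
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
label = [1.0, 0.0]
weights, doc_vec = label_attention(states, label)
```

Words whose hidden states align with the label embedding receive larger attention weights, so the pooled document vector emphasizes label-relevant words.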

2.
Objective: Handwritten text line extraction is a fundamental step in document image processing. In unconstrained handwritten text images, text lines suffer from varying degrees of skew, curvature, crossing, and touching, and traditional geometric segmentation or clustering methods often cannot segment text line boundaries accurately. To address these problems, a handwritten text line extraction method based on a joint regression-clustering framework is proposed. Method: First, a bank of anisotropic Gaussian filters performs multi-scale, multi-orientation analysis of the image; the smearing effect is used to detect ridge structures and extract the main body of each text line, which is then skeletonized to obtain a text-line regression model. Next, connected components are taken as basic image units to build a superpixel representation. To cluster the superpixels, a pixel-superpixel-text-line hierarchical associative random field model is built, and energy-function optimization assigns each superpixel to its text line. On this basis, all character blocks touching across lines are detected, and a regression-line-based k-means clustering algorithm, guided by the regression model, clusters the pixels of touching characters, segmenting them and assigning them to their text lines. Finally, text-line label switches allow the pixels of each text line to be displayed and extracted selectively, with no further geometric segmentation needed. Results: On the HIT-MW offline handwritten Chinese document dataset, the method achieves a detection rate (DR) of 99.83% and a recognition accuracy (RA) of 99.92%. Conclusion: Experiments show that, compared with piecewise projection analysis, minimum-spanning-tree clustering, and Seam Carving, the proposed joint regression-clustering framework improves the controllability and segmentation accuracy of text line boundaries. It extracts handwritten text lines efficiently while minimizing interference between adjacent lines, with high accuracy and robustness.

3.
Traditional multi-label text classification algorithms fall short in mining label correlation information and in extracting discriminative information between texts and labels. This paper therefore proposes a multi-label text classification algorithm based on a label-combination pre-trained model and multi-granularity fusion attention. A text encoder with label-correlation awareness is obtained by pre-training on label combinations; a gated fusion strategy combines the pre-trained language model and word vectors into word embeddings, which are fed into the pre-trained encoder to generate label-semantics-based text representations. Self-attention and a label attention enhanced by multi-layer dilated convolutions yield global information and fine-grained semantic information respectively, which are adaptively fused and passed to a multi-layer perceptron for multi-label prediction. Experimental results on a specific threat-identification dataset and two general multi-label text classification datasets show that the proposed method effectively captures the correlation between labels and texts, with clear improvements in F1 score, Hamming loss, and recall.
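The gated fusion of pre-trained-model embeddings and word vectors can be illustrated with a minimal element-wise gate. The gate parameterization and the zero-initialized weights below are hypothetical stand-ins, not the paper's actual formulation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(plm_vec, w2v_vec, gate_weights, gate_bias):
    """Element-wise gate g decides how much of each embedding to keep:
    fused = g * plm + (1 - g) * w2v."""
    fused = []
    for p, w, gw, b in zip(plm_vec, w2v_vec, gate_weights, gate_bias):
        g = sigmoid(gw * (p + w) + b)   # hypothetical scalar gate per dimension
        fused.append(g * p + (1 - g) * w)
    return fused

# with zero gate parameters, g = 0.5 everywhere, so fusion reduces to the mean
fused = gated_fusion([1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0])
```

In a trained model the gate parameters are learned, letting each dimension prefer whichever embedding source is more informative.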

4.
For the dynamic multi-label text classification problem, where labels change over time, a dynamic multi-label text classification algorithm based on label semantic similarity is proposed. In the training phase, a convolutional-neural-network multi-label text classifier is first trained with a fixed label set, and the output of the classifier's penultimate layer is taken as the document's feature vector. Because this feature vector is learned with label supervision, it carries label semantic information, unlike representations based purely on the text content. In the testing phase, a test document is fed into the trained classifier to obtain its feature vector, similarities to training documents are computed, and each similarity is corrected by a time-decay factor so that more recent documents receive higher similarity. Finally, a nearest-neighbor algorithm performs the classification. Experimental results show that the algorithm performs well on dynamic multi-label text classification.
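The test-phase scoring, similarity multiplied by a time-decay factor before a nearest-neighbor lookup, might look like the sketch below. The exponential decay form and the half-life parameter are assumptions for illustration; the abstract does not specify the decay function:

```python
import math

def decayed_knn_labels(query_vec, corpus, now, half_life, k=3):
    """corpus: list of (feature_vec, timestamp, labels). Similarity is a dot
    product scaled by an exponential time-decay factor, so recent documents
    rank higher; returns the label sets of the k best-scoring matches."""
    decay = math.log(2.0) / half_life
    scored = []
    for vec, ts, labels in corpus:
        sim = sum(a * b for a, b in zip(query_vec, vec))
        scored.append((sim * math.exp(-decay * (now - ts)), labels))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [labels for _, labels in scored[:k]]

# two equally similar documents; the more recent one wins after decay
corpus = [([1.0, 0.0], 0.0, {"old"}),
          ([1.0, 0.0], 9.0, {"new"})]
top = decayed_knn_labels([1.0, 0.0], corpus, now=10.0, half_life=5.0, k=1)
```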

5.
Online active multi-field learning for efficient email spam filtering
Email spam causes a serious waste of time and resources. This paper addresses the email spam filtering problem and proposes an online active multi-field learning approach, based on the following ideas: (1) email spam filtering is an online application, which suggests an online learning approach; (2) an email document has a multi-field text structure, which suggests a multi-field learning approach; and (3) obtaining labels for a real-world email spam filter is costly, which suggests an active learning approach. The online learner treats email spam filtering as incremental supervised binary streaming text classification. The multi-field learner combines the results predicted by the field classifiers using a novel compound weight schema, and each field classifier computes the arithmetic average of the conditional probabilities calculated from feature strings according to a string-frequency index data structure. By comparing the current variance of the field classification results with the historical variance, the active learner evaluates classification confidence and treats the more uncertain emails as the more informative samples for which to request labels. The experimental results show that the proposed approach achieves state-of-the-art performance with greatly reduced label requirements and very low space and time costs. The performance of our online active multi-field learning on the standard (1-ROCA)% measure even exceeds the full-feedback performance of some advanced individual text classification algorithms.
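Two numeric ingredients of the approach, combining per-field probabilities and flagging uncertain emails by comparing current variance against historical variance, can be sketched as follows. The plain weighted average stands in for the paper's compound weight schema, and the field names and values are invented:

```python
def spam_score(field_probs, field_weights):
    """Combine per-field spam probabilities with a weighted average
    (a simplified stand-in for the compound weight schema)."""
    total = sum(field_weights)
    return sum(p * w for p, w in zip(field_probs, field_weights)) / total

def is_informative(field_probs, history_var, margin=1.0):
    """Request a label when the field classifiers disagree more than they
    have historically: high variance across fields means low confidence."""
    mean = sum(field_probs) / len(field_probs)
    var = sum((p - mean) ** 2 for p in field_probs) / len(field_probs)
    return var > history_var * margin

# hypothetical subject / body / header classifier outputs for one email
probs = [0.9, 0.1, 0.8]
score = spam_score(probs, [1.0, 1.0, 1.0])
```

An email whose fields agree (all probabilities near each other) would be classified silently; the disagreeing one above would trigger a label request.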

6.
This paper presents a two-phase approach based on the semi-Markov conditional random fields model (semi-CRFs) and explores novel feature sets for identifying entities in text as one of five types: protein, DNA, RNA, cell_line, and cell_type. Semi-CRFs assign a label to a segment rather than to a single word, which is more natural than other machine learning methods such as the conditional random fields model (CRFs). Our approach divides the biomedical named entity recognition task into two sub-tasks: term boundary detection and semantic labeling. In the first phase, the term boundary detection sub-task detects the boundaries of entities and classifies them all into a single generic type C. In the second phase, the semantic labeling sub-task assigns the correct entity type to the entities detected in the first phase. We explore novel feature sets at both phases to improve performance. For comparison, experiments were conducted on both CRFs and semi-CRFs models at each phase. Our experiments on the JNLPBA 2004 dataset achieve an F-score of 74.64% based on semi-CRFs, without deep domain knowledge or post-processing algorithms, outperforming most state-of-the-art systems.

7.
Multi-label classification of financial texts enables information retrieval over massive financial news according to user needs. To further improve label recognition for financial texts and to model the correlations among labels, a financial-text multi-label classification algorithm based on graph deep learning is proposed. Graph deep learning learns local and global graph-structure features with deep networks and can characterize complex relations between nodes. Modeling label correlations to enable knowledge transfer between labels is key to building algorithms with strong generalization. The proposed algorithm incorporates label correlation information: a bidirectional gated recurrent network and a label attention mechanism produce label-specific feature representations of news texts, and a graph neural network learns the complex dependencies among labels. Experimental results on a real-world dataset show that explicitly modeling label correlations greatly improves generalization, especially on tail labels; compared with the CAML, BIGRU-LWAN, and ZACNN algorithms, the proposed algorithm improves macro-F1 by up to 3.1% on all labels and 6.9% on tail labels.

8.
Traditional offline hashing algorithms are slow to train, memory-hungry, and hard to update, and the labels of real-world image collections are often largely missing. To address these problems, an online hashing algorithm with balanced label prediction (BLPOH) is proposed. BLPOH generates predicted labels through a label-prediction module and fuses them with the incomplete ground-truth labels, effectively mitigating the performance degradation caused by missing labels. Observing that label distributions are imbalanced, a label-category similarity balancing algorithm is proposed and applied to the label-prediction module to improve prediction accuracy. Information from old data is incorporated into the online update of the hash functions, improving the model's compatibility with old data. Experiments on two widely used datasets, with comparisons against several state-of-the-art algorithms, confirm the superiority of BLPOH.

9.
李绪夫 《计算机时代》2020,(5):50-53,58
In the era of big data, effective collection, curation, and mining of pharmaceutical patent data is increasingly important to the pharmaceutical industry. Current text classification neural networks are not accurate enough on pharmaceutical patent labels. To improve label classification, a bi-directional long short-term memory network classification model with an attention mechanism is designed. The model avoids the long-term dependency problem of traditional recurrent neural networks and makes full use of global information to learn a weight distribution over the textual features.

10.
Deep-learning-based multi-label text classification methods have two main shortcomings: a lack of multi-granularity learning of textual information, and under-use of the constraint relations among labels. To address these problems, a multi-label text classification method with multi-granularity information-relation enhancement is proposed. First, texts and labels are jointly embedded into the same space, and the BERT pre-trained model provides latent feature representations of both. Then three multi-granularity relation-enhancement modules are built: a document-level shallow label-attention classification module, a word-level deep label-attention classification module, and an auxiliary module matching the constraint relations among labels. The first two modules perform multi-granularity learning on the shared representations: shallow interaction between document-level text information and label information, and deep interaction between word-level text information and label information. The auxiliary module improves classification by learning the relations among labels. Compared with mainstream multi-label text classification algorithms on three representative datasets, the proposed method achieves the best results on the main metrics Micro-F1, Macro-F1, nDCG@k, and P@k.

11.
In natural language processing, a crucial subsystem in a wide range of applications is a part-of-speech (POS) tagger, which labels (or classifies) unannotated words of natural language with POS labels corresponding to categories such as noun, verb, or adjective. Mainstream approaches are generally corpus-based: a POS tagger learns from a corpus of pre-annotated data how to correctly tag unlabeled data. Presented here is a brief state-of-the-art account of POS tagging. POS tagging approaches use a labeled corpus to train computational models. Typical models of three kinds of tagging are introduced in this article: rule-based tagging, statistical approaches, and evolutionary algorithms. The advantages and pitfalls of each are discussed and analyzed. Some rule-based and stochastic methods have achieved accuracies of 93-96%, while some evolutionary algorithms reach about 96-97%.
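The corpus-based idea in its simplest form, remembering each word's most frequent tag from annotated data, can be shown in a few lines. This is a baseline sketch only, far weaker than the rule-based, statistical, and evolutionary methods surveyed above; the toy corpus is invented:

```python
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sentences):
    """Corpus-based baseline: remember the most frequent tag per word."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word][tag] += 1
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

def tag_words(model, words, default="NOUN"):
    """Tag a sentence; unknown words fall back to a default tag."""
    return [model.get(w, default) for w in words]

corpus = [[("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")],
          [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]]
model = train_unigram_tagger(corpus)
tags = tag_words(model, ["the", "cat", "barks"])
```

Such a unigram baseline ignores context entirely; the statistical taggers in the survey add transition models (e.g. HMMs) over tag sequences to resolve ambiguity.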

12.
We present a new multiclass algorithm in the bandit framework, where after making a prediction the learning algorithm receives only partial feedback, i.e., a single bit indicating whether the predicted label is correct, rather than the true label. Our algorithm is based on the second-order Perceptron and uses upper-confidence bounds to trade off exploration and exploitation, instead of the random sampling performed by most current algorithms. We analyze this algorithm in a partial adversarial setting, where instances are chosen adversarially, while the labels are chosen according to a linear probabilistic model which is also chosen adversarially. We show a regret of $\mathcal{O}(\sqrt{T}\log T)$, which improves over the current best bound of $\mathcal{O}(T^{2/3})$ in the fully adversarial setting. We evaluate our algorithm on nine real-world text classification problems and four vowel recognition tasks, often obtaining state-of-the-art results, even compared with non-bandit online algorithms, especially when label noise is introduced.
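The upper-confidence-bound idea, preferring the label whose reward estimate plus uncertainty bonus is largest, can be shown with a context-free toy learner. This drops the second-order Perceptron and the linear model entirely and keeps only the exploration rule, so it is an illustration of the principle, not the paper's algorithm:

```python
import math

class UCBMulticlass:
    """Toy bandit learner: per-label mean-reward estimates plus a
    confidence width; predict the label with the highest upper bound."""
    def __init__(self, n_labels):
        self.counts = [0] * n_labels
        self.rewards = [0.0] * n_labels
        self.t = 0

    def predict(self):
        self.t += 1
        for k in range(len(self.counts)):
            if self.counts[k] == 0:
                return k                      # try every label once first
        def ucb(k):
            mean = self.rewards[k] / self.counts[k]
            return mean + math.sqrt(2.0 * math.log(self.t) / self.counts[k])
        return max(range(len(self.counts)), key=ucb)

    def update(self, label, correct):
        """Bandit feedback: one bit saying whether the prediction was right."""
        self.counts[label] += 1
        self.rewards[label] += 1.0 if correct else 0.0

learner = UCBMulticlass(3)
for _ in range(30):
    y = learner.predict()
    learner.update(y, correct=(y == 2))      # label 2 is always right here
```

After a short exploration phase the learner concentrates its predictions on the correct label while still occasionally re-checking the others.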

13.
Multi-label text classification is an increasingly important field, as large amounts of text data are available and extracting relevant information is important in many application contexts. Probabilistic generative models are the basis of a number of popular text mining methods such as Naive Bayes or Latent Dirichlet Allocation. However, Bayesian models for multi-label text classification are often overly complicated in order to account for label dependencies and skewed label frequencies while at the same time preventing overfitting. To solve this problem we employ the same technique that contributed to the success of deep learning in recent years: greedy layer-wise training. Applying this technique in the supervised setting prevents overfitting and leads to better classification accuracy. The intuition behind this approach is to learn the labels first and subsequently add a more abstract layer to represent dependencies among the labels. This allows using a relatively simple hierarchical topic model which can easily be adapted to the online setting. We show that our method successfully models dependencies online for large-scale multi-label datasets with many labels, and improves over a baseline method that does not model dependencies. The same strategy, layer-wise greedy training, also makes the batch variant competitive with existing, more complex multi-label topic models.

14.
Conventional connected component analysis (CCA) algorithms perform slowly in real-time embedded applications because they need multiple passes to resolve label equivalences. As this fundamental task becomes crucial for stream processing, single-pass algorithms were introduced to enable stream-oriented hardware designs. However, most single-pass CCA algorithms in the literature fall short of maximum streaming throughput, as additional time such as the horizontal blanking period is required to resolve label equivalences. This paper proposes a novel single-pass CCA algorithm that uses a combination of linked-list and run-length-based techniques to label, resolve equivalences, and extract object features in a single raster scan. The proposed algorithm includes a label recycling scheme that yields a low memory requirement. Experimental results show that the implementation of the proposed CCA achieves a throughput of one cycle per pixel and surpasses the most memory-efficient state-of-the-art work with up to a 25% reduction in memory usage for \(7680\times 4320\)-pixel images.
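A single-raster-scan, run-length scheme with union-find resolution of label equivalences can be sketched as below. This illustrates the single-pass principle only; it uses 4-connectivity and models neither the hardware pipeline nor the paper's label recycling:

```python
def count_components(image):
    """Single-pass, run-length connected-component counting (4-connectivity).
    Runs of foreground pixels are merged with overlapping runs from the
    previous row via union-find, so label equivalences are resolved on the
    fly instead of in a second pass over the image."""
    parent = []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    prev_runs = []
    for row in image:
        runs, col = [], 0
        while col < len(row):
            if row[col]:
                start = col
                while col < len(row) and row[col]:
                    col += 1                 # extend the run
                label = len(parent)
                parent.append(label)         # provisional label for this run
                for p_start, p_end, p_label in prev_runs:
                    if p_start < col and start < p_end:   # columns overlap
                        union(p_label, label)
                runs.append((start, col, label))
            else:
                col += 1
        prev_runs = runs
    return len({find(x) for x in range(len(parent))})

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 0],
       [1, 0, 0, 0]]
n_components = count_components(img)
```

A hardware implementation replaces the Python dictionaries and lists with fixed-size tables, which is where the label recycling scheme becomes important for memory.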

15.
Development of an HTML Parser for a Web Document Cleaning System
For a Web-oriented information system, removing useless data such as scripts, advertising links, and navigation links improves the efficiency of information storage and retrieval; merging and splitting Web documents by semantics also aids information management. These are the tasks of a Web document cleaning system. In Web document cleaning, both offline rule learning and online document cleaning must build on an analysis of the structure and content of Web documents. Starting from the general concepts of HTML parsing and the requirements of a Web document cleaning system, this paper describes the architecture of an HTML parser developed in-house and discusses the design of its components in detail: the dictionary, the lexical analyzer, and the syntax analyzer.
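Python's standard-library html.parser illustrates the kind of lexical/syntactic pass such a cleaner builds on. The sketch below keeps visible text and drops script/style content; it is a minimal stand-in, not the parser described in the paper:

```python
from html.parser import HTMLParser

class CleaningParser(HTMLParser):
    """Keep visible text but drop everything inside <script> and <style>
    elements, the first cleaning step of removing script data."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0      # >0 while inside a script/style element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

parser = CleaningParser()
parser.feed("<p>hello</p><script>var x = 1;</script><p>world</p>")
text = " ".join(parser.chunks)
```

A full cleaner would also track attributes (to spot advertising and navigation links by URL patterns) and build a parse tree for semantic merging and splitting, which is where the dictionary and syntax analyzer come in.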

16.
A current consensus on multi-label classification is to exploit label correlations for performance improvement. Many approaches build one classifier per label based on the one-versus-all strategy and integrate the classifiers by enforcing a regularization term on the global weights to exploit label correlations. However, this strategy may be suboptimal, since only part of the global weights may support the assumption. This paper proposes clustered intrinsic label correlations for multi-label classification (CILC), which extends the traditional support vector machine to the multi-label setting. The predictive function of each classifier consists of two components: one is the common information shared among all labels, and the other is a label-specific component which depends strongly on the corresponding label. The label-specific component, representing the intrinsic label correlations, is regularized by a clustered-structure assumption. The appealing features of the proposed method are that it separates the common information from the label-specific information and exploits the clustered structure among labels represented by the label-specific parts. Practical multi-label classification problems such as text categorization, image annotation, and sentiment analysis can be solved directly by the proposed CILC method. Experiments across five data sets validate the effectiveness of CILC compared with six well-established multi-label classification algorithms.

17.
This paper proposes an automatic text-independent writer identification framework that integrates an industrial handwriting recognition system, used to perform automatic segmentation of an online handwritten document at the character level. Subsequently, a fuzzy c-means approach is adopted to estimate statistical distributions of character prototypes on an alphabet basis. These distributions model the unique handwriting styles of the writers. The proposed system attained an accuracy of 99.2% when retrieving from a database of 120 writers. The only limitation is that a minimum amount of text needs to be present in the document for sufficient accuracy to be achieved; we have found this minimum to be about 160 characters, or approximately 3 lines of text. In addition, the discriminative power of different alphabets with respect to accuracy is also reported.

18.
When visualizing graphs, it is essential to communicate the meaning of each graph object via textual or graphical labels. Automatic placement of labels in a graph is an NP-hard problem, for which efficient heuristic solutions have recently been developed. In this paper, we describe a general framework for modeling, drawing, editing, and automatic placement of labels respecting user constraints. In addition, we present the interface and the basic engine of the Graph Editor Toolkit, a family of portable graph visualization libraries designed for integration into graphical user interface application programs. This toolkit produces high-quality automated placement of labels in a graph using our framework. A brief survey of automatic label placement algorithms is also presented. Finally, we describe extensions to certain existing automatic label placement algorithms, allowing their integration into this visualization tool.
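A common greedy heuristic for the NP-hard placement problem tries a fixed list of candidate offsets per anchor point and keeps the first position that collides with nothing already placed. The sketch below illustrates that heuristic family only and makes no claim about the Graph Editor Toolkit's actual algorithm; the offsets and sizes are invented:

```python
def overlaps(a, b):
    """Axis-aligned rectangle intersection; rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_labels(anchors, size, offsets=((5, 5), (5, -15), (-45, 5), (-45, -15))):
    """Greedy placement: for each anchor try candidate offsets in order and
    keep the first label position free of conflicts with already-placed
    labels; None marks a label that could not be placed."""
    placed, result = [], []
    w, h = size
    for x, y in anchors:
        chosen = None
        for dx, dy in offsets:
            cand = (x + dx, y + dy, w, h)
            if all(not overlaps(cand, p) for p in placed):
                chosen = cand
                break
        result.append(chosen)
        if chosen:
            placed.append(chosen)
    return result

# two nearby anchors: the second label is pushed to a non-conflicting offset
labels = place_labels([(0, 0), (2, 0)], size=(40, 10))
```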

19.
Multi-label text classification assigns each document the most relevant subset of labels from a very large label set. This paper proposes a Parameter Adaptive Model under a Multi-strategy Attention Mechanism (MSAPA) for document modeling and classification. The MSAPA model has two main parts: the first uses multiple types of attention to extract global and local keyword features from a self-attention mechanism and global and local keyword features from a label attention mechanism; the second uses a multi-parameter adaptive strategy to dynamically assign different weights to the attention types, learning a better text representation and improving classification accuracy. Extensive experiments on the AAPD and RCV1 benchmark datasets demonstrate the superiority of the MSAPA model.
