Similar Documents
20 similar documents found (search time: 531 ms)
1.
Text mining, intelligent text analysis, text data mining and knowledge discovery in text are commonly used aliases for the process of extracting relevant and non-trivial information from text. Some crucial issues arise when trying to solve this problem, such as document representation and the scarcity of labeled data. This paper addresses these problems by introducing information from unlabeled documents into the training set, using the support vector machine (SVM) separating margin as the differentiating factor. Besides studying the influence of several pre-processing methods and assessing their relative significance, we also evaluate the benefits of introducing background knowledge into an SVM text classifier. We further evaluate the possibility of active learning and propose a method for successfully combining background knowledge and active learning. Experimental results show that the proposed techniques, used alone or combined, yield a considerable improvement in classification performance, even when only small labeled training sets are available.
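One way to use the SVM separating margin to fold unlabeled documents into training can be sketched as follows. This is a hedged illustration only, not the paper's exact algorithm: the function name `margin_self_train` and its parameters are hypothetical, and scikit-learn's `LinearSVC` stands in for whatever SVM the authors used. Unlabeled samples farthest from the hyperplane (largest absolute margin) are treated as confidently labeled and added to the training set.

```python
import numpy as np
from sklearn.svm import LinearSVC

def margin_self_train(X_l, y_l, X_u, rounds=3, per_round=5):
    """Iteratively add the unlabeled samples with the largest SVM margin
    (farthest from the separating hyperplane) to the labeled training set."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    clf = LinearSVC().fit(X_l, y_l)
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        scores = clf.decision_function(X_u)             # signed distance to hyperplane
        pick = np.argsort(-np.abs(scores))[:per_round]  # most confident first
        new_labels = np.where(scores[pick] > 0, clf.classes_[1], clf.classes_[0])
        X_l = np.vstack([X_l, X_u[pick]])
        y_l = np.concatenate([y_l, new_labels])
        X_u = np.delete(X_u, pick, axis=0)
        clf = LinearSVC().fit(X_l, y_l)                 # retrain on enlarged set
    return clf
```

In a text setting `X_l`/`X_u` would be TF-IDF matrices; dense arrays are used here only to keep the sketch short.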

2.
Nearest neighbor editing aided by unlabeled data (cited by 1: 0 self-citations, 1 external)
This paper proposes a novel method for nearest neighbor editing. Nearest neighbor editing aims to increase the classifier's generalization ability by removing noisy instances from the training set. Traditionally, nearest neighbor editing edits (removes or retains) each instance by the voting of the instances in the training set (labeled instances). Motivated by semi-supervised learning, however, we propose a novel editing methodology which edits each training instance by the voting of all available instances (both labeled and unlabeled). We expect that editing performance can be boosted by appropriately using unlabeled data. Our idea relies on the fact that in many applications, in addition to the training instances, many unlabeled instances are available, since they require no human annotation effort. Three popular data editing methods, edited nearest neighbor, repeated edited nearest neighbor, and All k-NN, are adopted to verify our idea. They are tested on a set of UCI data sets. Experimental results indicate that all three editing methods achieve improved performance with the aid of unlabeled data. Moreover, the improvement is more remarkable when the ratio of training data to unlabeled data is small.
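For reference, the classical edited nearest neighbor (ENN) rule that this line of work builds on can be sketched in a few lines. Note this is the plain labeled-only baseline: the paper's contribution is to let unlabeled instances vote as well, which is not shown here, and the function name is illustrative.

```python
import numpy as np

def edited_nearest_neighbor(X, y, k=3):
    """Wilson's ENN: drop each instance whose k nearest neighbors
    (majority vote) disagree with its own label."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                        # an instance cannot vote for itself
        neighbors = np.argsort(d)[:k]
        if np.bincount(y[neighbors]).argmax() == y[i]:
            keep.append(i)                   # neighbors agree: retain
    return np.array(keep)
```

A mislabeled point sitting inside the wrong cluster is outvoted by its neighbors and removed from the edited training set.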

3.
Label propagation is a semi-supervised classification method, but it performs poorly in text classification because it requires the classification results to satisfy the manifold assumption and its computational complexity is high when the data dimensionality is large. To address these problems, after analyzing the principles and complexity of the LDA topic model and the label propagation algorithm, this paper combines the two and proposes LPLDA, a label propagation algorithm based on the LDA topic model. The algorithm represents text data by the topics of an LDA topic model: on the one hand, the LDA representation helps the classification results satisfy the manifold assumption; on the other, it effectively reduces the similarity computation time of label propagation. Experiments show that when the labeled data are fewer than the samples to be classified, the algorithm outperforms traditional supervised classification methods.
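Under the assumption that the LPLDA pipeline amounts to "LDA topic vectors, then label propagation in topic space" (the exact algorithm is in the paper; the function name and scikit-learn stand-ins below are illustrative, with `LabelSpreading` replacing the paper's propagation variant), a minimal sketch:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.semi_supervised import LabelSpreading

def lplda(X_counts, y, n_topics=5):
    """Represent each document by its LDA topic distribution, then run
    label propagation in the low-dimensional topic space.
    y uses -1 for unlabeled documents (scikit-learn convention)."""
    topics = LatentDirichletAllocation(
        n_components=n_topics, random_state=0).fit_transform(X_counts)
    lp = LabelSpreading(kernel='knn', n_neighbors=5).fit(topics, y)
    return lp.transduction_   # inferred labels for all documents
```

Propagating in the topic space rather than the raw term space is what cuts the cost of the pairwise similarity computation.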

4.
To address the challenges that existing text classification methods face with real-time text, and considering that real-time text typically comes with only a small amount of labeled data, this paper proposes a semi-supervised text classification method based on ensemble learning with optimized sample-distribution sampling, aiming to improve semi-supervised classification performance. First, a new optimized sampling strategy is applied to obtain multiple training sets for sub-classifiers, increasing the diversity among training sets and limiting the spread of noise, thereby improving the overall generalization ability of the ensemble. Then, a voting mechanism based on multiplied confidences integrates the predictions to label the unlabeled data. Finally, an appropriate amount of data is selected to update the training model. Experimental results show that the method achieves better classification performance than state-of-the-art methods on both long and short texts.

5.
An active text annotation method using nearest neighbors and information entropy (cited by 1: 0 self-citations, 1 external)
Because large-scale annotation of text data is laborious and time-consuming, semi-supervised text classification, which uses a small number of labeled samples together with a large number of unlabeled samples, has developed rapidly. In semi-supervised text classification, the few labeled samples are mainly used to initialize the classification model, and how well they are chosen affects the performance of the final model. To make the labeled samples fit the underlying data distribution as closely as possible, this paper proposes a method that avoids selecting the K nearest neighbors of already-labeled samples when drawing the next batch of candidate samples, so that samples lying in different regions get more chances to be labeled. On this basis, to obtain more class information, the candidate with the highest information entropy is chosen as the sample to be labeled. Experiments on real text data demonstrate the effectiveness of the proposed method.
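The selection rule described above can be sketched generically. All names here are hypothetical, and the class probabilities are assumed to come from any probabilistic classifier trained on the current labeled set: first exclude the K nearest neighbors of every already-labeled sample, then pick the remaining candidate with maximal predictive entropy.

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def select_next(X, labeled_idx, proba, k=3):
    """Return the index of the next sample to annotate.

    X           : all samples
    labeled_idx : indices already annotated
    proba       : class-probability matrix over all samples
    k           : neighborhood size excluded around each labeled sample
    """
    excluded = set(labeled_idx)
    d = pairwise_distances(X[list(labeled_idx)], X)
    for row in d:
        excluded.update(np.argsort(row)[:k + 1])  # the point itself + its k neighbors
    candidates = [i for i in range(len(X)) if i not in excluded]
    if not candidates:                            # everything excluded: fall back
        candidates = [i for i in range(len(X)) if i not in set(labeled_idx)]
    ent = -(proba[candidates] * np.log(proba[candidates] + 1e-12)).sum(axis=1)
    return candidates[int(ent.argmax())]
```

Excluding labeled neighborhoods spreads annotation across regions; the entropy criterion then favors the most uncertain candidate.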

6.
In computer-aided medical systems, many practical classification applications are confronted with the massive growth of collected and stored data; this is especially the case in areas such as predicting medical test efficiency, classifying tumors, and detecting cancers. Data with known class labels (labeled data) can be limited, but unlabeled data (with unknown class labels) are more readily available. Semi-supervised learning deals with methods for exploiting unlabeled data in addition to labeled data to improve performance on the classification task. In this paper, we consider the problem of using a large amount of unlabeled data to improve the efficiency of feature selection in high-dimensional datasets when only a small set of labeled examples is available. We propose a new semi-supervised feature evaluation method called Optimized co-Forest for Feature Selection (OFFS), which combines ideas from co-forest with the embedded selection principle of Random Forest based on permutation of the out-of-bag set. We provide empirical results on several medical and biological benchmark datasets, indicating an overall significant improvement of OFFS over four other feature selection approaches of the filter, wrapper, and embedded kinds in semi-supervised learning. Our method proves its ability to select features, measure their importance, and improve the performance of the hypothesis learned with a small amount of labeled samples by exploiting unlabeled samples.

7.
Semi-supervised model-based document clustering: A comparative study (cited by 4: 0 self-citations, 4 external)
Semi-supervised learning has become an attractive methodology for improving classification models and is often viewed as using unlabeled data to aid supervised learning. However, it can also be viewed as using labeled data to help clustering, namely semi-supervised clustering. Viewing semi-supervised learning from a clustering angle is useful in practical situations where the set of labels available in the labeled data is not complete, i.e., the unlabeled data contain new classes not present in the labeled data. This paper analyzes several multinomial model-based semi-supervised document clustering methods under a principled model-based clustering framework. The framework naturally leads to a deterministic annealing extension of existing semi-supervised clustering approaches. We compare three (slightly) different semi-supervised approaches for clustering documents: Seeded damnl, Constrained damnl, and Feedback-based damnl, where damnl stands for the multinomial model-based deterministic annealing algorithm. The first two are extensions of the seeded k-means and constrained k-means algorithms studied by Basu et al. (2002); the last is motivated by Cohn et al. (2003). Through empirical experiments on text datasets, we show that (a) deterministic annealing can often significantly improve the performance of semi-supervised clustering, and (b) the constrained approach is best when the available labels are complete, whereas the feedback-based approach excels when the available labels are incomplete.
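For orientation, the seeded k-means initialization that the Seeded damnl variant extends works like this: centroids are initialized from labeled seed documents (one mean per class) instead of at random, then ordinary k-means iterations follow. This is a generic sketch of Basu et al.'s seeding idea, not the deterministic-annealing algorithm itself, and the names are illustrative.

```python
import numpy as np

def seeded_kmeans(X, seed_idx, seed_labels, k, iters=20):
    """Seeded k-means: initialize each centroid as the mean of the labeled
    seeds of one class, then run standard k-means iterations."""
    centers = np.array([X[seed_idx][seed_labels == c].mean(axis=0)
                        for c in range(k)])
    for _ in range(iters):
        # squared Euclidean distance of every point to every centroid
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = d.argmin(axis=1)
        for c in range(k):
            members = X[assign == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return assign, centers
```

Because cluster `c` starts at the class-`c` seed mean, the cluster indices stay aligned with the class labels, which is what makes the labeled data usable for evaluation.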

8.
A compactness-based semi-supervised text classification method (cited by 2: 0 self-citations, 2 external)
Automatic text classification has become an important research topic. In many practical settings, the training corpus contains only a limited set of positive examples, and the numbers of positive and unlabeled documents in the corpus are typically imbalanced. Such text classification tasks therefore differ from traditional ones, and traditional text classifiers applied directly to them rarely achieve satisfactory results. This paper proposes a compactness-based method for this class of problems. Since no labeled negative documents are available, the method first extracts a set of reliable negative examples and then expands it using a compactness measure, yielding a training set that contains both positive and negative examples and thereby improving classifier performance. The method needs no special external knowledge base for feature extraction, so it applies well across different classification settings. Experiments on the text classification task of the TREC 2005 (Text REtrieval Conference) Genomics track show that the algorithm performs very well on this semi-supervised text classification problem.

9.
In real-world data mining applications, it is often the case that unlabeled instances are abundant while available labeled instances are very limited. Thus semi-supervised learning, which attempts to benefit from large amounts of unlabeled data together with labeled data, has attracted much attention from researchers. In this paper, we propose a very fast yet highly effective semi-supervised learning algorithm, which we call Instance Weighted Naive Bayes (IWNB). IWNB first trains a naive Bayes model using the labeled instances only, and the trained model is used to estimate the class membership probabilities of the unlabeled instances. These estimated probabilities are then used to label and weight the unlabeled instances. Finally, a naive Bayes model is trained again using both the originally labeled data and the newly labeled and weighted unlabeled data. Our experimental results on a large number of UCI data sets show that IWNB often improves the classification accuracy of the original naive Bayes when the available labeled data are very limited.
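The three IWNB steps map almost directly onto scikit-learn primitives. A minimal sketch, with the caveat that `GaussianNB` stands in for whichever naive Bayes variant the paper used and the function name is illustrative:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def iwnb(X_l, y_l, X_u):
    """Instance Weighted Naive Bayes sketch:
    1. train naive Bayes on labeled data;
    2. estimate class probabilities of unlabeled data;
    3. label each unlabeled instance with its most probable class,
       weighted by that probability;
    4. retrain on the union using the weights."""
    nb = GaussianNB().fit(X_l, y_l)
    proba = nb.predict_proba(X_u)
    y_u = proba.argmax(axis=1)            # pseudo-labels
    w_u = proba.max(axis=1)               # confidence-based weights
    X = np.vstack([X_l, X_u])
    y = np.concatenate([y_l, y_u])
    w = np.concatenate([np.ones(len(y_l)), w_u])
    return GaussianNB().fit(X, y, sample_weight=w)
```

Weighting by the estimated membership probability is what distinguishes this from plain self-training: uncertain pseudo-labels contribute less to the second model.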

10.
Multiple instance learning attempts to learn from a training set consisting of labeled bags, each containing many unlabeled instances. Most existing algorithms pay attention mainly to the 'most positive' instance in each positive bag and ignore the other instances. To utilize these unlabeled instances in positive bags, we present a new multiple instance learning algorithm via semi-supervised Laplacian twin support vector machines (called Miss-LTSVM). In Miss-LTSVM, all instances in positive bags are used in the manifold regularization terms to improve the performance of the classifier. To verify the effectiveness of the presented method, a series of comparative experiments is performed on seven multiple instance data sets. Experimental results show that the proposed method has better classification accuracy than other methods in most cases.

11.
Semi-supervised sentiment classification based on ensemble learning (cited by 1: 0 self-citations, 1 external)
Sentiment classification is the task of classifying the sentiment polarity expressed in a text. This paper studies semi-supervised sentiment classification, i.e., improving classification performance with the help of unlabeled samples when only a small set of labeled samples is available. To strengthen semi-supervised learning, we propose an ensemble method based on label consistency that fuses two mainstream semi-supervised sentiment classification approaches: co-training over random feature subspaces and label propagation. First, the classifiers trained by the two semi-supervised methods label the unlabeled samples; second, the unlabeled samples whose labels agree are selected; finally, the selected samples are used to update the training model. Experimental results show that the method effectively reduces the mislabeling rate on unlabeled samples and thus achieves better classification performance than either semi-supervised method alone.
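The "keep only consistently labeled samples" step above can be sketched generically. The helper name is hypothetical, and any two trained classifiers (here, whatever the co-training and label-propagation stages produce) can be plugged in:

```python
import numpy as np

def consistent_labels(clf_a, clf_b, X_u):
    """Label unlabeled samples with two classifiers and keep only the
    samples on which the two predictions agree."""
    ya, yb = clf_a.predict(X_u), clf_b.predict(X_u)
    agree = ya == yb
    return X_u[agree], ya[agree]
```

The agreeing subset is then appended to the labeled training set for the next round; disagreements, which are more likely to be mislabeled, are simply left unlabeled.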

12.
Increasing attention is being paid to the classification of ground objects using hyperspectral images. A key challenge in most hyperspectral classification is the cost of training samples: it is difficult to acquire enough effectively labeled samples for classification model frameworks. In this paper, a semi-supervised classification framework for hyperspectral images is proposed to better address these problems. The proposed method is based on an iterative process, making full use of the small amount of labeled data in the sample set. In addition, a new unlabeled-data trainer in the self-training semi-supervised learning framework is explored and implemented by estimating the fusion evidence entropy of unlabeled samples using minimum trust evaluation and maximum uncertainty. Finally, we employ different machine learning classification methods to compare classification performance on different hyperspectral images. The experimental results indicate that the proposed approach outperforms traditional state-of-the-art methods in terms of low classification error and better classification maps using few labeled samples.
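The self-training loop described above can be sketched with plain predictive entropy standing in for the paper's fusion evidence entropy (which combines trust evaluation and uncertainty and is not reproduced here); the function name and `LogisticRegression` base learner are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_l, y_l, X_u, rounds=5, per_round=10):
    """Each round: label the unlabeled samples whose predictive entropy is
    lowest (most certain) and move them into the training set."""
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        clf = LogisticRegression().fit(X_l, y_l)
        p = clf.predict_proba(X_u)
        ent = -(p * np.log(p + 1e-12)).sum(axis=1)   # low entropy = confident
        pick = np.argsort(ent)[:per_round]
        X_l = np.vstack([X_l, X_u[pick]])
        y_l = np.concatenate([y_l, p[pick].argmax(axis=1)])
        X_u = np.delete(X_u, pick, axis=0)
    return LogisticRegression().fit(X_l, y_l)
```

For hyperspectral data, `X_l`/`X_u` rows would be per-pixel spectral vectors; the loop structure is unchanged.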

13.
Band selection is an effective means of dimensionality reduction for hyperspectral data, but the limited number of labeled samples hurts the performance of supervised band selection. This paper proposes a semi-supervised band selection method based on the graph Laplacian and a self-training strategy. The method first defines a graph-based semi-supervised feature scoring criterion to produce an initial band subset; it then classifies on that subset and, following the self-training strategy, moves some high-confidence unlabeled samples into the labeled sample set, after which the band subset is updated with the scoring criterion. This process is repeated to obtain the final band subset. Hyperspectral band selection and classification experiments comparing several unsupervised, supervised, and semi-supervised methods show that the proposed algorithm selects better band subsets.

14.
When only a small number of labeled samples are available, supervised dimensionality reduction methods tend to perform poorly because of overfitting. In such cases, unlabeled samples could be useful in improving the performance. In this paper, we propose a semi-supervised dimensionality reduction method which preserves the global structure of unlabeled samples in addition to separating labeled samples in different classes from each other. The proposed method, which we call SEmi-supervised Local Fisher discriminant analysis (SELF), has an analytic form of the globally optimal solution and it can be computed based on eigen-decomposition. We show the usefulness of SELF through experiments with benchmark and real-world document classification datasets.

15.
Developing methods for designing good classifiers from labeled samples whose distribution differs from that of the test samples is an important and challenging research issue in machine learning and its applications. This paper focuses on designing semi-supervised classifiers with high generalization ability by using unlabeled samples drawn from the same distribution as the test samples, and presents a semi-supervised learning method based on a hybrid discriminative and generative model. Although JESS-CM is one of the most successful semi-supervised classifier design frameworks based on a hybrid approach, it suffers from overfitting in the task setting considered in this paper. We propose an objective function that utilizes both labeled and unlabeled samples for the discriminative training of hybrid classifiers, and we expect this objective function to mitigate the overfitting problem. We show the effect of the objective function through theoretical analysis and empirical evaluation. Our experimental results for text classification using four typical benchmark test collections confirmed that, in our task setting, the proposed method outperformed the JESS-CM framework in most cases. We also confirmed experimentally that the proposed method was useful for obtaining better performance when classifying data samples into known or unknown classes, i.e., classes that were or were not included in the given labeled samples.

16.
To address the problems of completing missing instance labels and prediction in multi-label learning, this paper proposes a regularization-based semi-supervised weak-label classification method (SWCMR) that exploits both instance similarity and label correlation. SWCMR first makes a preliminary estimate of the missing labels of weakly labeled instances based on label correlation; it then builds a neighborhood graph from the weakly labeled and unlabeled instances and constructs regularization terms based on the smoothness assumption from the perspectives of instance similarity and label correlation; finally, it trains a semi-supervised weak-label classification model using the re-estimated weakly labeled instances together with the unlabeled instances. Experimental results on several public multi-label datasets show that SWCMR improves classification performance, especially when label information is scarce.

17.
Social network platforms generate massive streams of short-text data, characterized by high speed and volume, concept drift, very short text length, and large numbers of missing class labels. This paper therefore proposes a semi-supervised short-text stream classification algorithm based on vector representations and label propagation, which can effectively classify datasets containing only a small amount of labeled data. To adapt to concept drift, a cluster-based concept drift detection algorithm is also proposed. Experiments on real short-text data streams show that, compared with semi-supervised classification algorithms and semi-supervised stream classification algorithms, the proposed algorithm not only improves accuracy and macro-averaged performance but also adapts quickly to concept drift in the stream.

18.
Traditional supervised classifiers use only labeled data (feature/label pairs) as the training set, while the unlabeled data are used as the testing set. In practice, labeled data are often hard to obtain, and the unlabeled data contain instances that belong to the predefined classes but not to the labeled categories. This problem has been widely studied in recent years, and semi-supervised PU learning is an efficient way to learn from positive and unlabeled examples. Among all semi-supervised PU learning methods, it is hard to choose a single approach that fits every unlabeled data distribution. In this paper, a new framework is designed to integrate different semi-supervised PU learning algorithms in order to take advantage of existing methods. In essence, we propose an automatic KL-divergence learning method that utilizes knowledge of the unlabeled data distribution. The experimental results show that (1) data distribution information is very helpful for semi-supervised PU learning methods, and (2) the proposed framework can achieve higher precision than the state-of-the-art method.
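For readers unfamiliar with PU learning, the common first step of many of the integrated algorithms, extracting "reliable negatives" from the unlabeled set, can be sketched as follows. This is a generic two-step PU illustration, not the paper's KL-divergence framework; the function name and `frac` parameter are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reliable_negatives(X_pos, X_unlabeled, frac=0.2):
    """Step 1 of two-step PU learning: train a classifier treating all
    unlabeled examples as negative, then take the unlabeled examples
    scored least positive as reliable negatives."""
    X = np.vstack([X_pos, X_unlabeled])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unlabeled))])
    clf = LogisticRegression().fit(X, y)
    scores = clf.predict_proba(X_unlabeled)[:, 1]   # P(positive)
    n = max(1, int(frac * len(X_unlabeled)))
    return np.argsort(scores)[:n]                   # indices of least positive-looking
```

Step 2 (not shown) trains an ordinary classifier on the positives versus these reliable negatives, optionally iterating to enlarge the negative set.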

19.
This paper proposes a semi-supervised learning method for semantic relation extraction between named entities. Given a small amount of labeled data, it benefits much from a large amount of unlabeled data by first bootstrapping a moderate number of weighted support vectors from all the available data through a co-training procedure on top of support vector machines (SVM) with feature projection and then applying a label propagation (LP) algorithm via the bootstrapped support vectors and the remaining hard unlabeled instances after SVM bootstrapping to classify unseen instances. Evaluation on the ACE RDC corpora shows that our method can integrate the advantages of both SVM bootstrapping and label propagation. It shows that our LP algorithm via the bootstrapped support vectors and hard unlabeled instances significantly outperforms the normal LP algorithm via all the available data without SVM bootstrapping. Moreover, our LP algorithm can significantly reduce the computational burden, especially when a large amount of labeled and unlabeled data is taken into consideration.

20.
Vision-based defect classification is an important technology for controlling product quality in manufacturing systems. Since it is very hard to obtain enough labeled samples for model training in real-world production, semi-supervised learning, which learns from both labeled and unlabeled samples, is well suited to this task. However, the intra-class variations and inter-class similarities of surface defects, known as poor class separation, may cause semi-supervised methods to perform poorly with few labeled samples, while graph-based methods, such as the graph convolutional network (GCN), can handle the problem well. Therefore, this paper proposes a new graph-based semi-supervised method, named multiple micrographs graph convolutional network (MMGCN), for surface defect classification. First, MMGCN performs graph convolution by constructing multiple micrographs instead of one large graph, and labels unlabeled samples by propagating label information from labeled to unlabeled samples within the micrographs; the multiple resulting labels are weighted to obtain the final label, which addresses the computational complexity and practicality limitations of the original GCN. Second, MMGCN divides the unlabeled dataset into multiple batches and sets an accuracy threshold: when the model accuracy reaches the threshold, the unlabeled data are labeled in batches. A well-known case study is used to evaluate the performance of the proposed method. The experimental results demonstrate that MMGCN achieves better computational complexity and practicality than GCN, and in terms of accuracy it also obtains the best performance and the best class separation compared with other semi-supervised surface defect classification methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
