Similar Documents
20 similar documents found (search time: 15 ms)
1.
Many studies on streaming data classification have been based on a paradigm in which a fully labeled stream is available for learning purposes. However, it is often too labor-intensive and time-consuming to manually label a data stream for training. This difficulty may make conventional supervised learning approaches infeasible in many real-world applications, such as credit fraud detection, intrusion detection, and rare event prediction. In previous work, Li et al. suggested that these applications be treated as a Positive and Unlabeled (PU) learning problem, and proposed a learning algorithm, OcVFDT, as a solution (Li et al. 2009). Their method requires only a set of positive examples and a set of unlabeled examples, both easily obtainable in a streaming environment, making it widely applicable to real-life applications. Here, we enhance Li et al.'s solution by adding three features: an efficient method to estimate the percentage of positive examples in the training stream, the ability to handle numeric attributes, and the use of more appropriate classification methods at tree leaves. Experimental results on synthetic and real-life datasets show that our enhanced solution (called PUVFDT) has very good classification performance and a strong ability to learn from data streams with only positive and unlabeled examples. Furthermore, it reduces the learning time of OcVFDT by about an order of magnitude. Even with 80% of the examples in the training data stream unlabeled, PUVFDT still achieves classification performance competitive with that of VFDTcNB (Gama et al. 2003), a supervised learning algorithm.
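The prior-estimation step mentioned above is central to PU learning. As an illustration only (not the PUVFDT procedure itself), the sketch below estimates the positive fraction with the well-known calibration trick of Elkan and Noto (2008): a probabilistic classifier trained to separate labeled positives from unlabeled examples yields a constant c = P(labeled | positive), from which the prior follows. All names and parameters here are assumptions for the toy example.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def estimate_positive_fraction(X, s):
    """X: feature matrix; s: 1 = labeled positive, 0 = unlabeled."""
    # A streaming-friendly probabilistic classifier (supports partial_fit).
    clf = SGDClassifier(loss="log_loss", max_iter=1000)
    clf.fit(X, s)
    # c = P(s=1 | y=1), estimated as the mean score on labeled positives.
    c = clf.predict_proba(X[s == 1])[:, 1].mean()
    # Elkan-Noto identity: P(y=1) = P(s=1) / c.
    return min(1.0, s.mean() / max(c, 1e-12))

# Toy usage: 30% of the true positives carry labels, true prior is 0.5.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, (500, 5)),   # positives
               rng.normal(-1.0, 1.0, (500, 5))])  # negatives
s = np.zeros(1000, dtype=int)
s[rng.choice(500, 150, replace=False)] = 1        # label 30% of positives
print(estimate_positive_fraction(X, s))            # should land near 0.5
```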

2.
In many machine learning settings, labeled examples are difficult to collect while unlabeled data are abundant. Also, for some binary classification problems, positive examples, which are elements of the target concept, are available. Can these additional data be used to improve the accuracy of supervised learning algorithms? In this paper, we investigate the design of learning algorithms from positive and unlabeled data only. Many machine learning and data mining algorithms, such as decision tree induction algorithms and naive Bayes algorithms, use examples only to evaluate statistical queries (SQ-like algorithms). Kearns designed the statistical query learning model in order to describe these algorithms. Here, we design an algorithm scheme which transforms any SQ-like algorithm into an algorithm based on positive statistical queries (estimates of probabilities over the set of positive instances) and instance statistical queries (estimates of probabilities over the instance space). We prove that any class learnable in the statistical query learning model is learnable from positive statistical queries and instance statistical queries alone, provided that a lower bound on the weight of any target concept f can be estimated in polynomial time. Then, we design a decision tree induction algorithm, POSC4.5, based on C4.5, that uses only positive and unlabeled examples, and we give experimental results for this algorithm. The learning problem remains open for imbalanced classes, that is, when one of the two classes (say the positive class) is heavily underrepresented compared with the other; this problem is challenging because it is encountered in many real-world applications.
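The transformation rests on a simple identity: for a query chi(x, label), the probability over labeled examples decomposes into positive-conditional and marginal terms once the target weight w = P[f(x)=1] is known. The sketch below checks this decomposition numerically on a toy threshold concept; the query and the concept are assumptions for illustration, not from the paper.

```python
import numpy as np

# Identity behind positive-SQ learning:
#   P[chi(x, f(x))] = w * P_pos[chi(x,1)] + P_D[chi(x,0)] - w * P_pos[chi(x,0)]
# where P_pos is over positive instances and P_D over the instance space.

def sq_from_positive_and_instance(chi, positives, unlabeled, w):
    pos_1 = np.mean([chi(x, 1) for x in positives])   # positive SQ, label 1
    pos_0 = np.mean([chi(x, 0) for x in positives])   # positive SQ, label 0
    inst_0 = np.mean([chi(x, 0) for x in unlabeled])  # instance SQ, label 0
    return w * pos_1 + inst_0 - w * pos_0

# Toy check: 1-D concept f(x) = (x > 0) under uniform data, so w = 0.5.
rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, 50_000)
chi = lambda x, l: (x > 0.5) == (l == 1)              # an arbitrary query
direct = np.mean([chi(x, float(x > 0)) for x in xs])  # uses true labels
approx = sq_from_positive_and_instance(chi, xs[xs > 0], xs, 0.5)
print(direct, approx)  # both should be close to 0.75
```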

3.
The term positive unlabeled learning refers to the binary classification problem in the absence of negative examples. When only positive and unlabeled instances are available, semi-supervised classification algorithms cannot be directly applied, and thus new algorithms are required. One of these positive unlabeled learning algorithms is positive naive Bayes (PNB), an adaptation of the naive Bayes induction algorithm that does not require negative instances. In this work we propose two ways of enhancing this algorithm. On the one hand, we take the concept behind PNB one step further, proposing a procedure to build more complex Bayesian classifiers in the absence of negative instances, and present a new algorithm (named positive tree augmented naive Bayes, PTAN) to obtain tree augmented naive Bayes models in the positive unlabeled domain. On the other hand, we propose a new Bayesian approach to deal with the a priori probability of the positive class that models the uncertainty over this parameter by means of a Beta distribution. This approach is applied to both PNB and PTAN, resulting in two new algorithms. The four algorithms are empirically compared on positive unlabeled learning problems based on real and synthetic databases. The results suggest that, when the predictive variables are not conditionally independent given the class, the extension of PNB to more complex networks improves classification performance. They also show that our Bayesian approach to the a priori probability of the positive class can improve the results obtained by PNB and PTAN.
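The core trick in PNB is recovering the negative-class conditionals, which cannot be estimated directly, from the unlabeled data and a known positive prior p via P(x_j = v | y=0) = (P(x_j = v) - p * P(x_j = v | y=1)) / (1 - p). The following is a minimal sketch of that idea for discrete attributes; smoothing choices and variable names are assumptions, not the authors' implementation.

```python
import numpy as np

def pnb_fit(pos, unl, p, n_values):
    """pos/unl: integer-coded attribute matrices; p: positive prior;
    n_values[j]: cardinality of attribute j. Returns log-prob tables."""
    log_pos, log_neg = [], []
    for j in range(pos.shape[1]):
        k = n_values[j]
        # Laplace-smoothed estimates from positives and from the unlabeled
        # set (which, in PU learning, samples the full marginal).
        p_v_pos = (np.bincount(pos[:, j], minlength=k) + 1.0) / (len(pos) + k)
        p_v_all = (np.bincount(unl[:, j], minlength=k) + 1.0) / (len(unl) + k)
        # Recover the negative conditionals; clip and renormalize to keep
        # them valid probabilities.
        p_v_neg = np.clip((p_v_all - p * p_v_pos) / (1.0 - p), 1e-6, None)
        p_v_neg /= p_v_neg.sum()
        log_pos.append(np.log(p_v_pos))
        log_neg.append(np.log(p_v_neg))
    return log_pos, log_neg

def pnb_predict(x, log_pos, log_neg, p):
    s1 = np.log(p) + sum(lp[v] for lp, v in zip(log_pos, x))
    s0 = np.log(1 - p) + sum(ln[v] for ln, v in zip(log_neg, x))
    return int(s1 > s0)
```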

4.
Learning from labeled and unlabeled data using a minimal number of queries
The considerable time and expense required for labeling data has prompted the development of algorithms which maximize classification accuracy for a given amount of labeling effort. On the one hand, the effort has gone into developing so-called "active learning" algorithms, which sequentially choose the patterns to be explicitly labeled so as to realize the maximum information gain from each labeling. On the other hand, the effort has gone into algorithms that can learn from labeled as well as the more abundant unlabeled data. This paper proposes an algorithm that integrates the benefits of active learning with the benefits of learning from labeled and unlabeled data. Our approach is based on reversing the roles of the labeled and unlabeled data. Specifically, we use a Genetic Algorithm (GA) to iteratively refine the class membership of the unlabeled patterns so that the maximum a posteriori (MAP) based predicted labels of the patterns in the labeled dataset are in agreement with the known labels. This reversal of the roles of labeled and unlabeled patterns leads to an implicit class assignment of the unlabeled patterns. For active learning, we use a subset of the GA population to construct multiple MAP classifiers. Points in the input space where there is maximal disagreement amongst these classifiers are then selected for explicit labeling. The learning from labeled and unlabeled data and the active learning phases are interlaced, and together they provide accurate classification while minimizing the labeling effort.
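The query-selection step, picking points where a committee of classifiers disagrees most, is the part that generalizes beyond the GA machinery. A minimal sketch, assuming the committee is simply any list of trained classifiers with a `predict` method (the GA that produced them is abstracted away):

```python
import numpy as np

def select_queries(committee, X_pool, n_queries=5):
    """Return indices of the pool points with maximal committee disagreement."""
    votes = np.array([clf.predict(X_pool) for clf in committee])  # m x n

    def vote_entropy(col):
        # Entropy of the label votes for one pool point.
        _, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    disagreement = np.apply_along_axis(vote_entropy, 0, votes)
    return np.argsort(disagreement)[-n_queries:]  # most contested points
```

Vote entropy stands in here for the paper's disagreement measure; any committee-based criterion would slot into the same place.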

5.
6.
In real-world data mining applications, it is often the case that unlabeled instances are abundant while available labeled instances are very limited. Thus, semi-supervised learning, which attempts to benefit from a large amount of unlabeled data together with labeled data, has attracted much attention from researchers. In this paper, we propose a very fast and yet highly effective semi-supervised learning algorithm, which we call Instance Weighted Naive Bayes (IWNB for short). IWNB first trains a naive Bayes classifier using the labeled instances only, and the trained model is used to estimate the class membership probabilities of the unlabeled instances. The estimated class membership probabilities are then used to label and weight the unlabeled instances. Finally, a naive Bayes classifier is trained again using both the originally labeled data and the newly labeled and weighted unlabeled data. Our experimental results on a large number of UCI data sets show that IWNB often improves the classification accuracy of the original naive Bayes when available labeled data are very limited.
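The three-step procedure the abstract describes maps almost directly onto a few lines of scikit-learn. A hedged sketch, with GaussianNB standing in for whatever naive Bayes variant the authors used:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def iwnb(X_lab, y_lab, X_unl):
    # Step 1: train on the labeled instances only.
    nb = GaussianNB().fit(X_lab, y_lab)
    # Step 2: pseudo-label the unlabeled instances; the predicted class
    # membership probability becomes each instance's weight.
    proba = nb.predict_proba(X_unl)
    y_unl = nb.classes_[proba.argmax(axis=1)]
    w_unl = proba.max(axis=1)
    # Step 3: retrain on labeled (weight 1) plus weighted unlabeled data.
    X_all = np.vstack([X_lab, X_unl])
    y_all = np.concatenate([y_lab, y_unl])
    w_all = np.concatenate([np.ones(len(y_lab)), w_unl])
    return GaussianNB().fit(X_all, y_all, sample_weight=w_all)
```

Weighting by confidence, rather than hard self-labeling, is what keeps poorly predicted unlabeled instances from dominating the second training pass.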

7.
In this paper we introduce a paradigm for learning in the limit of potentially infinite languages from all positive data and from negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered that reflect different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider models where (1) the learner gets the least negative counterexample; (2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; (3) a counterexample can be delayed. The learning power and limitations of these models, the relationships between them, and their relationships with classical paradigms for learning languages in the limit (without negative counterexamples) are explored. Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners getting negative counterexamples immediately turn out to be as powerful as ones that do not get them for an indefinitely (but finitely) long time, or that are only told that their latest conjecture is not a subset of the target language, without any specific negative counterexample. Another result shows that for behaviorally correct learning (where semantic convergence is required from the learner) with negative counterexamples, a learner making just one error in almost all its conjectures has the “ultimate power”: it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data.

8.
A computational model for learning languages in the limit from full positive data and a bounded number of queries to a teacher (oracle) is introduced and explored. Equivalence, superset, and subset queries are considered (for the latter we also consider a variant in which the learner tests every conjecture, but the number of negative answers is uniformly bounded). If the answer is negative, the teacher may provide a counterexample. We consider several types of counterexamples: arbitrary, least counterexamples, counterexamples whose size is bounded by the size of the positive data seen so far, and no counterexamples. A number of hierarchies based on the number of queries (answers) and the types of answers/counterexamples are established. The capabilities of learning with different types of queries are compared. In most cases, one or two queries of one type can sometimes do more than any bounded number of queries of another type. Still, surprisingly, a finite number of subset queries is sufficient to simulate the same number of equivalence queries when behaviourally correct learners do not receive counterexamples and may have an unbounded number of errors in almost all conjectures.

9.
10.
The PU classification problem (‘P’ stands for positive, ‘U’ stands for unlabeled), in which the training set consists of a collection of positive and unlabeled examples, has recently become a research hot spot. In this paper, we design a new classification algorithm to solve the PU problem: the biased twin support vector machine (B-TWSVM). In B-TWSVM, two nonparallel hyperplanes are constructed such that the positive examples are classified correctly while the number of unlabeled examples classified as positive is minimized. Moreover, since the unlabeled set also contains positive data, B-TWSVM allows different penalty parameters for positive and negative data. Experimental results demonstrate that our method outperforms state-of-the-art methods in most cases.
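B-TWSVM itself solves two nonparallel-hyperplane quadratic programs, which scikit-learn does not provide; the sketch below illustrates only the *biased penalty* idea it builds on, using a single SVM where misclassifying a labeled positive costs far more than calling an unlabeled example positive. The penalty values are assumptions for the example.

```python
import numpy as np
from sklearn.svm import LinearSVC

def biased_pu_svm(X_pos, X_unl, c_pos=10.0, c_unl=1.0):
    """Biased SVM baseline: treat unlabeled examples as noisy negatives,
    with asymmetric class penalties (the PU-learning trick)."""
    X = np.vstack([X_pos, X_unl])
    y = np.concatenate([np.ones(len(X_pos), dtype=int),
                        np.zeros(len(X_unl), dtype=int)])
    clf = LinearSVC(class_weight={1: c_pos, 0: c_unl}, max_iter=10_000)
    return clf.fit(X, y)
```

In practice c_pos/c_unl is tuned on held-out data; the twin-SVM formulation replaces this single problem with one smaller QP per hyperplane.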

11.
One of the challenges in image search is learning with few labeled examples. Existing solutions mainly focus on leveraging either unlabeled data or query logs to address this issue, but little is known about taking both into account. This work presents a novel learning scheme that exploits both unlabeled data and query logs through a unified Manifold Ranking (MR) framework. In particular, we propose a local scaling technique to facilitate MR by self-tuning the scale parameter, and a soft label propagation strategy to enhance the robustness of MR against erroneous query logs. Further, within the proposed MR framework, a hybrid active learning method is developed, which is effective and efficient at selecting informative and representative unlabeled examples, so as to maximally reduce users' labeling effort. An empirical study shows that the proposed scheme is significantly more effective than state-of-the-art approaches.
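For reference, this is vanilla manifold ranking (Zhou et al.), the framework the abstract extends; the paper's local scaling and soft label propagation refinements are not reproduced here, and the fixed sigma is an assumption the local-scaling technique is designed to remove.

```python
import numpy as np

def manifold_ranking(X, y, sigma=1.0, alpha=0.99, iters=200):
    """X: n x d features; y: n-vector, 1 for query items, 0 otherwise.
    Returns ranking scores (higher = more relevant to the query)."""
    # RBF affinity with zero self-similarity.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^(-1/2) W D^(-1/2).
    Dm = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = Dm[:, None] * W * Dm[None, :]
    # Iterate f <- alpha * S f + (1 - alpha) * y until (near) convergence.
    f = y.astype(float).copy()
    for _ in range(iters):
        f = alpha * S @ f + (1 - alpha) * y
    return f
```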

12.
Building a conventional classifier requires two kinds of data: positive and negative samples. In remote sensing image classification, a situation often arises in which only one class of ground objects is of interest. Since labeling samples is time-consuming and labor-intensive, while unlabeled samples are easy to obtain and contain useful information, we propose a remote sensing image classification method based on positive and unlabeled samples (PUL). First, trustworthy positive and negative samples are screened from the unlabeled set based on the inherent characteristics of the positive samples combined with Support Vector Data Description (SVDD), and are then removed from the unlabeled set. Next, these samples are fed into SVM training; a threshold is set according to how the unlabeled set scores under this classifier, and relatively reliable positive and negative samples are again screened from the unlabeled set. Finally, a Weighted SVM is trained: the initial positive samples and the reliable positive and negative samples extracted earlier are given weight 1, while the samples screened out by the SVM training step receive weights in the range 0 to 1. To verify the effectiveness of PUL, classification experiments were conducted on remote sensing images, comparing against one-class SVM (OC-SVM), Gaussian Data Description (GDD), Support Vector Data Description (SVDD), Biased SVM, and multi-class SVM. The experimental results show that PUL improves classification performance and outperforms both the one-class methods above and the multi-class SVM method.
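A hedged sketch of the two-stage idea described above, with scikit-learn's OneClassSVM standing in for SVDD (the two are closely related but not identical); the quantile thresholds and weights are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

def pul_two_stage(X_pos, X_unl):
    # Stage 1: a one-class model of the positives screens the unlabeled set
    # for the most and least positive-like samples.
    occ = OneClassSVM(nu=0.1, gamma="scale").fit(X_pos)
    scores = occ.decision_function(X_unl)
    rel_pos = X_unl[scores > np.quantile(scores, 0.9)]
    rel_neg = X_unl[scores < np.quantile(scores, 0.1)]
    # Stage 2: train a weighted SVM on the seeds; here all seeds get
    # weight 1, where the paper refines weights iteratively in (0, 1].
    X = np.vstack([X_pos, rel_pos, rel_neg])
    y = np.concatenate([np.ones(len(X_pos) + len(rel_pos), dtype=int),
                        np.zeros(len(rel_neg), dtype=int)])
    w = np.ones(len(y))
    return SVC(gamma="scale").fit(X, y, sample_weight=w)
```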

13.
The class of very simple grammars is known to be polynomial-time identifiable in the limit from positive data. This paper gives an even more general discussion of the efficiency of identification of very simple grammars from positive data, which includes both positive and negative results. In particular, we present an alternative efficient inconsistent learning algorithm for very simple grammars.

14.
This paper is concerned with a sufficient condition under which a concept class is learnable in Gold's classical model of identification in the limit from positive data. The standard principle of learning algorithms working under this model is called the MINL strategy: conjecture a hypothesis representing a minimal concept among those consistent with the given positive data. The minimality of a concept is defined with respect to the set-inclusion relation, so the strategy is semantics-based. On the other hand, refinement operators have been developed in the field of learning logic programs, where a learner constructs logic programs as hypotheses consistent with given logical formulae. Refinement operators have syntax-based definitions: they are defined based on inference rules in first-order logic. This paper investigates the relation between the MINL strategy and refinement operators in inductive inference. We first show that if a hypothesis space admits a refinement operator with certain properties, the concept class is learnable by an algorithm based on the MINL strategy. We then present an additional condition that ensures the learnability of the class of unbounded finite unions of concepts. Furthermore, we show that under certain assumptions the learning algorithm runs in polynomial time.

15.
To enable machine learning on training sets that contain only positive and unlabeled examples (PU learning, Positive Unlabeled Learning), we propose an averaged n-dependence decision tree (P-AnDT) classification algorithm for PU learning. First, when constructing a decision tree, n attributes of each sample are selected as dependence attributes, and the joint influence of the dependence attributes and the class attribute is computed at each splitting attribute. Then, different input attributes are chosen in turn as dependence attributes, multiple diverse classifiers are built, and their outputs are averaged, yielding an ensemble classification algorithm. Finally, by estimating the proportion p of positive examples in the data set, the algorithm is able to classify in the PU learning setting. Experimental results on several UCI data sets show that, compared with PU learning algorithms based on Bayesian assumptions (such as PNB and PTAN), P-AnDT achieves better and more stable classification accuracy.
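A loose sketch of the averaging-plus-prior idea only: several tree classifiers built on different attribute subsets (standing in for the different dependence-attribute choices) have their scores averaged, and the estimated positive proportion p sets the decision threshold so the ensemble can be trained on positives versus unlabeled data. Everything here (depths, subset sizes, thresholding rule) is an illustrative assumption, not P-AnDT itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def p_andt_like(X_pos, X_unl, p, n_members=5, seed=0):
    rng = np.random.default_rng(seed)
    X = np.vstack([X_pos, X_unl])
    s = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_unl))])
    members = []
    for _ in range(n_members):
        # Each member sees a different attribute subset, for diversity.
        attrs = rng.choice(X.shape[1], size=max(1, X.shape[1] // 2),
                           replace=False)
        members.append((attrs,
                        DecisionTreeClassifier(max_depth=4).fit(X[:, attrs], s)))

    def predict(Xq):
        score = np.mean([m.predict_proba(Xq[:, a])[:, 1]
                         for a, m in members], axis=0)
        # Label the top p fraction as positive, per the estimated prior.
        return (score >= np.quantile(score, 1 - p)).astype(int)

    return predict
```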

16.
This paper is about extracting knowledge from large sets of videos, with particular reference to the video-surveillance application domain. We consider an unsupervised framework and address the specific problem of modeling common behaviors from a long-term collection of instantaneous observations. Such data describe dynamic events and may be represented as time series in an appropriate feature space. Starting from a set of data representative of the common events in a given scenario, the pipeline we propose includes a data abstraction level, which allows us to process different data in a homogeneous way, and a behavior modeling level, based on spectral clustering. At the end of the pipeline we obtain a model of the behaviors that are most frequent in the observed scene, represented by a prototypical behavior, which we call a cluster candidate. We report a detailed experimental evaluation on both benchmark datasets and a complex set of data collected in-house. The experiments show that our method compares very favorably with other approaches from the recent literature. In particular, the results prove that our method is able to capture meaningful information and discard noisy information from very heterogeneous datasets with different levels of prior information available.
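A minimal sketch of the behavior-modeling level only: each time series is mapped to a fixed-length feature vector (here naive summary statistics, a placeholder for the paper's data-abstraction level) and grouped with spectral clustering. The feature choice and cluster count are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_behaviors(series_list, n_behaviors=4):
    """series_list: list of 1-D numpy arrays (one time series per event).
    Returns the feature matrix and one cluster label per series."""
    # Data abstraction stand-in: crude per-series summary statistics.
    feats = np.array([[s.mean(), s.std(), s.min(), s.max()]
                      for s in series_list])
    labels = SpectralClustering(n_clusters=n_behaviors,
                                affinity="nearest_neighbors").fit_predict(feats)
    return feats, labels
```

Each cluster's most central member would then serve as the prototypical behavior (the "cluster candidate") the abstract describes.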

17.
Tri-training: exploiting unlabeled data using three classifiers
In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. Specifically, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor puts any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and application to the Web page classification task indicate that tri-training can effectively exploit unlabeled data to enhance learning performance.
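A simplified sketch of one tri-training round follows; the published algorithm additionally imposes error-rate conditions on when newly labeled examples may be used, and those checks are omitted here for brevity.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def tri_training_round(X_lab, y_lab, X_unl, base=DecisionTreeClassifier()):
    # Three classifiers from bootstrap samples of the labeled set.
    rng = np.random.default_rng(0)
    clfs = []
    for _ in range(3):
        idx = rng.choice(len(X_lab), len(X_lab), replace=True)
        clfs.append(clone(base).fit(X_lab[idx], y_lab[idx]))
    preds = np.array([c.predict(X_unl) for c in clfs])   # shape 3 x n_unl
    new_clfs = []
    for i in range(3):
        j, k = [m for m in range(3) if m != i]
        agree = preds[j] == preds[k]          # the other two classifiers agree
        Xi = np.vstack([X_lab, X_unl[agree]]) # so their label is adopted for i
        yi = np.concatenate([y_lab, preds[j][agree]])
        new_clfs.append(clone(base).fit(Xi, yi))
    return new_clfs  # predict by majority vote of the three
```

Rounds are repeated until none of the three classifiers changes, and the final prediction is the majority vote.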

18.
Automatic text classification is one of the most important tools in Information Retrieval. This paper presents a novel text classifier using positive and unlabeled examples. The primary challenge of this problem, compared with the classical text classification problem, is that no labeled negative documents are available in the training example set. Firstly, we identify many more reliable negative documents using an improved 1-DNF algorithm with a very low error rate. Secondly, we build a set of classifiers by iteratively applying the SVM algorithm on a training data set that is augmented during the iteration. Thirdly, unlike previous PU-oriented text classification work, we adopt a weighted vote of all classifiers generated in the iteration steps to construct the final classifier, instead of choosing one of the classifiers as the final classifier. Finally, we present an approach based on PSO (Particle Swarm Optimization) to discover the best combination of weights for this vote. In addition, we built a focused crawler based on link contexts guided by different classifiers to evaluate our method. Several comprehensive experiments have been conducted using the Reuters data set and thousands of web pages. Experimental results show that our method improves performance (F1-measure) compared with PEBL, and that a focused web crawler guided by our PSO-based classifier outperforms several other classifiers in both harvest rate and target recall.
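A hedged sketch of the iterative-SVM stage only (the PEBL-style expansion loop): the improved 1-DNF step is abstracted into a caller-supplied reliable-negative mask, and the PSO-weighted vote over the intermediate classifiers is not reproduced.

```python
import numpy as np
from sklearn.svm import LinearSVC

def iterative_pu_svm(X_pos, X_unl, reliable_neg_mask, max_rounds=10):
    """Grow the negative set by repeatedly retraining an SVM and moving
    unlabeled documents it classifies as negative into the negatives."""
    X_neg = X_unl[reliable_neg_mask]
    X_rest = X_unl[~reliable_neg_mask]
    clfs = []
    for _ in range(max_rounds):
        X = np.vstack([X_pos, X_neg])
        y = np.concatenate([np.ones(len(X_pos), dtype=int),
                            np.zeros(len(X_neg), dtype=int)])
        clf = LinearSVC(max_iter=10_000).fit(X, y)
        clfs.append(clf)
        pred = clf.predict(X_rest)
        if not (pred == 0).any():                      # no new negatives: stop
            break
        X_neg = np.vstack([X_neg, X_rest[pred == 0]])  # grow the negative set
        X_rest = X_rest[pred == 1]
    return clfs  # the paper combines these via a PSO-weighted vote
```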

19.
Previous partially supervised classification methods can partition unlabeled data into positive and negative examples for a given class by learning from positive labeled examples and unlabeled examples, but they cannot further group the negative examples into meaningful clusters even when the negative examples contain many different classes. Here we propose an automatic method to obtain a natural partitioning of mixed data (labeled data + unlabeled data) by maximizing a stability criterion, defined on classification results from an extended label propagation algorithm, over all possible values of the model order (the number of classes) in the mixed data. Our experimental results on benchmark corpora for the word sense disambiguation task indicate that this model order identification algorithm, with the extended label propagation algorithm as the base classifier, outperforms SVM, a one-class partially supervised classification algorithm, and the model order identification algorithm with semi-supervised k-means clustering as the base classifier when labeled data are incomplete.
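For orientation, this is plain label propagation as the base classifier, using scikit-learn's standard implementation; the paper's extension and the stability-driven outer loop over candidate model orders are only indicated in comments. The kernel parameters are assumptions.

```python
from sklearn.semi_supervised import LabelPropagation

def propagate(X, y_partial):
    """X: feature matrix; y_partial: class ids for labeled points,
    -1 for unlabeled (the sklearn convention).
    Returns an inferred label for every point."""
    lp = LabelPropagation(kernel="rbf", gamma=0.5).fit(X, y_partial)
    return lp.transduction_

# Model-order identification (paper's outer loop, schematically):
# for each candidate number of classes, run the base classifier under
# perturbations of the data, measure how stable the resulting
# partitions are, and keep the model order maximizing that stability.
```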

20.
“What does it mean, to see? The plain man's answer would be, to know what is where by looking.” This famous quote by David Marr (Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, Freeman, New York, 1982) sums up the holy grail of vision: discovering what is present in the world, and where it is, from unlabeled images. In this paper we tackle this challenging problem by proposing a generative model of object formation and describing an efficient algorithm to automatically learn the parameters of the model from a collection of unlabeled images. Our algorithm discovers the objects and their spatial extents by clustering together images containing similar foregrounds. Our approach simultaneously solves for the image clusters, the foreground appearance models, and the spatial regions containing the objects by optimizing a single likelihood function defined over the entire image collection. We describe two methods for efficient foreground localization: the first does not require any bottom-up image segmentation and discovers the foreground region as a contiguous rectangular bounding box; the second expresses the foreground as a collection of super-pixels generated through a bottom-up segmentation of the image, but unlike previous methods does not assume objects to be encapsulated by a single segment. Evaluation on standard benchmarks and comparison with prior methods demonstrate that our approach achieves state-of-the-art results on the problem of unsupervised foreground localization and clustering.
