Similar Documents
1.
Building on the regularization-based online learning framework proposed by J. Kivinen and M. K. Warmuth, this paper proposes weak-classifier weight-update schemes based on Bregman distances and equality-constrained regularization, and implements five weight-update algorithms: AdaBoostS, AdaBoostIE, AdaBoostRE, AdaBoostDE, and AdaBoostE. In the experimental section, the five proposed algorithms are compared with Real AdaBoost, Gentle AdaBoost, and Modest AdaBoost on real data.
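As a rough illustration of the Kivinen-Warmuth style of regularized update behind these variants, the sketch below applies a relative-entropy (Bregman) regularized multiplicative step to weak-classifier weights under the equality constraint that they sum to one. The function name and the learning rate `eta` are hypothetical; the paper's five variants differ in their choice of Bregman distance and constraint.

```python
import numpy as np

def eg_weight_update(w, losses, eta=0.1):
    """Exponentiated-gradient step: minimizes eta * <losses, w> + KL(w || w_old)
    subject to the equality constraint sum(w) == 1."""
    w_new = w * np.exp(-eta * losses)   # multiplicative (Bregman) step
    return w_new / w_new.sum()          # enforce the equality constraint

w = np.full(4, 0.25)                     # four weak classifiers, uniform start
losses = np.array([0.3, 0.1, 0.5, 0.2])  # per-classifier weighted error
print(eg_weight_update(w, losses))
```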

2.

In this paper, we propose the problem of online cost-sensitive classifier adaptation and the first algorithm to solve it. We assume that we have a base classifier for a cost-sensitive classification problem, but it was trained with respect to a cost setting different from the desired one. Moreover, we also have training data samples streaming to the algorithm one by one. The problem is to adapt the given base classifier to the desired cost setting using the streaming training samples online. To solve this problem, we propose to learn a new classifier by adding an adaptation function to the base classifier, and to update the adaptation function's parameter according to the streaming data samples. Given an input data sample and the cost of misclassifying it, we update the adaptation function's parameter by minimizing a cost-weighted hinge loss while simultaneously respecting the previously learned parameter. The proposed algorithm is compared to both online and off-line cost-sensitive algorithms on two cost-sensitive classification problems, and the experiments show that it not only outperforms them in classification performance, but also requires significantly less running time.
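The update described above (cost-weighted hinge loss plus staying close to the previous parameter) resembles a cost-weighted passive-aggressive step. The sketch below illustrates that reading; it is not the paper's exact rule, and `adapt_step`, the linear form of the adaptation function, and the cost-based step-size cap are all assumptions.

```python
import numpy as np

def adapt_step(theta, x, y, cost, base_score):
    """One online update of a linear adaptation term theta @ x added to a
    fixed base classifier score: move theta just enough to remove the
    cost-weighted hinge loss, otherwise keep the previous parameter."""
    loss = max(0.0, 1.0 - y * (base_score + theta @ x))  # hinge loss, y in {-1, +1}
    if loss == 0.0:
        return theta                     # respect the previously learned parameter
    tau = min(cost, loss / (x @ x))      # step size capped by the misclassification cost
    return theta + tau * y * x
```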


3.
Training neural networks (NNs) is a complex task of great importance in supervised learning. The performance of an NN depends largely on the success of the training process, and hence on the training algorithm. This paper addresses the application of harmony search algorithms to the supervised training of feed-forward (FF) NNs, which are frequently used for classification problems. Five variants of the harmony search algorithm are studied, with special attention to the Self-adaptive Global Best Harmony Search (SGHS) algorithm, and a structure suited to the data representation of NNs is adapted to SGHS. The technique is empirically tested and verified by training NNs on six benchmark classification problems and one real-world problem. Two of the benchmark problems have binary classes and the remaining four are n-ary classification problems. The real-world problem concerns the classification of the most frequently encountered quality defect at a major textile company in Turkey. The overall training time, sum of squared errors, and training and testing accuracies of the SGHS algorithm are compared with those of the other harmony search algorithms and the widely used standard back-propagation (BP) algorithm. The experiments show that the SGHS algorithm lends itself very well to the training of NNs and is highly competitive with the compared methods in terms of classification accuracy.
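For readers unfamiliar with the metaheuristic, here is a minimal harmony search loop minimizing a generic objective over a weight vector. It is a sketch of plain harmony search, not of SGHS (which additionally self-adapts `hmcr`, `par`, and the bandwidth `bw`); all parameter values are illustrative.

```python
import numpy as np

def harmony_search(f, dim, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=1000, lo=-1.0, hi=1.0, rng=None):
    """Minimize f over a dim-dimensional vector with basic harmony search."""
    rng = rng or np.random.default_rng(0)
    hm = rng.uniform(lo, hi, (hms, dim))          # harmony memory
    fit = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:               # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:            # pitch adjustment
                    new[j] += rng.uniform(-bw, bw)
            else:                                 # random selection
                new[j] = rng.uniform(lo, hi)
        worst = fit.argmax()
        fnew = f(new)
        if fnew < fit[worst]:                     # replace the worst harmony
            hm[worst], fit[worst] = new, fnew
    return hm[fit.argmin()]

# e.g. minimize a quadratic as a stand-in for a network's squared error
print(harmony_search(lambda w: ((w - 0.5) ** 2).sum(), dim=5))
```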

4.
5.
Flexible latent variable models for multi-task learning
Given multiple prediction problems such as regression or classification, we are interested in a joint inference framework that can effectively share information between tasks to improve prediction accuracy, especially when the number of training examples per problem is small. In this paper we propose a probabilistic framework that supports a set of latent variable models for different multi-task learning scenarios. We show that the framework is a generalization of standard learning methods for single prediction problems and that it can effectively model the shared structure among different prediction tasks. Furthermore, we present efficient algorithms for the empirical Bayes method as well as for point estimation. Our experiments on both simulated datasets and real-world classification datasets show the effectiveness of the proposed models in two evaluation settings: a standard multi-task learning setting and a transfer learning setting.

6.
Computing Optimal Attribute Weight Settings for Nearest Neighbor Algorithms
Nearest neighbor (NN) learning algorithms, examples of the lazy learning paradigm, rely on a distance function to measure the similarity of testing examples to the stored training examples. Since certain attributes are more discriminative while others can be less relevant or totally irrelevant, attributes should be weighted differently in the distance function. Most previous studies on weight setting for NN learning algorithms are empirical. In this paper we describe our attempt to derive theoretically optimal weights that minimize the predictive error of NN algorithms. Assuming a uniform distribution of examples in a 2-d continuous space, we first derive the average predictive error introduced by a linear classification boundary, and then determine the optimal weight setting for any polygonal classification region. Our theoretical results on optimal attribute weights can serve as a baseline or lower bound for comparing other, empirical weight-setting methods.
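A weighted distance of the kind discussed here is straightforward; the sketch below shows 1-NN prediction with per-attribute weights `w` in a squared Euclidean distance. It is illustrative only: the paper derives theoretically optimal values of `w` for polygonal class regions rather than prescribing an implementation.

```python
import numpy as np

def weighted_nn_predict(x, X_train, y_train, w):
    """1-NN prediction with per-attribute weights w in the distance function."""
    d = ((X_train - x) ** 2 * w).sum(axis=1)  # weighted squared Euclidean distance
    return y_train[d.argmin()]
```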

7.
General-purpose online learning algorithms have considerable difficulty classifying imbalanced data streams, and classification becomes even harder when concept drift occurs in the stream. This paper proposes an adaptive weighted online extreme learning machine for imbalanced data streams, which automatically adjusts the penalty parameter of each training sample as it arrives in real time, enabling online learning over imbalanced data streams. The algorithm is applicable both to online learning over static data streams with different degrees of skew and to online learning over streams undergoing concept drift. Theoretical analysis and experiments on several real data streams demonstrate the correctness and effectiveness of the algorithm.
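One simple way to realize the automatically adjusted penalty idea (an illustrative heuristic, not the paper's exact rule) is to weight each arriving sample inversely to its class's running frequency, as sketched below; the returned weight would then scale that sample's penalty term in the weighted ELM objective.

```python
from collections import Counter

def sample_penalty(class_counts, y_new):
    """Penalty weight for the newly arrived sample: inversely proportional
    to the running frequency of its class, so minority-class samples get
    larger penalties."""
    class_counts[y_new] += 1
    n, k = sum(class_counts.values()), len(class_counts)
    return n / (k * class_counts[y_new])

counts = Counter()
for label in [0, 0, 0, 1]:               # a skewed stream: class 1 is rare
    print(label, sample_penalty(counts, label))
```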

8.
Active learning is understood as any form of learning in which the learning algorithm has some control over the input samples, thanks to a specific sample-selection process on which it builds up the model. In this paper, we propose a novel active learning strategy for data-driven classifiers, based on an unsupervised criterion during the off-line training phase, followed by a supervised certainty-based criterion during incremental on-line training. In this sense, we call the new strategy hybrid active learning. Sample selection in the first phase is conducted from scratch (i.e. no initial labels or learners are needed) based on purely unsupervised criteria obtained from clusters: samples lying near cluster centers and near the borders of clusters are expected to be the most informative ones regarding the distribution characteristics of the classes. In the second phase, the task is to update the already trained classifiers during on-line mode with the most important samples in order to dynamically guide the classifier toward greater predictive power. Both strategies are essential for reducing the annotation and supervision effort of operators in off-line and on-line classification systems, as operators only have to label a select subset of the off-line training data and give feedback only on specific occasions during the on-line phase. The new active learning strategy is evaluated on real-world data sets from the UCI repository and data collected at on-line quality-control systems. The results show that an active-learning-based selection of training samples (1) does not weaken classification accuracy compared to using all samples in the training process and (2) can outperform classifiers built on randomly selected data samples.
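The first-phase, purely unsupervised criterion can be sketched concretely: cluster the unlabeled pool and pick the samples closest to each cluster center plus those nearest the cluster border. The snippet below is a minimal rendering of that idea using k-means; the function name, `per_cluster`, and the use of k-means itself are assumptions rather than the paper's specification.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_offline(X, k=5, per_cluster=2):
    """Unsupervised first-phase selection: the samples nearest each cluster
    center and the samples nearest the cluster border (largest within-cluster
    distance), taken as the most informative ones to label."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    picked = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
        order = idx[np.argsort(d)]
        picked += list(order[:per_cluster]) + list(order[-per_cluster:])
    return sorted(set(picked))
```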

9.
For a recurrent neural network (RNN), the transient response is a critical issue, especially in real-time signal processing applications. Conventional RNN training algorithms, such as backpropagation through time (BPTT) and real-time recurrent learning (RTRL), have not adequately addressed this problem because they suffer from low convergence speed. While increasing the learning rate may help to improve the performance of the RNN, it can result in unstable training in terms of weight divergence. An optimal tradeoff between RNN training speed and weight convergence is therefore desired. In this paper, a robust adaptive gradient-descent (RAGD) training algorithm for RNNs is developed based on a novel hybrid training concept. It switches the training patterns between standard real-time online backpropagation (BP) and RTRL according to derived convergence and stability conditions. The weight convergence and $L_2$-stability of the algorithm are established via the conic sector theorem. The optimized adaptive learning maximizes the training speed of the RNN for each weight update without violating the stability and convergence criteria. Computer simulations demonstrate the applicability of the theoretical results.

10.
The traditional setting of supervised learning requires a large number of labeled training examples to achieve good generalization. In many practical applications, however, unlabeled training examples are readily available while labeled ones are fairly expensive to obtain, so semisupervised learning has attracted much attention. Previous research on semisupervised learning mainly focuses on semisupervised classification; although regression is almost as important as classification, semisupervised regression is largely understudied. In particular, although cotraining is a main paradigm in semisupervised learning, few works have been devoted to cotraining-style semisupervised regression algorithms. In this paper, a cotraining-style semisupervised regression algorithm called COREG is proposed. The algorithm uses two regressors, each of which labels the unlabeled data for the other, where the confidence in labeling an unlabeled example is estimated through the reduction in mean squared error over the labeled neighborhood of that example. Analysis and experiments show that COREG can effectively exploit unlabeled data to improve regression estimates.
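The confidence estimate described above can be sketched directly: pseudo-label an unlabeled point, retrain, and measure the change in squared error over the point's labeled neighborhood. The snippet below is a simplified rendering with k-NN regressors (the regressor choice, `k`, and the function name are assumptions; COREG also requires the two regressors to differ, e.g. by distance metric).

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def coreg_confidence(reg, X_lab, y_lab, x_u, k=3):
    """COREG-style confidence for pseudo-labeling one unlabeled point x_u:
    the drop in squared error over x_u's labeled neighborhood after
    retraining with the pseudo-labeled point added.
    reg: a regressor already fitted on (X_lab, y_lab)."""
    y_u = reg.predict([x_u])[0]                     # pseudo-label from current regressor
    nbr_idx = np.argsort(((X_lab - x_u) ** 2).sum(axis=1))[:k]
    before = ((reg.predict(X_lab[nbr_idx]) - y_lab[nbr_idx]) ** 2).sum()
    reg2 = KNeighborsRegressor(n_neighbors=k).fit(
        np.vstack([X_lab, x_u]), np.append(y_lab, y_u))
    after = ((reg2.predict(X_lab[nbr_idx]) - y_lab[nbr_idx]) ** 2).sum()
    return before - after                           # positive => confident
```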

11.
Perceptron-based learning algorithms
A key task for connectionist research is the development and analysis of learning algorithms. This work examines several supervised learning algorithms for single-cell and network models. The heart of these algorithms is the pocket algorithm, a modification of perceptron learning that makes perceptron learning well-behaved with nonseparable training data, even if the data are noisy and contradictory. Features of these algorithms include speed, i.e. they are fast enough to handle large sets of training data; network scaling properties, i.e. network methods scale up almost as well as single-cell models when the number of inputs is increased; analytic tractability, i.e. upper bounds on classification error are derivable; online learning, i.e. some variants can learn continually, without referring to previous data; and winner-take-all or choice groups, i.e. the algorithms can be adapted to select one out of a number of possible classifications. These learning algorithms are suitable for applications in machine learning, pattern recognition, and connectionist expert systems.
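The pocket algorithm at the heart of these methods is easy to state: run ordinary perceptron updates, but keep "in the pocket" the weight vector with the best training accuracy seen so far. A minimal sketch (labels assumed in {-1, +1}, random sample order, illustrative parameter values):

```python
import numpy as np

def pocket(X, y, epochs=100, rng=None):
    """Pocket algorithm: perceptron updates, but return the weights with the
    best training accuracy seen so far, which keeps the procedure
    well-behaved on nonseparable, noisy data."""
    rng = rng or np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    best_w, best_correct = w.copy(), (np.sign(X @ w) == y).sum()
    for _ in range(epochs):
        i = rng.integers(len(X))                 # visit samples in random order
        if np.sign(X[i] @ w) != y[i]:
            w = w + y[i] * X[i]                  # standard perceptron update
            correct = (np.sign(X @ w) == y).sum()
            if correct > best_correct:           # better run: update the pocket
                best_w, best_correct = w.copy(), correct
    return best_w
```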

12.
Gentile, Claudio. Machine Learning, 2003, 53(3): 265-299
We consider two on-line learning frameworks: binary classification through linear threshold functions and linear regression. We study a family of on-line algorithms, called p-norm algorithms, introduced by Grove, Littlestone and Schuurmans in the context of deterministic binary classification. We show how to adapt these algorithms for use in the regression setting, and prove worst-case bounds on the square loss using a technique from Kivinen and Warmuth. As pointed out by Grove et al., these algorithms can be made to approach a version of the classification algorithm Winnow as p goes to infinity; similarly, they can be made to approach the corresponding regression algorithm EG in the limit. Winnow and EG are notable for having loss bounds that grow only logarithmically in the dimension of the instance space. Here we describe another way to use the p-norm algorithms to achieve this logarithmic behavior. With the usage we propose, retuning the parameters of the algorithm as the learning task changes is less critical than with Winnow and EG. Since the correct setting of the parameters depends on characteristics of the learning task that are not typically known a priori by the learner, this gives the p-norm algorithms a desirable robustness. Our elaborations yield various new loss bounds in these on-line settings; some of these bounds improve or generalize known results, while others are incomparable.
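In one common presentation of the Grove-Littlestone-Schuurmans family, the algorithm updates a primal vector additively and predicts through the link function f = grad(0.5 * ||z||_p^2); p = 2 recovers the ordinary perceptron, while large p approaches Winnow-like behavior. The sketch below shows the mistake-driven classification variant under that presentation (the regression adaptation that is this paper's subject is not shown; function names and `eta` are assumptions).

```python
import numpy as np

def pnorm_link(z, p):
    """Link f = grad(0.5 * ||z||_p^2): maps the additively updated primal
    weights z to the prediction weights. For p = 2 this is the identity."""
    norm = np.linalg.norm(z, p)
    if norm == 0:
        return z
    return np.sign(z) * np.abs(z) ** (p - 1) / norm ** (p - 2)

def pnorm_perceptron_step(z, x, y, p, eta=1.0):
    """One mistake-driven step: predict with f(z), update z additively on error."""
    if y * (pnorm_link(z, p) @ x) <= 0:   # prediction mistake
        z = z + eta * y * x               # additive update in primal space
    return z
```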

13.
毛文涛, 田杨阳, 王金婉, 何玲. 《控制与决策》, 2016, 31(12): 2147-2154
To remedy the poor classification performance of existing algorithms on sequentially arriving, density-skewed imbalanced data, an online sequential extreme learning machine based on granularity partitioning is proposed. In the offline stage, the majority-class samples are partitioned into granules according to the data distribution, the granule centers replace the original samples, and an initial model is built. In the online stage, the majority-class boundary data are re-partitioned according to the updated distribution, the original boundary data are replaced, and the network weights are updated dynamically. Theoretical analysis proves that the algorithm's information loss has an upper bound. Experimental results show that the algorithm effectively improves overall generalization performance and classification efficiency on sequential imbalanced data.

14.
Most existing sequential learning methods for class imbalance process data in chunks. In this paper, we propose a weighted online sequential extreme learning machine (WOS-ELM) algorithm for class imbalance learning (CIL). WOS-ELM is a general online learning method that alleviates the class imbalance problem in both chunk-by-chunk and one-by-one learning. One of its new features is that an appropriate weight setting for CIL is selected in a computationally efficient manner. In one-by-one learning, a new sample can update the classification model without waiting for a chunk to be completed. Extensive empirical evaluations on 15 imbalanced datasets show that WOS-ELM obtains comparable or better classification performance than competing methods, and its computational time is also found to be lower than that of the competing CIL methods.
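The weighted regularized least-squares core of such a method can be sketched in a few lines: solve beta = (I/C + H^T W H)^{-1} H^T W T, where H is the hidden-layer output matrix, T the targets, and W a diagonal matrix that up-weights minority-class samples. This shows only an initial batch solve (a sketch; WOS-ELM's contribution includes recursive chunk-by-chunk and one-by-one updates, which are omitted).

```python
import numpy as np

def weighted_elm_init(H, T, weights, C=1.0):
    """Weighted regularized least-squares solution for the output weights:
    beta = (I/C + H^T W H)^{-1} H^T W T, with W = diag(weights) giving
    minority-class samples larger weights."""
    W = np.diag(weights)
    A = np.eye(H.shape[1]) / C + H.T @ W @ H
    return np.linalg.solve(A, H.T @ W @ T)
```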

15.
The proliferation of networked data in various disciplines motivates a surge of research interest in network or graph mining. Among its tasks, node classification is a typical learning problem that exploits node interactions to infer the missing labels of unlabeled nodes in the network. The vast majority of existing node classification algorithms focus on static networks and assume the whole network structure is available before the learning algorithm runs. However, this is not the case in many real-world scenarios, where new nodes and new links are continuously added to the network. Considering the streaming nature of networks, we study how to perform online node classification on such streaming networks (a.k.a. online learning on streaming networks). As noisy links may negatively affect node classification performance, we first present an online network embedding algorithm that alleviates this problem by obtaining the embedding representation of new nodes on the fly. We then feed the learned embedding representation into a novel online soft-margin kernel learning algorithm to predict node labels in a sequential manner. Theoretical analysis shows the superiority of the proposed framework of online learning on streaming networks (OLSN), and extensive experiments on real-world networks further demonstrate its effectiveness and efficiency.

16.
In this paper, we propose two adaptive actor-critic architectures to solve control problems of nonlinear systems. One method uses the two actual states at time k and time k+1 to update the learning algorithm; the basic idea is that the agent can directly take some knowledge from the environment to improve its own. The other method uses only the state at time k to update the algorithm and is called learning from prediction (or simulated experience). Both methods include one or two predictive models, which are used to construct predictive states, and a model-based actor (MBA). Here the MBA, as an actor, can be viewed as a network whose connection weights are the elements of the feedback gain matrix. In the critic part, two value functions are realized as pure static mappings, which can be reduced to nonlinear current estimators using radial basis function neural networks (RBFNNs). Simulation results for a dynamical model of nonholonomic mobile robots with two independent driving wheels are presented, showing the effectiveness of the proposed approaches for the trajectory-tracking control problem.

17.
After detecting concept drift, existing drift-handling algorithms typically retrain a classifier on the newly arrived concept while "forgetting" previously trained classifiers. In the early stage of a drift, few samples of the new concept are available, so the newly built classifier cannot be adequately trained in a short time and its classification performance is usually poor. Furthermore, existing data-stream classification algorithms based on online transfer learning can only use the knowledge of a single classifier to assist learning of the new concept; when the historical concept is not sufficiently similar to the new concept, the model's classification accuracy is unsatisfactory. To address these problems, this paper proposes CMOL, a data-stream classification algorithm able to exploit the knowledge of multiple historical classifiers. CMOL adopts a dynamic classifier-weight adjustment mechanism and updates the classifier pool according to the classifiers' weights, so that the pool covers as many concepts as possible. Experiments show that, compared with related algorithms, CMOL adapts to new concepts more quickly when concept drift occurs and achieves higher classification accuracy.
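A dynamic classifier-weight mechanism of the kind described can be sketched as follows: up-weight pool members that classify the newly arrived sample correctly, decay the rest, and evict the lowest-weighted member when the pool overflows. This is an illustrative reading, not CMOL's exact rule; the update factors, `max_size`, and the sklearn-style `predict` interface are assumptions.

```python
def update_pool(pool, weights, x, y, decay=0.9, boost=1.1, max_size=10):
    """Adjust classifier weights on one arriving labeled sample (x, y), then
    trim the pool to max_size by dropping the lowest-weighted member."""
    for i, clf in enumerate(pool):
        weights[i] *= boost if clf.predict([x])[0] == y else decay
    if len(pool) > max_size:
        drop = min(range(len(pool)), key=weights.__getitem__)
        pool.pop(drop)
        weights.pop(drop)
    return pool, weights
```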

18.
A modified counter-propagation (CP) algorithm with a supervised learning vector quantizer (LVQ) and dynamic node allocation has been developed for rapid classification of molecular sequences. The molecular sequences were encoded into neural input vectors using an n-gram hashing method for word extraction and a singular value decomposition (SVD) method for vector compression. The neural networks used were three-layered, forward-only CP networks that performed nearest neighbor classification. Several factors affecting CP performance were evaluated, including weight initialization, Kohonen layer dimensioning, winner selection, and weight update mechanisms. The performance of the modified CP network was compared with the back-propagation (BP) neural network and the k-nearest neighbor method. The major advantages of the CP network are its training and classification speed and its capability to extract statistical properties of the input data. The combined BP and CP networks can classify nucleic acid or protein sequences with close to 100% accuracy, at a rate about one order of magnitude faster than other currently available methods.
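The n-gram word-extraction step is simple to illustrate: count all length-n substrings of a sequence over a fixed alphabet and emit the counts as a dense vector. The hashing and SVD compression steps described above are omitted; the alphabet and n are illustrative.

```python
from collections import Counter
from itertools import product

def ngram_vector(seq, alphabet="ACGT", n=2):
    """Encode a molecular sequence as a vector of n-gram counts, one entry
    per possible n-gram over the alphabet."""
    grams = ["".join(p) for p in product(alphabet, repeat=n)]
    counts = Counter(seq[i:i + n] for i in range(len(seq) - n + 1))
    return [counts.get(g, 0) for g in grams]

print(ngram_vector("ACGTACGT"))
```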

19.
Objective: In the supervised multi-label learning framework, building a classifier with strong generalization performance requires a large number of labeled training samples, yet in practical applications labeled samples are scarce and very expensive to obtain. To address the shortage of labeled samples and the low efficiency of classifier retraining in multi-label image classification, this paper proposes an online multi-label image classification algorithm combined with active learning. Method: Based on min-max theory, the algorithm actively selects samples to be labeled using a strategy that queries the most representative and most informative samples, and it updates the multi-label image classifier online based on the KKT (Karush-Kuhn-Tucker) conditions. Results: The algorithm is evaluated on four public datasets using four multi-label classification metrics. The experimental results show that the proposed sample-selection method clearly outperforms both random sampling and margin-based sampling; when the classifiers reach the same or similar accuracy, the proposed strategy requires notably fewer labeled samples than random or margin-based sampling. Conclusion: On the one hand, the algorithm reduces the manual annotation cost of obtaining labeled samples; on the other hand, it avoids the inefficiency of retraining a traditional classifier on all the data, so the classifier can be updated in real time as new data arrive.

20.