Similar Literature
Found 20 similar documents (search time: 326 ms)
1.
General online learning algorithms have difficulty classifying imbalanced data streams, and the task becomes even harder when concept drift occurs in the stream. This paper proposes an adaptive weighted online extreme learning machine for imbalanced data streams, which automatically adjusts the penalty parameter of each training sample as it arrives in real time, thereby enabling online learning from imbalanced streams. The algorithm applies both to online learning of static data streams with different degrees of skew and to online learning of streams undergoing concept drift. Theoretical analysis and experiments on several real-world data streams demonstrate the correctness and effectiveness of the proposed algorithm.

2.
In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as the online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined based on the sequentially arriving data. The algorithm builds on the ideas of the ELM of Huang et al., developed for batch learning, which has been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be chosen manually. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from regression, classification, and time series prediction. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
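The chunk-by-chunk output-weight update described above is a recursive least-squares step. A minimal sketch in Python with NumPy, assuming sigmoid additive hidden nodes for regression (the class name, sizes, and activation are illustrative choices, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class OSELM:
    """Sketch of OS-ELM for regression with sigmoid additive hidden nodes."""

    def __init__(self, n_hidden, n_inputs):
        # Hidden-node parameters are drawn at random and never retrained.
        self.W = rng.normal(size=(n_inputs, n_hidden))
        self.b = rng.normal(size=n_hidden)

    def _h(self, X):
        # Hidden-layer output matrix H.
        return sigmoid(X @ self.W + self.b)

    def init_batch(self, X0, T0):
        # Initialization phase: needs at least n_hidden samples.
        H0 = self._h(X0)
        self.P = np.linalg.inv(H0.T @ H0)
        self.beta = self.P @ H0.T @ T0      # initial output weights

    def update(self, X, T):
        # Sequential phase: recursive least-squares update for a new chunk.
        H = self._h(X)
        K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```

Because the update is exact recursive least squares, the sequentially learned output weights coincide (up to round-off) with the batch least-squares solution over all data seen so far.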

3.
左鹏玉  周洁  王士同   《智能系统学报》2020,15(3):520-527
To address the low learning efficiency and poor classification accuracy of the online sequential extreme learning machine on class-imbalanced data, this paper proposes an incremental online sequential extreme learning machine for class imbalance (IOS-ELM). The algorithm adjusts a balancing factor according to the class-imbalance ratio and uses the generalized inverse of a block matrix to search for the optimal number of hidden-layer nodes, improving the model's ability to process class-imbalanced data online. Its effectiveness and feasibility are verified on 14 binary and multi-class imbalanced datasets. Experimental results show that the algorithm achieves better generalization and accuracy than comparable methods and is well suited to online learning under class imbalance.
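The balancing-factor idea — penalizing errors on rare classes more heavily — can be illustrated with a class-weighted batch ELM. This is a loose sketch of the general technique, not the IOS-ELM update itself; the weighting scheme (inverse class frequency), activation, and regularization constant `C` are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_elm(X, y, n_hidden, C=10.0):
    """Sketch of a class-weighted ELM classifier: each sample's squared error
    is weighted by the inverse of its class frequency (assumed scheme).
    y takes values in {0, 1}; targets are mapped to +/-1."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    T = np.where(y == 1, 1.0, -1.0)
    _, counts = np.unique(y, return_counts=True)
    s = 1.0 / counts[y]                 # per-sample balancing weights
    HtS = H.T * s                       # H^T @ diag(s), computed cheaply
    # beta = (H^T S H + I/C)^{-1} H^T S T  -- weighted ridge solve
    beta = np.linalg.solve(HtS @ H + np.eye(n_hidden) / C, HtS @ T)
    return (W, b, beta)

def elm_classify(model, X):
    W, b, beta = model
    return (np.tanh(X @ W + b) @ beta > 0).astype(int)
```

Without the weights `s`, the least-squares fit is dominated by the majority class; with them, minority-class errors count as much as majority-class ones.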

4.
To address the difficulty existing machine learning algorithms have in improving the classification accuracy of minority-class samples in imbalanced online sequential data, this paper proposes a principal-curve-based imbalanced online sequential extreme learning machine. The core idea is to balance the classes according to the distribution of the online sequential data, reducing the blindness of minority-class sample synthesis. The method comprises an offline stage and an online stage. In the offline stage, principal curves are used to model the distribution of each class; the minority class is oversampled with the SMOTE algorithm; each sample is assigned a membership value according to its projection distance to the corresponding principal curve; and virtual majority- and minority-class samples are pruned according to membership intervals, yielding the initial model. In the online stage, sequentially arriving minority-class samples are oversampled, the sequential samples are balanced according to the membership intervals, and the network weights are updated dynamically. Theoretical analysis proves that the proposed algorithm has an upper bound on information loss. Simulation experiments on UCI benchmark datasets and real meteorological data from Macao show that, compared with existing representative algorithms, the proposed algorithm achieves higher prediction accuracy on minority-class samples and better numerical stability.

5.
The classification of imbalanced data is a major challenge for machine learning. In this paper, we present a fuzzy total margin based support vector machine (FTM-SVM) method to handle the class imbalance learning (CIL) problem in the presence of outliers and noise. The proposed method incorporates a total margin algorithm, different cost functions, and a suitable fuzzification of the penalty into FTM-SVM, and formulates them for the nonlinear case. We consider a particular family of fuzzy membership functions to assign fuzzy membership values, yielding six FTM-SVM settings. We evaluate the proposed FTM-SVM method on two artificial data sets and 16 real-world imbalanced data sets. Experimental results show that the proposed FTM-SVM method achieves higher G_Mean and F_Measure values than several existing CIL methods. Based on the overall results, we conclude that the proposed FTM-SVM method is effective for the CIL problem, especially in the presence of outliers and noise in the data.

6.
Ensemble of online sequential extreme learning machine
Yuan Lan, Yeng Chai Soh, Guang-Bin Huang. 《Neurocomputing》, 2009, 72(13-15): 3391
Liang et al. [A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 17 (6) (2006) 1411–1423] proposed an online sequential learning algorithm called the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk size. They showed that OS-ELM runs much faster and provides better generalization performance than other popular sequential learning algorithms. However, we find that the stability of OS-ELM can be further improved. In this paper, we propose an ensemble of online sequential extreme learning machines (EOS-ELM) based on OS-ELM. The results show that EOS-ELM is more stable and accurate than the original OS-ELM.

7.
To address the difficulty existing learning algorithms have in improving minority-class classification accuracy on imbalanced online sequential data, this paper proposes a weighted online sequential extreme learning machine based on imbalanced-sample reconstruction. Starting from the distribution of the online sequential data, the algorithm comprises an offline stage and an online stage. In the offline stage, a principal curve is used to build a confidence region for the minority class; oversampling within this region constructs a balanced sample set that follows the trend of the sample distribution, from which the initial model is built. In the online stage, each sequentially arriving sample is weighted according to its training error, and the network weights are updated dynamically. Comparative experiments on UCI benchmark datasets and measured meteorological data from Macao show that, compared with the existing online sequential extreme learning machine (OS-ELM), extreme learning machine (ELM), and meta-cognitive online sequential extreme learning machine (MCOS-ELM), the proposed algorithm recognizes minority-class samples better while its model training time remains comparable to that of the other three algorithms. The results show that, without affecting algorithmic complexity, the proposed algorithm effectively improves the classification accuracy of minority-class samples.

8.
张明洋  闻英友  杨晓陶  赵宏 《控制与决策》2017,32(10):1887-1893
To address the low learning efficiency and poor accuracy of the online sequential extreme learning machine (OS-ELM) on incremental data, this paper proposes a weighted online sequential extreme learning machine (WOS-ELM) based on incremental weighted averaging. The cost function is a weighted combination of the residuals of the model trained on the original data and of the model trained on the incremental data, from which a training model that balances the original and incremental data is derived. The original data are used to damp the fluctuations introduced by incremental data, giving the online extreme learning machine better stability and thereby improving the algorithm's learning efficiency and accuracy. Simulation results show that the proposed WOS-ELM algorithm achieves good prediction accuracy and generalization on incremental data.

9.
An Overview of Training Methods for Imbalanced Data
张琦  吴斌  王柏 《计算机科学》2005,32(10):181-186
Real-world classification applications often face imbalanced data, in which one class greatly outnumbers the other, as in fraud detection and text classification. The minority-class samples usually carry the greatest influence and value and are the main object of interest; they are called the positive class, and the other class the negative class. Positive and negative samples may differ enormously in number, which poses a challenge for training on imbalanced data. Traditional machine learning algorithms tend to produce results biased toward the majority class, so prediction performance on the positive class can be poor. This paper analyzes the various causes of poor classification performance on imbalanced data and surveys a range of solutions targeting those causes.
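Two of the simplest solutions surveyed in this family are random oversampling of the positive class and random undersampling of the negative class. A minimal NumPy sketch (function names and the equal-size target are illustrative assumptions):

```python
import numpy as np

def random_oversample(X, y, rng):
    """Duplicate rows of the smaller classes at random until every class
    matches the size of the largest class."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        extra = rng.choice(members, size=n_max - len(members), replace=True)
        idx.extend(members)
        idx.extend(extra)
    idx = np.array(idx)
    return X[idx], y[idx]

def random_undersample(X, y, rng):
    """Discard rows of the larger classes at random until every class
    matches the size of the smallest class."""
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[idx], y[idx]
```

Oversampling risks overfitting to duplicated positives; undersampling discards potentially useful negatives — which is why the survey also covers cost-sensitive and synthetic-sample methods.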

10.
To address the difficulty existing machine learning algorithms have in improving minority-class classification accuracy on sequential imbalanced data, this paper proposes an online sequential extreme learning machine based on a hybrid sampling strategy. The algorithm improves minority-class accuracy while limiting the loss of majority-class accuracy, and comprises an offline stage and an online stage. In the offline stage, a balanced sampling strategy is applied: principal curves build confidence regions for the majority and minority classes, and, without altering the sample distribution, the minority class is expanded and the majority class pruned within these regions, yielding a balanced offline sample set from which the initial model is built. In the online stage, only the sequentially arriving majority-class data are undersampled: the most valuable majority-class samples are selected by sample importance, and the network weights are updated dynamically. Theoretical analysis proves that the proposed algorithm has an upper bound on information loss. Simulation experiments on UCI benchmark datasets and real air-pollution forecasting data from Macao show that, compared with the existing online sequential extreme learning machine (OS-ELM), extreme learning machine (ELM), and meta-cognitive online sequential extreme learning machine (MCOS-ELM), the proposed algorithm achieves higher prediction accuracy on minority-class samples with good numerical stability.

11.
This paper introduces a setting for multiclass online learning with limited feedback and its application to utterance classification. In this learning setting, a parameter k limits the number of choices presented for selection by the environment (e.g. by the user in the case of an interactive spoken system) during each trial of the online learning sequence. New versions of standard additive and multiplicative weight update algorithms for online learning are presented that are better suited to the limited feedback setting, while sharing the efficiency advantages of the standard ones. The algorithms are evaluated on an utterance classification task in two domains. In this task, no training material for the domain is provided (for training the speech recognizer or classifier) prior to the start of online learning. We present experiments on the effect of varying k and the weight update algorithms on the learning curve for online utterance classification. In these experiments, the new online learning algorithms improve classification accuracy compared with the standard ones. The methods presented are directly relevant to applications such as building call routing systems that adapt from feedback rather than being trained in batch mode. (Editors: Dan Roth and Pascale Fung. The work reported in this paper was carried out while the author was at AT&T Labs.)

12.
Machine learning has been applied successfully in many fields, including computer vision, natural language processing, and speech recognition. However, most existing machine learning models have fixed classes and parameters after deployment: they generalize only to the classes seen in the training set and cannot learn new classes incrementally. In practice, new classes and tasks arrive continually, which requires models to keep learning new-class knowledge, as humans do, while retaining knowledge of existing classes. The emerging research direction of class-incremental learning (CIL) aims to enable models to learn new classes continually in open, dynamic environments while preserving their ability to discriminate old classes (preventing "catastrophic forgetting"). This paper surveys CIL methods in detail. According to the technique used to overcome forgetting, existing methods are divided into five categories — parameter regularization, knowledge distillation, data replay, feature replay, and network-architecture-based methods — and the strengths and weaknesses of each category are summarized. In addition, representative methods are evaluated experimentally on commonly used datasets, and the performance of existing algorithms is compared and analyzed through the experimental results. Finally, research trends in class-incremental learning are discussed.

13.
Concept drift is a difficult problem in data stream learning, and class imbalance in the stream also severely degrades classification performance. To handle the joint problem of concept drift and class imbalance, this paper introduces an online update mechanism into chunk-based ensemble methods and, combining resampling with a forgetting mechanism, proposes an incremental weighted ensemble for imbalance learning (IWEIL). Built on an ensemble framework, IWEIL uses a variable-size-window forgetting mechanism to assess each base classifier's performance on the most recent instances within the window and to compute its weight; as new instances arrive one by one, each base classifier in IWEIL and its weight are updated online. Meanwhile, an improved adaptive nearest-neighbor SMOTE generates new minority-class instances consistent with the new concept to mitigate class imbalance in the stream. Experiments on synthetic and real datasets show that, compared with the DWMIL algorithm, IWEIL improves G-mean and recall by 5.77% and 6.28% respectively on the HyperPlane dataset, and by 3.25% and 6.47% on the Electricity dataset. Finally, IWEIL performs well on an Android application detection problem.

14.
After detecting concept drift, existing concept drift handling algorithms usually retrain a classifier on the new concept while "forgetting" previously trained classifiers. In the early phase of drift, few samples of the new concept are available, so the newly built classifier cannot be trained adequately in a short time and its classification performance is usually poor. Furthermore, existing data stream classification algorithms based on online transfer learning can use the knowledge of only a single classifier to assist learning on the new concept, so when the historical concepts are dissimilar to the new one, classification accuracy suffers. To address these problems, this paper proposes CMOL, a data stream classification algorithm that exploits the knowledge of multiple historical classifiers. CMOL dynamically adjusts classifier weights and updates its classifier pool according to those weights, so that the pool covers as many concepts as possible. Experiments show that, compared with related algorithms, CMOL adapts to new concepts more quickly when concept drift occurs and achieves higher classification accuracy.

15.
李琦  谢珺  张喆  董俊杰  续欣莹 《计算机工程》2021,47(7):67-73,80
A single modality carries limited information about an object, which leads to poor performance in object material recognition and classification, while traditional multimodal fusion methods require all data to be input during training. This paper proposes a multimodal multi-scale local-receptive-field online sequential extreme learning machine. An improved feature extraction framework applies multi-scale local receptive fields to perceive the samples of each modality and extract features; the features of the different modalities are fused and then learned with an online sequential extreme learning machine. The online sequential extreme learning machine trains on samples incrementally, so when new data arrive, it does not need to retrain on all data. Validation on the TUM tactile texture database shows that multimodal fusion achieves higher classification accuracy than any single modality, and that the improved feature extraction framework significantly improves classification performance.

16.
Imbalance classification techniques have been frequently applied in many machine learning application domains where the number of the majority (or positive) class of a dataset is much larger than that of the minority (or negative) class. Meanwhile, feature selection (FS) is one of the key techniques for high-dimensional classification tasks, greatly improving classification performance and computational efficiency. However, most studies of feature selection and imbalance classification are restricted to off-line batch learning, which is not well adapted to some practical scenarios. In this paper, we aim to solve the high-dimensional imbalanced classification problem accurately and efficiently with only a small number of active features in an online fashion, and we propose two novel online learning algorithms for this purpose. In our approach, a classifier involving only a small and fixed number of features is constructed to classify a sequence of imbalanced data received in an online manner. We formulate the construction of such an online learner as an optimization problem and solve it iteratively based on the passive-aggressive (PA) algorithm and a truncated gradient (TG) method. We evaluate the performance of the proposed algorithms on several real-world datasets, and the experimental results demonstrate their effectiveness in comparison with the baselines.
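The combination of a passive-aggressive update with a hard feature budget can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: it uses a PA-I hinge update and then truncates all but the `budget` largest-magnitude weights (the function name and truncation rule are assumptions):

```python
import numpy as np

def pa_truncated_online(X, y, budget, C=1.0):
    """Sketch: online PA-I hinge-loss updates with hard truncation to a
    fixed feature budget. y takes values in {-1, +1}. After each update,
    all but the `budget` largest-magnitude weights are zeroed, so the
    learner never relies on more than `budget` active features."""
    w = np.zeros(X.shape[1])
    for x, t in zip(X, y):
        loss = max(0.0, 1.0 - t * (w @ x))
        if loss > 0.0:
            tau = min(C, loss / (x @ x))     # PA-I step size
            w += tau * t * x
            # Truncation step: keep only the strongest features.
            if np.count_nonzero(w) > budget:
                keep = np.argsort(np.abs(w))[-budget:]
                mask = np.zeros_like(w, dtype=bool)
                mask[keep] = True
                w[~mask] = 0.0
    return w
```

On data where only a few features are informative, the surviving weights concentrate on those features, which is the behavior the paper's feature-budgeted learner aims for.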

17.
To date, most research on class imbalance has focused on two-class problems. Multi-class imbalance is considerably more complicated, and little knowledge and experience exist for it in Internet traffic classification. In this paper we study the challenges posed by Internet traffic classification using machine learning with multi-class imbalanced data, and the ability of several adjusting methods — resampling (random under-sampling, random over-sampling) and cost-sensitive learning — to cope with them. We then empirically compare the effectiveness of these methods for Internet traffic classification and determine which produces a better overall classifier and under what circumstances. The main contributions are as follows. (1) Cost-sensitive learning is derived with MetaCost, which incorporates misclassification costs based on flow ratio into the learning algorithm to improve multi-class imbalance. (2) A new resampling model combining under-sampling and over-sampling is presented to make the multi-class training data more balanced. (3) A procedure is presented for comparing the three methods with one another and against the original case. Experimental results on sixteen datasets show that flow g-mean and byte g-mean increase statistically by 8.6% and 3.7%; 4.4% and 2.8%; and 11.1% and 8.2%, respectively, when the three methods are compared with the original case. Cost-sensitive learning is the first choice when the sample size is sufficient, but resampling is more practical otherwise.
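The g-mean reported above is the geometric mean of per-class recalls, a standard summary for imbalanced classifiers because it collapses to zero if any class is missed entirely. A short sketch (the function name is an assumption):

```python
import numpy as np

def g_mean(y_true, y_pred):
    """Geometric mean of per-class recalls: a balanced summary score for
    multi-class imbalanced classification. Returns 0 if any class has
    zero recall."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(recalls) ** (1.0 / len(recalls)))
```

Unlike plain accuracy, a classifier that predicts only the majority class scores a g-mean of 0, so improvements on minority classes are visible in the metric.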

18.
Classification with imbalanced datasets poses a new challenge for researchers in machine learning. This problem appears when the number of patterns representing one of the classes of the dataset (usually the concept of interest) is much lower than in the remaining classes, and the learning model must be adapted to this situation, which is very common in real applications. In this paper, a dynamic over-sampling procedure is proposed for improving the classification of imbalanced datasets with more than two classes. This procedure is incorporated into a memetic algorithm (MA) that optimizes radial basis function neural networks (RBFNNs). To handle class imbalance, the training data are resampled in two stages. In the first stage, an over-sampling procedure is applied to the minority class to balance the class sizes in part. Then the MA is run and the data are over-sampled in different generations of the evolution, generating new patterns of the minimum-sensitivity class (the class with the worst accuracy for the best RBFNN of the population). The proposed methodology is tested on 13 imbalanced benchmark classification datasets from well-known machine learning problems and one complex problem of microbial growth. It is compared with other neural network methods specifically designed for handling imbalanced data, including different over-sampling procedures in the preprocessing stage, a threshold-moving method in which the output threshold is moved toward inexpensive classes, and ensemble approaches combining the models obtained with these techniques. The results show that our proposal improves sensitivity on the generalization set and obtains both a high accuracy level and a good classification level for each class.

19.
Online learning algorithms have been preferred in many applications due to their ability to learn from sequentially arriving data. One of the effective algorithms recently proposed for training single hidden-layer feedforward neural networks (SLFNs) is the online sequential extreme learning machine (OS-ELM), which can learn data one-by-one or chunk-by-chunk with fixed or varying chunk sizes. It is based on the ideas of the extreme learning machine (ELM), in which the input weights and hidden-layer biases are randomly chosen and the output weights are then determined by the pseudo-inverse operation. The learning speed of this algorithm is extremely high. However, it does not yield good generalization models for noisy data, and its parameters are difficult to initialize so as to avoid singular and ill-posed problems. In this paper, we propose an improvement of OS-ELM based on a bi-objective optimization approach: it minimizes the empirical error while keeping the norm of the network weight vector small. Singular and ill-posed problems are overcome by Tikhonov regularization. This approach can also learn data one-by-one or chunk-by-chunk. Experimental results show the better generalization performance of the proposed approach on benchmark datasets.
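The Tikhonov-regularized solve that replaces the plain pseudo-inverse can be sketched in batch form. This is a minimal illustration of the regularization idea, not the paper's sequential algorithm; the activation, hidden size, and default `lam` are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_ridge(X, T, n_hidden, lam=1e-2):
    """Sketch of a Tikhonov-regularized ELM fit: random hidden layer,
    then a ridge-regression solve for the output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    # beta = (H^T H + lam*I)^{-1} H^T T  -- well-posed for any lam > 0,
    # even when H^T H is singular, unlike the plain pseudo-inverse fit.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return (W, b, beta)

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

The penalty `lam` trades empirical error against the norm of `beta`, which is exactly the bi-objective balance the abstract describes.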

20.
李克文  杨磊  刘文英  刘璐  刘洪太 《计算机科学》2015,42(9):249-252, 267
The classification of imbalanced data is common across many application domains and has become a research focus in data mining and machine learning. This paper proposes RSBoost, a new imbalanced-data classification method, to address the low minority-class recognition rate and low classification efficiency of traditional methods. The method first oversamples the minority class with SMOTE, then randomly undersamples the whole dataset to reduce its imbalance, and finally combines the result with a Boosting algorithm for classification. Experiments comparing the classification quality and efficiency of five methods on several public datasets show that RSBoost achieves a higher classification recognition rate and higher classification efficiency.
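The SMOTE step used by RSBoost synthesizes new minority samples by interpolating between a minority point and one of its nearest minority neighbors. A minimal sketch (the brute-force neighbor search and function signature are illustrative assumptions):

```python
import numpy as np

def smote(X_min, n_new, k, rng):
    """Minimal SMOTE sketch: each synthetic point is a random interpolation
    between a minority sample and one of its k nearest minority neighbors."""
    n = len(X_min)
    # Pairwise distances within the minority class (brute force).
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude each point itself
    neighbors = np.argsort(d, axis=1)[:, :k]   # k nearest for each sample
    synthetic = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(n)                    # pick a minority sample
        nb = X_min[rng.choice(neighbors[j])]   # and one of its k neighbors
        gap = rng.random()                     # interpolation factor in [0, 1)
        synthetic[i] = X_min[j] + gap * (nb - X_min[j])
    return synthetic
```

Because each synthetic point lies on a segment between two real minority points, the new samples stay inside the minority class's region rather than being exact duplicates, which is what distinguishes SMOTE from random oversampling.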


