Similar Literature
20 similar documents found.
1.
This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally learn new knowledge from new data alone without forgetting the existing knowledge. Upon subsequent encounters with new data examples, INL utilizes prior knowledge to direct its incremental learning. A number of critical issues are addressed, including when to make the system learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference using both the existing and the newly learnt knowledge, and how to detect and deal with aged learnt systems. To validate the proposed INL framework, we use backpropagation (BP) as the base learner and a multi-layer neural network as the base intelligent system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network systems beyond BP-trained neural networks; it retains the existing neural network structures and weights even during incremental learning; and the neural network committees generated by INL do not interact with one another, while each sees the same inputs and error signals at the same time; this limited communication makes the INL architecture attractive for parallel implementation. We have applied INL to two vehicle fault diagnostics problems: end-of-line testing in auto assembly plants and onboard vehicle misfire detection. The experimental results demonstrate that the INL framework can successfully perform incremental learning from unbalanced and noisy data. To show the general capabilities of INL, we also applied it to three machine learning benchmark data sets, where the INL systems showed good generalization in comparison with other well-known machine learning algorithms.
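A minimal sketch of the committee idea described above, assuming each increment trains a fresh network on the new data only while existing members stay frozen, with majority voting at inference (the network size, hyperparameters, and voting rule here are illustrative assumptions, not the paper's settings):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    class INLCommittee:
        def __init__(self):
            self.members = []

        def learn_increment(self, X_new, y_new):
            # Existing members are left untouched; only a new member is
            # trained, so prior structures and weights are fully retained.
            net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
            self.members.append(net.fit(X_new, y_new))

        def predict(self, X):
            # Each member sees the same inputs; combine by majority vote.
            votes = np.stack([m.predict(X) for m in self.members])
            return np.apply_along_axis(
                lambda v: np.bincount(v).argmax(), 0, votes)  # labels: small non-negative ints

Because the members never exchange information during training, each increment can be trained on a separate worker, which is the parallelism the abstract alludes to.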

2.
Task-incremental learning (Task-IL) aims to enable an intelligent agent to continuously accumulate knowledge from new learning tasks without catastrophically forgetting what it has learned in the past. It has drawn increasing attention in recent years, and many algorithms have been proposed to mitigate neural network forgetting. However, none of the existing strategies is able to completely eliminate the problem. Moreover, explaining and fully understanding what knowledge is being forgotten, and how, during the incremental learning process remains under-explored. In this paper, we propose KnowledgeDrift, a visual analytics framework, to interpret network forgetting with three objectives: (1) to identify when the network fails to memorize past knowledge, (2) to visualize what information has been forgotten, and (3) to diagnose how knowledge attained in the new model interferes with what was learned in the past. Our analytical framework first identifies the occurrence of forgetting by tracking task performance over the incremental learning process and then provides in-depth inspection of drifted information at various levels of data granularity. KnowledgeDrift allows analysts and model developers to enhance their understanding of network forgetting and to compare the performance of different incremental learning algorithms. Three case studies are conducted to provide further insights and guidance for users in effectively diagnosing catastrophic forgetting over time.

3.
A Fast Incremental Learning Algorithm for Support Vector Machines
Support vector machines often struggle to learn from data sets that are too large. The proposed incremental approach splits the data into a historical sample set and a newly added sample set; using the geometric distribution of the historical samples and a forgetting factor defined on each sample, it extracts the boundary vectors in the historical set that are likely to become support vectors and performs the initial training on them. During incremental learning, knowledge about the training samples is accumulated and samples are selectively discarded. Experimental results show that the algorithm improves training speed while maintaining accuracy and generalization ability, making it suitable for large-scale classification and online learning problems.
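A minimal sketch of the boundary-vector screening idea, assuming margin proximity as the criterion for "possible future support vectors" and an exponential forgetting factor realized as a sample weight (both the margin threshold and the decay value are illustrative, not the paper's definitions):

    import numpy as np
    from sklearn.svm import SVC

    def incremental_step(X_old, y_old, X_new, y_new, margin=1.0, decay=0.9):
        svm = SVC(kernel="rbf").fit(X_old, y_old)
        # Keep only historical samples near the decision boundary; these are
        # the candidates that may become support vectors after the update.
        keep = np.abs(svm.decision_function(X_old)) <= margin
        X = np.vstack([X_old[keep], X_new])
        y = np.concatenate([y_old[keep], y_new])
        # Hypothetical forgetting factor: down-weight the retained old samples.
        w = np.concatenate([np.full(keep.sum(), decay), np.ones(len(y_new))])
        return SVC(kernel="rbf").fit(X, y, sample_weight=w)

Samples far from the boundary are discarded, which is what yields the speedup on large data sets.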

4.
To address the retrain-after-testing cycle that many text classification algorithms require and the poor grammatical expressiveness of generic models, an improved fuzzy grammar algorithm (IFGA) is proposed. A learning model is first built from a set of selected text fragments; to accommodate slight changes, an incremental model is adopted. The selected text fragments are then converted into the underlying structure, a fuzzy grammar. Finally, fuzzy union operations combine the grammars of the individual fragments, turning the learned fragments into a more general representation. Two comparative experiments were run against decision table, naive Bayes, and other algorithms. The first shows no significant performance difference between IFGA and the other machine learning algorithms; the second shows that the incremental algorithm has an advantage over standard batch learners: its performance is more stable and less affected by data size. The proposed algorithm also has a comparatively low model retraining time.

5.
In their unmodified form, lazy-learning algorithms may have difficulty learning and tracking time-varying input/output function maps such as those that occur in concept shift. Extensions of these algorithms, such as Time-Windowed Forgetting (TWF), can permit learning of time-varying mappings by deleting older exemplars, but show decreased classification accuracy when the input-space sampling distribution of the learning set is time-varying. Additionally, TWF suffers from lower asymptotic classification accuracy than equivalent non-forgetting algorithms when the input sampling distributions are stationary. Other shift-sensitive algorithms, such as Locally-Weighted Forgetting (LWF), avoid the negative effects of time-varying sampling distributions but still have lower asymptotic classification accuracy in non-varying cases. We introduce Prediction Error Context Switching (PECS), which allows lazy-learning algorithms to achieve good classification accuracy under a time-varying function mapping and time-varying input sampling distributions while maintaining their asymptotic classification accuracy in static tasks. PECS works by selecting and re-activating previously stored instances based on their most recent consistency record. The classification accuracy and active learning set sizes of the above algorithms are compared across a set of learning tasks that illustrate the differing time-varying conditions described above. The results show that PECS has the best overall classification accuracy over these conditions, while remaining competitive in asymptotic accuracy with unmodified lazy learners intended for static environments.
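A toy sketch of PECS-style bookkeeping for a 1-NN lazy learner, assuming the consistency record is a fixed-length window of agreement flags and that activation follows recent accuracy (the window length and threshold are illustrative guesses, not the paper's values):

    import numpy as np
    from collections import deque

    class PECSSketch:
        def __init__(self, window=5, thresh=0.6):
            self.X, self.y, self.rec, self.act = [], [], [], []
            self.window, self.thresh = window, thresh

        def predict(self, x):
            idx = [i for i in range(len(self.X)) if self.act[i]]
            d = [np.linalg.norm(np.subtract(x, self.X[i])) for i in idx]
            return self.y[idx[int(np.argmin(d))]]

        def learn(self, x, y):
            if self.X:
                # Update the consistency record of the nearest stored instance,
                # active or not; its activation flag follows its recent accuracy,
                # so a deactivated instance can be re-activated later.
                d = [np.linalg.norm(np.subtract(x, xi)) for xi in self.X]
                i = int(np.argmin(d))
                self.rec[i].append(self.y[i] == y)
                self.act[i] = np.mean(self.rec[i]) >= self.thresh
            self.X.append(x); self.y.append(y)
            self.rec.append(deque([True], maxlen=self.window))
            self.act.append(True)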

6.
Research on Incremental Learning Based on Feedforward Neural Networks
Incremental learning is a learning paradigm that acquires new knowledge quickly and effectively while consolidating earlier results and without needing access to the original data. This paper explains the principles of incremental learning based on feedforward neural networks, then introduces and analyzes the main incremental learning algorithms in detail, and closes with a summary and outlook for incremental learning research.

7.
姜雪  陶亮  王华彬  武杰 《微机发展》2007,17(11):92-95
As the training set grows during incremental learning, SVM training consumes large amounts of memory and the optimization becomes very slow. Building on an existing SVM incremental learning algorithm and combining it with ideas from parallel learning, this paper proposes an SVM incremental learning algorithm that screens training samples hierarchically and in parallel. Theoretical analysis and experimental results show that, compared with the original algorithm, the new algorithm significantly improves training speed while preserving the SVM's classification ability.

8.
王玲  穆志纯  郭辉 《计算机工程》2007,33(10):19-21
To improve model accuracy and handle model updating when production data arrive in batches, a batch incremental learning method based on support vector regression is proposed. The method is evaluated on an industrial case of modeling the mechanical properties of steel for prediction. The results show that, compared with the traditional SVM incremental learning algorithm, it improves model accuracy and shows good application potential.

9.
A Robust Incremental Learning Method with Structure Adjustment for RBF Networks
刘建军  胡卫东  郁文贤 《计算机仿真》2009,26(7):192-194,227
To give RBF networks incremental learning capability and to make that learning more robust, an RBF network incremental learning algorithm is presented. The algorithm first clusters the initial data set to obtain the initial RBF network structure and then adopts the hidden-node adjustment strategy of the GAP-RBF algorithm to adjust the network structure dynamically, realizing incremental learning. Initializing the RBF network in this way reduces the influence of the training order of the initial samples on network performance and strengthens the robustness of the incremental learning. Simulation experiments on the IRIS data set and on measured radar data show that the algorithm has good incremental learning ability.

10.
An SVM Incremental Learning Algorithm: α-ISVM
萧嵘  王继成  孙正兴  张福炎 《软件学报》2001,12(12):1818-1824
Classification algorithms based on SVM (support vector machine) theory have been attracting growing attention from researchers at home and abroad thanks to their solid theoretical foundation and good experimental results. This paper analyzes in depth the properties of the SV (support vector) set in SVM theory and presents a simple SVM incremental learning algorithm. On that basis, it further proposes an improved incremental learning algorithm, α-ISVM, built on a forgetting factor α. By gradually accumulating knowledge of the samples' spatial distribution during incremental learning, the algorithm makes selective forgetting of samples possible. Theoretical analysis and experimental results show that the algorithm effectively raises training speed and lowers storage use while preserving classification accuracy.

11.
张明洋  闻英友  杨晓陶  赵宏 《控制与决策》2017,32(10):1887-1893
To address the low learning efficiency and poor accuracy of the online sequential extreme learning machine (OS-ELM) on incremental data, an online sequential extreme learning machine based on incremental weighted averaging (WOS-ELM) is proposed. The residual of the model trained on the original data and the residual of the model trained on the incremental data are combined, with weights, into a cost function, from which a training model that balances the original and incremental data is derived. The original data are used to damp fluctuations introduced by the incremental data, giving the online extreme learning machine good stability and thereby improving the algorithm's learning efficiency and accuracy. Simulation results show that the proposed WOS-ELM algorithm achieves good prediction accuracy and generalization on incremental data.
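A minimal numpy sketch of the weighted-residual idea, under the assumption that balancing the two residual terms reduces to a sample-weighted least-squares solve for the ELM output weights (the hidden layer, the weight lam, and the toy data are illustrative, not the paper's derivation):

    import numpy as np

    rng = np.random.default_rng(0)

    def hidden(X, W, b):
        return np.tanh(X @ W + b)  # fixed random hidden layer of an ELM

    def weighted_beta(H_old, y_old, H_new, y_new, lam=0.7):
        # Residuals on the original data are weighted by lam and residuals
        # on the incremental data by (1 - lam), then solved jointly.
        H = np.vstack([H_old, H_new])
        y = np.concatenate([y_old, y_new])
        w = np.concatenate([np.full(len(y_old), lam),
                            np.full(len(y_new), 1.0 - lam)])
        sw = np.sqrt(w)[:, None]
        beta, *_ = np.linalg.lstsq(H * sw, y[:, None] * sw, rcond=None)
        return beta

    # toy usage on synthetic data
    d, h = 3, 20
    W, b = rng.normal(size=(d, h)), rng.normal(size=h)
    X0, X1 = rng.normal(size=(100, d)), rng.normal(size=(20, d))
    y0, y1 = X0.sum(axis=1), X1.sum(axis=1)
    beta = weighted_beta(hidden(X0, W, b), y0, hidden(X1, W, b), y1)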

12.
Artificial intelligence techniques, typified by the support vector machine (SVM), are widely used in intelligent sensor systems, but the traditional SVM exhibits "catastrophic forgetting": it forgets previously learned knowledge and cannot learn incrementally from new data, which no longer meets the real-time requirements of intelligent sensor systems. The Learn++ algorithm, by contrast, can learn newly arriving data incrementally without forgetting already-acquired knowledge, even when the new data belong to new classes. To solve these problems, a Learn++ ensemble method based on the shell vector algorithm is proposed. Experimental results show that the algorithm not only supports incremental learning but also raises training speed and shrinks storage requirements while preserving classification accuracy, meeting the online learning needs of today's intelligent sensor systems.

13.
The traditional support vector machine was designed for small samples; on large samples it trains slowly and uses a great deal of memory, and it has no incremental learning capability, while common incremental learning methods suffer from problems such as local minima. This paper describes an improved support vector machine algorithm, a fast incremental weighted SVM, for stock index prediction. The algorithm first applies phase-space reconstruction to the index samples, then splits them into several working subsets and builds the prediction model with different weights assigned according to sample importance. Experimental analysis shows a marked improvement in training speed while generalization accuracy stays slightly better.
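A minimal sketch of the phase-space reconstruction step (time-delay embedding); the embedding dimension and delay are illustrative parameters that would be tuned for the index series:

    import numpy as np

    def time_delay_embed(series, dim=3, tau=1):
        # Each row is (x[t], x[t+tau], ..., x[t+(dim-1)*tau]); the value to
        # predict is typically the next point after the window.
        series = np.asarray(series)
        n = len(series) - (dim - 1) * tau
        return np.stack([series[i:i + n] for i in range(0, dim * tau, tau)],
                        axis=1)

The embedded rows can then be split into working subsets and fed, with per-sample weights, to the weighted SVM described above.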

14.
Multi-task multi-kernel learning has become a hot topic in online learning research. Existing online learning algorithms fall somewhat short in accuracy when processing data streams, so a new multi-task multi-kernel online learning model is proposed to improve the accuracy of data-stream prediction. Keeping the multi-task multi-kernel formulation, the model is extended to the online setting, yielding a new online learning algorithm; at the same time, a data window of fixed size is maintained over the input data, trading a small amount of space for data completeness. The experiments analyze the choice of kernel function and the size of the training set in detail; evaluation on UCI data and on real airport passenger-flow data shows that the approach preserves both the accuracy and the timeliness of stream processing and has practical application value.
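A minimal sketch of the fixed-size window bookkeeping, with a plain SGD regressor standing in for the paper's multi-task multi-kernel learner (the window length is an illustrative choice):

    from collections import deque
    import numpy as np
    from sklearn.linear_model import SGDRegressor

    window = deque(maxlen=256)   # oldest samples fall out automatically
    model = SGDRegressor()

    def on_sample(x, y):
        window.append((x, y))    # keep only the most recent 256 samples
        X = np.array([w[0] for w in window])
        Y = np.array([w[1] for w in window])
        model.partial_fit(X, Y)  # update on the windowed data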

15.
The ability to predict a student's performance could be useful in a great number of ways in university-level distance learning. Students' marks on a few written assignments can constitute the training set for a supervised machine learning algorithm. With the explosive increase of data and information, incremental learning ability has become more and more important for machine learning approaches. Online algorithms try to forget irrelevant information instead of synthesizing all available information (as opposed to classic batch learning algorithms). Combining classifiers has recently been proposed as a direction for improving classification accuracy; however, most ensemble algorithms operate in batch mode. We therefore propose an online ensemble of classifiers that combines an incremental version of Naive Bayes, 1-NN, and the WINNOW algorithm using a voting methodology. Among other significant conclusions, it was found that the proposed algorithm is the most appropriate for building a software support tool.
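A minimal sketch of the online voting ensemble, with sklearn's Perceptron standing in for WINNOW (both are mistake-driven linear learners, but they are not the same algorithm) and a growing instance store as the 1-NN member:

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import Perceptron

    class OnlineVote:
        def __init__(self, classes):
            self.classes = np.array(classes)
            self.nb, self.lin = GaussianNB(), Perceptron()
            self.X, self.y = [], []  # memory for the 1-NN member

        def learn(self, x, y):
            self.nb.partial_fit([x], [y], classes=self.classes)
            self.lin.partial_fit([x], [y], classes=self.classes)
            self.X.append(np.asarray(x)); self.y.append(y)

        def predict(self, x):
            nn = self.y[int(np.argmin(
                [np.linalg.norm(x - xi) for xi in self.X]))]
            votes = [self.nb.predict([x])[0], self.lin.predict([x])[0], nn]
            vals, counts = np.unique(votes, return_counts=True)
            return vals[np.argmax(counts)]  # simple majority vote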

16.
The real world contains vast amounts of unlabeled data, such as medical images and web pages, and in the big-data era this is even more pronounced. Labeling such data carries a huge cost. Active learning is an effective way to deal with this problem and has been a research hotspot in machine learning and data mining in recent years. This paper proposes an active learning algorithm based on the online sequential extreme learning machine (OS-ELM); by exploiting OS-ELM's incremental learning ability, it can markedly improve the efficiency of the learning system. In addition, the algorithm uses sample entropy as a heuristic to measure the importance of unlabeled samples and uses a K-nearest-neighbor classifier as the oracle to label the selected unlabeled samples. Experimental results show that the proposed algorithm learns quickly and labels accurately.
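A minimal sketch of the entropy heuristic for picking which unlabeled samples to send to the oracle (the batch size k is an illustrative parameter):

    import numpy as np

    def most_informative(proba, k=10):
        # proba: (n_samples, n_classes) class-probability matrix from the
        # current model; higher predictive entropy = more informative sample.
        ent = -np.sum(proba * np.log(proba + 1e-12), axis=1)
        return np.argsort(ent)[-k:]

The selected samples would then be labeled by the K-nearest-neighbor oracle and appended to OS-ELM's sequential update.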

17.
To address the loss of useful information in typical SVM incremental learning algorithms and the purely objective pursuit of classifier accuracy in existing ones, this paper brings the subjectivity of the three-way decision loss function into SVM incremental learning and proposes a three-way-decision-based SVM incremental learning method. First, the conditional probability in the three-way decision is computed as the ratio of the feature distance to the center distance; then the boundary region of the three-way decision is added, as boundary vectors, to the original support vectors and the newly arrived samples for joint training. Simulation experiments show that the method makes full use of the useful information to improve classification accuracy, corrects to some extent the purely objective character of existing SVM incremental learning algorithms, and solves the problem of computing the conditional probability in three-way decisions.
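A minimal sketch of the three-way split, assuming the conditional probability is approximated by a normalized distance ratio to two class centers and using illustrative thresholds a and b (the paper's exact formula and thresholds may differ):

    import numpy as np

    def three_way_regions(X, c_pos, c_neg, a=0.65, b=0.35):
        d_pos = np.linalg.norm(X - c_pos, axis=1)
        d_neg = np.linalg.norm(X - c_neg, axis=1)
        p = d_neg / (d_pos + d_neg + 1e-12)  # nearer to c_pos gives larger p
        pos, neg = p >= a, p <= b
        boundary = ~pos & ~neg               # deferred (boundary) region
        return pos, neg, boundary

Samples in the boundary region are the ones the method adds, as boundary vectors, to the old support vectors and the new samples for retraining.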

18.
To address the poor real-time performance of current indoor fingerprint localization algorithms and their limited adaptability to dynamic environments, a new localization algorithm based on the semi-supervised extreme learning machine is proposed. The algorithm first builds an initial position-estimation model with a semi-supervised extreme learning machine, then uses newly added semi-labeled data to adjust the original localization model dynamically, and finally assigns suitable penalty weights to the new training data so that the model has an aging mechanism. Simulation results show that the algorithm improves adaptability to dynamic environments while keeping localization real-time.

19.
Research on a Class-Incremental Learning Algorithm Based on Hypersphere Support Vector Machines
A class-incremental learning algorithm based on hypersphere support vector machines is proposed. For each class of samples, a hypersphere SVM finds, in feature space, the smallest hypersphere that encloses as many samples of that class as possible, so that the classes are separated from one another by their hyperspheres. During class-incremental learning, only the samples of the newly added class are trained, so the algorithm achieves class-incremental learning with a very small sample set and very little space overhead, greatly reducing training time while retaining previous training results. Classification assigns a sample to the class whose hypersphere center is nearest, which is simple and fast. Experimental results show that the algorithm achieves not only high training speed but also high classification speed and accuracy.
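A minimal sketch of the per-class enclosing-ball idea in input space; the paper works in a kernel-induced feature space, so the simple Badoiu-Clarkson-style iteration and nearest-center rule below are only illustrative:

    import numpy as np

    def enclosing_ball(X, iters=200):
        # Approximate minimal enclosing ball: repeatedly step the center
        # toward the farthest point with a shrinking step size.
        c = X[0].astype(float)
        for t in range(1, iters + 1):
            far = X[np.argmax(np.linalg.norm(X - c, axis=1))]
            c += (far - c) / (t + 1)
        return c, np.linalg.norm(X - c, axis=1).max()

    class BallClassifier:
        def __init__(self):
            self.balls = {}

        def add_class(self, label, X):
            # Class-incremental step: only the new class's samples are used.
            self.balls[label] = enclosing_ball(np.asarray(X, dtype=float))

        def predict(self, x):
            # Assign to the class whose ball center is nearest.
            return min(self.balls,
                       key=lambda k: np.linalg.norm(x - self.balls[k][0]))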

20.
To extract useful information from massive data, a new SVM incremental learning algorithm is proposed. Based on the KKT conditions and a study of how the support vectors are distributed, it analyzes how the support vector set changes after new samples join the training set and introduces the notion of an equivalent training set. The algorithm can effectively forget and discard training data, so knowledge about the learning target accumulates. Theoretical analysis and an application to tourism information classification show that the algorithm effectively raises training speed while keeping classification accuracy.
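A minimal sketch of the KKT screening step: with labels in {-1, +1}, new samples satisfying y*f(x) >= 1 cannot change the support vector set, so only violators trigger retraining together with the old support vectors (the kernel choice is illustrative):

    import numpy as np
    from sklearn.svm import SVC

    def kkt_update(svm, X_old, y_old, X_new, y_new):
        viol = y_new * svm.decision_function(X_new) < 1  # KKT violators
        if not viol.any():
            return svm          # support vector set is unchanged
        sv = svm.support_       # indices of the current support vectors
        X = np.vstack([X_old[sv], X_new[viol]])
        y = np.concatenate([y_old[sv], y_new[viol]])
        return SVC(kernel="rbf").fit(X, y)

Non-violating samples are the ones the algorithm can safely forget, which is how the equivalent training set stays small.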
