Found 20 similar documents (search time: 15 ms)
1.
This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally
learn new knowledge from only new data without forgetting the existing knowledge. Upon subsequent encounters of new data examples,
INL utilizes prior knowledge to direct its incremental learning. A number of critical issues are addressed including when
to make the system learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference
using both the existing and the newly learnt knowledge, and how to detect and deal with aged learnt systems. To validate the
proposed INL framework, we use backpropagation (BP) as a base learner and a multi-layer neural network as a base intelligent
system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network
systems beyond the BP trained neural networks; it retains the existing neural network structures and weights even during incremental
learning; the neural network committees generated by INL do not interact with one another and each sees the same inputs and
error signals at the same time; this limited communication makes the INL architecture attractive for parallel implementation.
We have applied INL to two vehicle fault diagnostics problems: end-of-line test in auto assembly plants and onboard vehicle
misfire detection. These experimental results demonstrate that the INL framework has the capability to successfully perform
incremental learning from unbalanced and noisy data. In order to show the general capabilities of INL, we also applied INL
to three general machine learning benchmark data sets. The INL systems showed good generalization capabilities in comparison
with other well-known machine learning algorithms.
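The committee mechanism described in this abstract (freeze existing members, add a new member only when the current committee errs on the new data, vote at inference) can be sketched roughly as follows; the `NearestMeanMember` base learner and the error threshold are our stand-ins, not the paper's BP-trained networks:

```python
from collections import Counter

class NearestMeanMember:
    """Stand-in base learner: stores the per-class mean of its training batch."""
    def fit(self, X, y):
        sums, counts = {}, Counter(y)
        for x, label in zip(X, y):
            s = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                s[i] += v
        self.means = {c: [v / counts[c] for v in s] for c, s in sums.items()}
        return self

    def predict(self, x):
        dist = lambda m: sum((a - b) ** 2 for a, b in zip(x, m))
        return min(self.means, key=lambda c: dist(self.means[c]))

class INLCommittee:
    def __init__(self, error_threshold=0.2):
        self.members, self.error_threshold = [], error_threshold

    def predict(self, x):
        # Majority vote; ties go to the most recently learnt member.
        preds = [m.predict(x) for m in self.members]
        votes = Counter(preds)
        top = max(votes.values())
        for p in reversed(preds):
            if votes[p] == top:
                return p

    def learn(self, X, y):
        # Learn new knowledge only when the frozen committee is inadequate;
        # existing members' structures and weights are never modified.
        if self.members:
            errors = sum(self.predict(x) != t for x, t in zip(X, y))
            if errors / len(y) <= self.error_threshold:
                return  # existing knowledge suffices
        self.members.append(NearestMeanMember().fit(X, y))
```

Each member sees the same inputs independently, which is what makes the committee structure amenable to parallel evaluation.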
2.
Task-incremental learning (Task-IL) aims to enable an intelligent agent to continuously accumulate knowledge from new learning tasks without catastrophically forgetting what it has learned in the past. It has drawn increasing attention in recent years, with many algorithms being proposed to mitigate neural network forgetting. However, none of the existing strategies can completely eliminate the issue. Moreover, explaining and fully understanding what knowledge is forgotten, and how it is forgotten, during the incremental learning process remains under-explored. In this paper, we propose KnowledgeDrift, a visual analytics framework, to interpret network forgetting with three objectives: (1) to identify when the network fails to memorize past knowledge, (2) to visualize what information has been forgotten, and (3) to diagnose how knowledge attained in the new model interferes with what was learned in the past. Our analytical framework first identifies the occurrence of forgetting by tracking task performance during the incremental learning process, and then provides in-depth inspection of drifted information at various levels of data granularity. KnowledgeDrift allows analysts and model developers to enhance their understanding of network forgetting and compare the performance of different incremental learning algorithms. Three case studies are conducted to provide further insights and guidance for users in effectively diagnosing catastrophic forgetting over time.
3.
A Fast Incremental Learning Algorithm for Support Vector Machines    Cited by 3 (self-citations: 0, others: 3)
Support vector machines often struggle to learn from data when the dataset is too large. Incremental learning splits the data into a historical sample set and a newly added sample set. Using the geometric distribution of the historical set, and a forgetting factor defined on each sample, the algorithm extracts the boundary vectors in the historical set that are likely to become support vectors and uses them for initial training. During incremental learning, knowledge about the training samples is accumulated and samples are selectively discarded. Experimental results show that the algorithm improves training speed while preserving learning accuracy and generalization ability, making it suitable for large-scale classification and online learning problems.
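The boundary-vector extraction step can be sketched with a toy distance-ratio score (our assumed stand-in for the paper's forgetting factor, which may be defined differently): a historical sample whose nearest opposite-class neighbour is close, relative to its nearest same-class neighbour, is a likely future support vector.

```python
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_boundary_vectors(X, y, keep_ratio=0.5):
    """Keep the fraction of historical samples closest to the opposite class.

    Smaller score = nearest opposite-class neighbour is close relative to
    the nearest same-class neighbour, i.e. the sample sits near the class
    boundary and is likely to become a support vector.
    """
    scores = []
    for i, (xi, yi) in enumerate(zip(X, y)):
        same = min(_dist(xi, xj) for j, (xj, yj) in enumerate(zip(X, y))
                   if j != i and yj == yi)
        other = min(_dist(xi, xj) for xj, yj in zip(X, y) if yj != yi)
        scores.append(other / (same + 1e-12))
    order = sorted(range(len(X)), key=lambda i: scores[i])
    keep = order[:max(1, int(keep_ratio * len(X)))]
    return [X[i] for i in keep], [y[i] for i in keep]
```

Only the retained boundary vectors would then be fed to the SVM's initial training, with the rest of the historical set discarded.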
4.
To address the drawback that many existing text classification algorithms must follow a train-test-retrain cycle, and the poor grammatical expressiveness of generic models, an improved fuzzy grammar algorithm (IFGA) is proposed. First, a learning model is built from selected text fragments; an incremental model is adopted to accommodate slight changes. The selected fragments are then converted into the underlying structure, a fuzzy grammar. Finally, fuzzy union operations combine the grammars of individual fragments, turning the learned fragments into a more general representation. Two comparative experiments against decision-table, naive Bayes, and other algorithms show that (1) IFGA performs on par with other machine learning algorithms, and (2) the incremental learning algorithm has an advantage over standard batch algorithms: its performance is more stable and less sensitive to data size. The proposed algorithm also requires less model retraining time.
5.
Tolerating Concept and Sampling Shift in Lazy Learning Using Prediction Error Context Switching    Cited by 2 (self-citations: 0, others: 2)
Marcos Salganicoff, Artificial Intelligence Review, 1997, 11(1-5):133-155
In their unmodified form, lazy-learning algorithms may have difficulty learning and tracking time-varying input/output function maps such as those that occur in concept shift. Extensions of these algorithms, such as Time-Windowed Forgetting (TWF), can permit learning of time-varying mappings by deleting older exemplars, but have decreased classification accuracy when the input-space sampling distribution of the learning set is time-varying. Additionally, TWF suffers from lower asymptotic classification accuracy than equivalent non-forgetting algorithms when the input sampling distributions are stationary. Other shift-sensitive algorithms, such as Locally-Weighted Forgetting (LWF), avoid the negative effects of time-varying sampling distributions, but still have lower asymptotic classification accuracy in non-varying cases. We introduce Prediction Error Context Switching (PECS), which allows lazy-learning algorithms to achieve good classification accuracy under time-varying function mappings and input sampling distributions, while still maintaining their asymptotic classification accuracy in static tasks. PECS works by selecting and re-activating previously stored instances based on their most recent consistency record. The classification accuracy and active learning set sizes of the above algorithms are compared on a set of learning tasks that illustrate the differing time-varying conditions described above. The results show that PECS has the best overall classification accuracy across these time-varying conditions, while remaining competitive in asymptotic accuracy with unmodified lazy learners intended for static environments.
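The core PECS idea, deactivating rather than deleting inconsistent exemplars and re-activating them when their recent consistency record recovers, might be sketched like this (a simplified 1-NN variant of ours, not Salganicoff's exact algorithm):

```python
from collections import deque

class PECSNN:
    """Toy PECS-flavoured 1-NN: each stored exemplar keeps a short record of
    whether its label agreed with observed labels of nearby queries; exemplars
    whose recent consistency falls below a threshold are deactivated, not
    deleted, and re-activate once their record recovers."""
    def __init__(self, window=3, threshold=0.5):
        self.window, self.threshold = window, threshold
        self.store = []  # each entry: {"x", "y", "record", "active"}

    def _nearest(self, x, active_only):
        pool = [e for e in self.store if e["active"]] if active_only else self.store
        return min(pool, key=lambda e: sum((a - b) ** 2 for a, b in zip(e["x"], x)),
                   default=None)

    def predict(self, x):
        e = self._nearest(x, active_only=True)
        return e["y"] if e else None

    def observe(self, x, y_true):
        # Update the consistency record of the nearest stored exemplar,
        # active or not, so deactivated exemplars can re-activate later.
        e = self._nearest(x, active_only=False)
        if e is not None:
            e["record"].append(1 if e["y"] == y_true else 0)
            e["active"] = sum(e["record"]) / len(e["record"]) >= self.threshold
        self.store.append({"x": x, "y": y_true,
                           "record": deque(maxlen=self.window), "active": True})
```

After a concept shift the stale exemplars go dormant quickly; if the old concept returns, their records recover and they rejoin the active learning set.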
6.
7.
8.
9.
10.
Classification algorithms based on SVM (support vector machine) theory have drawn growing attention from researchers at home and abroad, owing to their solid theoretical foundation and good experimental results. This paper analyzes the characteristics of the SV (support vector) set in SVM theory and presents a simple incremental SVM learning algorithm. On this basis, an improved incremental learning algorithm based on a forgetting factor α, called α-ISVM, is proposed. By gradually accumulating knowledge of the samples' spatial distribution during incremental learning, the algorithm makes selective forgetting of samples possible. Theoretical analysis and experimental results show that the algorithm effectively improves training speed and reduces storage space while maintaining classification accuracy.
11.
12.
Artificial intelligence techniques, represented by the support vector machine (SVM), are widely used in intelligent sensor systems. However, a traditional SVM suffers from "catastrophic forgetting": it forgets previously learned knowledge and cannot incrementally learn new data, which no longer meets the real-time requirements of intelligent sensor systems. The Learn++ algorithm, in contrast, can incrementally learn newly arriving data without forgetting previously learned knowledge, even when the new data belong to a new class. To address these problems, a Learn++ ensemble method based on the shell vector algorithm is proposed. Experimental results show that the algorithm not only has incremental learning ability, but also improves training speed and reduces storage size while maintaining classification accuracy, satisfying the online learning needs of current intelligent sensor systems.
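One plausible reading of the shell-vector reduction (our interpretation; the paper's kernel-space version is more general) is to keep only the convex-hull points of each class, since interior points cannot become support vectors of a linear SVM. A 2-D sketch:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices of 2-D points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # Pop while the turn o->a->p is clockwise or collinear.
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1]) -
                                   (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(reversed(pts))

def shell_vectors(X, y):
    """Reduce each class to its hull points before (re)training the SVM."""
    Xs, ys = [], []
    for label in sorted(set(y)):
        for p in convex_hull([x for x, l in zip(X, y) if l == label]):
            Xs.append(p)
            ys.append(label)
    return Xs, ys
```

Storing only the shell of each batch is what shrinks the memory footprint between Learn++ rounds.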
13.
14.
Multi-task multi-kernel learning has become a hot topic in online learning research. Since existing online learning algorithms fall short in accuracy when processing data streams, a new multi-task multi-kernel online learning model is proposed to improve the accuracy of data stream prediction. The multi-task multi-kernel formulation is retained and extended to the online setting, yielding a new online learning algorithm; at the same time, a fixed-size data window is maintained over the input data, trading a small amount of space for data completeness. The experiments analyze kernel selection and training set size in detail; evaluations on UCI datasets and real airport passenger flow data show that the method preserves both the accuracy and the timeliness of stream processing, and has practical application value.
15.
The ability to predict a student's performance could be useful in a great number of ways in university-level distance learning. Students' marks on a few written assignments can constitute the training set for a supervised machine learning algorithm. With the explosive growth of data and information, incremental learning ability has become increasingly important for machine learning approaches: online algorithms forget irrelevant information instead of synthesizing all available information, as classic batch learning algorithms do. Combining classifiers has also been proposed as a direction for improving classification accuracy; however, most ensemble algorithms operate in batch mode. We therefore propose an online ensemble of classifiers that combines an incremental version of Naive Bayes, 1-NN, and WINNOW using a voting methodology. Among other significant conclusions, the proposed algorithm was found to be the most appropriate for the construction of a software support tool.
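Of the three ensemble members, WINNOW is the least commonly available in libraries. A minimal online WINNOW updater for binary features (illustrative only; the abstract's ensemble combines it with incremental Naive Bayes and 1-NN by voting):

```python
class Winnow:
    """Classic WINNOW for binary feature vectors: multiplicative weight
    updates, so it converges quickly when few features are relevant."""
    def __init__(self, n_features, alpha=2.0):
        self.w = [1.0] * n_features
        self.threshold = float(n_features)
        self.alpha = alpha

    def predict(self, x):
        return 1 if sum(w * xi for w, xi in zip(self.w, x)) >= self.threshold else 0

    def update(self, x, y):
        p = self.predict(x)
        if p == y:
            return
        # Promote active weights on a false negative, demote on a false positive.
        factor = self.alpha if y == 1 else 1.0 / self.alpha
        self.w = [w * factor if xi else w for w, xi in zip(self.w, x)]
```

Because updates touch only the weights of active features, the learner fits the one-pass, bounded-memory setting the abstract targets.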
16.
The real world contains large amounts of unlabeled data, such as medical images and web pages, and this is even more pronounced in the big data era. Labeling such data is very costly. Active learning is an effective way to address this problem and has been a research focus in machine learning and data mining in recent years. This paper proposes an active learning algorithm based on the online sequential extreme learning machine (OS-ELM), which exploits OS-ELM's incremental learning capability to significantly improve the efficiency of the learning system. The algorithm uses sample entropy as a heuristic to measure the importance of unlabeled samples, and a K-nearest-neighbor classifier as the oracle to label the selected samples. Experimental results show that the proposed algorithm learns quickly and labels accurately.
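The entropy heuristic for choosing which unlabeled samples to send to the oracle can be sketched as follows; `predict_proba` is a hypothetical stand-in for the OS-ELM's class-probability output:

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_most_informative(unlabeled, predict_proba, k=1):
    """Rank unlabeled samples by prediction entropy and return the top k,
    i.e. the samples the current model is most uncertain about."""
    ranked = sorted(unlabeled, key=lambda x: entropy(predict_proba(x)),
                    reverse=True)
    return ranked[:k]
```

The selected samples would then be labeled by the K-NN oracle and fed to the incremental learner, closing the active-learning loop.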
17.
Typical incremental SVM learning algorithms discard useful information and pursue classifier accuracy purely objectively. This paper introduces the subjectivity of the three-way decision loss function into incremental SVM learning and proposes a three-way-decision-based incremental SVM learning method. First, the conditional probability in the three-way decision is computed as the ratio of the feature distance to the center distance; then the boundary region of the three-way decision is added, as boundary vectors, to the original support vectors and the new samples for joint training. Simulation experiments show that the method improves classification accuracy by fully exploiting useful information, corrects to some extent the pure objectivity of existing incremental SVM algorithms, and solves the problem of computing the conditional probability in three-way decisions.
18.
To address the poor real-time performance and limited adaptability to dynamic environments of current indoor fingerprint localization algorithms, a new localization algorithm based on the semi-supervised extreme learning machine is proposed. The algorithm first builds an initial position-estimation model with a semi-supervised extreme learning machine, then dynamically adjusts the model with newly added semi-labeled data, and finally assigns suitable penalty weights to the new training data so that the model has a time-decay mechanism. Simulation results show that the algorithm improves adaptability to dynamic environments while maintaining real-time localization.
19.
Research on a Class-Incremental Learning Algorithm Based on Hypersphere Support Vector Machines    Cited by 3 (self-citations: 1, others: 2)
A class-incremental learning algorithm based on hypersphere support vector machines is proposed. For each class, a hypersphere SVM finds, in feature space, the smallest hypersphere enclosing as many samples of that class as possible, so that the classes are separated by hyperspheres. During class-incremental learning, only the samples of the newly added class are trained, so the algorithm achieves class-incremental learning with a very small sample set and very low space cost, greatly reducing training time while preserving previous training results. In classification, the class of a test sample is determined by computing its distance to each hypersphere's center, which is simple and fast. Experimental results show that the algorithm achieves not only high training speed but also high classification speed and accuracy.
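The nearest-sphere classification rule can be sketched with class means as sphere centres (a simplification of ours; the paper computes minimum enclosing hyperspheres with an SVM solver in feature space). Note that adding a class touches only that class:

```python
class HypersphereClassifier:
    """Sketch of the class-incremental scheme: one centre per class,
    classification by nearest centre. Training a new class never modifies
    the previously learnt classes."""
    def __init__(self):
        self.centers = {}

    def add_class(self, label, X):
        # Per-class training only: compute this class's centre and store it.
        n = len(X)
        self.centers[label] = tuple(sum(col) / n for col in zip(*X))

    def predict(self, x):
        d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return min(self.centers, key=lambda label: d(self.centers[label]))
```

This is what keeps incremental training cheap: the cost of adding class k+1 is independent of the first k classes.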
20.
A new incremental SVM learning algorithm is proposed for extracting useful information from massive data. Based on the KKT conditions and a study of the distribution of support vectors, the algorithm analyzes how the support vector set changes after new samples are added to the training set, and introduces the notion of an equipotential training set. It can effectively forget and discard training data so that knowledge of the learning target accumulates. Theoretical analysis and an application to tourism information classification show that the algorithm effectively improves training speed while maintaining classification accuracy.
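The KKT-based selection step, retaining only the new samples that violate the KKT conditions of the current solution, can be sketched for a linear classifier with labels in {-1, +1} (our simplification; the paper works with the full kernelized SVM):

```python
def kkt_violators(w, b, X, y):
    """For a trained linear classifier f(x) = w.x + b, a sample with margin
    y_i * f(x_i) < 1 violates the KKT conditions of the current solution and
    must join the retraining set; samples satisfying them can be discarded.
    This sketches the selection step only, with labels in {-1, +1}."""
    f = lambda x: sum(wi * xi for wi, xi in zip(w, x)) + b
    return [(x, yi) for x, yi in zip(X, y) if yi * f(x) < 1]
```

Retraining on the old support vectors plus these violators is what lets the learner accumulate knowledge while forgetting the redundant bulk of the data.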