Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper presents a framework for incremental neural learning (INL) that allows a base neural learning system to incrementally learn new knowledge from only new data without forgetting the existing knowledge. Upon subsequent encounters of new data examples, INL utilizes prior knowledge to direct its incremental learning. A number of critical issues are addressed, including when to make the system learn new knowledge, how to learn new knowledge without forgetting existing knowledge, how to perform inference using both the existing and the newly learnt knowledge, and how to detect and deal with aged learnt systems. To validate the proposed INL framework, we use backpropagation (BP) as a base learner and a multi-layer neural network as a base intelligent system. INL has several advantages over existing incremental algorithms: it can be applied to a broad range of neural network systems beyond BP-trained neural networks; it retains the existing neural network structures and weights even during incremental learning; and the neural network committees generated by INL do not interact with one another, with each seeing the same inputs and error signals at the same time. This limited communication makes the INL architecture attractive for parallel implementation. We have applied INL to two vehicle fault diagnostics problems: end-of-line test in auto assembly plants and onboard vehicle misfire detection. The experimental results demonstrate that the INL framework has the capability to successfully perform incremental learning from unbalanced and noisy data. To show the general capabilities of INL, we also applied it to three general machine learning benchmark data sets, where the INL systems showed good generalization capabilities in comparison with other well-known machine learning algorithms.
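The committee idea described in this abstract (new knowledge is learned by new members while existing members' structures and weights are left untouched, and inference combines all members) can be sketched minimally as follows. This is an illustrative sketch, not the paper's INL framework: the class name `IncrementalCommittee` and the perceptron-style base learner are assumptions made for the example.

```python
import numpy as np

class IncrementalCommittee:
    """Committee-style incremental learning sketch: each arrival of new
    data trains one fresh member; previously trained members are frozen,
    so old weights are never modified (and thus never forgotten)."""

    def __init__(self):
        self.members = []  # list of (w, b) linear classifiers

    def _train_member(self, X, y, epochs=200, lr=0.1):
        # Perceptron-style training on the new batch only (y in {-1, +1}).
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:  # misclassified -> update
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    def learn(self, X, y):
        # Incremental step: add a new member; old members stay frozen.
        self.members.append(self._train_member(X, y))

    def predict(self, X):
        # Combine the committee by majority vote.
        votes = np.sign(np.stack([X @ w + b for w, b in self.members]))
        return np.sign(votes.sum(axis=0))
```

Because members never exchange weights, each new batch can be trained independently, which is the property the abstract highlights as favorable for parallel implementation.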

2.
Research on Incremental Learning Based on Feedforward Neural Networks
Incremental learning is a learning paradigm that acquires new knowledge quickly and effectively while consolidating previous learning results and without requiring access to the original data. This paper explains the principles of incremental learning based on feedforward neural networks, then introduces and analyzes the main incremental learning algorithms in detail, and concludes with a summary and outlook for incremental learning research.

3.
To address the catastrophic forgetting that can arise in deep incremental learning, a dual-branch iterative method for deep incremental image classification is proposed. A main network stores knowledge of old classes, a branch network learns knowledge of new classes from the incremental data, and during the incremental process the main network's weights are used to optimize the branch network's parameters. A method based on density-peak clustering selects representative exemplars from the iterative dataset to build a retention set, which is added to the incremental training to mitigate catastrophic forgetting. Experiments show that the proposed method achieves superior performance.

4.
Incremental backpropagation learning networks
How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network", which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.
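The bounded weight modification mentioned here can be illustrated with a small sketch: apply an ordinary gradient step, then clip each weight so it stays within a fixed distance of its initial, knowledge-bearing value. The function name, the `bound` and `lr` parameters, and the simple clipping form are assumptions for this example, not the paper's exact rule.

```python
import numpy as np

def bounded_update(w, grad, w_init, lr=0.1, bound=0.5):
    """Gradient step followed by clipping each weight into the interval
    [w_init - bound, w_init + bound], so learning new data can never
    move a weight arbitrarily far from the initial knowledge."""
    w_new = w - lr * grad
    return np.clip(w_new, w_init - bound, w_init + bound)
```

The clip acts as a hard constraint: however large the gradients on new data are, the network's weights remain within a bounded neighborhood of the initial solution.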

5.
Catastrophic forgetting of learned knowledge and distribution discrepancy across different data are two key problems in fault diagnosis of rotating machinery. However, existing intelligent fault diagnosis methods generally tackle either the catastrophic forgetting problem or the domain adaptation problem. In complex industrial environments, both problems occur simultaneously, which is termed the continual transfer problem. It is therefore necessary to investigate a more practical and challenging task in which the number of fault categories constantly increases with industrial streaming data under varying operating conditions. To address the continual transfer problem, a novel framework named deep continual transfer learning network with dynamic weight aggregation (DCTLN-DWA) is proposed in this study. The DWA module retains the diagnostic knowledge learned in previous phases while learning new knowledge from new samples. An adversarial training strategy is applied to eliminate the data distribution discrepancy between source and target domains. The effectiveness of the proposed framework is investigated on an automobile transmission dataset. The experimental results demonstrate that the framework can effectively handle industrial streaming data under different working conditions and can serve as a promising tool for solving actual industrial problems.

6.
Recent studies on human learning reveal that self-regulated learning in a metacognitive framework is the best strategy for efficient learning. As the machine learning algorithms are inspired by the principles of human learning, one needs to incorporate the concept of metacognition to develop efficient machine learning algorithms. In this letter we present a metacognitive learning framework that controls the learning process of a fully complex-valued radial basis function network and is referred to as a metacognitive fully complex-valued radial basis function (Mc-FCRBF) network. Mc-FCRBF has two components: a cognitive component containing the FC-RBF network and a metacognitive component, which regulates the learning process of FC-RBF. In every epoch, when a sample is presented to Mc-FCRBF, the metacognitive component decides what to learn, when to learn, and how to learn based on the knowledge acquired by the FC-RBF network and the new information contained in the sample. The Mc-FCRBF learning algorithm is described in detail, and both its approximation and classification abilities are evaluated using a set of benchmark and practical problems. Performance results indicate the superior approximation and classification performance of Mc-FCRBF compared to existing methods in the literature.

7.
Recent machine learning challenges require the capability of learning in non-stationary environments. These challenges imply the development of new algorithms that are able to deal with changes in the underlying problem to be learnt. These changes can be gradual or trend changes, abrupt changes, and recurring contexts. As the dynamics of the changes can be very different, existing machine learning algorithms have difficulty coping with them. Several methods using, for instance, ensembles or variable-length windowing have been proposed to approach this task.

In this work we propose a new method, for single-layer neural networks, that is based on the introduction of a forgetting function in an incremental online learning algorithm. This forgetting function gives a monotonically increasing importance to new data. Due to the combination of incremental learning and increasing importance assignment, the network forgets rapidly in the presence of changes while maintaining a stable behavior when the context is stationary.

The performance of the method has been tested over several regression and classification problems and its results compared with those of previous works. The proposed algorithm has demonstrated high adaptation to changes while maintaining a low consumption of computational resources.
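The core mechanism, a forgetting function that gives monotonically increasing importance to new data, can be sketched with the simplest possible online estimator: an exponentially forgetting running mean. This is a generic illustration of the idea (the function name and the forgetting factor `lam` are assumptions), not the paper's single-layer network algorithm.

```python
def forgetful_mean(stream, lam=0.9):
    """Online mean with exponential forgetting: each new sample enters
    with weight 1 while all accumulated past evidence is discounted by
    `lam`, so newer data always carries relatively more importance."""
    est, weight = 0.0, 0.0
    for x in stream:
        weight = lam * weight + 1.0
        est += (x - est) / weight  # weighted incremental mean update
        yield est
```

When the underlying value changes abruptly, the discounting lets the estimate track the new regime quickly, whereas a plain running mean would be dragged toward the stale history.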

8.
A Robust Incremental Learning Method for RBF Networks with Structure Adjustment
刘建军, 胡卫东, 郁文贤. 《计算机仿真》(Computer Simulation), 2009, 26(7): 192-194, 227
To give RBF networks incremental learning capability and improve the robustness of that learning, an incremental learning algorithm for RBF networks is presented. The algorithm first clusters the initial dataset to obtain the initial RBF network structure, then dynamically adjusts the network structure using the hidden-node adjustment strategy of the GAP-RBF algorithm to realize incremental learning. Initializing the RBF network in this way reduces the influence of the training order of the initial samples on network performance and strengthens the robustness of incremental learning. Simulation experiments on the IRIS dataset and on measured radar data show that the algorithm has good incremental learning capability.

9.
We explore incremental assimilation of new knowledge by sequential learning. Of particular interest is how a network of many knowledge layers can be constructed in an on-line manner, such that the learned units represent building blocks of knowledge that serve to compress the overall representation and facilitate transfer. We motivate the need for many layers of knowledge, and we advocate sequential learning as an avenue for promoting the construction of layered knowledge structures. Finally, our novel STL algorithm demonstrates a method for simultaneously acquiring and organizing a collection of concepts and functions as a network from a stream of unstructured information.

10.
Knowledge tracing aims to track changes in a student's knowledge state in real time from the student's historical learning behavior and to predict future learning performance. During learning, learning and forgetting behaviors are interwoven, and forgetting strongly affects knowledge tracing. To model both behaviors accurately, this paper proposes LFKT, a deep knowledge tracing model that accounts for both learning and forgetting. LFKT considers four factors that influence forgetting: the interval since a knowledge point was last practiced, the number of times the knowledge point has been practiced, the interval between sequential learning steps, and the student's degree of mastery of the knowledge point. Combining these forgetting factors, LFKT uses a deep neural network and treats students' answer results as indirect feedback on knowledge mastery during tracing. Experiments on real online-education datasets show that, compared with current knowledge tracing models, LFKT tracks students' knowledge states better and achieves good predictive performance.

11.
Lewis signalling games illustrate how language might evolve from random behaviour. The probability of evolving an optimal signalling language is, in part, a function of what learning strategy the agents use. Here we investigate three learning strategies, each of which allows agents to forget old experience. In each case, we find that forgetting increases the probability of evolving an optimal language. It does this by making it less likely that past partial success will continue to reinforce suboptimal practice. The learning strategies considered here show how forgetting past experience can promote learning in the context of games with suboptimal equilibria.
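A Lewis signalling game with forgetting can be sketched as reinforcement (Roth-Erev style urn) learning in which all accumulated propensities are discounted after every round, so stale reinforcement of suboptimal practice fades. This is a generic two-state, two-signal sketch under assumed parameter names (`forget`, `rounds`), not the specific strategies studied in the paper.

```python
import random

def lewis_game(rounds=2000, forget=0.01, seed=0):
    """Two-state/two-signal Lewis signalling game with reinforcement
    learning plus forgetting: successful rounds add weight to the
    sender's signal choice and the receiver's act choice, and every
    round all weights are discounted by (1 - forget)."""
    rng = random.Random(seed)
    sender = [[1.0, 1.0], [1.0, 1.0]]    # sender[state][signal] weights
    receiver = [[1.0, 1.0], [1.0, 1.0]]  # receiver[signal][act] weights

    def choose(weights):
        # Sample an index proportionally to its weight.
        r = rng.uniform(0, weights[0] + weights[1])
        return 0 if r < weights[0] else 1

    for _ in range(rounds):
        state = rng.randrange(2)
        signal = choose(sender[state])
        act = choose(receiver[signal])
        if act == state:  # successful communication -> reinforce
            sender[state][signal] += 1.0
            receiver[signal][act] += 1.0
        for table in (sender, receiver):  # forgetting: discount everything
            for row in table:
                row[0] *= 1.0 - forget
                row[1] *= 1.0 - forget
    return sender, receiver
```

Because the discount applies uniformly, weights stay bounded and early lucky reinforcement of a suboptimal mapping cannot dominate forever, which is the mechanism the abstract credits for the improved probability of reaching an optimal language.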

12.
Artificial intelligence techniques, represented by the support vector machine (SVM), are widely used in intelligent sensor systems, but the traditional SVM exhibits "catastrophic forgetting": it forgets previously learned knowledge and cannot incrementally learn new data, which no longer meets the real-time requirements of intelligent sensor systems. The Learn++ algorithm, by contrast, can incrementally learn newly arriving data without forgetting old knowledge, even when the new data belong to new classes. To address this problem, a Learn++ ensemble method based on the shell (hull) vector algorithm is proposed. Experimental results show that the algorithm not only supports incremental learning but also, while maintaining classification accuracy, improves training speed and reduces storage requirements, satisfying the online learning needs of current intelligent sensor systems.

13.
A real-time online learning system with capacity limits needs to gradually forget old information in order to avoid catastrophic forgetting. This can be achieved by allowing new information to overwrite old, as in a so-called palimpsest memory. This paper describes an incremental learning rule based on the Bayesian confidence propagation neural network (BCPNN) that has palimpsest properties when employed in an attractor neural network. The network does not suffer from catastrophic forgetting, has a capacity dependent on the learning time constant, and exhibits faster convergence for newer patterns.
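The palimpsest property, where new patterns gradually overwrite old ones in an attractor network, can be illustrated with a decayed Hebbian rule in a Hopfield-style network. Note this substitutes a plain decayed outer-product rule for the paper's BCPNN learning rule; the decay factor `lam` plays the role of the learning time constant and is an assumption for the example.

```python
import numpy as np

def store(W, pattern, lam=0.5):
    """Palimpsest-style storage: decay all existing correlations by `lam`,
    then imprint the new +/-1 pattern with a Hebbian outer product. Old
    memories fade geometrically instead of being catastrophically lost."""
    p = np.asarray(pattern, dtype=float)
    W = lam * W + np.outer(p, p)
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, probe, steps=10):
    """Synchronous attractor dynamics: iterate sign(W @ s) from the probe."""
    s = np.asarray(probe, dtype=float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0  # break ties deterministically
    return s
```

With geometric decay, the capacity is set by how fast old imprints shrink: recent patterns remain stable attractors while the oldest ones are effectively overwritten.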

14.
The proliferation of networked data in various disciplines motivates a surge of research interests on network or graph mining. Among them, node classification is a typical learning task that focuses on exploiting the node interactions to infer the missing labels of unlabeled nodes in the network. A vast majority of existing node classification algorithms overwhelmingly focus on static networks and they assume the whole network structure is readily available before performing learning algorithms. However, it is not the case in many real-world scenarios where new nodes and new links are continuously being added in the network. Considering the streaming nature of networks, we study how to perform online node classification on this kind of streaming networks (a.k.a. online learning on streaming networks). As the existence of noisy links may negatively affect the node classification performance, we first present an online network embedding algorithm to alleviate this problem by obtaining the embedding representation of new nodes on the fly. Then we feed the learned embedding representation into a novel online soft margin kernel learning algorithm to predict the node labels in a sequential manner. Theoretical analysis is presented to show the superiority of the proposed framework of online learning on streaming networks (OLSN). Extensive experiments on real-world networks further demonstrate the effectiveness and efficiency of the proposed OLSN framework.

15.
The notion of forgetting, also known as variable elimination, has been investigated extensively in the context of classical logic, but less so in (nonmonotonic) logic programming and nonmonotonic reasoning. The few approaches that exist are based on syntactic modifications of a program at hand. In this paper, we establish a declarative theory of forgetting for disjunctive logic programs under answer set semantics that is fully based on semantic grounds. The suitability of this theory is justified by a number of desirable properties. In particular, one of our results shows that our notion of forgetting can be entirely captured by classical forgetting. We present several algorithms for computing a representation of the result of forgetting, and provide a characterization of the computational complexity of reasoning from a logic program under forgetting. As applications of our approach, we present a fairly general framework for resolving conflicts in inconsistent knowledge bases that are represented by disjunctive logic programs, and we show how the semantics of inheritance logic programs and update logic programs from the literature can be characterized through forgetting. The basic idea of the conflict resolution framework is to weaken the preferences of each agent by forgetting certain knowledge that causes inconsistency. In particular, we show how to use the notion of forgetting to provide an elegant solution for preference elicitation in disjunctive logic programming.

16.
In continually learning multiple tasks, continual zero-shot learning aims to accumulate knowledge of seen classes and use it to recognize samples of unseen classes. However, catastrophic forgetting easily arises during continual learning, so a continual zero-shot learning algorithm based on latent-vector alignment is proposed. Built on a cross-distribution-aligned variational autoencoder framework, the algorithm aligns the visual latent vectors of the current task with those of previously learned tasks, increasing the similarity of the latent spaces across tasks. It is further combined with a selective retraining method to improve the current model's discrimination of previously learned tasks. For each task, independent classifiers are trained with visual-latent vectors of seen classes and semantic-latent vectors of unseen classes to achieve zero-shot image classification. Experiments on four standard datasets show that the algorithm effectively performs continual zero-shot recognition and alleviates catastrophic forgetting.

17.
In this paper, we present a neural network structure and a fast incremental learning algorithm using this network. The proposed network structure, named evolving logic networks for real-valued inputs (ELN-R), is a data structure for storing and using the knowledge. A distinctive feature of ELN-R is that the previously learned knowledge stored in ELN-R can be used as a kind of building block in constructing new knowledge. Using this feature, the proposed learning algorithm can enhance the stability and plasticity at the same time, and as a result, the fast incremental learning can be realized. The performance of the proposed scheme is shown by a theoretical analysis and an experimental study on two benchmark problems.

18.
Occupancy information is essential to facilitate demand-driven operations of air-conditioning and mechanical ventilation (ACMV) systems. Environmental sensors are increasingly being explored as cost-effective and non-intrusive means to obtain the occupancy information. This requires the extraction and selection of useful features from the sensor data. In past works, feature selection has generally been implemented using filter-based approaches. In this work, we introduce the use of wrapper and hybrid feature selection for better occupancy estimation. To achieve a fast computation time, we introduce a ranking-based incremental search in our algorithms, which is more efficient than the exhaustive search used in past works. For wrapper feature selection, we propose the WRANK-ELM, which searches an ordered list of features using the extreme learning machine (ELM) classifier. For hybrid feature selection, we propose the RIG-ELM, which is a filter-wrapper hybrid that uses the relative information gain (RIG) criterion for feature ranking and the ELM for the incremental search. We present experimental results in an office space with a multi-sensory network to validate the proposed algorithms.
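The ranking-based incremental search can be sketched as follows: rank all features once with a filter criterion, then walk the ranked list, keeping each feature only if it improves a wrapper score, which costs O(d) model fits instead of the 2^d of exhaustive search. In this sketch a correlation ranker and a nearest-centroid classifier stand in for the paper's RIG criterion and ELM classifier; all function names are assumptions for the example.

```python
import numpy as np

def correlation_rank(X, y):
    # Filter step (stand-in for RIG): |correlation| of each feature with y.
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)

def centroid_score(Xs, y):
    # Wrapper step (stand-in for ELM): nearest-centroid training accuracy.
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def incremental_select(X, y, ranker=correlation_rank, score_fn=centroid_score):
    """Ranking-based incremental wrapper search: rank features once, then
    add them in rank order, keeping a feature only if the wrapper score
    strictly improves."""
    order = np.argsort(ranker(X, y))[::-1]  # best-ranked feature first
    chosen, best = [], -np.inf
    for f in order:
        s = score_fn(X[:, chosen + [int(f)]], y)
        if s > best:
            chosen, best = chosen + [int(f)], s
    return chosen, best
```

The single pass over the ranked list is what makes the hybrid fast: the cheap filter decides the visiting order, and the expensive wrapper is consulted only d times.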

19.
A Survey of Self-Organizing Incremental Neural Networks
邱天宇, 申富饶, 赵金熙. 《软件学报》(Journal of Software), 2016, 27(9): 2230-2247
The self-organizing incremental neural network (SOINN) is a two-layer neural network based on competitive learning that performs online clustering and topological representation of dynamic input data without prior knowledge, while remaining robust to noisy data. Its incremental nature allows SOINN to discover and learn new patterns appearing in a data stream without affecting previously learned results, so it can serve as a general-purpose learning algorithm for a wide range of unsupervised learning problems. With appropriate adjustments to the model and algorithm, SOINN can also be adapted to supervised learning, associative memory, pattern-based reasoning, manifold learning, and other learning scenarios. SOINN has been applied in many fields, including robot intelligence, computer vision, expert systems, and anomaly detection.

20.
Intelligent robots designed within traditional machine learning frameworks show poor learning initiative, poor adaptability to uncertain situations, and poor extensibility of knowledge and abilities when executing visual tasks. Building on the recently proposed idea of cognitive development, an incremental autonomous visual learning algorithm driven by visual novelty is proposed to address these problems. The algorithm computes visual novelty via online principal component analysis (PCA) and uses it as the intrinsic motivation for Q-learning; updating the PCA subspace serves as the active learning and accumulation of knowledge, guided by the novelty-motivated Q-learning, so that the robot can decide how to learn next according to its acquired knowledge and the novelty of the scene it "sees". Experimental results show that the algorithm exhibits autonomous exploration and learning, the ability to actively guide the robot to learn new knowledge, and the capability to acquire and accumulate knowledge online and incrementally while developing its intelligence.
