Similar articles
20 similar articles found (search time: 0 ms)
1.
Recent machine learning challenges require the capability of learning in non-stationary environments. These challenges call for new algorithms able to deal with changes in the underlying problem to be learnt. These changes can be gradual or trend changes, abrupt changes, or recurring contexts. As the dynamics of the changes can be very different, existing machine learning algorithms have difficulty coping with them. Several methods, using for instance ensembles or variable-length windowing, have been proposed to approach this task. In this work we propose a new method, for single-layer neural networks, based on the introduction of a forgetting function in an incremental online learning algorithm. This forgetting function gives a monotonically increasing importance to new data. Due to the combination of incremental learning and increasing importance assignment, the network forgets rapidly in the presence of changes while maintaining stable behavior when the context is stationary. The performance of the method has been tested on several regression and classification problems and its results compared with those of previous works. The proposed algorithm demonstrates high adaptation to changes while maintaining low consumption of computational resources.
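The paper's exact forgetting function is not reproduced in the abstract. One common way to realize "monotonically increasing importance of new data" in an incremental single-layer (linear-output) learner is recursive least squares with a forgetting factor λ < 1; the sketch below is a generic illustration of that idea, and the class and parameter names are ours, not the authors' API:

```python
import numpy as np

class ForgettingRLS:
    """Online single-layer learner with exponential forgetting (lam < 1).
    Older samples are down-weighted by lam per step, so the model adapts
    quickly after a change while staying stable in stationary phases."""
    def __init__(self, n_in, n_out, lam=0.98, delta=100.0):
        self.lam = lam
        self.W = np.zeros((n_out, n_in))       # output weights
        self.P = np.eye(n_in) * delta          # inverse correlation matrix

    def update(self, x, d):
        x = x.reshape(-1, 1)
        d = d.reshape(-1, 1)
        Px = self.P @ x
        k = Px / (self.lam + x.T @ Px)         # gain vector
        e = d - self.W @ x                     # a-priori error
        self.W += e @ k.T
        self.P = (self.P - k @ Px.T) / self.lam
        return float(np.sum(e ** 2))
```

With λ = 0.98 the effective memory is roughly 1/(1 − λ) = 50 samples, so after an abrupt change in the target mapping the old data's influence decays geometrically.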

2.
In this article, a new neural network model is presented for incremental learning tasks, where networks are required to learn new knowledge without forgetting the old. An essential core of the proposed network structure is its dynamic and spatially changing connection weights (DSCWs). A learning scheme is developed for the formulation of the dynamically changing weights, while structural adaptation is formulated by the spatially changing connection weights. To avoid disturbing the old knowledge through the creation of new connections, a restoration mechanism is introduced using the DSCWs. The usefulness of the proposed model is demonstrated on a system identification task. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16–18, 2002.

3.
Ke, Minlong, Fernanda L., Xin. Neurocomputing, 2009, 72(13-15): 2796
Negative correlation learning (NCL) is a successful approach to constructing neural network ensembles. In batch learning mode, NCL outperforms many other ensemble learning approaches. Recently, NCL has also been shown to be a potentially powerful approach to incremental learning, but its advantages have not yet been fully exploited. In this paper, we propose a selective NCL (SNCL) algorithm for incremental learning. Concretely, every time a new training data set is presented, the previously trained neural network ensemble is cloned, and the cloned ensemble is trained on the new data set. After that, the new ensemble is combined with the previous ensemble, and a selection process is applied to prune the whole ensemble to a fixed size. This paper is an extended version of our preliminary paper on SNCL. Compared to the previous work, it presents a deeper investigation into SNCL, considering different objective functions for the selection process and comparing SNCL to other NCL-based incremental learning algorithms on two more real-world bioinformatics data sets. Experimental results demonstrate the advantage of SNCL. Further, comparisons between SNCL and other existing incremental learning algorithms, such as Learn++ and ARTMAP, are also presented.
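The selection step described above can be sketched as greedy backward elimination over the merged ensemble. This is one possible objective (ensemble-mean squared error on held-out data); the paper investigates several objective functions, and the function below is an illustration, not the authors' implementation:

```python
import numpy as np

def prune_ensemble(members, X_val, y_val, target_size):
    """Greedy backward elimination: repeatedly drop the member whose
    removal least hurts (or most helps) the ensemble-mean squared error
    on validation data, until the fixed target size is reached."""
    members = list(members)
    preds = [m(X_val) for m in members]        # cache member predictions
    while len(members) > target_size:
        best_err, best_i = None, None
        for i in range(len(members)):
            rest = [p for j, p in enumerate(preds) if j != i]
            err = np.mean((np.mean(rest, axis=0) - y_val) ** 2)
            if best_err is None or err < best_err:
                best_err, best_i = err, i
        members.pop(best_i)
        preds.pop(best_i)
    return members
```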

4.
史达, 谭少华. 控制与决策 (Control and Decision), 2010, 25(6): 925-928
A hybrid incremental learning algorithm for Bayesian network structure is proposed. First, a polynomial-time restricted learning technique is introduced to build a candidate parent-node set for each variable; then, based on these candidate parent sets, a search technique is used to learn the current network incrementally. The algorithm's complexity is significantly lower than that of the best existing incremental Bayesian network learning algorithms. Both theory and experiments show that the more complex the problem, the more pronounced the algorithm's advantage in computational complexity.

5.
This paper describes a fast training algorithm for feedforward neural nets, applied to a two-layer neural network that classifies segments of speech as voiced, unvoiced, or silence. The speech classification method is based on five features computed for each speech segment and used as input to the network. The network weights are trained using a new fast training algorithm that minimizes the total least-squares error between the actual output of the network and the corresponding desired output. The iterative training algorithm uses a quasi-Newton error-minimization method and employs a positive-definite approximation of the Hessian matrix to converge quickly to a locally optimal set of weights. Convergence is fast, with a local minimum typically reached within ten iterations; in terms of convergence speed, the algorithm compares favorably with other training techniques. When used for voiced-unvoiced-silence classification of speech frames, the network's performance compares favorably with current approaches. Moreover, the approach has the advantage of requiring no assumption of a particular probability distribution for the input features.
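A quasi-Newton step with a positive-definite Hessian approximation for least-squares training can be sketched as damped Gauss-Newton (Levenberg-Marquardt style), where J^T J + mu*I stands in for the Hessian. This is a generic sketch, not the authors' exact algorithm; the network size and the toy fitting problem are illustrative:

```python
import numpy as np

def net(w, X, nh):
    """Two-layer (one hidden layer, tanh) network; w packs all weights."""
    d = X.shape[1]
    W1 = w[:d * nh].reshape(nh, d)
    b1 = w[d * nh:d * nh + nh]
    W2 = w[d * nh + nh:d * nh + 2 * nh]
    b2 = w[-1]
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def train_gn(X, y, nh=6, iters=80, seed=0):
    """Damped Gauss-Newton: J^T J + mu*I is a positive-definite
    approximation of the Hessian of the sum-of-squares error."""
    rng = np.random.default_rng(seed)
    n_w = X.shape[1] * nh + 2 * nh + 1
    w = rng.normal(scale=0.5, size=n_w)
    mu, eps = 1e-2, 1e-6
    sse = lambda v: np.sum((net(v, X, nh) - y) ** 2)
    for _ in range(iters):
        r = net(w, X, nh) - y
        J = np.empty((len(y), n_w))
        for j in range(n_w):                   # numerical Jacobian
            wp = w.copy()
            wp[j] += eps
            J[:, j] = (net(wp, X, nh) - y - r) / eps
        step = np.linalg.solve(J.T @ J + mu * np.eye(n_w), J.T @ r)
        if sse(w - step) < sse(w):             # accept step, relax damping
            w = w - step
            mu = max(mu * 0.7, 1e-8)
        else:                                  # reject step, damp harder
            mu *= 10.0
    return w
```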

6.
Model-based learning systems such as neural networks usually "forget" learned skills due to incremental learning of new instances, because the modification of a parameter interferes with old memories. Therefore, to avoid forgetting, incremental learning processes in these systems must include relearning of old instances. The relearning process, however, is time-consuming. We present two types of incremental learning method designed to achieve quick adaptation with low resources. One approach is to use a sleep phase to provide time for learning. The other involves a "meta-learning module" that acquires learning skills through experience. Using the meta-learning module, the system carries out "reactive modification" of parameters not only to memorize new instances, but also to avoid forgetting old memories. This work was presented, in part, at the 9th International Symposium on Artificial Life and Robotics, Oita, Japan, January 28–30, 2004.

7.
Negative Correlation Learning (NCL) has been successfully applied to construct neural network ensembles. It encourages the neural networks that compose the ensemble to be different from each other and, at the same time, accurate. The difference among the networks of an ensemble is a desirable feature for incremental learning, since some of the networks may adapt faster and better to new data than others. NCL is therefore a potentially powerful approach to incremental learning. With this in mind, this paper presents an analysis of NCL aimed at determining its weak and strong points for incremental learning. The analysis shows that NCL can be used to overcome catastrophic forgetting, an important problem in incremental learning. However, when catastrophic forgetting is very low, no advantage is gained from using more than one network of the ensemble to learn new data, and the test error is high. When all the networks are used to learn new data, some of them can indeed adapt better than others, but catastrophic forgetting is higher. It is therefore important to find a trade-off between overcoming catastrophic forgetting and using the entire ensemble to learn new data. The NCL results are comparable with those of other approaches specifically designed for incremental learning. Thus, the study presented in this work reveals encouraging results with negative correlation in incremental learning, showing that NCL is a promising approach.

8.
Convex incremental extreme learning machine
Guang-Bin, Lei. Neurocomputing, 2007, 70(16-18): 3056
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes, based on a convex optimization method, whenever a new hidden node is randomly added. Furthermore, we show that given a type of piecewise-continuous computational hidden nodes (possibly not neuron-like nodes), if SLFNs can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
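The incremental construction can be sketched as follows. In plain I-ELM each new random node gets the analytic weight beta_n = <e, h>/<h, h> and old weights are left untouched; in the convex variant the new output is a convex combination of the old output and the new node, so all existing output weights are rescaled by (1 - beta_n). This is an illustrative numpy sketch consistent with that convex-combination idea, with our own function names, not the paper's code:

```python
import numpy as np

def i_elm(X, y, n_nodes=50, convex=True, seed=0):
    """Incremental ELM sketch: random sigmoid hidden nodes are added one
    at a time and output weights are computed analytically (no gradient
    descent). convex=True rescales existing output weights by (1 - beta_n)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W, b, beta = [], [], []
    f = np.zeros_like(y)                      # current network output on X
    e = y - f                                 # residual error
    for _ in range(n_nodes):
        w_i = rng.normal(size=d)
        b_i = rng.normal()
        h = 1.0 / (1.0 + np.exp(-(X @ w_i + b_i)))   # random sigmoid node
        if convex:
            v = h - f                         # direction of convex update
            beta_n = float(e @ v) / float(v @ v)
            beta = [bj * (1.0 - beta_n) for bj in beta] + [beta_n]
            f = f + beta_n * v                # f <- (1-beta_n) f + beta_n h
        else:
            beta_n = float(e @ h) / float(h @ h)
            beta.append(beta_n)               # old weights untouched
            f = f + beta_n * h
        W.append(w_i)
        b.append(b_i)
        e = y - f
    return np.array(W), np.array(b), np.array(beta)
```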

9.
徐海龙. 控制与决策 (Control and Decision), 2010, 25(2): 282-286
To address the difficulty of obtaining large numbers of class-labeled samples during SVM training, an active SVM incremental training algorithm based on distance-ratio uncertainty sampling (DRB-ASVM) is proposed and applied to incremental SVM training. Experimental results show that, without affecting classification accuracy, the SVM with the active learning strategy selects far fewer samples for labeling than random selection, thereby reducing the labeling workload and cost while improving training speed.
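The core active-learning loop can be sketched as plain uncertainty sampling: repeatedly query the label of the pool sample closest to the current decision boundary. The paper's distance-ratio criterion is a refinement of this idea and is not reproduced here, and a logistic-regression linear classifier stands in for the SVM in this sketch:

```python
import numpy as np

def uncertainty_sampling_loop(X_pool, y_pool, n_rounds=30, lr=0.1):
    """Active-learning sketch: query the most uncertain pool sample
    (smallest |decision value|), add its label, retrain. y_pool (+/-1)
    acts as the labeling oracle."""
    n, d = X_pool.shape
    w = np.zeros(d + 1)                        # weights + bias
    Xa = np.hstack([X_pool, np.ones((n, 1))])
    labeled = set()
    for _ in range(n_rounds):
        scores = np.abs(Xa @ w)
        scores[list(labeled)] = np.inf         # never re-query a sample
        i = int(np.argmin(scores))             # most uncertain sample
        labeled.add(i)
        idx = list(labeled)
        for _ in range(50):                    # logistic-regression updates
            p = 1.0 / (1.0 + np.exp(-(Xa[idx] @ w)))
            w += lr * Xa[idx].T @ ((y_pool[idx] + 1) / 2 - p) / len(idx)
    return w, labeled
```

The point is the query rule, not the classifier: only the samples the learner is unsure about get labeled, so far fewer labels are spent than with random selection.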

10.
Bin, Xiangyang, Jianping. Pattern Recognition, 2007, 40(12): 3621-3632
In this paper, we propose a robust incremental learning framework for accurate skin-region segmentation in real-life images. The proposed framework automatically learns the skin color information from each test image in real time and generates the specific skin model (SSM) for that image. Consequently, the SSM can adapt to a given image, in which the skin colors may vary from one region to another due to illumination conditions and inherent skin tones. The framework runs multiple iterations to learn the SSM, and each iteration comprises two major steps: (1) collecting new skin samples by region growing; (2) updating the skin model incrementally with the available skin samples. After the skin model converges (i.e., becomes the SSM), post-processing can be performed to fill the interstices in the skin map. We performed a set of experiments on a large-scale real-life image database, and our method noticeably outperformed the well-known Bayesian histogram. The experimental results confirm that the SSM is more robust than static skin models.
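Step (2), incrementally updating a color model as new skin samples arrive, can be illustrated with a single-Gaussian model whose mean and covariance are maintained by Welford's online algorithm. This is a generic stand-in; the paper's actual SSM may use a different density model:

```python
import numpy as np

class IncrementalGaussianModel:
    """Running mean/covariance of pixel colors, updated sample by sample
    (Welford's algorithm), so the model can absorb newly grown skin
    regions without revisiting earlier samples."""
    def __init__(self, dim=3):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # sum of outer-product deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    def cov(self):
        return self.M2 / (self.n - 1) if self.n > 1 else np.eye(len(self.mean))

    def mahalanobis2(self, x):
        """Squared Mahalanobis distance; small values = skin-like color."""
        d = x - self.mean
        return float(d @ np.linalg.solve(self.cov(), d))
```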

11.
Many real scenarios in machine learning are dynamic in nature. Learning in these types of environments represents an important challenge for learning systems. In this context, the model used for learning should work in real time and have the ability to act and react by itself, adjusting its controlling parameters, and even its structure, depending on the requirements of the process. In a previous work, the authors presented an online learning algorithm for two-layer feedforward neural networks that includes a factor weighting the errors committed on each sample. This method is effective in dynamic environments as well as in stationary contexts. Regarding the method's incremental nature, we raise the possibility of adapting the network topology to the learning needs. In this paper, we demonstrate and justify the suitability of the online learning algorithm for working with adaptive structures without significantly degrading its performance. The theoretical basis for the method is given, and its performance is illustrated through its application to different system identification problems. The results confirm that the proposed method is able to incorporate units into its hidden layer during the learning process without significant performance degradation.

12.
13.
An approach to learning mobile robot navigation
This paper describes an approach to learning an indoor robot navigation task through trial and error. A mobile robot, equipped with visual, ultrasonic, and laser sensors, learns to servo to a designated target object. In less than ten minutes of operation time, the robot is able to navigate to a marked target object in an office environment. The central learning mechanism is the explanation-based neural network learning algorithm (EBNN). EBNN initially learns functions purely inductively using neural network representations. With increasing experience, EBNN employs domain knowledge to explain and analyze training data in order to generalize in a more knowledgeable way. Here, EBNN is applied in the context of reinforcement learning, which allows the robot to learn control using dynamic programming.

14.
Transfer learning methods have been successfully applied in many fields to address performance degradation under evolving working conditions or environments. This paper expands the range of transfer learning applications by designing an integrated approach to fault diagnostics across different kinds of components. We use two deep learning methods, Convolutional Neural Network (CNN) and Multi-Layer Perceptron (MLP), to train several base models on a large amount of source data. The base models are then transferred to target data with different levels of variation, including variations in working load and component type. The Case Western Reserve University bearing dataset and the 2009 PHM Data Challenge gearbox dataset are used to validate the performance of the proposed approach. Experimental results show that the proposed approach improves diagnostic accuracy not only across working conditions of the same component but also across different components.

15.
A scalable, incremental learning algorithm for classification problems
In this paper a novel data mining algorithm, Clustering and Classification Algorithm-Supervised (CCA-S), is introduced. CCA-S enables the scalable, incremental learning of a non-hierarchical cluster structure from training data. This cluster structure serves as a function that maps the attribute values of new data to their target class, that is, classifies new data. CCA-S utilizes both the distance and the target class of training data points to derive the cluster structure. We first discuss the problems that many existing data mining algorithms for classification, such as decision trees and artificial neural networks, have with scalable, incremental learning. We then describe CCA-S and discuss its advantages in scalable, incremental learning. Results of applying CCA-S to several common classification data sets are presented; they show that the classification performance of CCA-S is comparable to that of other data mining algorithms such as decision trees, artificial neural networks, and discriminant analysis.
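The combination of distance-based clustering and target classes can be illustrated with a toy incremental classifier that keeps one running centroid per class and classifies new data by the nearest centroid. CCA-S itself grows a richer non-hierarchical cluster structure; this is only a minimal sketch of the idea, with names of our choosing:

```python
import numpy as np

class IncrementalCentroidClassifier:
    """Per-class centroids updated incrementally; new data are classified
    by the nearest centroid. Each update touches only one running mean,
    so learning scales linearly with the number of samples."""
    def __init__(self):
        self.centroids = {}   # class label -> (count, mean vector)

    def partial_fit(self, x, label):
        n, mu = self.centroids.get(label, (0, np.zeros_like(x, dtype=float)))
        n += 1
        mu = mu + (x - mu) / n                 # incremental mean update
        self.centroids[label] = (n, mu)

    def predict(self, x):
        return min(self.centroids,
                   key=lambda c: np.linalg.norm(x - self.centroids[c][1]))
```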

16.
In this paper a new learning algorithm is proposed for the problem of simultaneously learning a function and its derivatives, as an extension of the study of the error-minimized extreme learning machine for single-hidden-layer feedforward neural networks. Our formulation leads to solving a system of linear equations, whose solution is obtained by the Moore-Penrose generalized pseudo-inverse. In this approach the number of hidden nodes is determined automatically by repeatedly adding new hidden nodes to the network, either one by one or group by group, and updating the output weights incrementally in an efficient manner until the network's output error falls below the expected learning accuracy. To verify the efficiency of the proposed method, a number of examples are considered, and the results obtained with the proposed method are compared with those of two other popular methods. The proposed method is observed to be fast and to produce similar or better generalization performance on the test data.
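The node-growing loop can be sketched as follows: hidden nodes are added in groups and the output weights are re-solved with the Moore-Penrose pseudo-inverse until the training error drops below the expected accuracy. Note two simplifications relative to the paper: the authors update the output weights incrementally rather than re-solving from scratch, and they also fit derivative targets, while this sketch fits function values only:

```python
import numpy as np

def grow_elm(X, y, tol=1e-3, step=5, max_nodes=200, seed=0):
    """ELM-style sketch: grow random tanh hidden nodes group by group;
    output weights come from the Moore-Penrose pseudo-inverse of the
    hidden-layer output matrix H."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = np.empty((0, d))
    b = np.empty(0)
    beta = np.empty(0)
    while W.shape[0] < max_nodes:
        W = np.vstack([W, rng.normal(size=(step, d))])   # add a node group
        b = np.concatenate([b, rng.normal(size=step)])
        H = np.tanh(X @ W.T + b)
        beta = np.linalg.pinv(H) @ y                     # Moore-Penrose solution
        if np.mean((H @ beta - y) ** 2) < tol:           # expected accuracy met
            break
    return W, b, beta
```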

17.
One important issue in the implementation of cellular manufacturing systems (CMSs) is deciding whether to convert an existing job shop into a CMS comprehensively in a single run, or incrementally in stages by forming cells one after the other, taking advantage of the experience gained during implementation. This paper presents a new multi-objective nonlinear programming model in a dynamic environment. Furthermore, a novel hybrid multi-objective approach based on a genetic algorithm and an artificial neural network is proposed to solve the presented model. Computational analyses show the proposed algorithm to be much more efficient than the fast non-dominated sorting genetic algorithm (NSGA-II) at generating Pareto-optimal fronts.

18.
This paper reports on studies to overcome difficulties associated with setting the learning rates of backpropagation neural networks by using fuzzy logic. Building on previous research, a fuzzy control system is designed that is capable of dynamically adjusting the individual learning rates of both hidden and output neurons, as well as the momentum term, within a backpropagation network. Results show that the fuzzy controller not only eliminates the effort of configuring a global learning rate, but also increases the rate of convergence compared with a conventional backpropagation network. Comparative studies are presented for a number of different network configurations. The paper also presents a brief overview of fuzzy logic and backpropagation learning, highlighting how the two paradigms can enhance each other.

19.
Recently, various control methods, represented by proportional-integral-derivative (PID) control, have been used for robotic control. To cope with requirements for high response and precision, advanced feedforward controllers such as gravity compensators, Coriolis/centrifugal force compensators, and friction compensators have been built into the controller. Generally, calculating the compensating values within a short sampling period causes a heavy computational load. In this paper, integrated recurrent neural networks are applied as a feedforward controller for the PUMA560 manipulator. The feedforward controller works in place of the gravity and Coriolis/centrifugal force compensators. In the learning process of the neural network using the backpropagation algorithm, the learning coefficient and the gain of the sigmoid function are tuned intuitively and empirically according to the teaching signals. The tuning is complicated because it is conducted by trial and error; the problem becomes especially acute when the scale of the teaching signals is large. To cope with this problem, which concerns learning performance, a simple and adaptive learning technique for large-scale teaching signals is proposed. The learning techniques and control effectiveness are evaluated through simulations using the dynamic model of the PUMA560 manipulator.

20.
In the past decade, intelligent transportation systems have emerged as an efficient way of improving transportation services, with machine learning as the key driver creating scope for numerous innovations and improvements. Still, most machine learning approaches integrate paradigms that fall short of providing cost-effective and scalable solutions. This work employs long short-term memory to capture the long-term temporal dependency in short-term public bus travel speed prediction and thereby detect congestion. In contrast to existing methods, we implement our solution as incremental learning, which is superior to traditional batch learning and enables efficient and sustainable congestion detection. We examine the real-world efficacy of our prototype implementation in Pécs, the fifth-largest city of Hungary, and observe that the incrementally updated model can detect congestion with up to 82.37% accuracy. Additionally, we find that our solution evolves sufficiently over time, implying diverse real-world practicability. The findings of this work can serve as a basis for future improvements toward better public transportation congestion detection.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23

京公网安备 11010802026262号