Similar Documents
20 similar documents found.
1.
This paper proposes ART-Cognizer, a neural network model for two-dimensional pattern recognition. The model combines the architecture of the neocognitron with the top-down attention and matching mechanism of adaptive resonance theory (ART) networks. It therefore tolerates scaling and translation of patterns in the visual field, and its self-organizing learning process is self-stabilizing: it can learn new patterns online without damaging existing memories.

2.
To address two problems of the resource allocating network (RAN) algorithm, namely that its hidden-layer nodes are strongly affected by the initial training data and that it converges slowly, a new RAN learning algorithm is proposed. The initial hidden-layer nodes are determined by a means-clustering algorithm, and an RMS window is added to the original novelty criterion to decide more reliably whether a hidden node should be added. In addition, network parameters are adjusted by combining the least-mean-squares (LMS) algorithm with the extended Kalman filter (EKF), which speeds up learning. Because the word-vector-space text model copes poorly with the high dimensionality and semantic complexity of text, a semantic feature selection method is used to extract semantic features from the text input space and reduce its dimensionality. Experimental results show that the new RAN learning algorithm learns quickly, yields a compact network structure, and classifies well; moreover, the semantic feature selection achieves dimensionality reduction at the same time, greatly shortening text classification time and effectively improving classification accuracy.
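As an illustration of a novelty criterion augmented with an RMS window, here is a minimal Python sketch; it is not the authors' code, and the thresholds dist_thresh, err_thresh, rms_thresh and the window length are hypothetical parameters.

import numpy as np
from collections import deque

def should_add_hidden_node(x, error, centers, err_window,
                           dist_thresh=0.5, err_thresh=0.1, rms_thresh=0.1):
    # RAN-style novelty check: grow a hidden node only if the sample is far
    # from every existing center, its instantaneous error is large, AND the
    # RMS of recent errors over a sliding window is also large.
    err_window.append(error)
    far = min(np.linalg.norm(x - c) for c in centers) > dist_thresh
    rms = np.sqrt(np.mean(np.square(list(err_window))))
    return far and abs(error) > err_thresh and rms > rms_thresh

# usage: err_window = deque(maxlen=25), then call once per training sample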

3.
Aoki T, Aoyagi T. Neural Computation, 2007, 19(10): 2720-2738
Although context-dependent spike synchronization among populations of neurons has been experimentally observed, its functional role remains controversial. In this modeling study, we demonstrate that in a network of spiking neurons organized according to spike-timing-dependent plasticity, an increase in the degree of synchrony of a uniform input can cause transitions between memorized activity patterns in the order presented during learning. Furthermore, context-dependent transitions from a single pattern to multiple patterns can be induced under appropriate learning conditions. These findings suggest one possible functional role of neuronal synchrony in controlling the flow of information by altering the dynamics of the network.

4.
In this study we propose an improved learning algorithm based on the resource allocating network (RAN) for text categorization. RAN is a promising single-hidden-layer neural network based on radial basis functions. We first use a means-clustering-based method to determine the initial centers of the hidden layer, which effectively overcomes the local-optimum limitation of clustering algorithms. Subsequently, to improve the novelty criteria of RAN, we propose a root mean square (RMS) sliding-window method that reduces the underlying influence of undesirable noisy data. Through further study of the learning process of RAN, we divide it into a preliminary phase and a subsequent phase: the former initializes the preliminary structure of RAN and decreases the complexity of the network, while the latter refines its learning ability and improves classification accuracy. Such a compact network structure decreases computational complexity and maintains a high convergence rate. Moreover, a latent semantic feature selection method is utilized to organize documents; it reduces the input scale of the network and reveals the latent semantics between features. Extensive experiments on two benchmark datasets demonstrate the superiority of our algorithm over state-of-the-art text categorization algorithms.
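The latent semantic feature selection step can be approximated by a truncated SVD of the term-document matrix, i.e., classical latent semantic analysis; the sketch below assumes that reading and is not the paper's exact procedure.

import numpy as np

def lsa_reduce(term_doc, k=100):
    # Project documents onto the top-k latent semantic dimensions.
    # term_doc: (n_terms, n_docs) weighted term-document matrix; k must not
    # exceed min(n_terms, n_docs).
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T   # rows = documents, k features each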

5.
Model-based learning systems such as neural networks usually "forget" learned skills due to incremental learning of new instances, because the modification of a parameter interferes with old memories. Therefore, to avoid forgetting, incremental learning in these systems must include relearning of old instances; the relearning process, however, is time-consuming. We present two types of incremental learning method designed to achieve quick adaptation with low resources. One approach uses a sleep phase to provide time for learning. The other involves a "meta-learning module" that acquires learning skills through experience: the system carries out "reactive modification" of parameters not only to memorize new instances, but also to avoid forgetting old memories. This work was presented, in part, at the 9th International Symposium on Artificial Life and Robotics, Oita, Japan, January 28–30, 2004.

6.
A real-time online learning system with capacity limits needs to gradually forget old information in order to avoid catastrophic forgetting. This can be achieved by allowing new information to overwrite old, as in a so-called palimpsest memory. This paper describes an incremental learning rule based on the Bayesian confidence propagation neural network (BCPNN) that has palimpsest properties when employed in an attractor neural network. The network does not suffer from catastrophic forgetting, has a capacity that depends on the learning time constant, and converges faster for newer patterns.
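A minimal sketch of an incremental BCPNN-style rule: unit and pairwise activation probabilities are tracked as exponentially decaying running means, and the weights follow from them; the time constant tau sets the palimpsest forgetting horizon. Variable names and the epsilon regularizer are illustrative.

import numpy as np

def bcpnn_update(p_i, p_ij, x, tau=100.0, eps=1e-6):
    # One incremental step for a binary pattern x (arrays updated in place).
    lam = 1.0 / tau
    p_i += lam * (x - p_i)                  # running unit probabilities
    p_ij += lam * (np.outer(x, x) - p_ij)   # running pairwise probabilities
    w = np.log((p_ij + eps) / (np.outer(p_i, p_i) + eps))  # BCPNN weights
    bias = np.log(p_i + eps)
    return w, bias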

7.
Senn W, Fusi S. Neural Computation, 2005, 17(10): 2106-2138
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performances with unbounded weights are regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.
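A minimal sketch of the described rule, assuming a threshold unit with nonnegative bounded excitatory weights, global inhibition proportional to the total input activity, and updates only on mismatch; all parameter values are illustrative.

import numpy as np

def train_bounded_perceptron(X, y, epochs=200, lr=0.01, w_max=1.0, g_inh=0.5):
    # X: (n_samples, n_inputs) nonnegative activities; y: targets in {0, 1}.
    rng = np.random.default_rng(0)
    w = rng.uniform(0, w_max, X.shape[1])           # bounded excitatory weights
    for _ in range(epochs):
        for x, target in zip(X, y):
            out = 1 if w @ x - g_inh * x.sum() > 0 else 0  # global inhibition
            if out != target:                       # modify only on mismatch
                w = np.clip(w + lr * (target - out) * x, 0.0, w_max)
    return w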

8.
A memory capacity exists for artificial neural networks of associative memory. Adding new memories beyond this capacity overloads the network and makes all learned memories irretrievable (catastrophic forgetting) unless there is a provision for forgetting old memories. This article describes a property of associative memory networks in which a number of units are replaced when the network learns: every time the network learns a new item or pattern, a number of units are erased and the same number of units are added. It is shown that the memory capacity of the network depends on the number of replaced units, and that there exists an optimal number of replaced units at which the memory capacity is maximized. The optimal number of replaced units is small, and seems to be independent of the network size. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.
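The replacement scheme can be sketched on a standard Hebbian associative memory: each time a pattern is stored, a fixed number of randomly chosen units have all their connections zeroed, as if swapped for fresh units. This is a minimal illustration, not the article's simulation code.

import numpy as np

def store_with_replacement(W, pattern, n_replace, rng=np.random.default_rng()):
    # Store a +/-1 pattern Hebbian-style, then 'replace' n_replace units by
    # erasing their incoming and outgoing weights.
    n = len(pattern)
    W += np.outer(pattern, pattern) / n
    np.fill_diagonal(W, 0.0)
    idx = rng.choice(n, n_replace, replace=False)
    W[idx, :] = 0.0
    W[:, idx] = 0.0
    return W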

9.
In this paper, we propose a new method called information enhancement to interpret the internal representations of competitive learning. We consider competitive learning as a process of maximising the mutual information with respect to input patterns. We then examine to what extent this mutual information can be increased or decreased by focusing upon, or enhancing, some elements in a network. If enhancing certain elements increases the information on input patterns, those elements carry more information about the inputs, so only they need careful examination. We applied the method to an artificial problem, the Iris problem, and an air pollution problem. In all cases we succeeded in extracting important features of the patterns, and the final maps were better than those obtained by the conventional self-organising map. We regard this as a first step towards a full understanding of internal representations in competitive learning.

10.
Incremental backpropagation learning networks
How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network", which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.
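Bounded weight modification can be sketched as clamping each backpropagation step so that no single update moves a weight by more than a fixed amount; the bound delta_max is a hypothetical parameter, not one taken from the paper.

import numpy as np

def bounded_update(w, grad, lr=0.1, delta_max=0.05):
    # Clip the per-weight change so one new instance cannot overwrite old
    # knowledge with a single large step.
    return w - np.clip(lr * grad, -delta_max, delta_max)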

11.
To improve the efficiency of traditional frequent-pattern mining algorithms, an improved Apriori frequent-pattern mining algorithm based on the FP-tree (frequent pattern tree) is proposed. First, a join pre-processing step is added to the join step of the Apriori algorithm. Second, the CP-tree (compact pattern tree) is extended into a new tree structure, the ECP-tree (extension of the compact pattern tree), which builds a compact prefix tree with only a single scan of the database and supports both interactive and incremental mining. The improvements are then combined with the APFT algorithm to mine frequent patterns. Finally, experiments on two data sets from the UCI database show that the improved algorithm achieves high mining efficiency and significantly faster frequent-pattern mining.
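The join pre-check is the standard Apriori device of merging two sorted (k-1)-itemsets only when they share their first k-2 items, which avoids generating duplicate candidates; a sketch (the usual prune step against infrequent subsets would follow):

def apriori_join(frequent_k_minus_1):
    # frequent_k_minus_1: iterable of sorted tuples, all of length k-1.
    items = sorted(frequent_k_minus_1)
    candidates = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if a[:-1] == b[:-1]:             # prefix pre-check before joining
                candidates.append(a + (b[-1],))
    return candidates

# apriori_join([(1, 2), (1, 3), (2, 3)]) -> [(1, 2, 3)]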

12.
A robust incremental learning method with structure adjustment for RBF networks
刘建军, 胡卫东, 郁文贤. 计算机仿真, 2009, 26(7): 192-194, 227
To give RBF networks incremental learning capability and to make that learning more robust, an incremental learning algorithm for RBF networks is presented. The algorithm first clusters the initial data set to obtain the initial RBF network structure, then uses the hidden-node adjustment strategy of the GAP-RBF algorithm to adjust the network structure dynamically, thereby realizing incremental learning. Initializing the RBF network in this way reduces the influence of the training order of the initial samples on network performance and makes incremental learning more robust. Simulations on the IRIS data set and on measured radar data show that the algorithm has good incremental learning ability.
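GAP-RBF adjusts the hidden layer by growing a node for sufficiently novel samples and pruning nodes whose contribution to the output has become insignificant. The pruning side is sketched below; the significance formula here is a deliberately simplified stand-in for the published GAP-RBF measure.

import numpy as np

def prune_insignificant(weights, widths, input_range, e_min=0.01):
    # Approximate each node's significance by |output weight| scaled by how
    # much of the input range its Gaussian covers; drop nodes below e_min.
    significance = np.abs(weights) * (widths / input_range)
    return significance >= e_min      # boolean mask of nodes to keep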

13.
Prototype classifiers have been studied for many years, but few methods can learn incrementally. Moreover, most prototype classifiers require the user to predetermine the number of prototypes, and an improper number can undermine classification performance. To deal with these issues, we propose an online supervised algorithm named Incremental Learning Vector Quantization (ILVQ) for classification tasks. The method makes three contributions. (1) Through an insertion policy, ILVQ incrementally learns new prototypes, covering both between-class and within-class incremental learning. (2) Through an adaptive threshold scheme, ILVQ automatically and dynamically learns the number of prototypes needed for each class according to the distribution of the training data; unlike most current prototype classifiers, it therefore needs no prior knowledge of the number of prototypes or their initial values. (3) A technique for removing useless prototypes eliminates noise introduced into the input data. Experimental results show that ILVQ accommodates an incremental data environment and provides good recognition performance and storage efficiency.
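A minimal sketch of the insertion policy with an adaptive threshold: a new prototype is inserted when the winner has a different label or lies beyond its threshold (taken here to be the distance to its nearest other prototype); otherwise the winner moves toward the sample. The threshold definition and parameters are illustrative simplifications.

import numpy as np

def ilvq_step(x, label, protos, labels, thresholds, lr=0.05):
    if protos:
        d = [np.linalg.norm(x - p) for p in protos]
        win = int(np.argmin(d))
    if not protos or labels[win] != label or d[win] > thresholds[win]:
        protos.append(x.copy()); labels.append(label); thresholds.append(np.inf)
    else:
        protos[win] += lr * (x - protos[win])    # move winner toward sample
    for i, p in enumerate(protos):               # refresh adaptive thresholds
        others = [np.linalg.norm(p - q) for j, q in enumerate(protos) if j != i]
        thresholds[i] = min(others) if others else np.inf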

14.
To address the catastrophic forgetting that can arise in deep incremental learning, a dual-branch iterative deep incremental image classification method is proposed: a main network stores knowledge of old classes, a branch network learns the new classes in the incremental data, and during each increment the main network's weights are used to optimize the parameters of the branch network. A method based on density-peaks clustering selects representative samples from the iterated data set to build an exemplar set, which is added to the incremental training to mitigate catastrophic forgetting. Experiments show that the method performs well.
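Density-peaks clustering scores each sample by its local density (rho) and its distance to the nearest denser sample (delta); samples with large rho * delta are typical of their class and make good exemplars for the retained set. A minimal sketch:

import numpy as np

def density_peak_exemplars(X, n_exemplars, dc=1.0):
    # Pairwise distances, then Gaussian-kernel local density (excluding self).
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0
    delta = np.empty(len(X))
    for i in range(len(X)):
        denser = rho > rho[i]
        delta[i] = D[i, denser].min() if denser.any() else D[i].max()
    return np.argsort(-(rho * delta))[:n_exemplars]   # indices of exemplars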

15.
Research on successive-learning chaotic neural networks
In recent years, the application of chaotic neural networks to information processing, and especially to associative memory, has received great attention. This paper proposes a modified successive-learning chaotic neural network (MSLCNN) model with two important features: (1) the network responds differently to different inputs, so it can distinguish unknown patterns from known ones; (2) it can learn unknown patterns successively. Computer simulations show that the model has application potential.

16.
Online learning control by association and reinforcement
This paper presents a systematic treatment for developing a generic online learning control system based on the fundamental principle of reinforcement learning, or more specifically neural dynamic programming. The online learning system improves its performance over time in two ways: 1) it learns from its own mistakes through the reinforcement signal from the external environment and tries to adjust its actions to improve future performance; and 2) system states associated with positive reinforcement are memorized through a network learning process, so that in the future similar states will be more strongly associated with control actions that lead to positive reinforcement. A successful candidate design for online learning control is introduced, real-time learning algorithms are derived for the individual components of the learning system, and some analytical insight is provided as guidance on the learning process taking place in each module of the online learning control system.
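Neural dynamic programming pairs a critic (which turns the reinforcement signal into a value estimate) with an actor (which strengthens state-action associations that led to reward). A minimal TD-style sketch with linear features; all names and the linear parameterization are illustrative, not the paper's architecture.

import numpy as np

def actor_critic_step(phi, phi_next, action_grad, r, theta_c, theta_a,
                      gamma=0.95, lr_c=0.05, lr_a=0.01):
    # phi, phi_next: feature vectors of successive states; r: reinforcement.
    td_error = r + gamma * (theta_c @ phi_next) - theta_c @ phi
    theta_c = theta_c + lr_c * td_error * phi          # critic: learn value
    theta_a = theta_a + lr_a * td_error * action_grad  # actor: reinforce action
    return theta_c, theta_a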

17.
We present a new learning algorithm that leverages oscillations in the strength of neural inhibition to train neural networks. Raising inhibition can be used to identify weak parts of target memories, which are then strengthened. Conversely, lowering inhibition can be used to identify competitors, which are then weakened. To update weights, we apply the Contrastive Hebbian Learning equation to successive time steps of the network. The sign of the weight change equation varies as a function of the phase of the inhibitory oscillation. We show that the learning algorithm can memorize large numbers of correlated input patterns without collapsing and that it shows good generalization to test patterns that do not exactly match studied patterns.
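A heavily simplified sketch of the phase-gated update: the Contrastive-Hebbian coproduct difference between successive time steps is applied with a sign that flips with the inhibitory oscillation (strengthening during high inhibition, weakening during low). The sinusoidal gating and variable names are assumptions, not the paper's exact equations.

import numpy as np

def oscillating_chl_update(acts_prev, acts_curr, phase, lr=0.01):
    # acts_prev, acts_curr: network activity vectors at successive time steps.
    sign = 1.0 if np.sin(phase) > 0 else -1.0   # + on the high-inhibition half
    return lr * sign * (np.outer(acts_curr, acts_curr)
                        - np.outer(acts_prev, acts_prev))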

18.
Deciding the appropriate size of an artificial neural network has traditionally been done by trial and error, which is time-consuming and follows no rule. Based on the basic learning algorithms of neural networks, this paper constructs a dynamic network structure that better matches the extracted input-output characteristics, and discusses one approach to building such dynamic networks. Learning starts from the simplest network (with no hidden units), and new units are added one by one until the network produces a satisfactory fit.
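One way to realize such constructive learning, sketched here with RBF units and a least-squares output layer (the unit type and stopping rule are illustrative choices, not prescribed by the text):

import numpy as np

def grow_until_satisfied(X, y, max_hidden=50, tol=1e-3, width=1.0):
    # Start with no hidden units; repeatedly add a unit centered on the
    # worst-fit sample and refit the output weights, until MSE < tol.
    centers, w = [], np.zeros(0)
    pred = np.zeros_like(y, dtype=float)
    while len(centers) < max_hidden and np.mean((y - pred) ** 2) >= tol:
        centers.append(X[np.argmax(np.abs(y - pred))])
        H = np.exp(-np.linalg.norm(X[:, None] - np.array(centers)[None],
                                   axis=-1) ** 2 / (2 * width ** 2))
        w, *_ = np.linalg.lstsq(H, y, rcond=None)
        pred = H @ w
    return np.array(centers), w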

19.
Fine-grained image classification is a challenging research topic because of the high similarity among categories and the high dissimilarity within a category caused by different poses and scales. Cultural heritage images are fine-grained images because in most cases they closely resemble one another, so distinguishing cultural heritage architecture with ordinary classification techniques can be difficult. This study proposes a cultural heritage content retrieval method using adaptive deep learning for fine-grained image retrieval. The key contribution is a retrieval model that can handle incremental streams of new categories while maintaining its past performance on old categories, without losing the old categorization of a cultural heritage image. Incremental learning for new classes is conducted to reduce the re-training process; in this step the original classes are not needed for re-training, which we call an adaptive deep learning technique. Cultural heritage content, in the case of Thai archaeological site architecture, was retrieved through machine learning and image processing. We analyze experimental results of incremental learning for fine-grained images using images of Thai archaeological site architecture from World Heritage provinces in Thailand, which share similar architecture; applying a fine-grained image retrieval technique to this group of cultural heritage images solves the problem of high inter-category similarity and high intra-category dissimilarity. The proposed method retrieves the correct image from the database with an average accuracy of 85 percent, and the adaptive deep learning approach outperforms state-of-the-art methods in fine-grained image retrieval.

20.
In this paper we describe an algorithm designed for learning perceptual organization of an autonomous agent. The learning algorithm performs incremental clustering of a perceptual input under reward. The distribution of the input samples is modeled by a Gaussian mixture density, which serves as a state space for the policy learning algorithm. The agent learns to select actions in response to the presented stimuli simultaneously with estimating the parameters of the input mixture density. The feedback from the environment is given to the agent in the form of a scalar value, or a reward, which represents the utility of a particular clustering configuration for the action selection. The setting of the learning task makes it impossible to use supervised or partially supervised techniques to estimate the parameters of the input density. The paper introduces the notion of weak transduction and shows a solution to it using an EM-based framework.
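The clustering-under-reward setting can be sketched as an EM-like step for a spherical Gaussian mixture in which each sample's influence on the M-step is scaled by its (nonnegative) reward; this weighting is an illustrative reading, not the paper's exact update.

import numpy as np

def reward_weighted_em_step(X, mu, var, pi, rewards):
    # X: (n, d); mu: (K, d); var, pi: (K,); rewards: per-sample values >= 0.
    n, d = X.shape
    logp = np.stack([-0.5 * np.sum((X - m) ** 2, axis=1) / v
                     - 0.5 * d * np.log(2 * np.pi * v)
                     for m, v in zip(mu, var)], axis=1)
    resp = np.exp(logp) * pi                    # E-step: responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp * rewards[:, None]                 # reward-weighted M-step
    Nk = w.sum(axis=0) + 1e-12
    mu = (w.T @ X) / Nk[:, None]
    var = np.array([np.sum(w[:, k] * np.sum((X - mu[k]) ** 2, axis=1))
                    / (d * Nk[k]) for k in range(len(pi))])
    return mu, var, Nk / Nk.sum()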
