Similar Documents (10 results)
1.
We have combined competitive and Hebbian learning in a neural network designed to learn and recall complex spatiotemporal sequences. In such sequences, a particular item may occur more than once, or the sequence may share states with another sequence. Processing of repeated/shared states is a hard problem that occurs very often in the domain of robotics. The proposed model consists of two groups of synaptic weights: competitive interlayer and Hebbian intralayer connections, which are responsible for encoding the spatial and temporal features of the input sequence, respectively. Three additional mechanisms allow the network to deal with shared states: context units, neurons disabled from learning, and redundancy used to encode sequence states. The network operates by determining the current and the next state of the learned sequences. The model is simulated over various sets of robot trajectories in order to evaluate its storage and retrieval abilities, its sequence sampling effects, its robustness to noise, and its fault tolerance.
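The following is a minimal NumPy sketch, not the authors' implementation, of the two weight groups described above: a competitive (winner-take-all) interlayer matrix that encodes spatial states, and a Hebbian intralayer matrix that records temporal transitions between successive winners. Layer sizes, learning rates, and update rules are illustrative assumptions; the context units and redundancy mechanisms for shared states are omitted.

```python
import numpy as np

class CompetitiveHebbianSequence:
    """Sketch: competitive interlayer weights encode spatial states,
    Hebbian intralayer weights encode the temporal order of winners."""

    def __init__(self, n_inputs, n_units, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_units, n_inputs))   # competitive (spatial)
        self.T = np.zeros((n_units, n_units))           # Hebbian (temporal)
        self.lr = lr

    def _winner(self, x):
        # competitive layer: unit whose weight vector is closest to the input
        return int(np.argmin(np.linalg.norm(self.W - x, axis=1)))

    def learn(self, sequence):
        prev = None
        for x in sequence:
            w = self._winner(x)
            self.W[w] += self.lr * (x - self.W[w])       # move winner toward input
            if prev is not None:
                self.T[prev, w] += self.lr               # strengthen prev -> current
            prev = w

    def recall(self, x):
        # determine the current state and predict the next one
        current = self._winner(x)
        next_state = int(np.argmax(self.T[current]))
        return current, next_state
```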

2.
Loop closure detection is important for reducing accumulated error and enabling relocalization in visual simultaneous localization and mapping (VSLAM) systems. To shorten the online running time of loop closure detection while still meeting precision and recall requirements, a fast loop closure detection algorithm based on a broad autoencoder (fast loop closure detection-broad autoencoder, FLCD-BA) is proposed. The detector improves on the broad learning network: it learns data features from the input in an unsupervised manner and applies them to the loop closure detection task. Unlike conventional deep learning methods, the network solves its weight matrix with a pseudoinverse-based ridge regression algorithm and reconstructs itself quickly through incremental learning, avoiding retraining the whole network. The algorithm was evaluated on three public data sets without requiring a GPU, and its training time is substantially shorter than that of bag-of-words models and deep learning methods. Experimental results show high precision and recall in detecting loop closures, with an average running time of only 21 ms per frame, providing a new loop closure detection algorithm for visual SLAM systems.
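A minimal sketch of the closed-form, ridge-regression (pseudoinverse) weight solution that broad-learning-style networks use in place of gradient-based training; the hidden width, regularization constant, and activation are illustrative assumptions rather than the FLCD-BA configuration, and the incremental-reconstruction step is omitted.

```python
import numpy as np

def broad_autoencoder_weights(X, n_hidden=512, lam=1e-3, seed=0):
    """Sketch: random feature mapping followed by a closed-form ridge solve,
    so no iterative backpropagation is needed."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                       # random hidden features
    # ridge regression / pseudoinverse: W_out = (H^T H + lam I)^-1 H^T X
    W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ X)
    return W_in, b, W_out

def encode(X, W_in, b, W_out):
    # reconstruction through the learned weights; descriptors for
    # loop-closure matching could be derived from this mapping
    return np.tanh(X @ W_in + b) @ W_out
```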

3.
Social network information has been widely applied to traditional recommendation, alleviating the data sparsity and cold-start problems to some extent. With the rise of representation learning, algorithms that use representation learning for recommendation have emerged. However, social networks are very large, representation learning scales poorly, and the computation is hard to fit in limited memory. Graph aggregation compresses the graph while preserving the key structural relations and removing secondary or noisy structure, so that representation learning can learn the graph structure effectively and better find similar users for recommendation. First, a graph aggregation algorithm that considers both inter-group and intra-group structure produces the final aggregated graph. Second, random-walk transition probabilities are computed on the aggregated graph, and successor nodes are drawn with biased probabilities to generate node sequences. Finally, the node sequences are fed to skip-gram to learn latent user representations, and the resulting node vectors are integrated into the Bayesian personalized ranking (BPR) model to solve the item ranking problem. Experimental results show that, compared with baselines such as social Bayesian personalized ranking (SBPR) and collaborative user network embedding (CUNE), the method effectively improves precision, recall, and mean average precision in recommendation tasks while maintaining time efficiency.
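A minimal sketch of the middle steps only, assuming the aggregated graph is given as a weighted adjacency dict: biased random walks generate node sequences, which can then be fed to a skip-gram model (for example gensim's Word2Vec with sg=1). The graph aggregation itself and the BPR ranking stage are omitted, and all parameter values are illustrative.

```python
import random

def biased_walks(adj, walk_len=40, walks_per_node=10, seed=0):
    """adj: {node: {neighbor: weight}} for the aggregated graph.
    Successors are drawn with probability proportional to edge weight."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                nbrs = adj[node]
                if not nbrs:
                    break
                node = rng.choices(list(nbrs), weights=list(nbrs.values()))[0]
                walk.append(node)
            walks.append([str(n) for n in walk])
    return walks

# The sequences can then train a skip-gram model, e.g.:
# from gensim.models import Word2Vec
# emb = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0)
```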

4.
Martín-Smith P., Pelayo F. J., Ros E., Prieto A. Neural Processing Letters, 2000, 12(3): 199-213
A model is presented for a neural network with competitive learning that demonstrates the self-organizing capabilities arising from the inclusion of a simple temporal inhibition mechanism within the neural units. This mechanism consists of the inhibition, for a certain time, of the neuron that generates an action potential; such a process is termed Post_Fire inhibition. The neural inhibition period, or degree of inhibition, and the way it is varied during the learning process, represents a decisive factor in the behaviour of the network, in addition to constituting the main basis for the exploitation of the model. Specifically, we show how Post_Fire inhibition is a simple mechanism that promotes the participation of and cooperation between the units comprising the network; it produces self-organized neural responses that reveal spatio–temporal characteristics of input data. Analysis of the inherent properties of the Post_Fire inhibition and the examples presented show its potential for applications such as vector quantization, clustering, pattern recognition, feature extraction and object segmentation. Finally, it should be noted that the Post_Fire inhibition mechanism is treated here as an efficient abstraction of biologically plausible mechanisms, which simplifies its implementation.
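A minimal sketch, under assumed parameters, of competitive learning with a Post_Fire-style mechanism: the unit that fires is barred from winning again for a fixed number of presentations, which forces other units to participate and cooperate.

```python
import numpy as np

def postfire_competitive(X, n_units=8, inhibit_steps=3, lr=0.05, seed=0):
    """Winner-take-all learning in which each winner is inhibited
    (excluded from the competition) for `inhibit_steps` presentations."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))
    refractory = np.zeros(n_units, dtype=int)       # remaining inhibition time
    winners = []
    for x in X:
        d = np.linalg.norm(W - x, axis=1)
        d[refractory > 0] = np.inf                   # inhibited units cannot win
        w = int(np.argmin(d))
        W[w] += lr * (x - W[w])                      # move the winner toward the input
        refractory = np.maximum(refractory - 1, 0)
        refractory[w] = inhibit_steps                # Post_Fire inhibition of the winner
        winners.append(w)
    return W, winners
```

Varying `inhibit_steps` during training plays the role of the inhibition period described above; larger values spread responses over more units.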

5.
Identifying P2P flows is important for network maintenance and operation, and flow identification based on machine learning is currently a hot and difficult research topic. However, building a classification model still requires a large amount of suitable training data, labeling that data depends on domain experts, and the resulting workload, difficulty, and limited practicality remain problems that current research rarely addresses. To tackle this, active learning is used to extract a small number of high-quality training samples for modeling, and a tournament-selection-based sample screening method is proposed in combination with an SVM classifier. Experimental results show that, compared with existing flow identification methods, the approach maintains a high recall rate and a low false-positive rate while relying on only a small number of high-quality training samples, making it better suited to real network environments.
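A minimal sketch, assuming scikit-learn, binary labels (P2P vs. non-P2P), and a pre-labeled pool used to simulate the oracle, of an active-learning loop in the spirit described above: in each round a tournament of randomly drawn unlabeled samples is held and the one the current SVM is least certain about is queried and added to the training set. The tournament size, query budget, and uncertainty measure are illustrative assumptions, not the paper's exact screening rule.

```python
import numpy as np
from sklearn.svm import SVC

def tournament_active_svm(X_pool, y_pool, n_init=20, n_queries=100,
                          tournament=50, seed=0):
    """Each query: draw `tournament` random unlabeled samples and label the
    one closest to the SVM decision boundary (the most uncertain one)."""
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), size=n_init, replace=False))
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    clf = SVC(kernel="rbf")
    for _ in range(n_queries):
        clf.fit(X_pool[labeled], y_pool[labeled])
        cand = rng.choice(unlabeled, size=min(tournament, len(unlabeled)),
                          replace=False)
        margins = np.abs(clf.decision_function(X_pool[cand]))
        pick = int(cand[np.argmin(margins)])         # tournament winner
        labeled.append(pick)
        unlabeled.remove(pick)
    clf.fit(X_pool[labeled], y_pool[labeled])
    return clf
```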

6.
Many neural network methods such as ML-RBF and BP-MLL have been used for multi-label classification. Recently, the extreme learning machine (ELM) has been used as a basic building block for multi-label classification because of its fast training time. The extreme learning machine based autoencoder (ELM-AE) is a neural network method that can reproduce the input signal like an autoencoder, but it does not handle the over-fitting problem in neural networks elegantly. By introducing weight uncertainty into ELM-AE, we can treat the input weights as random variables following a Gaussian distribution, and we propose the weight uncertainty ELM-AE (WuELM-AE). In this paper, a neural network named multi-layer ELM-RBF for multi-label learning (ML-ELM-RBF) is proposed. It is derived from the radial basis function network for multi-label learning (ML-RBF) and WuELM-AE. ML-ELM-RBF first stacks WuELM-AE layers to create a deep network, and then performs clustering analysis on the sample features of each possible class to compose the last hidden layer. ML-ELM-RBF achieves satisfactory results on single-label and multi-label data sets. Experimental results show that WuELM-AE and ML-ELM-RBF are effective learning algorithms.
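A minimal sketch of a single ELM autoencoder layer and of layer stacking in the ML-ELM style: Gaussian random input weights (the "weight uncertainty" idea is reduced here to sampling them from a Gaussian), a closed-form ridge solve for the output weights, and the learned weights reused to project the data into the next representation. Layer sizes and the regularization constant are illustrative, and the final RBF/clustering layer is omitted.

```python
import numpy as np

def elm_ae_layer(X, n_hidden, lam=1e-2, seed=0):
    """One ELM autoencoder: Gaussian random hidden mapping, then a
    ridge-regression solve for output weights beta that reconstruct X."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))      # Gaussian input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ X)
    return beta                                       # shape (n_hidden, n_features)

def stack_elm_ae(X, layer_sizes=(256, 128, 64)):
    """Stack ELM-AE layers: each layer's learned beta projects the data
    into the next, deeper feature representation."""
    rep = X
    for n in layer_sizes:
        beta = elm_ae_layer(rep, n)
        rep = np.tanh(rep @ beta.T)                   # new representation
    return rep
```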

7.
To address the bottleneck of traditional machine learning algorithms in traffic classification, an application traffic classification algorithm based on a one-dimensional convolutional neural network is proposed. The network traffic data set is preprocessed to remove irrelevant data fields and to make the data fit the input requirements of the convolutional neural network. A new one-dimensional convolutional neural network model is designed, and the optimal classification model is constructed in terms of network structure, hyperparameter space, and parameter optimization. The model learns data features autonomously through its convolutional layers, removing the feature selection step of traditional machine-learning-based traffic classification algorithms. Tested on a public network data set, the designed model improves classification accuracy by 16.4% and reduces total classification time by 71.48% compared with a conventional one-dimensional convolutional neural network model, with further gains in per-class precision, recall, and F1 score.
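A minimal PyTorch sketch of a one-dimensional CNN over raw flow bytes, not the paper's exact architecture; the layer sizes, assumed byte length per flow (784), and number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TrafficCNN1D(nn.Module):
    """Sketch: 1-D convolutions learn features directly from byte sequences,
    removing the manual feature-selection step of classical classifiers."""

    def __init__(self, n_classes=10, seq_len=784):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=25, padding=12), nn.ReLU(),
            nn.MaxPool1d(3),
            nn.Conv1d(32, 64, kernel_size=25, padding=12), nn.ReLU(),
            nn.MaxPool1d(3),
        )
        with torch.no_grad():                          # infer the flattened size
            n_flat = self.features(torch.zeros(1, 1, seq_len)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n_flat, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):                              # x: (batch, 1, seq_len)
        return self.classifier(self.features(x))

# usage: logits = TrafficCNN1D()(torch.randn(8, 1, 784))
```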

8.
We consider the hypothesis that systems learning aspects of visual perception may benefit from the use of suitably designed developmental progressions during training. Four models were trained to estimate motion velocities in sequences of visual images. Three of the models were developmental models in the sense that the nature of their visual input changed during the course of training. These models received a relatively impoverished visual input early in training, and the quality of this input improved as training progressed. One model used a coarse-to-multiscale developmental progression (it received coarse-scale motion features early in training and finer-scale features were added to its input as training progressed), another model used a fine-to-multiscale progression, and the third model used a random progression. The final model was nondevelopmental in the sense that the nature of its input remained the same throughout the training period. The simulation results show that the coarse-to-multiscale model performed best. Hypotheses are offered to account for this model's superior performance, and simulation results evaluating these hypotheses are reported. We conclude that suitably designed developmental sequences can be useful to systems learning to estimate motion velocities. The idea that visual development can aid visual learning is a viable hypothesis in need of further study.
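A minimal sketch, with assumed scales and schedule, of the coarse-to-multiscale progression described above: early in training the learner sees only coarse-scale (heavily smoothed) features, and finer scales are appended to the input as training progresses. The Gaussian-smoothing approximation of "scale" and the linear schedule are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_features(x, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Features of a 1-D signal at several scales (coarse -> fine),
    approximated here by Gaussian smoothing."""
    return [gaussian_filter1d(x, s) for s in sigmas]

def coarse_to_multiscale_batches(x, n_epochs=40):
    """Developmental schedule: yield only the coarsest scale at first,
    then progressively add finer scales to the model's input."""
    scales = multiscale_features(x)
    for epoch in range(n_epochs):
        n_active = 1 + int(len(scales) * epoch / n_epochs)   # grows with epoch
        yield epoch, np.stack(scales[:n_active])              # coarse .. fine
```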

9.
A Survey of Self-Organizing Incremental Neural Networks
邱天宇, 申富饶, 赵金熙. 《软件学报》, 2016, 27(9): 2230-2247
The self-organizing incremental neural network (SOINN) is a two-layer neural network based on competitive learning that performs online clustering and topological representation of dynamic input data without prior knowledge, while remaining robust to noisy data. Its incremental nature enables SOINN to discover and learn new patterns appearing in a data stream without affecting previously learned results. SOINN can therefore serve as a general-purpose learning algorithm for a wide range of unsupervised learning problems. With appropriate adjustments to the model and algorithm, it can also be adapted to supervised learning, associative memory, pattern-based reasoning, manifold learning, and other learning scenarios. SOINN has been applied in many fields, including robot intelligence, computer vision, expert systems, and anomaly detection.
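A highly simplified sketch of one SOINN-style input step, under assumed fixed thresholds and aging rules; the real algorithm's two-layer structure, per-node adaptive thresholds, within-class insertion, and density-based noise removal are omitted.

```python
import numpy as np

class SoinnSketch:
    """Single-layer sketch of SOINN's core step: insert a new node when an
    input is far from both nearest nodes, otherwise adapt the first winner,
    connect the two winners, and age away old edges."""

    def __init__(self, threshold=1.0, max_age=50):
        self.nodes, self.edges = [], {}          # edges: {(i, j): age}
        self.threshold, self.max_age = threshold, max_age

    def step(self, x):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) < 2:
            self.nodes.append(x.copy())
            return
        d = [np.linalg.norm(n - x) for n in self.nodes]
        s1, s2 = np.argsort(d)[:2]               # two nearest nodes (winners)
        if d[s1] > self.threshold or d[s2] > self.threshold:
            self.nodes.append(x.copy())          # new pattern -> new node
            return
        self.nodes[s1] += 0.1 * (x - self.nodes[s1])        # adapt the winner
        self.edges[tuple(sorted((int(s1), int(s2))))] = 0   # (re)connect winners
        for e in list(self.edges):                            # age and prune edges
            self.edges[e] += 1
            if self.edges[e] > self.max_age:
                del self.edges[e]
```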

10.
A Feedback Process Neural Network Model and Its Learning Algorithm
A process neuron network model with feedback input is proposed. The model has a three-layer structure in which both the hidden layer and the output layer consist of process neurons. The input layer takes in continuous signals; the hidden layer performs spatial aggregation of the input signals and maps them point by point to the output layer, while feeding its output signals back point by point to the input layer; the output layer carries out temporal and spatial aggregation of the hidden-layer outputs and produces the system output. A learning algorithm for the model is derived by expanding the weight functions over an orthogonal basis. Simulation experiments demonstrate the effectiveness and feasibility of the model.
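A minimal sketch of the key computational trick behind the orthogonal basis expansion, using a Legendre basis as an assumed choice of orthogonal basis: when both the weight function and the input signal are expanded over the same orthogonal basis, the temporal aggregation integral reduces to a weighted dot product of coefficient vectors. The signal, basis size, and weight function below are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs(samples, t, n_basis=8):
    """Least-squares expansion of a sampled signal over Legendre polynomials
    on t in [-1, 1]."""
    return legendre.legfit(t, samples, n_basis - 1)

def process_neuron_output(x_coeffs, w_coeffs):
    """Temporal aggregation integral of w(t) * x(t) over [-1, 1] becomes a
    weighted dot product of coefficients, since integral(P_n^2) = 2/(2n+1)."""
    n = np.arange(len(x_coeffs))
    norms = 2.0 / (2.0 * n + 1.0)
    return float(np.sum(w_coeffs * x_coeffs * norms))

# usage sketch: a continuous input signal sampled on [-1, 1]
t = np.linspace(-1, 1, 200)
x = np.sin(3 * t)                           # example input function
a = legendre_coeffs(x, t)                   # input expansion coefficients
c = np.zeros_like(a); c[1] = 1.0            # assumed weight function w(t) = P_1(t)
y = process_neuron_output(a, c)             # one spatio-temporal aggregation term
```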
