Similar Documents
18 similar documents found (search time: 171 ms)
1.
杨彪  潘炼 《工矿自动化》2013,39(6):66-69
To address the problem that traditional memristor models fail to reproduce the resistance-switching behaviour of the physical memristor reported by HP Labs, an improved memristor model with a threshold voltage is proposed. The model captures the memristor's "activation" phenomenon well, and its characteristics agree with the HP physical device. Based on the improved model, a high-pass filter circuit is designed in which a control signal adjusts the memristance, thereby tuning the filter's cutoff frequency. SPICE simulation results verify the correctness of the design.
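
As an illustration of the threshold idea (the paper's SPICE model and parameters are not given in the abstract, so the constants below are assumptions), here is a minimal Python sketch of an HP-style linear-drift memristor whose state is frozen until the drive exceeds a threshold voltage:

```python
import numpy as np

# Minimal sketch of a threshold-voltage memristor (assumed parameters, not
# the paper's): HP-style linear ion drift whose state only moves when the
# drive exceeds V_TH, reproducing the "activation" behaviour.
R_ON, R_OFF = 100.0, 16e3      # bounding resistances (ohm)
D, MU_V = 10e-9, 1e-14         # device length (m), ion mobility (m^2/(V*s))
V_TH = 1.0                     # activation threshold (V)

def simulate(v, dt, x0=0.1):
    """Integrate the doped-region fraction x under a drive waveform v(t)."""
    x, i_out = x0, np.zeros_like(v)
    for k, vk in enumerate(v):
        R = R_ON * x + R_OFF * (1.0 - x)          # instantaneous memristance
        i_out[k] = vk / R
        if abs(vk) > V_TH:                        # sub-threshold: state frozen
            x = min(max(x + MU_V * R_ON / D**2 * i_out[k] * dt, 0.0), 1.0)
    return i_out

t = np.arange(0.0, 2.0, 1e-4)
i = simulate(2.0 * np.sin(2 * np.pi * t), dt=1e-4)   # pinched-hysteresis loop
print(i.min(), i.max())
```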

2.
To analyse how parameters affect a memristor's memristance, a Matlab simulation model of the memristor was built and a controlled-variable approach applied: the drive amplitude was set to 1 V and 2 V, the voltage frequency to 1 Hz and 2 Hz, the device cross-section to 10 and 25 μm², and the device length to 10 and 20 nm, and a large number of simulations were run for each combination of conditions and parameters. The current-voltage curves and the specific range covered by the memristance were computed in each case, and a unified comparison of the simulation data yields the rules governing how the memristance varies under the different conditions.
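
A sketch of this controlled-variable sweep (a linear-drift model with assumed constants stands in for the paper's Matlab model; the cross-sectional area, which enters only through the bounding resistances, is held fixed here):

```python
import itertools
import numpy as np

# For each combination of amplitude, frequency and device length, integrate
# the memristor over two drive periods and record the range of memristance.
R_ON, R_OFF, MU_V = 100.0, 16e3, 1e-14     # assumed model constants

def memristance_range(amp, freq, length, dt=1e-4, x0=0.5):
    t = np.arange(0.0, 2.0 / freq, dt)
    v = amp * np.sin(2 * np.pi * freq * t)
    x, r_min, r_max = x0, np.inf, -np.inf
    for vk in v:
        R = R_ON * x + R_OFF * (1.0 - x)
        r_min, r_max = min(r_min, R), max(r_max, R)
        x = float(np.clip(x + MU_V * R_ON / length**2 * (vk / R) * dt, 0.0, 1.0))
    return r_min, r_max

for amp, freq, L in itertools.product([1.0, 2.0], [1.0, 2.0], [10e-9, 20e-9]):
    lo, hi = memristance_range(amp, freq, L)
    print(f"A={amp:.0f} V, f={freq:.0f} Hz, D={L*1e9:.0f} nm: R in [{lo:.0f}, {hi:.0f}] ohm")
```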

3.
Synaptic transmission is temperature sensitive, but for lack of an analytical model of temperature-dependent synaptic conductance, temperature effects have been left out of neural-system models. Memristors, with their continuously tunable resistance and nanometre size, are widely regarded as suitable for emulating biological synapses. By improving the memristance retention term and accounting for the effect of temperature on ion migration and diffusion, this paper proposes a new tungsten-oxide memristor model that better matches the device's actual behaviour. First, the improved mathematical model retains the functionality of the original while also fitting the device's actual forgetting curve. Second, using this memristor as a biological synapse coupling two identical Hodgkin-Huxley (HH) neurons captures the effect of temperature on synaptic transmission: a rise in temperature changes the migration and diffusion rates of oxygen vacancies, speeding up the change of memristor conductance and in turn affecting the amplitude of the excitatory postsynaptic potential and the number of spikes fired, and the simulation results agree with neurophysiological experiments. This work shows that the improved tungsten-oxide memristor model is better suited for use as a bio-inspired synapse in neuromorphic systems; it offers a reference for memristor design and fabrication aimed at improving synaptic performance, and a new way to study how temperature affects synaptic transmission.
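
The temperature dependence described here can be sketched with an Arrhenius factor on the ion-migration rate (the functional forms and the activation energy below are assumptions, not the paper's fitted model):

```python
import numpy as np

# Ion migration/diffusion in an oxide memristor is thermally activated, so
# the conductance-update rate can be scaled by an Arrhenius factor.
K_B = 8.617e-5          # Boltzmann constant (eV/K)
E_A = 0.5               # assumed activation energy for oxygen-vacancy motion (eV)

def rate_scale(T, T_ref=293.0):
    """Arrhenius speed-up of ion migration at temperature T versus T_ref."""
    return np.exp(-E_A / (K_B * T)) / np.exp(-E_A / (K_B * T_ref))

def update_conductance(g, v, dt, T, k0=1e-3):
    """One Euler step of a toy conductance model: a drive term accelerated by
    temperature, plus a diffusion-driven relaxation (the 'forgetting' term)."""
    s = rate_scale(T)
    dg = s * k0 * v - s * 0.1 * k0 * (g - 1.0)
    return g + dg * dt

g = 1.0
for _ in range(1000):            # same stimulus applied to a warmer device
    g = update_conductance(g, v=1.0, dt=1e-3, T=310.0)
print(g)                         # conductance rises faster than it would at 293 K
```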

4.
To address the inaccurate mining results of the Hopfield-neural-network-based maximum frequent itemset mining (HNNMFI) algorithm, an improved association rule mining algorithm is proposed that builds the Hopfield network on the current-threshold adaptive memristor (TEAM) model. First, synapses are implemented with the TEAM model: the ability of a threshold memristor's resistance to vary continuously under square-wave voltages is used to set and update the synaptic weights, adapting them to the inputs of the association rule mining algorithm. Second, the energy function of the original algorithm is revised to align with the standard energy function, with memristance representing the weights and with the weights and biases amplified. Finally, an algorithm is designed that generates association rules from the maximum frequent itemsets. In 1000 simulation runs over 10 random transaction sets of size at most 30, the proposed algorithm improves the accuracy of association mining by more than 33.9 percentage points compared with HNNMFI, showing that memristors can effectively improve the accuracy of Hopfield networks in association rule mining.
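
For reference, a TEAM-style update has a current-threshold form along these lines (a dimensionless-state sketch with assumed constants; the paper's fitted parameters are not given in the abstract):

```python
# The state moves only when the current crosses i_off or i_on, which is what
# lets square-wave pulses step a synaptic weight in controlled increments
# while sub-threshold reads leave it untouched.
K_OFF, K_ON = 50.0, -100.0       # state-velocity coefficients (1/s, assumed)
A_OFF, A_ON = 3, 3               # nonlinearity exponents
I_OFF, I_ON = 115e-6, -9e-6      # current thresholds (A)

def dxdt(i):
    if i > I_OFF:
        return K_OFF * (i / I_OFF - 1.0) ** A_OFF
    if i < I_ON:
        return K_ON * (i / I_ON - 1.0) ** A_ON
    return 0.0                    # sub-threshold current: weight is retained

x, dt = 0.5, 1e-4
for _ in range(100):              # one 10 ms programming pulse at 200 uA
    x = min(max(x + dxdt(200e-6) * dt, 0.0), 1.0)
print(x)                          # state stepped up from 0.5
```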

5.
Because the analogue nature of memristor crossbar arrays implements multiply-accumulate operations efficiently, they are widely used to build hardware accelerators for neuromorphic computing systems. However, nanowire resistance causes voltage drops in the resistive network formed by memristors and wires, so the array's output signals are attenuated and network accuracy suffers. This work analyses how the voltage drop depends on memristor state and position and on output current and output position, mitigates the drop through sparse mapping, and further improves output accuracy with output compensation. Simulation results show that the method effectively resolves the problems caused by the voltage drop: the memristive neural network reaches a 95.8% recognition rate on the MNIST handwritten-digit dataset, a 33.5% improvement over the unoptimized design.
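
A first-order sketch of the voltage-drop effect plus per-column output compensation (wire resistance, array size and conductance window are assumptions, and the paper's sparse-mapping step is not reproduced):

```python
import numpy as np

# Each row wire loses voltage across every segment in proportion to the
# current it carries, so far columns and high-conductance cells see less
# than the applied input voltage.
R_WIRE = 2.0                                      # one wire segment (ohm)

def crossbar_out(G, v, r_wire=R_WIRE):
    """Approximate column currents of crossbar G (rows x cols) under IR drop."""
    I = np.zeros(G.shape[1])
    for i in range(G.shape[0]):
        i_cell = v[i] * G[i, :]                   # nominal cell currents
        seg = np.cumsum(i_cell[::-1])[::-1]       # current in each row segment
        drop = r_wire * np.cumsum(seg)            # accumulated drop at column j
        I += (v[i] - drop) * G[i, :]              # degraded cell currents
    return I

rng = np.random.default_rng(0)
G = rng.uniform(1e-5, 1e-3, (64, 32))             # conductances (S)
v_cal, v_test = rng.uniform(0, 1, 64), rng.uniform(0, 1, 64)
scale = (v_cal @ G) / crossbar_out(G, v_cal)      # compensation from a calibration input
err_raw = np.abs(v_test @ G - crossbar_out(G, v_test)).max()
err_comp = np.abs(v_test @ G - crossbar_out(G, v_test) * scale).max()
print(err_raw, err_comp)                          # compensation shrinks the error
```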

6.
A memcapacitor bridge circuit that realises zero, positive, and negative synaptic weights is built from four identical memcapacitors. With three additional bipolar junction transistors, the memcapacitor-bridge weighting circuit can carry out the synaptic operations of a neural cell. Because its entire operation is driven by pulsed input signals, the circuit is highly energy efficient. Synaptic weight programming and synaptic weight multiplication are simulated in Matlab. The simulation results show that the linear memcapacitor-bridge synapse performs essentially on par with the memristor-bridge synapse and outperforms conventional synaptic multiplier circuits.
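
The zero/positive/negative weighting of a bridge can be seen from the division ratios alone (a topology-level sketch; the pulse programming of the memcapacitances is not modelled):

```python
# The input drives two capacitive dividers, and the synaptic weight is the
# difference of their division ratios, so it spans negative, zero and
# positive values as the bridge is unbalanced one way or the other.
def bridge_weight(c1, c2, c3, c4):
    """Differential output ratio of a C1-C2 / C3-C4 memcapacitor bridge."""
    return c1 / (c1 + c2) - c3 / (c3 + c4)

def synapse(v_in, c1, c2, c3, c4):
    return bridge_weight(c1, c2, c3, c4) * v_in   # v_out = w * v_in

print(bridge_weight(1.0, 1.0, 1.0, 1.0))    # balanced bridge -> zero weight
print(bridge_weight(3.0, 1.0, 1.0, 1.0))    # unbalanced -> positive weight
print(bridge_weight(1.0, 3.0, 1.0, 1.0))    # unbalanced -> negative weight
```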

7.
Memristor crossbar arrays and their applications in image processing   Cited by: 2 (self-citations: 0, others: 2)
The memristor is a nonlinear resistor with memory: its resistance varies with the amount of charge or the magnetic flux that has passed through it. As the fourth fundamental circuit element, the memristor has enormous application potential across many fields and may drive a transformation of circuit theory as a whole. Using numerical simulation and circuit modelling, this paper analyses the theoretical foundations and characteristics of memristors and proposes a memristor crossbar array for image storage that can store and output black-and-white, grayscale, and colour images. A series of ...
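
A sketch of the storage idea (the linear pixel-to-conductance mapping and the conductance window are assumptions; a colour image would use three such arrays, one per channel):

```python
import numpy as np

# Each pixel is written as a conductance between G_MIN and G_MAX, and read
# back by applying a small read voltage and sensing the cell current.
G_MIN, G_MAX = 1e-6, 1e-3      # assumed device conductance window (S)
V_READ = 0.1                   # read voltage small enough not to disturb state

def write(img):                # img: grayscale array with values in [0, 1]
    return G_MIN + img * (G_MAX - G_MIN)

def read(G):
    i_cell = V_READ * G                        # sensed per-cell read currents
    return (i_cell / V_READ - G_MIN) / (G_MAX - G_MIN)

img = np.random.default_rng(1).uniform(0.0, 1.0, (8, 8))
assert np.allclose(read(write(img)), img)      # lossless for an ideal device
```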

8.
An analogue circuit model of a memristor exhibiting a slanted figure-eight current-voltage characteristic is designed and used to build a low-pass filter circuit. Multisim simulations were run and a physical prototype built; the simulation and experimental results show that the circuit model correctly reproduces memristor behaviour and that the memristive low-pass filter built from it has time-varying characteristics.
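
The time-varying behaviour follows from the first-order cutoff formula, sketched below with an assumed filter capacitance and memristance excursion:

```python
import numpy as np

# f_c = 1 / (2*pi*M*C): as the memristance M drifts with the signal history,
# the filter's cutoff frequency drifts with it.
C = 100e-9                                    # assumed filter capacitance (F)
for M in (1e3, 5e3, 20e3):                    # memristance along its excursion
    print(f"M = {M:5.0f} ohm -> f_c = {1.0 / (2 * np.pi * M * C):6.0f} Hz")
```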

9.
A study of the stability of memristive neural networks based on MNIST   Cited by: 1 (self-citations: 0, others: 1)
To investigate how memristor stability issues affect the performance of memristive neural networks, a BP neural network using memristors as synapses was built on a memristor model with an equivalent-resistance topology, and trained and tested on the MNIST dataset. The stability issues were emulated by injecting fluctuations into the memristor parameters. It was found that device fluctuation within a certain range actually promotes network convergence, whereas excessive fluctuation slows convergence down. To characterise the critical fluctuation level, the maximum tolerable fluctuation range of each parameter of the memristor model was measured, and from these the admissible ranges of process-level device parameters were derived, providing a reference for the fabrication and selection of memristive devices when implementing memristive neural networks in hardware.
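
A toy version of the fluctuation experiment (the BP network and device model are replaced by linear least squares with jittered update steps, an assumption for illustration; the full network's finding that small fluctuation can help convergence is not reproduced by this toy, which only shows how large fluctuation slows convergence):

```python
import numpy as np

def noisy_step(w, grad, lr, sigma, rng):
    """SGD step whose effective step size jitters like an imperfect device."""
    return w - lr * grad * rng.normal(1.0, sigma, size=w.shape)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true
for sigma in (0.0, 0.1, 0.8):                     # relative fluctuation levels
    w = np.zeros(4)
    for _ in range(200):
        grad = X.T @ (X @ w - y) / len(y)
        w = noisy_step(w, grad, lr=0.1, sigma=sigma, rng=rng)
    print(sigma, np.linalg.norm(w - w_true))      # larger sigma, slower convergence
```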

10.
R&D News
Nanoscale memristive devices with the ability to learn. In recent years the memristor has been viewed as the electronic twin of the synapse. A neuron is typically connected to other neurons through thousands of synapses. Like a synapse, a memristor can remember earlier pulses, and like a neuron it responds to a pulse only once that pulse exceeds a certain threshold. Because of these synapse-like properties, memristors can be used to emulate the brain's learning process, making them especially well suited to building extremely power-efficient, robust processors with the ability to learn.

11.

Memory is one of the essential building blocks of today's computing world and continues to drive new research into its many forms. Hopfield neural networks, one class of artificial neural network, can be modelled to form an associative memory. In this paper, we show a Hopfield neural network constructed with spintronic memristor bridges acting as an associative memory unit. The memristors are nanoscale devices that exhibit synaptic behaviour in an artificial neuromorphic system. The associative behaviour is realised by updating the synaptic weights of the memristive Hopfield network with single- and multiple-bit associativity, simulated in MATLAB. An application of the hardware in the field of cryptography is also proposed.
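
The associative recall being described can be sketched with the standard Hebbian Hopfield rule (the spintronic bridge hardware itself is not modelled):

```python
import numpy as np

# Patterns are stored in the weight matrix by the Hebbian outer-product rule,
# and a corrupted probe is driven back to the nearest stored pattern.
def store(patterns):                       # patterns: (P, N) array of +/-1
    P, N = patterns.shape
    W = patterns.T @ patterns / N
    np.fill_diagonal(W, 0.0)               # no self-connections
    return W

def recall(W, probe, steps=10):
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)                 # synchronous update
        s[s == 0] = 1
    return s

pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                 [1, 1, 1, 1, -1, -1, -1, -1]])
W = store(pats)
noisy = pats[0].copy()
noisy[0] *= -1                             # flip one bit of the first pattern
print(recall(W, noisy))                    # recovers the stored pattern
```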


12.
Neural systems as nonlinear filters   Cited by: 1 (self-citations: 0, others: 1)
Maass W, Sontag ED. Neural Computation, 2000, 12(8): 1743-1772
Experimental data show that biological synapses behave quite differently from the symbolic synapses in all common artificial neural network models. Biological synapses are dynamic; their "weight" changes on a short timescale by several hundred percent depending on the past input to the synapse. In this article we address the question of how these inherent synaptic dynamics (which should not be confused with long-term learning) affect the computational power of a neural network. In particular, we analyze computations on temporal and spatiotemporal patterns, and we give a complete mathematical characterization of all filters that can be approximated by feedforward neural networks with dynamic synapses. It turns out that even with just a single hidden layer, such networks can approximate a very rich class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics. Our characterization result provides, for all nonlinear filters that are approximable by Volterra series, a new complexity hierarchy related to the cost of implementing such filters in neural systems.
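
A minimal model of the "dynamic synapse" the article analyses, in the Tsodyks-Markram style with illustrative parameters (the article's own formalism is more general):

```python
# Each spike releases a fraction u of the remaining resource r, so the
# effective weight swings by large factors depending on recent input history.
TAU_REC, TAU_FAC, U0, W = 0.2, 0.6, 0.2, 1.0   # s, s, baseline release, weight

def efficacies(spike_times, dt=1e-3):
    r, u, t, out = 1.0, U0, 0.0, []
    for ts in spike_times:
        for _ in range(int(round((ts - t) / dt))):   # relax between spikes
            r += (1.0 - r) / TAU_REC * dt            # resource recovery
            u += (U0 - u) / TAU_FAC * dt             # facilitation decay
        t = ts
        u += U0 * (1.0 - u)                          # facilitation on this spike
        out.append(W * u * r)                        # momentary synaptic efficacy
        r -= u * r                                   # depression: resource consumed
    return out

print(efficacies([0.10, 0.15, 0.20, 0.25, 0.30]))    # efficacy varies spike to spike
```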

13.
A distributed and locally reprogrammable address-event receiver has been designed, in which incoming address-events are monitored simultaneously by all synapses, allowing for arbitrarily large axonal fan-out without reducing channel capacity. Synapses can change the address of their presynaptic neuron, allowing the distributed implementation of a biologically realistic learning rule, with both synapse formation and elimination (synaptic rewiring). Probabilistic synapse formation leads to topographic map development, made possible by a cross-chip current-mode calculation of Euclidean distance. As well as synaptic plasticity in rewiring, synapses change weights using a competitive Hebbian learning rule (spike-timing-dependent plasticity). The weight plasticity allows receptive fields to be modified based on spatio-temporal correlations in the inputs, and the rewiring plasticity allows these modifications to become embedded in the network topology.
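
The weight-plasticity rule mentioned, pair-based spike-timing-dependent plasticity, has this canonical form (amplitudes and time constant are assumed, not the chip's values):

```python
import math

# A presynaptic spike shortly before a postsynaptic spike potentiates the
# synapse; the reverse order depresses it.
A_PLUS, A_MINUS, TAU = 0.010, 0.012, 0.020   # amplitudes and 20 ms window

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre                          # seconds
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)      # pre before post: LTP
    return -A_MINUS * math.exp(dt / TAU)         # post before pre: LTD

print(stdp_dw(0.000, 0.005))   # positive weight change
print(stdp_dw(0.005, 0.000))   # negative weight change
```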

14.
Synapses play a central role in neural computation: the strengths of synaptic connections determine the function of a neural circuit. In conventional models of computation, synaptic strength is assumed to be a static quantity that changes only on the slow timescale of learning. In biological systems, however, synaptic strength undergoes dynamic modulation on rapid timescales through mechanisms such as short term facilitation and depression. Here we describe a general model of computation that exploits dynamic synapses, and use a backpropagation-like algorithm to adjust the synaptic parameters. We show that such gradient descent suffices to approximate a given quadratic filter by a rather small neural system with dynamic synapses. We also compare our network model to artificial neural networks designed for time series processing. Our numerical results are complemented by theoretical analyses which show that even with just a single hidden layer such networks can approximate a surprisingly large class of nonlinear filters: all filters that can be characterized by Volterra series. This result is robust with regard to various changes in the model for synaptic dynamics.
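
For concreteness, the class of target being approximated, a quadratic (second-order Volterra) filter, looks like this (the kernels h1 and h2 below are illustrative placeholders):

```python
import numpy as np

def quadratic_filter(x, h1, h2):
    """y(t) = sum_i h1[i] x[t-i] + sum_{i,j} h2[i,j] x[t-i] x[t-j]."""
    M = len(h1)
    y = np.zeros(len(x))
    for t in range(M - 1, len(x)):
        window = x[t - M + 1:t + 1][::-1]       # x[t], x[t-1], ..., x[t-M+1]
        y[t] = h1 @ window + window @ h2 @ window
    return y

rng = np.random.default_rng(2)
x = rng.normal(size=100)
h1, h2 = rng.normal(size=5) * 0.5, rng.normal(size=(5, 5)) * 0.1
y = quadratic_filter(x, h1, h2)                 # training target for the network
```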

15.
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
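
A sketch of such a dynamic stochastic synapse (the release-probability dynamics below are assumptions in the spirit of the abstract; the paper's parameterisation is richer):

```python
import numpy as np

# Each presynaptic spike releases a vesicle with probability p, and p itself
# facilitates with use and relaxes back to baseline between spikes.
def release_train(spike_times, p0=0.3, f=0.15, tau=0.3, seed=0):
    rng = np.random.default_rng(seed)
    p, t_last, released = p0, None, []
    for t in spike_times:
        if t_last is not None:
            p = p0 + (p - p0) * np.exp(-(t - t_last) / tau)   # relax to baseline
        released.append(bool(rng.random() < p))               # stochastic release
        p = p + f * (1.0 - p)                                 # use-dependent facilitation
        t_last = t
    return released

print(release_train([0.10, 0.12, 0.14, 0.50, 1.20]))
```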

16.
A new methodology for neural learning is presented. Only a single iteration is needed to train a feed-forward network with near-optimal results. This is achieved by introducing a key modification to the conventional multi-layer architecture. A virtual input layer is implemented, which is connected to the nominal input layer by a special nonlinear transfer function, and to the first hidden layer by regular (linear) synapses. A sequence of alternating direction singular value decompositions is then used to determine precisely the inter-layer synaptic weights. This computational paradigm exploits the known separability of the linear (inter-layer propagation) and nonlinear (neuron activation) aspects of information transfer within a neural network. Examples show that the trained neural networks generalize well.
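
The closed-form spirit of the method can be sketched as a fixed nonlinear virtual layer followed by an SVD-based least-squares solve (the shapes and the tanh nonlinearity are assumptions, and the alternating-direction refinement is omitted):

```python
import numpy as np

# Push inputs through a fixed nonlinear "virtual layer", then solve the
# remaining linear synaptic weights in closed form instead of iterating
# gradient descent.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))                     # nominal inputs
Y = np.sin(X @ rng.normal(size=(8, 3)))           # toy targets

H = np.tanh(X @ rng.normal(size=(8, 64)))         # virtual-layer activations
W, *_ = np.linalg.lstsq(H, Y, rcond=None)         # SVD-based solve, one "iteration"
print(np.linalg.norm(H @ W - Y) / np.linalg.norm(Y))   # relative training residual
```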

17.
Hardware realization is very important when considering wider applications of neural networks (NNs). In particular, hardware NNs with a learning ability are intriguing. In these networks, the learning scheme is of much interest, with the backpropagation method being widely used. A gradient type of learning rule is not easy to realize in an electronic system, since calculation of the gradients for all weights in the network is very difficult. More suitable is the simultaneous perturbation method, since the learning rule requires only forward operations of the network to modify weights, unlike the backpropagation method. In addition, pulse density NN systems have some promising properties, as they are robust to noisy situations and can handle analog quantities based on the digital circuits. We describe a field-programmable gate array realization of a pulse density NN using the simultaneous perturbation method as the learning scheme. We confirm the viability of the design and the operation of the actual NN system through some examples.
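Simultaneous perturbation needs only two forward evaluations per update, regardless of the number of weights, as this SPSA-style sketch shows (constants assumed; the FPGA pulse-density arithmetic is not modelled):

```python
import numpy as np

# Every weight is perturbed at once by a random +/-c vector, and two forward
# evaluations of the loss (the only operations the hardware must support)
# yield a gradient estimate for all weights simultaneously.
def spsa_step(w, loss, lr=0.05, c=0.01, rng=None):
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    g_hat = (loss(w + c * delta) - loss(w - c * delta)) / (2.0 * c) * delta
    return w - lr * g_hat

target = np.array([1.0, -2.0, 0.5])
loss = lambda w: float(np.sum((w - target) ** 2))
rng = np.random.default_rng(4)
w = np.zeros(3)
for _ in range(500):
    w = spsa_step(w, loss, rng=rng)
print(w)    # approaches target using forward passes only, no backpropagation
```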

18.
We study pulse-coupled neural networks that satisfy only two assumptions: each isolated neuron fires periodically, and the neurons are weakly connected. Each such network can be transformed by a piecewise continuous change of variables into a phase model, whose synchronization behavior and oscillatory associative properties are easier to analyze and understand. Using the phase model, we can predict whether a given pulse-coupled network has oscillatory associative memory, or what minimal adjustments should be made so that it can acquire memory. In the search for such minimal adjustments we obtain a large class of simple pulse-coupled neural networks that can memorize and reproduce synchronized temporal patterns the same way a Hopfield network does with static patterns. The learning occurs via modification of synaptic weights and/or synaptic transmission delays.
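
A sketch of the phase-model reduction with Hebbian couplings (a Kuramoto-type form is assumed; the paper's change of variables from the pulse-coupled network is not reproduced):

```python
import numpy as np

# Each neuron keeps only a phase; couplings C are set Hebbian-style from a
# +/-1 pattern, and the network settles into phase relations reproducing it.
def simulate_phases(theta0, C, omega=2 * np.pi, dt=1e-3, steps=3000):
    theta = theta0.copy()
    for _ in range(steps):
        coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += (omega + coupling) * dt
    return theta

xi = np.array([1, 1, -1, -1, 1])               # pattern to memorize
C = 2.0 * np.outer(xi, xi)                     # Hebbian phase couplings
np.fill_diagonal(C, 0.0)
rng = np.random.default_rng(5)
theta = simulate_phases(rng.uniform(0, 2 * np.pi, 5), C)
print(np.cos(theta - theta[0]))                # ~ +1 in-phase / -1 anti-phase = xi
```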
