Similar Documents
A total of 20 similar documents were found (search time: 375 ms)
1.
This paper describes concepts that optimize an on-chip learning algorithm for the implementation of VLSI neural networks with conventional technologies. The network considered comprises an analog feedforward network with digital weights and update circuitry, although many of the concepts are also valid for analog weights. A general, semi-parallel form of perturbation learning is used to accelerate hidden-layer updates, while the infinity-norm error measure greatly simplifies error detection. Dynamic gain adaptation, coupled with an annealed learning rate, produces consistent convergence and maximizes the effective resolution of the bounded weights. The use of logarithmic analog-to-digital conversion during the backpropagation phase obviates the need for digital multipliers in the update circuitry without compromising learning quality. These concepts have been validated through network simulations of continuous mapping problems.
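A minimal sketch of the flavor of update described here, combining a perturbation-based gradient estimate with the infinity-norm error and a bounded weight; all names, bounds, and constants below are our own illustrative assumptions, not taken from the paper:

```python
import numpy as np

def inf_norm_error(outputs, targets):
    # infinity-norm error: only the largest output deviation is measured
    return np.max(np.abs(outputs - targets))

def perturbation_step(w, i, forward, targets, delta=1e-3, lr=0.1):
    """Estimate dE/dw_i by perturbing weight i, then take a bounded step."""
    base = inf_norm_error(forward(w), targets)
    w_pert = w.copy()
    w_pert[i] += delta
    grad_est = (inf_norm_error(forward(w_pert), targets) - base) / delta
    w_new = w.copy()
    w_new[i] = np.clip(w[i] - lr * grad_est, -1.0, 1.0)  # bounded weight
    return w_new

# an annealed learning rate would shrink lr over epochs,
# e.g. lr_t = lr0 / (1 + t / tau)
```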

2.
Neural networks require VLSI implementations for on-board systems. Size and real-time considerations show that on-chip learning is necessary for a large range of applications. A flexible digital design is preferred here to more compact analog or optical realizations. As opposed to many current implementations, the two-dimensional systolic array system presented is an attempt to define a novel computer architecture inspired by neurobiology. It is composed of generic building blocks for basic operations rather than predefined neural models. A full-custom VLSI prototype has demonstrated the efficacy of this approach. A complete board dedicated to Hopfield's model has been designed using these building blocks. Beyond the very specific application presented, the underlying principles can be used to design efficient hardware for most neural network models.

3.
Electronic neuromorphic devices with on-chip, on-line learning should be able to modify quickly the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full-custom analog very large-scale integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses which can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During the stimulation the synapse undergoes quick temporary changes through the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate a randomness suited to drive the stochastic selection mechanism. We check by a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
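A behavioral sketch of stochastic learning with bistable synapses in the spirit of this abstract: each synapse has only two stable states, and coincident pre/post activity triggers a state transition only with a small probability, so learning is slow and stored memories decay very slowly. The transition probabilities and the plasticity conditions are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def update_bistable(w, pre, post, p_up=0.05, p_down=0.05):
    """w, pre, post: same-shape arrays, one entry per synapse.
    w holds states in {0, 1}; pre/post hold binary activity."""
    w = w.copy()
    potentiate = (pre == 1) & (post == 1) & (w == 0)  # coincident activity
    depress = (pre == 1) & (post == 0) & (w == 1)     # pre without post
    w[potentiate & (rng.random(w.shape) < p_up)] = 1   # rare stochastic LTP
    w[depress & (rng.random(w.shape) < p_down)] = 0    # rare stochastic LTD
    return w
```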

4.
In this work we propose two techniques for improving VLSI implementations of artificial neural networks (ANNs). By making use of two kinds of processing elements (PEs), one dedicated to the basic operations (addition and multiplication) and another to evaluating the activation function, the total time and cost of the VLSI array implementation of ANNs can be decreased by a factor of two compared with previous work. Taking advantage of the residue number system (RNS), the efficiency of each PE can be further increased. Two RNS-based array processor designs are proposed. The first is built from look-up tables; the second is constructed from binary adders with mixed-radix conversion (MRC), so the hardware is simple and high-speed operation is obtained. The proposed techniques are general enough to be extended to other loading and learning algorithms.
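To make the RNS idea concrete, here is a toy illustration of residue arithmetic and mixed-radix conversion: multiplication proceeds digit-parallel in each modulus channel with no carries between channels. The moduli set is an arbitrary example, not the one from the paper:

```python
MODULI = (7, 11, 13)  # pairwise coprime; dynamic range = 7 * 11 * 13 = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # digit-parallel multiplication: each channel works independently
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns_mrc(r):
    """Mixed-radix conversion (MRC) back to an integer."""
    m1, m2, m3 = MODULI
    a1 = r[0]
    a2 = ((r[1] - a1) * pow(m1, -1, m2)) % m2
    a3 = ((r[2] - a1 - a2 * m1) * pow(m1 * m2, -1, m3)) % m3
    return a1 + a2 * m1 + a3 * m1 * m2

assert from_rns_mrc(rns_mul(to_rns(12), to_rns(34))) == 12 * 34
```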

5.
Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient, instead of back-propagation, is more economical for parallel analog implementations, and that this technique (called 'weight perturbation') is suitable for multilayer recurrent networks as well. A discrete-level analog implementation is presented, with the training of an XOR network as an example.
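The following self-contained sketch trains a tiny XOR network (in ±1 coding) purely by weight perturbation: the gradient is approximated by a finite difference of the network error, so no backward pass is needed. The network shape, perturbation size, and learning rate are our own choices for illustration:

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([-1, 1, 1, -1], float)           # XOR targets in +/-1 coding

def forward(w, x):
    h = np.tanh(x @ w[:4].reshape(2, 2) + w[4:6])  # 2 hidden units
    return np.tanh(h @ w[6:8] + w[8])              # 1 output unit

def error(w):
    return np.mean((np.array([forward(w, x) for x in X]) - T) ** 2)

w = np.random.default_rng(1).normal(0, 0.5, 9)
pert, lr = 1e-4, 0.5
for _ in range(5000):
    for i in range(w.size):
        e0 = error(w)
        w[i] += pert
        grad_i = (error(w) - e0) / pert   # finite-difference gradient
        w[i] -= pert + lr * grad_i        # undo perturbation, then descend
```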

6.
Synapses are crucial elements for computation and information transfer in both real and artificial neural systems. Recent experimental findings and theoretical models of pulse-based neural networks suggest that synaptic dynamics can play a crucial role in learning neural codes and encoding spatiotemporal spike patterns. Within the context of hardware implementations of pulse-based neural networks, several analog VLSI circuits modeling synaptic functionality have been proposed. We present an overview of previously proposed circuits and describe a novel analog VLSI synaptic circuit suitable for integration in large VLSI spike-based neural systems. The proposed circuit is based on a computational model that fits real postsynaptic currents with exponentials. We present experimental data showing how the circuit exhibits realistic dynamics, and show how it can be connected to additional modules to implement a wide range of synaptic properties.
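A sketch of the computational model named here, with each presynaptic spike injecting a current that decays exponentially, so the total postsynaptic current is a sum of exponentials. The time constant and peak current are illustrative values, not the paper's:

```python
import numpy as np

def epsc(spike_times, t, tau=0.010, i_peak=1e-9):
    """Total postsynaptic current at time t (seconds) from past spikes."""
    return sum(i_peak * np.exp(-(t - s) / tau) for s in spike_times if s <= t)

def epsc_step(i_prev, spike_now, dt=1e-4, tau=0.010, i_peak=1e-9):
    # discrete-time equivalent: exponential decay plus a jump per spike
    return i_prev * np.exp(-dt / tau) + (i_peak if spike_now else 0.0)
```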

7.
The demands on offsets in analog weight-adaptation circuitry are very high for on-chip learning feed-forward neural networks that use a back-propagation type of learning rule. Exceeding the specifications for weight-adaptation offsets prevents the weights from converging to their optimum, which significantly degrades learning behavior. This letter presents a circuit, including a tuning system, that minimizes weight-adaptation offsets and can be used to implement analog on-chip back-propagation learning in feed-forward neural networks.
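A toy demonstration of why these offsets matter: with a constant offset added to every weight update, gradient descent stalls where grad = -offset/lr rather than at the true optimum. All values below are our own illustration:

```python
lr, offset = 0.1, 0.01
w = 5.0                       # minimize E(w) = 0.5 * w**2; true optimum w = 0
for _ in range(500):
    grad = w                  # dE/dw
    w -= lr * grad + offset   # constant offset models imperfect circuitry
print(w)                      # settles near -offset/lr = -0.1, not at 0
```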

8.
In this paper, the implementation of a new digital architecture for a multilayer neural network (MNN) with on-chip learning is discussed. The advantage of the digital approach is that it can use state-of-the-art VLSI and ULSI implementation techniques. One of the major hardware problems in implementing a neural network is the activation function of the neurons. The proposed MNN uses a simple function as the neuron's activation function to reduce circuit size. Moreover, the proposed MNN has on-chip learning capability: the backpropagation algorithm is modified for effective hardware implementation. The proposed MNN is implemented on a field-programmable gate array (FPGA) to evaluate the learning performance and circuit size. This work was presented, in part, at the Third International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–21, 1998.
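The abstract does not specify which simple activation function was used; one common hardware-friendly choice is a piecewise-linear "hard sigmoid", which needs only shifts, adds, and clamps rather than an exponential:

```python
def hard_sigmoid(x):
    # clamp a scaled line into [0, 1]; the 1/4 slope is a shift in fixed point
    return min(1.0, max(0.0, 0.25 * x + 0.5))
```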

9.
The geometrical learning of binary neural networks
In this paper, a learning algorithm called expand-and-truncate learning (ETL) is proposed to train multilayer binary neural networks (BNNs) with guaranteed convergence for any binary-to-binary mapping. The most significant contribution of this paper is the development of a learning algorithm for three-layer BNNs that guarantees convergence while automatically determining the required number of neurons in the hidden layer. Furthermore, the learning speed of the proposed ETL algorithm is much faster than that of the backpropagation algorithm in a binary field. Neurons in the proposed BNN employ a hard-limiter activation function, with only integer weights and integer thresholds; this greatly facilitates actual hardware implementation of the proposed BNN using currently available digital VLSI technology.
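The neuron model ETL trains is a linear threshold gate over binary inputs with integer weights and an integer threshold; a minimal sketch (the example weights are ours):

```python
import numpy as np

def hard_limiter_neuron(x, w, theta):
    """x: binary inputs; w: integer weights; theta: integer threshold."""
    return 1 if int(np.dot(w, x)) >= theta else 0

# e.g. a 2-input AND gate: w = [1, 1], theta = 2
assert hard_limiter_neuron(np.array([1, 1]), np.array([1, 1]), 2) == 1
assert hard_limiter_neuron(np.array([1, 0]), np.array([1, 1]), 2) == 0
```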

10.
A geometric learning algorithm for classification problems in binary neural networks
Zhu Daming, Ma Shaohan. Journal of Software, 1997, 8(8): 622-629.
Classification problems occupy an important position in feedforward neural network research. Using a geometric approach, this paper presents a new learning algorithm for the K-class (K ≥ 2) classification problem in binary neural networks. By analyzing the geometric positions and class labels of the training points, the algorithm constructs a four-layer feedforward neural network that classifies the input vectors. Its advantages are that learning is guaranteed to converge, and converges faster than the BP algorithm and several other existing feedforward network learning algorithms, and that the algorithm determines the network structure and achieves exact vector classification. Moreover, the constructed network consists of linear threshold units whose synaptic weights and thresholds are all integers, making it particularly suitable for integrated circuit implementation.

11.
Guan Huiwei. Journal of Software, 1996, 7(2): 111-118.
For software simulation of artificial neural network models, the design, implementation, and performance evaluation of parallel algorithms are of great significance for the development of neurocomputers and special-purpose neural-network VLSI chips. This paper first constructs a distributed-memory, message-passing multiprocessor system as a platform for software simulation of artificial neural networks, realized as a multi-Transputer network with a ring topology. It then proposes and implements a DBP parallel computation model suited to a dynamic ring topology, covering the partitioning and mapping strategy for neurons, and the multiprocessor parallel algorithms for activation propagation, error back-propagation, and weight updates in DBP. Finally, the time complexity and speedup of the DBP algorithm are discussed.
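A schematic of the neuron-partitioning idea, simulated rather than message-passing: each of P processors owns a slice of every layer's neurons, computes its slice of activations, and the slices are then circulated and combined (here simply concatenated) before the next layer, as a ring of Transputers would do. No Transputer specifics are implied:

```python
import numpy as np

P = 4  # number of processors in the ring (our choice)

def parallel_layer(x, W, b, act=np.tanh):
    cols = np.array_split(np.arange(W.shape[1]), P)   # neuron slices
    partial = [act(x @ W[:, c] + b[c]) for c in cols] # one slice per node
    return np.concatenate(partial)                    # ring all-gather
```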

12.
A new dynamic fuzzy self-organizing neural network model (TGFCM) is proposed and applied to text clustering. To address the problem that traditional fuzzy self-organizing networks require the number of clusters to be fixed in advance, TGFCM adopts the structure of the dynamic self-organizing network TGSOM, which determines the number of clusters automatically. A new learning-rate formula is introduced into the TGSOM structure, and the fuzzy cluster centers serve as the weights of the corresponding TGFCM neurons, which improves clustering accuracy and speeds up convergence.
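The abstract does not give TGFCM's learning-rate formula, so the sketch below only illustrates the stated idea of using fuzzy cluster centers as neuron weights, via standard fuzzy c-means membership and center updates; it is not the paper's algorithm:

```python
import numpy as np

def fcm_step(X, centers, m=2.0):
    """One fuzzy c-means step; the centers play the role of map weights."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    um = u ** m
    new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return new_centers, u
```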

13.
In real-life applications of multilayer neural networks, the scale of integration, processing speed, and manufacturability are of key importance. A simple analog-signal synapse model is implemented on a standard 0.35 μm CMOS process requiring no floating-gate capability. A neural matrix of 2176 analog current-mode synapses, arranged in eight layers of 16 neurons with 16 inputs each, is constructed for a fingerprint feature-extraction application. Synapse weights are stored on analog storage capacitors, and the synapse's nonlinearity with respect to weight is investigated. The capability of the synapse to operate in feedforward and learning modes is studied and demonstrated. The effect of the synapse's inherent quadratic nonlinearity on learning convergence and on the optimization of vector direction is analyzed. Transistor-level analog simulations verify the hardware circuit, and system-level MATLAB simulations verify the synapse's mathematical model. The conclusion reached is that the proposed implementation is very suitable for large-scale artificial neural networks, especially if on-chip integration with other products on a standard CMOS process is required.
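A sketch of training "through" a quadratic weight nonlinearity: the forward pass uses a nonideal synapse model, so gradient-based learning compensates for it. The coefficient and the tanh output stage are our own illustrative assumptions:

```python
import numpy as np

def synapse(x, w, alpha=0.15):
    # quadratic nonlinearity in the stored weight (alpha is illustrative)
    return x * (w + alpha * w * w)

def layer(x, W):
    # current-mode summing: synapse outputs add on a shared bus
    return np.tanh(synapse(x[None, :], W).sum(axis=1))
```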

14.
We analyze the effects of analog noise on the synaptic arithmetic during multilayer perceptron training by expanding the cost function to include noise-mediated terms. Predictions are made in the light of these calculations suggesting that fault tolerance, training quality, and training trajectory should be improved by such noise injection. Extensive simulation experiments on two distinct classification problems substantiate the claims. The results appear to be perfectly general for all training schemes where weights are adjusted incrementally, and have wide-ranging implications for all applications, particularly those involving "inaccurate" analog neural VLSI.
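A minimal sketch of the noise model analyzed here: zero-mean noise is injected into the weights during each incremental update, so the gradient is evaluated at a perturbed copy of the weights. The noise level is an assumption of ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sgd_step(w, grad, lr=0.01, sigma=0.02):
    # evaluate the gradient at a noise-perturbed copy of the weights,
    # then update the stored weights incrementally
    return w - lr * grad(w + rng.normal(0.0, sigma, size=w.shape))
```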

15.
Implementing a neural network on a digital or mixed analog-digital chip entails quantizing the synaptic weight dynamics. This paper addresses this topic for Kohonen's self-organizing maps. We first study qualitatively how quantization affects convergence and the map's properties, and deduce from this analysis how to choose the parameters of the network (adaptation gain and neighborhood). We show that a spatially decreasing neighborhood function is far preferable to the usual rectangular neighborhood function because of the weight quantization. Based on these results, an analog nonlinear network implementing this spatially decreasing neighborhood function, integrated in a standard CMOS technology, is then presented. It can be used in a mixed analog-digital circuit implementation.
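A sketch contrasting the two neighborhood functions under weight quantization: updates smaller than one quantization step are lost, and a spatially decreasing (here Gaussian) neighborhood keeps graded updates alive where a rectangular one produces all-or-nothing steps. The 8-bit quantum and widths are our own assumptions:

```python
import numpy as np

Q = 1.0 / 256                      # one step of an 8-bit weight (assumed)

def quantize(w):
    return np.round(w / Q) * Q

def som_step(weights, grid, x, lr=0.05, width=2.0, rectangular=False):
    """weights: (N, D) map; grid: (N,) float positions; x: (D,) input."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    dist = np.abs(grid - grid[bmu])
    if rectangular:
        h = (dist <= width).astype(float)          # all-or-nothing updates
    else:
        h = np.exp(-dist ** 2 / (2 * width ** 2))  # graded, spatially decreasing
    return quantize(weights + lr * h[:, None] * (x - weights))
```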

16.
A new digital architecture for a frequency-based multilayer neural network (MNN) with on-chip learning is proposed. As the signal level is expressed by a frequency, the multiplier is replaced by a simple frequency converter, and the neuron unit uses a voting circuit as the nonlinear adder to improve the nonlinear characteristic. In addition, a pulse multiplier is employed to enhance the neuron characteristics. The backpropagation algorithm is modified for on-chip learning. The proposed MNN architecture is implemented on field-programmable gate arrays (FPGAs), and various experiments are conducted to test the performance of the system. The experimental results show that the proposed neuron has a very good nonlinear function owing to the voting circuit. The learning behavior of the MNN with on-chip learning is also tested by experiments, which show that the proposed MNN has good learning and generalization capabilities. The simple, modular structure of the proposed MNN leads to a massively parallel and flexible network architecture that is well suited for VLSI implementation.
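As a generic illustration of why pulse-rate coding removes the multiplier (this is standard pulse-stream arithmetic, not the paper's exact frequency converter): if two unipolar values are coded as pulse probabilities per clock tick, a single AND gate yields a stream whose rate is their product:

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse_stream(p, n=10_000):
    return rng.random(n) < p            # a pulse occurs with probability p

a, b = pulse_stream(0.6), pulse_stream(0.5)
prod = a & b                            # an AND gate acts as the multiplier
print(prod.mean())                      # ~0.30 = 0.6 * 0.5
```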

17.
CMOS current-mode neural associative memory design with on-chip learning
Based on the Grossberg mathematical model called the outstar, a modular neural net with on-chip learning and memory is designed and analyzed. The outstar is the minimal anatomy that can interpret classical conditioning or associative memory, and it can also serve as a general-purpose pattern-learning device. To realize the outstar, CMOS (complementary metal-oxide-semiconductor) current-mode analog dividers are developed to implement a special memory called the ratio-type memory, and a CMOS current-mode analog multiplier is used to implement the correlation. The implemented CMOS outstar can store the relative ratio values of the trained weights on-chip for a long time, and it can be modularized to construct general neural nets. HSPICE (a circuit simulator from Meta Software, Inc.) simulation results for the CMOS outstar circuits as associative memory and pattern learner have successfully verified their functions, and measurements of the fabricated CMOS outstar circuits have confirmed the ratio memory and the on-chip learning capability. Furthermore, it has been shown that the storage time of the ratio memory can be as long as five minutes without refresh, and that the outstar can enhance the contrast of the stored pattern over a long period. This makes the outstar circuits quite feasible in many applications.
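A behavioral sketch of outstar learning and the ratio memory it feeds: while the source neuron is active, each border weight drifts toward its input, and what is ultimately stored is the relative ratio of the trained weights, which is what the chip's analog dividers compute. The learning rate and pattern are our own illustrations:

```python
import numpy as np

def outstar_step(w, x, y_source, rate=0.1):
    # while the source neuron fires, weights track the input pattern
    return w + rate * y_source * (x - w)

def ratio_memory(w):
    return w / w.sum()                  # only relative ratios are stored

w = np.zeros(4)
pattern = np.array([4.0, 2.0, 1.0, 1.0])
for _ in range(100):
    w = outstar_step(w, pattern, y_source=1.0)
print(ratio_memory(w))                  # -> [0.5, 0.25, 0.125, 0.125]
```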

18.
Developed for the VLSI implementation of neural network models, our novel analog architecture adds flexibility and adaptability by incorporating digital processing capabilities. Its systolic-based architecture avoids static storage of analog values by transferring the activation values through the chip's processing units. This proposed combination of analog and digital technologies produces a densely packed, high-speed, scalable architecture, designed to easily accommodate learning capabilities.

19.
A neural network IC based on dynamic charge injection is described. The hardware design is space- and power-efficient, and achieves massive parallelism of analog inner products via charge-based multipliers and spatially distributed summing buses. Basic synaptic cells are constructed of exponential pulse-decay modulation (EPDM) dynamic injection multipliers operating sequentially on propagating signal vectors and locally stored analog weights. Individually adjustable gain controls on each neuron reduce the effects of limited weight dynamic range. A hardware simulator/trainer has been developed that incorporates the physical (nonideal) characteristics of actual circuit components into the training process, thus absorbing nonlinearities and parametric deviations into the macroscopic performance of the network. Results show that charge-based techniques may achieve a high degree of neural density and throughput using standard CMOS processes.

20.
A Web user access-pattern mining algorithm based on dual Kohonen neural networks
Based on the properties of the learning phases of the Kohonen self-organizing feature map, this paper combines two Kohonen networks into a new self-organizing training and mining model: a coarse-adjustment pass is run first to speed up learning, followed immediately by a fine-adjustment pass to improve learning precision. Experimental results show that, compared with a standard Kohonen network, the proposed dual-Kohonen mining model improves both training speed and convergence, yields better clustering, and provides a feasible way to mine users' multiple interests.
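A sketch of the coarse-then-fine idea: the same SOM update rule is run in two phases, first with a large learning rate and wide neighborhood (fast, coarse ordering), then with small values (slow, fine convergence). The schedule constants below are our own assumptions:

```python
import numpy as np

def train_som(weights, grid, data, epochs, lr, width):
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            h = np.exp(-np.abs(grid - grid[bmu]) ** 2 / (2 * width ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# phase 1: coarse adjustment, then phase 2: fine tuning, e.g.
# weights = train_som(weights, grid, data, epochs=10, lr=0.5, width=3.0)
# weights = train_som(weights, grid, data, epochs=50, lr=0.05, width=0.5)
```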
