Similar Documents
20 similar documents retrieved (search time: 525 ms)
1.
Due to the variety of architectures that need to be considered while attempting solutions to various problems using neural networks, the implementation of a neural network with programmable topology and programmable weights has been undertaken. A new circuit block, the distributed neuron-synapse, has been used to implement a 1024-synapse reconfigurable network on a VLSI chip. In order to evaluate the performance of the VLSI chip, a complete test setup has been built, consisting of hardware for configuring the chip, programming the synaptic weights, presenting analog input vectors to the chip, and recording the outputs of the chip. Following the performance verification of each circuit block on the chip, various sample problems were solved. In each problem the synaptic weights were determined by training the neural network with a gradient-based learning algorithm incorporated in the experimental test setup. The results of this work indicate that reconfigurable neural networks built using distributed neuron-synapses can be used to solve various problems efficiently.
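The gradient-based training loop described above can be illustrated with a minimal software sketch. This is an illustrative delta-rule trainer for a single sigmoid neuron (learning the AND function), not the paper's actual chip-in-the-loop procedure; the learning rate, epoch count, and target function are assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training set: logical AND of two inputs (third input is a fixed bias of 1.0).
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(epochs=5000, lr=1.0):
    """Per-sample gradient descent on squared error for one sigmoid neuron."""
    w = [0.0, 0.0, 0.0]            # [w1, w2, bias]
    for _ in range(epochs):
        for (x1, x2), t in DATA:
            x = (x1, x2, 1.0)
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            # dE/dw_i = (y - t) * y * (1 - y) * x_i  for E = 0.5*(y - t)^2
            g = (y - t) * y * (1.0 - y)
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

w = train()
outputs = [sigmoid(w[0]*x1 + w[1]*x2 + w[2]) for (x1, x2), _ in DATA]
print([round(o) for o in outputs])   # -> [0, 0, 0, 1]
```

In the chip's setting the forward pass would be evaluated by the analog hardware and only the weight update computed in software.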

2.
A great deal of interest has emerged recently in the field of Boolean neural networks. Boolean neural networks require far less training than conventional neural networks and have a variety of applications. They are also strong candidates for VLSI design. In this paper, a technique for learning the representation of an adder-subtractor cell is proposed. The technique can be exploited for the VLSI design of an arithmetic unit for a pipelined digital computer.
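The Boolean function of such an adder-subtractor cell can be written out directly. The sketch below shows one bit-slice as the target mapping that a Boolean network would be trained to represent; the mode-bit convention (two's-complement subtraction via conditional inversion of b) is an assumption, not the paper's exact cell:

```python
def add_sub_cell(a, b, c_in, mode):
    """One bit-slice: mode=0 adds, mode=1 subtracts (two's complement:
    invert b, and inject carry-in 1 at the least significant slice)."""
    b_eff = b ^ mode                 # conditionally invert b
    s = a ^ b_eff ^ c_in             # sum/difference bit
    c_out = (a & b_eff) | (a & c_in) | (b_eff & c_in)
    return s, c_out

# 3 + 1 via two cascaded cells (a = 11b, b = 01b), carry-in 0 for addition:
c = 0
bits = []
for a_bit, b_bit in [(1, 1), (1, 0)]:        # LSB first
    s, c = add_sub_cell(a_bit, b_bit, c, mode=0)
    bits.append(s)
print(bits, c)   # -> [0, 0] 1   (3 + 1 = 100b: sum bits 00, carry-out 1)
```

Cascading the same cell per bit position is what makes it a natural building block for a pipelined arithmetic unit.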

3.
Time-critical neural network applications that require fully parallel hardware implementations for maximal throughput are considered. The rich array of technologies being pursued is surveyed, with a focus on the analog CMOS VLSI medium. This medium is messy in that limited dynamic range, offset voltages, and noise sources all reduce precision. The authors examine how neural networks can be directly implemented in analog VLSI, giving examples of approaches that have been pursued to date. Two important application areas are highlighted: optimization, because neural hardware may offer a speed advantage of orders of magnitude over other methods; and supervised learning, because of the widespread use and generality of gradient-descent learning algorithms as applied to feedforward networks.

4.
Evolving artificial neural networks   (total citations: 45; self-citations: 0; citations by others: 45)
Learning and evolution are two fundamental forms of adaptation. There has been great interest in combining learning and evolution with artificial neural networks (ANNs) in recent years. This paper: 1) reviews different combinations of ANNs and evolutionary algorithms (EAs), including using EAs to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators that have been used in various EAs; and 3) points out possible future research directions. It is shown, through a considerable literature review, that combinations of ANNs and EAs can lead to significantly better intelligent systems than relying on ANNs or EAs alone.
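As a toy illustration of the first combination (EAs evolving connection weights), the sketch below evolves the weights of a single hard-threshold neuron with a simple truncation-selection, mutation-only loop. The fitness function, population size, mutation scale, and target task (logical OR) are all illustrative assumptions, not taken from the survey:

```python
import random

random.seed(0)

DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

def fitness(w):
    """Negative squared error of a hard-threshold neuron over the data."""
    err = 0
    for (x1, x2), t in DATA:
        y = 1 if w[0]*x1 + w[1]*x2 + w[2] > 0 else 0
        err += (y - t) ** 2
    return -err

def evolve(generations=200, pop_size=20, sigma=0.5):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection (elitist)
        children = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # -> 0   (the evolved neuron realizes OR exactly)
```

Evolving architectures or learning rules, as the survey discusses, replaces the real-valued genome here with an encoding of network topology or update equations.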

5.
In this paper, we propose an efficient knowledge-based automatic model generation (KAMG) technique aimed at generating microwave neural models of the highest possible accuracy using the fewest accurate data. The technique is comprehensively derived to integrate three distinct powerful concepts, namely, automatic model generation, knowledge neural networks, and space mapping. For the first time, we simultaneously utilize two types of data generators, namely, coarse data generators that are approximate and fast (e.g., two-and-one-half-dimensional electromagnetic), and fine data generators that are accurate and slow (e.g., three-dimensional electromagnetic). Motivated by the space-mapping concept, the KAMG technique utilizes extensive coarse data but only a minimum of fine data to generate neural models that accurately match the fine data. Our formulation exploits a variety of knowledge neural-network architectures to facilitate reinforced neural-network learning from coarse and fine data. During neural model generation by KAMG, both coarse and fine data generators are automatically driven using adaptive sampling. The KAMG technique helps to increase the efficiency of neural model development by taking advantage of a microwave reality, i.e., the availability of multiple sources of training data for most high-frequency components. The advantages of the proposed KAMG technique are demonstrated through practical microwave examples of MOSFETs and embedded passive components used in multilayer printed circuit boards.

6.
A reconfigurable low-voltage low-power cell that can function either as a synapse or as a neuron is proposed and analyzed in this article for the VLSI implementation of artificial neural networks (ANNs); measured results are also presented. The design is based on the current-mode approach and uses the square-law characteristics of an MOS transistor operating in saturation. The fabricated synapse/neuron cell utilizes I-V converters, a current mirror, and a ±1 V power supply to achieve superior performance. Modularity, ease of interconnectivity, expandability, and reconfigurability are the main advantages of this cell.

7.
Neurofuzzy systems, the combination of artificial neural networks with fuzzy logic, have become useful in many application domains. However, conventional neurofuzzy models usually need enhanced representation power for applications that require context and state (e.g., speech, time-series prediction, control). Some of these applications can be readily modeled as finite state automata. Previously, it was proved that deterministic finite state automata (DFA) can be synthesized by, or mapped into, recurrent neural networks by directly programming the DFA structure into the weights of the neural network. Based on those results, a synthesis method is proposed for mapping fuzzy finite state automata (FFA) into recurrent neural networks. Furthermore, this mapping is suitable for direct implementation in very large scale integration (VLSI), i.e., the encoding of FFA is a generalization of the encoding of DFA in VLSI systems. The synthesis method requires FFA to undergo a transformation prior to being mapped into recurrent networks. The neurons are provided with enriched functionality in order to accommodate a fuzzy representation of FFA states. This enriched neuron functionality also permits the fuzzy parameters of FFA to be directly represented as parameters of the neural network. We also prove the stability of the fuzzy finite state dynamics of the constructed neural networks for finite values of the network weights and, through simulations, give empirical validation of the proofs. Hence, we prove various knowledge equivalence representations between neural and fuzzy systems and models of automata.
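The underlying DFA-to-network encoding on which the FFA construction builds can be sketched as follows: a second-order recurrent network with one-hot state neurons, weights programmed directly from the transition function, and a large gain H so that sigmoid outputs saturate near 0/1. This is an illustrative reconstruction of the general encoding idea, not the paper's exact construction; the gain value and the example automaton (parity of 1-bits) are assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Example DFA: 2 states, alphabet {0, 1}; reading a 1 toggles the state,
# so the final state encodes the parity of 1-bits in the input string.
N_STATES, N_SYMBOLS = 2, 2
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Program second-order weights: W[i][j][k] = 1 iff delta(state j, symbol k) = i.
W = [[[1.0 if delta[(j, k)] == i else 0.0 for k in range(N_SYMBOLS)]
      for j in range(N_STATES)] for i in range(N_STATES)]

H = 10.0  # gain; large enough that the state neurons saturate near 0/1

def run(string):
    s = [1.0, 0.0]                       # one-hot start state q0
    for ch in string:
        x = [1.0 if k == int(ch) else 0.0 for k in range(N_SYMBOLS)]
        s = [sigmoid(H * (sum(W[i][j][k] * s[j] * x[k]
                              for j in range(N_STATES)
                              for k in range(N_SYMBOLS)) - 0.5))
             for i in range(N_STATES)]
    return s.index(max(s))               # read off the active state neuron

print(run("1011"))   # -> 1   (three 1-bits: odd parity)
```

The FFA generalization replaces the crisp 0/1 transition weights with fuzzy membership values, which is why the neurons need the enriched functionality the abstract mentions.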

8.
A unified approach is presented for deriving a large class of new and previously known time- and order-recursive least-squares algorithms with systolic array architectures, suitable for high-throughput VLSI implementations of space-time filtering and system identification problems. The geometrical derivation given is unique in that no assumption is made concerning the rank of the sample data correlation matrix. The method utilizes and extends the concept of oblique projections, as used previously in the derivations of the least-squares lattice algorithms. Both growing-memory and sliding-memory exponentially weighted least-squares criteria are considered.
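For context, the core recursion that such arrays parallelize can be sketched in a few lines: the textbook exponentially weighted recursive least-squares (RLS) update. This is not the paper's specific systolic or lattice formulation; the forgetting factor, initialization, and the 2-tap test system are assumptions:

```python
import random

def rls_identify(samples, lam=0.99, delta=100.0):
    """Exponentially weighted RLS for a 2-tap FIR model y = w0*x[n] + w1*x[n-1]."""
    w = [0.0, 0.0]
    P = [[delta, 0.0], [0.0, delta]]     # inverse-correlation estimate
    for u, y in samples:                 # u = regressor (x[n], x[n-1])
        Pu = [P[0][0]*u[0] + P[0][1]*u[1], P[1][0]*u[0] + P[1][1]*u[1]]
        denom = lam + u[0]*Pu[0] + u[1]*Pu[1]
        k = [Pu[0]/denom, Pu[1]/denom]   # gain vector
        e = y - (w[0]*u[0] + w[1]*u[1])  # a priori error
        w = [w[0] + k[0]*e, w[1] + k[1]*e]
        # P <- (P - k (P u)^T) / lam     (rank-one downdate)
        P = [[(P[i][j] - k[i]*Pu[j]) / lam for j in range(2)] for i in range(2)]
    return w

# Identify a noiseless system with true taps (2.0, -1.0).
random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(200)]
samples = [((xs[n], xs[n-1]), 2.0*xs[n] - 1.0*xs[n-1]) for n in range(1, 200)]
w = rls_identify(samples)
print([round(t, 3) for t in w])   # -> [2.0, -1.0]
```

The systolic derivations in the paper restructure exactly this kind of time/order recursion so every update maps onto a regular array of locally connected cells.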

9.
A massively parallel architecture called the mesh-of-appendixed-trees (MAT) is shown to be suitable for processing artificial neural networks (ANNs). Both the recall and the learning phases of the multilayer feedforward ANN model with backpropagation are considered. The MAT structure is refined to produce two special-purpose array processors, FMAT1 and FMAT2, for efficient ANN computation. This refinement tends to reduce circuit area and increase hardware utilization. FMAT1 is a simple structure suitable for the recall phase. FMAT2 requires little extra hardware but supports learning as well. A major characteristic of the proposed neurocomputers is high performance: it takes O(log N) time to process a neural network with N neurons in its largest layer. The proposed architecture is shown to provide the best number of connections per unit time when compared to several major techniques in the literature. Another important feature of this approach is its ability to pipeline more than one input pattern, which further improves performance. The authors acknowledge the support of the NSF and State of Louisiana grant NSF/LEQSF (1992-96)-ADP-04.

10.
Mathematical foundations of neurocomputing   (total citations: 4; self-citations: 0; citations by others: 4)
An attempt is made to establish a mathematical theory that shows the intrinsic mechanisms, capabilities, and limitations of information processing by various architectures of neural networks. A method of statistically analyzing one-layer neural networks is given, covering the stability of associative mapping and mapping by totally random networks. A fundamental problem of statistical neurodynamics is considered in a way that is different from the spin-glass approach. A dynamic analysis of associative memory models and a general theory of neural learning, in which the learning potential function plays a role, are given. An advanced theory of learning and self-organization is proposed, covering backpropagation and its generalizations as well as the formation of topological maps and neural representations of information.

11.
This paper presents a novel approach to the field-oriented control (FOC) of induction motor drives. It discusses the introduction of artificial neural networks (ANNs) for decoupling control of induction motors using FOC principles. Two ANNs are presented for direct and indirect FOC applications. The first performs an estimation of the stator flux for direct field orientation, and the second is trained to map the nonlinear behavior of a rotor-flux decoupling controller. A decoupling controller and flux estimator were implemented with these ANNs using the MATLAB/SIMULINK neural-network toolbox. The data for training are obtained from a computer simulation of the system and experimental measurements. The methodology used to train the networks with the backpropagation learning process is presented. Simulation results reveal some very interesting features and show that the networks have good potential for use as an alternative to the conventional field-oriented decoupling control of induction motors.

12.
Example-based learning, as performed by neural networks and other approximation and classification techniques, is both computationally intensive and I/O intensive, typically involving the optimization of hundreds or thousands of parameters during repeated network evaluations over a database of example vectors. Although there is currently no dominant approach or technique among the various neural networks and learning algorithms, the basic functionality of most neural networks can be conceptually realized as a multidimensional look-up table. While multidimensional look-up tables are clearly impractical due to their exponential memory requirements, we are pursuing an approach using interpolation based only on the sparse data provided by an initial example database. In particular, we have designed prototype VLSI components for searching multidimensional example databases for the X closest examples to an input query, as determined by a programmable metric, using a massively parallel search. This nearest-neighbor approach can be used directly for classification, or in conjunction with any number of neural network algorithms that exploit local fitting. The hardware removes the I/O bottleneck from the learning task by supplying a reduced set of examples for localized training or classification. Though nearest-neighbor retrieval algorithms have efficient software implementations for low-dimensional databases, exhaustive searching is the only effective approach for handling high-dimensional data. The parallel VLSI hardware we have designed can accelerate the exhaustive search by three orders of magnitude. We believe this special-purpose VLSI will have direct application in systems requiring learning functionality and in accelerating learning applications on large, high-dimensional databases.
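In software, the retrieval primitive that the chips parallelize is a brute-force scan of the database under a pluggable metric. A minimal sketch follows; the metric, the database contents, and the parameter name `k` (standing in for the "X closest examples" above) are illustrative assumptions:

```python
import heapq

def nearest(database, query, k, metric):
    """Exhaustive k-nearest-neighbour search: score every example,
    keep the k best. O(len(database)) metric evaluations per query."""
    return heapq.nsmallest(k, database, key=lambda ex: metric(ex, query))

def manhattan(a, b):                     # "programmable metric": L1 here
    return sum(abs(x - y) for x, y in zip(a, b))

db = [(0, 0), (1, 1), (5, 5), (2, 2), (9, 9)]
print(nearest(db, (1, 2), k=2, metric=manhattan))   # -> [(1, 1), (2, 2)]
```

The hardware performs the same exhaustive scan, but with one comparison unit per stored example, which is what yields the claimed three-orders-of-magnitude speedup in high dimensions.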

13.
A general methodology for the development of physically realistic fault models for VLSI neural networks is presented. The derived fault models are explained and characterized in detail. The application of this methodology to an analog CMOS implementation of fixed-weight (i.e., pretrained), binary-valued neural networks is reported. It is demonstrated that these techniques can be used to accurately evaluate defect sensitivities in VLSI neural network circuitry. It is also shown that this information can be used to guide the design of circuitry which fully utilizes a neural network's potential for defect tolerance.

14.
In many scientific and signal processing applications, there are increasing demands for large-volume and/or high-speed computations, which call not only for high-speed computing hardware, but also for novel approaches in computer architecture and software techniques in future supercomputers. Tremendous progress has been made on several promising parallel architectures for scientific computations, including a variety of digital filters, fast Fourier transform (FFT) processors, data-flow processors, systolic arrays, and wavefront arrays. This paper describes these computing networks in terms of signal-flow graphs (SFG) or data-flow graphs (DFG), and proposes a methodology for converting SFG computing networks into synchronous systolic arrays or data-driven wavefront arrays. Both one- and two-dimensional arrays are discussed theoretically, as well as with illustrative examples. A wavefront-oriented programming language, which describes the (parallel) data flow in systolic/wavefront-type arrays, is presented. The structural property of parallel recursive algorithms points to the feasibility of a Hierarchical Iterative Flow-Graph Design (HIFD) of VLSI Array Processors. The proposed array processor architectures, we believe, will have significant impact on the development of future supercomputers.
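The SFG-to-systolic conversion can be illustrated with a small timing model: in an output-stationary systolic array for matrix multiplication, cell (i, j) consumes its k-th operand pair at global time step t = i + j + k. The sketch below simulates that schedule; the array size and the output-stationary schedule are illustrative assumptions, not the paper's notation:

```python
def systolic_matmul(A, B):
    """Output-stationary systolic schedule: cell (i, j) accumulates
    A[i][k] * B[k][j] at global time step t = i + j + k."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for t in range(3 * n - 2):           # schedule length: max t = 3(n-1)
        for i in range(n):
            for j in range(n):
                k = t - i - j            # operand index arriving at (i, j) now
                if 0 <= k < n:
                    C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))   # -> [[19, 22], [43, 50]]
```

A wavefront array relaxes the single global clock t into data-driven handshakes, but the dependency structure being simulated here is the same.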

15.
An analog continuous-time neural network is described. Building blocks that include the capability for on-chip learning, together with an example network, are described, and test results are presented. We use analog nonvolatile CMOS floating-gate memories for storage of the neural weights. The floating-gate memories are programmed by illuminating the entire chip with ultraviolet light. The subthreshold operation of the CMOS transistor in analog VLSI has very low power dissipation, which can be exploited to build larger computational systems, e.g., neural networks. The experimental results show that the floating-gate memories are promising and that the building blocks operate as separate units; however, the time constants involved in the computations of the continuous-time analog neural network in particular require further study.

16.
In this article, recent research activities on the development of electronic neural networks in Japan are reviewed. Most of the largest Japanese electronics companies have developed VLSI neural chips using analog, digital, or optoelectronic circuits, and have run various neural networks on them. Recently, the digital approach has become particularly active in Japan. Several fully digital VLSI chips for on-chip BP learning have been developed, and a learning speed of 2.3 GCUPS (giga connection updates per second) has already been attained. Although the number of neurons and synapses that a single digital chip can contain is small, a large neural network can be built by cascading chips. By cascading 72 chips, a fully interconnected PDM (pulse-density-modulating) digital neural network system has been developed. The behavior of the system follows simultaneous nonlinear differential equations, and the processing speed amounts to 12 GCPS (giga connections per second). Intensive research on analog and optoelectronic approaches has also been carried out in Japan. An analog VLSI neural chip attains a 28 GCUPS on-chip learning speed and a 1 TCPS (tera connections per second) processing speed for a Boltzmann machine with 1-bit digital output. For the optoelectronic approach, although the network size is small, a 640 MCUPS BP learning speed has been attained.

17.
A method of designing testable systolic architectures is proposed in this paper. Testing systolic arrays involves mapping an algorithm into a specific VLSI systolic architecture and then modifying the design to achieve concurrent testing. In our approach, redundant computations are introduced at the algorithmic level by deriving two versions of a given algorithm. The transformed dependency matrix (TDM) of the first version is a valid transformation matrix, while the second version is obtained by rotating the first TDM by 180 degrees about any of the indices that represent the spatial component of the TDM. A concurrent error detection (CED) systolic array is constructed by merging the corresponding systolic arrays of the two versions of the algorithm. The merging method attempts to obtain the self-testing systolic array at minimal cost in terms of area and speed. It is based on rescheduling input data, rearranging data flow, and increasing the utilization of the array cells. The resulting design can detect all single permanent and temporary faults and, with high probability, the majority of multiple fault patterns. The design method is applied to an algorithm for matrix multiplication in order to demonstrate the generality and novelty of our approach to designing testable VLSI systolic architectures. This work has been supported by a grant from the Natural Sciences and Engineering Research Council of Canada.

18.
The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision-making. While ANNs compute decisions by learning from successfully solved examples, expert systems rely on a knowledge base developed by human reasoning for decision-making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision-making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system (FES) for decision-making. The performance of the proposed intelligent decision-making system is evaluated by mapping the adaptable traffic-light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated-system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.

19.
This article presents new approaches for testing VLSI array architectures used in the computation of the complex N-point Fast Fourier Transform. Initially, an unrestricted single cell-level fault model is considered. The first proposed approach is based on a testing process of constant complexity (C-testability), i.e., independent of the number of cells in the FFT architecture. This is accomplished by showing a topological equivalence between the FFT array and a linear (one-dimensional) array. The process of fault location is also analyzed. The second proposed method is based on a testing process whose complexity is linear with respect to the number of stages (columns) of the FFT array. A component-level fault model is also proposed and analyzed. The implications of this model for the C-testability process are fully described. This research is supported by grants from NSF and NSERC.

20.
A new technique called resistive interpolation biasing, for accurately biasing a large number of analog cells on a VLSI chip, is presented. Variations in oxide thickness, mobility, doping concentration, etc., cause inaccuracies in the current ratios of two identically biased transistors if they are placed sufficiently far apart on a chip. The proposed technique compensates for these inaccuracies without using any sampling or switching. The technique has been verified using a 2 μm n-well CMOS process. Measurements show a factor-of-3 improvement in current-ratio accuracy when the resistive interpolation technique is used. The circuit can be implemented with a small chip area and low power dissipation. This technique finds applications where extensive current duplication over a large area is required (e.g., analog memories, D/A converters, continuous-time filters, imaging arrays, neural networks, and fuzzy logic systems).
