Similar Documents
1.
In this paper, we show a fixed-time neural network implementation of adjustable order-statistic filters, including sorting, and adaptive order-statistic filters. All these networks accept an array of N numbers X_i = S_{Xi} · M_{Xi} · 2^{E_{Xi}} as input (where S_{Xi} is the sign of X_i, M_{Xi} is the mantissa normalized to m digits, and E_{Xi} is the exponent) and employ two kinds of neurons, linear and threshold-logic neurons, with only integer weights (most of them just +1 or -1) and integer thresholds. This greatly facilitates hardware implementation of the proposed neural networks using currently available very large scale integration (VLSI) technology. An application of the minimum filter to a special neural network model, the neural network classifier (NNC), is given. For a classification problem with l classes C_1, C_2, ..., C_l, the NNC assigns an unknown vector to one class in fixed time using a minimum-distance classification technique.
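A minimal Python sketch of the minimum-distance rule described above; the class prototypes and the NumPy distance computation are illustrative stand-ins for the paper's threshold-logic hardware, not its actual construction.

```python
import numpy as np

def minimum_distance_classify(x, prototypes):
    """Assign x to the class whose prototype is nearest (Euclidean distance).

    prototypes: dict mapping class label -> prototype vector.
    This emulates in software the minimum filter the NNC realizes in hardware.
    """
    distances = {label: np.linalg.norm(x - p) for label, p in prototypes.items()}
    return min(distances, key=distances.get)

# Example: three classes C1..C3 with 2-D prototypes (illustrative values).
prototypes = {"C1": np.array([0.0, 0.0]),
              "C2": np.array([1.0, 1.0]),
              "C3": np.array([2.0, 0.0])}
print(minimum_distance_classify(np.array([0.9, 1.2]), prototypes))  # -> "C2"
```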

2.
The recursive training algorithm for the optimal interpolative (OI) classification network is extended to include distributed fault tolerance. The conventional OI Net learning algorithm leads to network weights that are nonoptimally distributed (in the sense of fault tolerance). Fault tolerance is becoming an increasingly important factor in hardware implementations of neural networks, but it is often taken for granted rather than being explicitly accounted for in the architecture or learning algorithm. In addition, when fault tolerance is considered, it is often modeled unrealistically (e.g., neurons that are stuck on or off rather than small weight perturbations). Realistic fault tolerance can be achieved through a smooth distribution of weights, resulting in low weight salience and distributed computation. Results of trained OI Nets on the Iris classification problem show that fault tolerance can be increased with the algorithm presented in this paper.
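A small sketch of the kind of fault model the abstract argues for, assuming a user-supplied `predict_fn` and Gaussian weight perturbations of standard deviation `sigma` (both illustrative choices, not the paper's procedure):

```python
import numpy as np

def accuracy_under_weight_noise(predict_fn, weights, X, y, sigma=0.05, trials=20, rng=None):
    """Estimate mean accuracy when every weight is perturbed by small Gaussian noise.

    predict_fn(weights, X) -> predicted labels; weights is a flat NumPy array.
    This mirrors the 'small weight perturbation' fault model rather than
    stuck-at faults: a smoother weight distribution should degrade less.
    """
    rng = rng or np.random.default_rng(0)
    accs = []
    for _ in range(trials):
        noisy = weights + rng.normal(0.0, sigma, size=weights.shape)
        accs.append(np.mean(predict_fn(noisy, X) == y))
    return float(np.mean(accs))
```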

3.
A neural network approach to job-shop scheduling
A novel analog computational network is presented for solving NP-complete constraint-satisfaction problems, in particular job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.

4.
A rough set and neural network based method for petroleum well-log data mining
Because petroleum well-log data are fuzzy and noisy, using the rough set method alone in data mining is disturbed by noise, which directly degrades classification accuracy, while using a neural network alone leads to a complex network structure and long training time when the input space is high-dimensional. To address these problems, and following the principles of well-log interpretation, this paper proposes a data mining method that combines the two, consisting of well-log data preprocessing, rough-set reduction of the sample information, neural network training, network recognition of the unknown data, and error analysis; the two-layer neural network with nonlinear connection weights used here simplifies the network computation. Two application examples, lithology identification and quantitative calculation of reservoir parameters, show that in well-log interpretation this data mining method achieves a recognition rate far higher than other single data-mining methods, with satisfactory results.

5.
To address the low classification accuracy and high false-positive rate in pulmonary nodule classification on CT images, a weighted-fusion multi-dimensional convolutional neural network model is proposed. The model contains two sub-models: a multi-scale densely connected convolutional network operating on 2D images, which captures a wider range of nodule variation and promotes feature reuse, and a 3D convolutional neural network operating on 3D images, which fully exploits the spatial context of the nodule. The sub-models are trained on 2D and 3D CT images, their weights are computed from their classification errors, and their outputs are fused by weighted combination to obtain the final classification. On the public LIDC-IDRI dataset the model achieves a classification accuracy of 94.25% and an AUC of 98%. The experimental results show that the weighted-fusion multi-dimensional model effectively improves pulmonary nodule classification performance.
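A brief sketch of error-weighted fusion of two sub-model outputs; the inverse-error weighting rule and the `prob_2d`/`prob_3d` probability arrays are assumptions for illustration, since the abstract only states that the weights are computed from sub-model classification errors.

```python
import numpy as np

def fuse_predictions(prob_2d, prob_3d, err_2d, err_3d):
    """Weighted fusion of two sub-model outputs, with weights derived from errors.

    prob_2d, prob_3d: arrays of shape (n_samples, n_classes) with class probabilities.
    err_2d, err_3d: validation error rates of the 2D and 3D sub-models.
    """
    w2d, w3d = 1.0 - err_2d, 1.0 - err_3d          # lower error -> larger weight
    fused = (w2d * prob_2d + w3d * prob_3d) / (w2d + w3d)
    return fused.argmax(axis=1)                     # final class per sample
```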

6.
Sperduti and Starita proposed a new type of neural network that consists of generalized recursive neurons for the classification of structures. In this paper, we propose an entropy-based approach for constructing such neural networks for the classification of acyclic structured patterns. Given a classification problem, the architecture, i.e., the number of hidden layers and the number of neurons in each hidden layer, and all the values of the link weights of the corresponding neural network are determined automatically. Experimental results show that the networks constructed by our method can achieve better performance, with respect to network size, learning speed, or recognition accuracy, than networks obtained by other methods.

7.
To address the drawbacks that rough sets can handle only quantized data and have poor fault tolerance and generalization ability, as well as the curse-of-dimensionality problem of BP neural networks, an information-entropy-based method for discretizing rough set attributes is proposed. The method uses rough sets to reduce the attributes, alleviating the curse of dimensionality of the BP neural network, and uses the BP neural network for pattern classification to compensate for the weakness of rough-set attribute reduction when applied directly to pattern classification. A case study shows that the method achieves good fault-diagnosis results.
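A minimal sketch of one information-entropy-based discretization step (choosing a cut point on a continuous attribute that minimizes the weighted class entropy); the exact criterion used in the paper may differ.

```python
import numpy as np

def class_entropy(labels):
    """Shannon entropy of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def best_cut_point(values, labels):
    """Pick the cut on one continuous attribute that minimizes weighted entropy."""
    order = np.argsort(values)
    v, y = values[order], labels[order]
    best, best_score = None, np.inf
    for i in range(1, len(v)):
        if v[i] == v[i - 1]:
            continue
        cut = (v[i] + v[i - 1]) / 2.0
        left, right = y[:i], y[i:]
        score = (len(left) * class_entropy(left) + len(right) * class_entropy(right)) / len(y)
        if score < best_score:
            best, best_score = cut, score
    return best
```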

8.
The difficulty of multi-attribute decision-making problems with interval-valued attributes and weights lies in making uncertain weight information precise and in ranking interval numbers. The grey entropy model's idea of ranking alternatives by a closeness degree that approaches the ideal solution in a balanced way not only allows multi-attribute decision making to avoid the heavy step of making fuzzy data precise, but also effectively resolves the point-correlation bias that arises when ranking alternatives. Considering that the traditional grey entropy model applies only to precise real numbers and lacks indicator weights, grey relational entropy is introduced into the traditional grey entropy model to construct a grey entropy model for interval-valued weights and attribute values, resolving the difficulty of making uncertain data precise. For the difficulty of ranking interval numbers, following the TOPSIS idea of approaching the ideal solution again through the derived variables of closeness degree and balance degree, the balanced closeness degree of the improved grey entropy model is computed and used to rank the alternatives. Finally, a case study of selecting a supplier of a non-standard component for the SG laser facility project verifies the effectiveness of the proposed model.

9.
To address the low accuracy, high overhead, and limited applicability of traditional traffic classification methods, an effective network traffic classification method (GA-LM) is proposed. The method adopts a neural-network-based classifier as the traffic classification model, constructs the classifier with the Levenberg-Marquardt (L-M) algorithm, and uses a genetic algorithm to optimize the initial connection weights, which accelerates convergence and improves classification performance. Classification experiments on collected real network traffic data show that GA-LM converges faster than the standard BP algorithm and the L-M algorithm, and offers good feasibility and high accuracy, so it can be used effectively for network traffic classification.

10.
This paper presents a novel block-based neural network (BBNN) model and the optimization of its structure and weights based on a genetic algorithm. The architecture of the BBNN consists of a 2D array of fundamental blocks with four variable input/output nodes and connection weights. Each block can have one of four different internal configurations depending on the structure settings. The BBNN model includes some restrictions, such as the 2D array and integer weights, in order to allow easier implementation with reconfigurable hardware such as field-programmable gate arrays (FPGAs). The structure and weights of the BBNN are encoded with bit strings which correspond to the configuration bits of the FPGA. The configuration bits are optimized globally using a genetic algorithm with 2D encoding and modified genetic operators. Simulations show that the optimized BBNN can solve engineering problems such as pattern classification and mobile robot control.

11.
A new derivation is presented for the bounds on the size of a multilayer neural network that exactly implements an arbitrary training set; namely, a training set of p patterns can be implemented with zero error using two layers, with the number of hidden-layer neurons N1 satisfying N1 ≥ p - 1. The derivation does not require the separation of the input space by particular hyperplanes, as in previous derivations. The weights for the hidden layer can be chosen almost arbitrarily, and the weights for the output layer can be found by solving N1 + 1 linear equations. The method presented exactly solves (M), the multilayer neural network training problem, for any arbitrary training set.
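A short sketch of the construction the abstract outlines, under the assumption that p is the number of training patterns: random ("almost arbitrary") hidden weights, then output weights from a linear solve. The tanh activation and the XOR example are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def fit_two_layer_exact(X, T, rng=None):
    """Fit a two-layer network with p-1 hidden neurons to p training pairs.

    Hidden-layer weights are drawn at random; the p output-layer weights
    (including a bias) come from solving a linear system.
    """
    rng = rng or np.random.default_rng(0)
    p = X.shape[0]
    W_hidden = rng.normal(size=(X.shape[1], p - 1))     # random hidden weights
    H = np.tanh(X @ W_hidden)                            # hidden activations, p x (p-1)
    A = np.hstack([H, np.ones((p, 1))])                  # add bias column -> p x p
    w_out = np.linalg.lstsq(A, T, rcond=None)[0]         # solve the p linear equations
    return W_hidden, w_out

# Usage: the training targets are reproduced up to numerical error.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 0.])                           # XOR targets
W_h, w_o = fit_two_layer_exact(X, T)
pred = np.hstack([np.tanh(X @ W_h), np.ones((4, 1))]) @ w_o
print(np.round(pred, 3))
```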

12.
Precision constrained stochastic resonance in a feedforward neural network
Stochastic resonance (SR) is a phenomenon in which the response of a nonlinear system to a subthreshold information-bearing signal is optimized by the presence of noise. By considering a nonlinear system (a network of leaky integrate-and-fire (LIF) neurons) that captures the functional dynamics of neuronal firing, we demonstrate that sensory neurons could, in principle, harness SR to optimize the detection and transmission of weak stimuli. We have previously characterized this effect using the signal-to-noise ratio (SNR). Here, in addition to the SNR, we apply an entropy-based measure (Fisher information) and compare the two ways of quantifying SR. We also discuss the performance of these two SR measures in a full-precision floating-point model simulated in Java and in a precision-limited integer model simulated on a field programmable gate array (FPGA). We report that stochastic resonance, which is mainly associated with floating-point implementations, is possible in both a single LIF neuron and a network of LIF neurons implemented on lower-resolution integer-based digital hardware. We also report that such a network can improve the SNR and Fisher information of the output over a single LIF neuron.
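A toy demonstration of stochastic resonance using a plain threshold detector instead of an LIF network; the correlation measure is a simple stand-in for the SNR and Fisher-information measures discussed in the abstract, and all parameter values are illustrative.

```python
import numpy as np

def output_signal_correlation(noise_std, signal_amp=0.4, threshold=1.0, n=200_000, rng=None):
    """Correlation between a threshold detector's output and a subthreshold signal.

    With no noise the signal never crosses the threshold, with too much noise the
    output is dominated by noise, and the correlation peaks at an intermediate
    noise level -- the stochastic resonance effect.
    """
    rng = rng or np.random.default_rng(0)
    t = np.arange(n)
    signal = signal_amp * np.sin(2 * np.pi * t / 200.0)
    output = (signal + rng.normal(0.0, noise_std, size=n) > threshold).astype(float)
    if output.std() == 0.0:
        return 0.0
    return float(np.corrcoef(output, signal)[0, 1])

for s in (0.05, 0.2, 0.5, 1.0, 3.0):
    print(f"noise_std={s}: corr={output_signal_correlation(s):.3f}")
```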

13.
A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited numerical precision can be used with existing learning algorithms. An empirical study of the effects of limited precision in cascade-correlation networks on three different learning problems is presented. It is shown that learning can fail abruptly as the precision of network weights or weight-update calculations is reduced below a certain level, typically about 13 bits including the sign. Techniques for dynamic rescaling and probabilistic rounding are introduced that allow reliable convergence down to 7 bits of precision or less, with only a small and gradual reduction in the quality of the solutions.
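A minimal sketch of probabilistic (stochastic) rounding to a fixed-point grid; the scaling scheme shown is illustrative and does not reproduce the paper's dynamic-rescaling procedure.

```python
import numpy as np

def probabilistic_round(x, bits=7, rng=None):
    """Round values to a fixed-point grid with probabilistic rounding.

    With `bits` fractional bits, a value between two grid points is rounded up
    with probability equal to its fractional distance from the lower point, so
    the rounding error is zero in expectation and tiny weight updates are not
    systematically lost.
    """
    rng = rng or np.random.default_rng(0)
    scale = 2.0 ** bits
    scaled = x * scale
    lower = np.floor(scaled)
    round_up = rng.random(np.shape(x)) < (scaled - lower)   # P(up) = fractional part
    return (lower + round_up) / scale

# Updates below the grid spacing (2**-7 = 0.0078) survive on average.
updates = np.full(10_000, 0.003)
print(probabilistic_round(updates, bits=7).mean())           # approx 0.003
```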

14.
Evolving neurocontrollers for balancing an inverted pendulum
This paper introduces an evolutionary algorithm that is tailored to generate recurrent neural networks functioning as nonlinear controllers. Network size and architecture, as well as network parameters like weights and bias terms, are developed simultaneously. There is no quantization of inputs, outputs or internal parameters. Different kinds of evolved networks are presented that solve the pole-balancing problem, i.e. balancing an inverted pendulum. In particular, controllers solving the problem for reduced phase space information (only angle and cart position) use a recurrent connectivity structure. Evolved controllers of 'minimal' size still have a very good benchmark performance.

15.
A hybrid algorithm combining Regrouping Particle Swarm Optimization (RegPSO) with a wavelet radial basis function neural network, referred to as the RegPSO-WRBF-NN algorithm, is presented. It is used to detect, identify, and characterize the acoustic signals produced by surface discharge activity and hence to differentiate abnormal operating conditions from normal ones. The tests were carried out on clean and polluted high-voltage glass insulators using the surface tracking and erosion test procedure of International Electrotechnical Commission (IEC) 60587. A laboratory experiment was conducted by preparing prototypes of the discharges. An important step in WRBF network training is choosing a proper number of hidden nodes, centers, and spreads; together with the network weights, this can be viewed as a system identification problem, so PSO is used here to optimize the WRBF neural network parameters. The regrouping technique of RegPSO is employed to help the swarm escape from premature convergence; RegPSO was able to overcome stagnation on the surface discharge dataset tested and approximate the true global minimizer. Testing results indicate that the proposed approach responds quickly and yields accurate solutions as soon as the inputs are given. Learning performance is compared with existing conventional networks. The method has proven effective in applying the RegPSO-based wavelet radial basis function network to the classification of the surface discharge fault dataset, and the test results show that the proposed approach is efficient and achieves a very high classification rate.

16.
To reduce redundancy among spacecraft feature attributes and improve the accuracy of their weights, an attribute reduction and weight calculation method based on neighborhood rough sets is proposed. By comparing the lower bound of classification accuracy under different attribute significance levels, a new rule for determining the neighborhood radius is given. Building on the optimal weight formula of the information view, an information-entropy-based feature weight calculation method is proposed, and a method for determining the optimal combined weights of the algebraic view and the information view is given, resolving the trade-off between the two views. Applied to feature analysis of a satellite attitude control system, comparison with other methods shows that the proposed method effectively reduces the number of features and improves the accuracy of the feature weights.

17.
A wavelet neural network for nonlinear function approximation
A wavelet neural network for nonlinear function approximation is proposed, together with a training method for its parameters. Starting from the concept of information entropy, the objective function for parameter training is improved, and steepest descent with a momentum term is used to train the network weights, dilation factors, and translation factors. Simulation experiments show that, for nonlinear function approximation, the wavelet neural network outperforms a BP network of equal size, and the training method also offers fast convergence and high approximation accuracy.
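A generic sketch of the steepest-descent-with-momentum update mentioned above, applied to named parameter groups such as weights, dilation factors, and translation factors; the entropy-based objective itself is not reproduced, and the parameter names are illustrative.

```python
import numpy as np

def momentum_step(params, grads, velocity, lr=0.01, momentum=0.9):
    """One steepest-descent update with a momentum term.

    params, grads, velocity: dicts of NumPy arrays keyed by parameter name,
    e.g. "weights", "dilation", "translation".
    """
    for name in params:
        velocity[name] = momentum * velocity[name] - lr * grads[name]   # accumulate momentum
        params[name] = params[name] + velocity[name]                    # apply the update
    return params, velocity
```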

18.
Neural networks require VLSI implementations for on-board systems. Size and real-time considerations show that on-chip learning is necessary for a large range of applications. A flexible digital design is preferred here to more compact analog or optical realizations. As opposed to many current implementations, the two-dimensional systolic array system presented is an attempt to define a novel computer architecture inspired by neurobiology. It is composed of generic building blocks for basic operations rather than predefined neural models. A full-custom VLSI design of a first prototype has demonstrated the efficacy of this approach. A complete board dedicated to Hopfield's model has been designed using these building blocks. Beyond the very specific application presented, the underlying principles can be used for designing efficient hardware for most neural network models.

19.
A new method of inter-neuron communication called incremental communication is presented. In the incremental communication method, instead of communicating the whole value of a variable, only the increment or decrement with respect to its previous value is sent on a communication link. The incremental value may be either a fixed-point or a floating-point value. A multilayer feedforward network architecture is used to illustrate the effectiveness of the proposed communication scheme. The method is applied to three different learning problems, and the effect of the precision of the incremental input-output values of the neurons on the convergence behavior is examined. It is shown through simulation that for some problems even four-bit precision in fixed- and/or floating-point representations is sufficient for the network to converge. With 8-12 bit precision, almost the same results are obtained as with conventional communication using 32-bit precision. The proposed method of communication can lead to significant savings in the intercommunication cost of implementations of artificial neural networks on parallel computers, as well as the interconnection cost of direct hardware realizations. The method can be incorporated into most current learning algorithms in which inter-neuron communication is required. Moreover, it can be used along with the other limited-precision strategies for the representation of variables suggested in the literature.
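A small sketch of incremental communication: only quantized increments are transmitted and accumulated at the receiver. The 4-bit fractional fixed-point step is an illustrative assumption, not the paper's exact format.

```python
import numpy as np

def send_incremental(values, prev_sent, step=2.0 ** -4):
    """Transmit only limited-precision increments relative to the last sent values.

    The change since the last transmitted value is quantized to a multiple of
    `step` and sent; the receiver accumulates the increments to reconstruct
    the variable.
    """
    increments = np.round((values - prev_sent) / step) * step   # quantized deltas
    new_sent = prev_sent + increments                            # receiver-side reconstruction
    return increments, new_sent

# Usage: successive activations differ only slightly, so small increments suffice.
prev = np.zeros(3)
for v in (np.array([0.10, -0.20, 0.05]), np.array([0.12, -0.18, 0.06])):
    inc, prev = send_incremental(v, prev)
    print(inc, prev)
```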

20.
In this paper, parallel evolutionary algorithms for integer-weight neural network training are presented. To this end, each processor is assigned a subpopulation of potential solutions. The subpopulations are independently evolved in parallel, and occasional migration is employed to allow cooperation between them. The proposed algorithms are applied to train neural networks using threshold activation functions and weight values confined to a narrow band of integers. We constrain the weights and biases to the range [-3, 3], so they can be represented by just 3 bits. Such neural networks are better suited for hardware implementation than real-weight ones. These algorithms have been designed keeping in mind that the resulting integer weights require fewer bits to be stored and that the digital arithmetic operations between them are easier to implement in hardware. Another advantage of the proposed evolutionary strategies is that they are capable of continuing the training process 'on-chip', if needed. Our intention is to present results of parallel evolutionary algorithms on this difficult task. Based on the application of the proposed class of methods to classical neural network problems, our experience is that these methods are effective and reliable.
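A compact sketch of evolving integer weights confined to [-3, 3]; a single-population, mutation-only loop is shown for brevity, whereas the paper uses parallel subpopulations with migration. The `fitness` function (e.g. negative training error of a threshold-activation network) is assumed to be supplied by the caller.

```python
import numpy as np

def evolve_integer_weights(fitness, n_weights, pop_size=40, generations=200,
                           low=-3, high=3, rng=None):
    """Evolve integer weight vectors confined to [low, high] (here 3-bit values).

    fitness(weights) -> scalar to maximize.
    """
    rng = rng or np.random.default_rng(0)
    pop = rng.integers(low, high + 1, size=(pop_size, n_weights))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]          # keep the best half
        children = parents.copy()
        mutate = rng.random(children.shape) < 0.1                   # mutate ~10% of genes
        children[mutate] += rng.integers(-1, 2, size=mutate.sum())  # small integer steps
        children = np.clip(children, low, high)                     # stay in [-3, 3]
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]
```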
