Similar Documents
Found 10 similar documents (search time: 156 ms)
1.
The selection of weight accuracies for Madalines   Cited by: 4 (self-citations: 0, other citations: 4)
The sensitivity of a neural network's outputs to perturbations in its weights is an important consideration both in the design of hardware realizations and in the development of training algorithms for neural networks. In designing dense, high-speed realizations of neural networks, it is important to understand the consequences of using simple neurons with significant weight errors. Similarly, in developing training algorithms, it is important to understand the effects of small weight changes in order to determine the required precision of the weight updates at each iteration. This paper analyzes the sensitivity of feedforward neural networks (Madalines) to weight errors, focusing on Madalines composed of sigmoidal, threshold, and linear units. Using a stochastic model for weight errors, we derive simple analytical expressions for the variance of a Madaline's output error; these expressions agree closely with simulation results. In addition, we develop a technique for selecting the appropriate accuracy of the weights in a neural network realization, and use it to compare the required weight precision of threshold versus sigmoidal Madalines. We show that, for a given desired variance of the output error, the weights of a threshold Madaline must be more accurate.
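The stochastic weight-error model described above can be illustrated with a small Monte-Carlo sketch (all function names and parameter values are illustrative assumptions, not taken from the paper) that estimates the output-error variance of a single unit under zero-mean Gaussian weight errors, for a sigmoidal and a threshold activation:

```python
import numpy as np

def output_error_variance(activation, weight_std, n_inputs=16, n_trials=2000, seed=0):
    """Monte-Carlo estimate of the output-error variance of a single unit
    whose weights are corrupted by zero-mean Gaussian errors."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_inputs)                # nominal weights
    errs = []
    for _ in range(n_trials):
        x = rng.standard_normal(n_inputs)            # random input pattern
        dw = rng.normal(0.0, weight_std, n_inputs)   # stochastic weight error
        errs.append(activation(x @ (w + dw)) - activation(x @ w))
    return float(np.var(errs))

sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
threshold = lambda s: 1.0 if s >= 0.0 else 0.0

v_sigmoid = output_error_variance(sigmoid, weight_std=0.05)
v_threshold = output_error_variance(threshold, weight_std=0.05)
```

Consistent with the abstract's conclusion, the threshold unit shows the larger output-error variance for the same weight error, i.e., its weights must be more accurate to reach a given output-error variance.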

2.
Research on a Weight-Momentum-Based Accelerated Learning Algorithm for RBMs   Cited by: 1 (self-citations: 0, other citations: 1)
李飞  高晓光  万开方 《自动化学报》2017,43(7):1142-1159
In theory, momentum methods can accelerate the training of restricted Boltzmann machine (RBM) networks. Simulation studies of existing momentum algorithms, however, show that they accelerate RBM training poorly and gradually lose their accelerating effect in the later stages of training. To address this, we first analyze the existing momentum algorithms theoretically, based on the convergence theorem for Gibbs sampling, and prove that their acceleration comes at the cost of degrading the network weights. We then study the network weights further and find that they contain substantial directional information about the true gradient, which can itself be used for training. On this basis, we propose a weight-momentum algorithm driven by the network weights, and present simulation experiments. The results show that the proposed algorithm accelerates training more effectively, retains good acceleration in the later stages of training, and thus compensates well for the shortcomings of existing momentum algorithms.
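For context, the classical momentum update that the paper analyzes can be sketched as follows for CD-1 training of an RBM (a generic illustration with made-up sizes and hyperparameters; it is not the weight-momentum variant the paper proposes, and biases are omitted for brevity):

```python
import numpy as np

def cd1_momentum_step(W, v0, velocity, lr=0.05, beta=0.5, rng=None):
    """One CD-1 update of an RBM weight matrix with classical momentum
    (biases omitted for brevity)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
    h0 = sigmoid(v0 @ W)                                   # P(h = 1 | v0)
    h0_sample = (rng.random(h0.shape) < h0).astype(float)  # sample hidden states
    v1 = sigmoid(h0_sample @ W.T)                          # mean-field reconstruction
    h1 = sigmoid(v1 @ W)
    grad = (v0.T @ h0 - v1.T @ h1) / len(v0)               # CD-1 gradient estimate
    velocity = beta * velocity + lr * grad                 # momentum accumulation
    return W + velocity, velocity

rng = np.random.default_rng(1)
W = 0.01 * rng.standard_normal((6, 4))
v0 = (rng.random((8, 6)) < 0.5).astype(float)  # a batch of binary visible vectors
velocity = np.zeros_like(W)
for _ in range(10):
    W, velocity = cd1_momentum_step(W, v0, velocity, rng=rng)
```

The paper's observation concerns exactly the `velocity` term: accumulating past gradients changes the weights the sampler sees, which is where the claimed trade-off between acceleration and weight quality arises.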

3.
Kernel orthonormalization in radial basis function neural networks   Cited by: 7 (self-citations: 0, other citations: 7)
This paper deals with optimizing the computations involved in training radial basis function (RBF) neural networks. The main contribution is a method for calculating the network weights in which the key idea is to transform the RBF kernels into an orthonormal set of functions using standard Gram-Schmidt orthogonalization. This significantly reduces computing time when the adopted RBF training scheme adds one kernel hidden node at a time to improve network performance. Another property of the method is that, after the RBF network weights are computed, the original network structure can be restored. A further strength is that the proposed computing task can be decomposed into a number of parallel subtasks, yielding additional savings in computing time. The weight calculation technique also has low storage requirements. These features make the method very attractive for hardware implementation. The paper presents a detailed derivation of the proposed weight calculation procedure and demonstrates its validity for RBF network training on a number of data classification and function approximation problems.
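The core idea, orthonormalizing the kernel design matrix so that the weight solve becomes trivial, can be sketched on a toy problem (data, kernel widths, and center placement are assumptions for illustration; QR factorization is used as a numerically stable equivalent of Gram-Schmidt):

```python
import numpy as np

def rbf_design(X, centers, width=1.0):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 width^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# toy 1-D regression problem
X = np.linspace(-3.0, 3.0, 40)[:, None]
y = np.sin(X[:, 0])
centers = np.linspace(-3.0, 3.0, 8)[:, None]

Phi = rbf_design(X, centers)
Q, R = np.linalg.qr(Phi)     # QR factorization = numerically stable Gram-Schmidt
c = Q.T @ y                  # weights in the orthonormal basis: a trivial solve
w = np.linalg.solve(R, c)    # map back to weights of the original kernels
mse = float(np.mean((Phi @ w - y) ** 2))
```

Because the columns of `Q` are orthonormal, adding one more kernel node only appends one coefficient to `c` instead of re-solving the whole system, which is where the training-time savings described above come from; solving `R w = c` restores the weights of the original (non-orthonormal) kernels.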

4.
This paper presents a constructive approach to estimating the size of a neural network necessary to solve a given classification problem. The results are derived using an information entropy approach in the context of limited-precision integer weights. Such weights are particularly suited to hardware implementations since the area they occupy is limited and the computations performed with them can be implemented efficiently in hardware. From this entropy perspective, lower bounds are calculated on the number of bits needed to solve a given classification problem; the bounds are obtained by approximating the classification hypervolumes with the volumes of several regular (i.e., highly symmetric) n-dimensional bodies. These bounds allow the user to choose the size of a neural network such that (i) the given classification problem can be solved, and (ii) the network architecture is not oversized. All considerations take into account the restrictive case of limited-precision integer weights, and can therefore be applied directly when designing VLSI implementations of neural networks.

5.
The use of multilayer perceptrons (MLPs) with threshold functions (binary step activations) greatly reduces the complexity of hardware implementations of neural networks, provides tolerance to noise, and improves the interpretability of the internal representations. In certain cases, such as when learning stationary tasks, it may be sufficient to find appropriate weights for an MLP with threshold activation functions by software simulation and then transfer the weight values to the hardware implementation. Efficient training of these networks is the subject of considerable ongoing research. Methods available in the literature mainly focus on two-state (threshold) nodes and try to train the networks either by approximating the gradient of the error function and modifying gradient descent accordingly, or by progressively altering the shape of the activation functions. In this paper, we propose an evolution-motivated approach that is eminently suitable for networks with threshold functions, and compare its performance with four other methods. The proposed evolutionary strategy needs no gradient-related information, is applicable when threshold activations are used from the beginning of training, as in "on-chip" training, and is able to train networks with integer weights.
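A minimal sketch of an evolution-style search for a threshold MLP with integer weights, here a simple (1+1)-ES on XOR (a generic illustration of the idea, not the specific strategy or benchmarks of the paper; all sizes and ranges are assumptions):

```python
import numpy as np

def forward(params, X):
    """Two-layer MLP with hard-threshold activations throughout."""
    W1, b1, W2, b2 = params
    h = ((X @ W1 + b1) >= 0).astype(float)      # threshold hidden units
    return ((h @ W2 + b2) >= 0).astype(float)   # threshold output unit

def n_errors(params, X, y):
    return int(np.sum(forward(params, X) != y))

# XOR with small-range integer weights
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])
rng = np.random.default_rng(3)
params = [rng.integers(-2, 3, (2, 3)).astype(float),
          rng.integers(-2, 3, 3).astype(float),
          rng.integers(-2, 3, 3).astype(float),
          rng.integers(-2, 3, 1).astype(float)]
e0 = n_errors(params, X, y)
best = e0
for _ in range(5000):           # (1+1)-ES: mutate by +/-1, keep if no worse
    trial = [p + rng.integers(-1, 2, p.shape) for p in params]
    e = n_errors(trial, X, y)
    if e <= best:
        params, best = trial, e
    if best == 0:
        break
```

No gradient is ever computed: the search only needs the misclassification count, which is why this style of method works when threshold activations are used from the very start of training and when the weights are constrained to integers.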

6.
Existing network representation learning algorithms fall mainly into two groups: those based on shallow neural networks and those based on neural matrix factorization, and the former have been shown to implicitly factorize a feature matrix of the network structure. Moreover, most existing methods learn features from the network structure alone, i.e., from a single view, even though a network inherently contains multiple views. This paper therefore proposes a network representation learning algorithm based on multi-view ensembles (MVENR). The algorithm dispenses with neural network training and instead brings matrix information-fusion and factorization ideas into network representation learning. It effectively fuses the network's structure view, edge-weight view, and node-attribute view, remedying the common neglect of edge weights in existing methods and alleviating the feature sparsity that arises when training on a single view. Experimental results show that MVENR outperforms several commonly used joint-learning algorithms and structure-based network representation learning algorithms, making it a simple and efficient network representation learning approach.
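The fuse-then-factorize idea can be sketched generically as follows (a toy illustration with made-up views and fusion weights; this is not the MVENR algorithm itself, whose fusion scheme is described in the paper):

```python
import numpy as np

def embed_multiview(views, alphas, dim):
    """Fuse node-by-node view matrices by a weighted sum, then factorize
    the fused matrix with a truncated SVD to obtain node embeddings."""
    fused = sum(a * V for a, V in zip(alphas, views))
    U, s, _ = np.linalg.svd(fused)
    return U[:, :dim] * np.sqrt(s[:dim])

A = np.array([[0, 1, 1, 0],          # structure view: adjacency
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
W = np.array([[0, 2, 1, 0],          # edge-weight view: weighted adjacency
              [2, 0, 1, 0],
              [1, 1, 0, 3],
              [0, 0, 3, 0]], float)
F = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], float)  # node attributes
S = F @ F.T                          # attribute-similarity view

emb = embed_multiview([A, W, S], alphas=[1.0, 0.5, 1.0], dim=2)
```

No iterative neural training is involved; a single matrix factorization of the fused views yields one embedding per node, which mirrors the abstract's point that view fusion compensates for the sparsity of any single view.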

7.
This paper proposes a soft-sensor algorithm based on an RBF neural network for system-on-chip (SoC) targets, and implements the complete training and prediction algorithm on an OMAP-L137 dual-core SoC hardware platform. Given the SoC's limited computing speed and storage, concrete solutions are presented for the network structure, the weight-update mode and step size, and the data preprocessing scheme. Test results on the relevant data sets show that the proposed porting approach fully meets industrial application...

8.
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks to detect lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and thus for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel-virtual-machine implementation of the training algorithms leads to considerable speedup, especially when large network architectures and training sets are used.
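The partitioning scheme can be sketched without PVM: each "worker" evaluates the error and gradient on its own data partition, and a master sums the contributions (a toy linear-unit example with assumed sizes; by linearity, the summed partial gradients equal the full-batch gradient):

```python
import numpy as np

def partial_grad(w, X_part, y_part):
    """Error gradient and squared error contributed by one data partition
    (a linear unit with a sum-of-squares error, for simplicity)."""
    r = X_part @ w - y_part
    return X_part.T @ r, float(r @ r)

# toy data, split across 4 "workers" (with PVM each partition would live
# on a different networked machine)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true
parts = np.array_split(np.arange(200), 4)

w = np.zeros(5)
for _ in range(300):                          # synchronous gradient descent
    results = [partial_grad(w, X[idx], y[idx]) for idx in parts]
    grad = sum(g for g, _ in results)         # master sums partial gradients
    sse = sum(e for _, e in results)          # ... and partial errors
    w -= 0.002 * grad
```

Only the aggregated gradient and error cross the "network" once per epoch, which is the large-granularity, low-synchronization property the abstract highlights.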

9.

To address the limited filtering accuracy and low efficiency of existing neural network training based on nonlinear filtering algorithms, a training algorithm based on the cubature Kalman filter (CKF) is proposed. In the algorithm, a state-space model of the neural network is first constructed; the connection weights are then taken as the state variables of the system, and the cubature points generated by the third-degree Spherical-Radial rule are used to train the node connection weights. Theoretical analysis and simulation results verify the feasibility and effectiveness of the proposed algorithm.
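The third-degree Spherical-Radial rule mentioned above generates 2n equally weighted cubature points for an n-dimensional Gaussian; a minimal sketch of this CKF building block (generic, not the paper's full training loop, and the example mean/covariance are made up):

```python
import numpy as np

def cubature_points(mean, cov):
    """2n cubature points of the third-degree Spherical-Radial rule for a
    Gaussian N(mean, cov); every point carries equal weight 1/(2n)."""
    n = len(mean)
    S = np.linalg.cholesky(cov)                                # cov = S S^T
    xi = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)])  # +/- unit axes
    return mean + xi @ S.T                                     # shape (2n, n)

mean = np.array([1.0, -1.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
pts = cubature_points(mean, cov)
m_hat = pts.mean(axis=0)                            # recovers the mean exactly
P_hat = (pts - m_hat).T @ (pts - m_hat) / len(pts)  # recovers the covariance
```

By construction the points reproduce the Gaussian's mean and covariance exactly; in CKF-based training, the "state" being propagated through such points is the vector of network connection weights.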


10.
Research on Adaptive Learning Methods for Neural Networks   Cited by: 12 (self-citations: 0, other citations: 12)
This paper discusses the learning problem of artificial neural networks from several angles: the connection weights, the network topology, the learning parameters, and the activation characteristics of the neurons, and gives concrete implementations for the popular BP model. Experiments show that these methods are markedly effective at speeding up network convergence and optimizing network topology. The content presented provides examples, approaches, and a summary of ideas for improving and designing ANN learning algorithms.
