Similar Documents
1.
Functional networks are a generalization of neural networks. As with neural networks, there is as yet no systematic design method that can produce a near-optimal structure for a given problem. In view of this, entropy clustering is used to design functional networks: for each neuron in the network, an optimal search over the coexisting, mutually dependent basis functions and functional parameters realizes joint learning of the network structure and the functional parameters. A design method for functional networks based on entropy clustering is proposed; it effectively improves the convergence accuracy of functional networks and yields a more reasonable network structure.

2.
Functional networks are a generalization of neural networks, and there is as yet no unified, systematic design method that can produce a near-optimal structure for a given problem. To obtain a good network structure, this paper applies the idea of entropy clustering and proposes an entropy-clustering-based method for designing functional networks: for each neuron in the network, an optimal search over the coexisting, mutually dependent basis functions and functional parameters realizes joint learning of the network structure and the functional parameters. Comparative simulation experiments on approximating a nonlinear function show good approximation quality and fast convergence, demonstrating that the designed functional network effectively improves convergence accuracy and also yields a more reasonable network structure.

3.
Optimizing the Neuron Function Types of Functional Networks via Genetic Programming
Functional networks are a recently proposed and effective generalization of neural networks. Unlike neural networks, they handle general functional models: the neuron functions are not fixed but learnable, and there are no weights on the connections between processing units. As with neural networks, there is as yet no systematic design method that can produce a near-optimal structure for a given problem. Accordingly, the design of the whole functional network is decomposed into the design of its neurons one at a time; within this framework, a genetic-programming-based method for designing an individual neuron is proposed, which can optimize the type of the neuron function. Simulation experiments show that the method is effective and feasible, obtaining better generalization with a smaller network. A minimal sketch of the type-selection idea follows, replacing the paper's genetic-programming search with a greedy competition among candidate basis families; the target function and family names are illustrative assumptions.
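```python
import numpy as np

# Hedged stand-in for the paper's genetic-programming search: candidate
# basis-function families compete on least-squares fitness and the best
# "function type" for the neuron wins. Target and families are illustrative.
rng = np.random.default_rng(2)
x = np.linspace(-2.0, 2.0, 100)
y = np.exp(0.5 * x) + 0.01 * rng.standard_normal(100)  # data for one neuron

candidates = {
    "polynomial": [np.ones_like(x), x, x**2],
    "trigonometric": [np.ones_like(x), np.sin(x), np.cos(x)],
    "exponential": [np.ones_like(x), np.exp(0.5 * x)],
}

def fitness(bases):
    # Fit the neuron function as a linear combination of the family's bases.
    Phi = np.column_stack(bases)
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sum((Phi @ c - y) ** 2)  # sum of squared residuals

best = min(candidates, key=lambda name: fitness(candidates[name]))
print(best)  # "exponential": the type matching the data generator
```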

4.
An evolutionary functional-network method for modeling and function approximation is proposed. The method recasts functional-network modeling as an optimization search over structure and functional parameters: genetic programming designs the neuron functions, a global search is conducted over the complex solution space in which structure and parameters coexist and interact, and the network structure and parameters are learned jointly. Mixed basis functions are used to approximate the target function, departing from the usual practice of approximating with basis functions of a single type. Numerical simulations show that the proposed modeling and approximation method achieves high approximation accuracy.

5.
By reshaping the functional-neuron structure, a Sigma-Pi functional network model is established and a learning algorithm for it is given. Applying the Sigma-Pi functional network to the XOR problem by numerical analysis shows that the network has strong classification ability for certain problems. The advantage of the method is that it approximates high-dimensional functions using univariate basis functions, which is of significant practical value in function-approximation techniques. A minimal sketch of a Sigma-Pi unit on the XOR problem follows; the per-input basis {1, u} is an assumption, and the learning step is plain least squares rather than the paper's algorithm.
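```python
import numpy as np

# Sigma-Pi functional unit: each feature is a product of univariate basis
# functions of the inputs; with basis {1, u} per input this gives
# {1, x1, x2, x1*x2}. Basis choice is illustrative.
def sigma_pi_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

# XOR data, as in the abstract's classification experiment.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# The only free parameters are the linear coefficients, so ordinary least
# squares fits the unit exactly: XOR = x1 + x2 - 2*x1*x2 on {0, 1}^2.
coef, *_ = np.linalg.lstsq(sigma_pi_features(X), y, rcond=None)
print(np.round(coef, 6))              # [ 0.  1.  1. -2.]
print(sigma_pi_features(X) @ coef)    # [0. 1. 1. 0.] -- XOR reproduced
```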

6.
A constructive proof shows that, for a given polynomial function of degree r, a three-layer functional network can be explicitly constructed that approximates the polynomial to arbitrary accuracy. The number of hidden neurons in the constructed network depends only on the degree r of the polynomial basis functions and can be expressed in terms of r. This result offers theoretical guidance for the concrete construction and approximation behavior of functional networks, based on polynomial basis functions, that approximate arbitrary function classes.

7.
Fuzzy Counterpropagation Neural Networks and Their Applications
By defining the output functions of the competitive-layer neurons of the counterpropagation (CP) neural network as fuzzy membership functions, a fuzzy counterpropagation (FCP) neural network is proposed. The FCP network generalizes the CP network: it not only effectively overcomes the problems of CP but also has global function-approximation capability. Structurally, the FCP network is equivalent to the radial basis function (RBF) network; in fact, it is an RBF network, and moreover a fuzzy basis function network. An application of FCP to time-series prediction shows that FCP improves considerably on both CP and RBF in learning accuracy as well as generalization ability.

8.
肖倩, 周永权, 陈振. 《计算机科学》, 2013, 40(1): 203-207
A variant of the functional-neuron structure is presented, and a learning algorithm for functional neural networks with recursively computable basis functions is given. Using a recursive matrix pseudo-inverse solver, the algorithm adaptively adjusts the network's basis functions and ultimately solves jointly for the optimal functional-network structure and parameters. Numerical simulations show that the algorithm is adaptive and robust with high convergence accuracy, and should find wide application in real-time online identification. The sketch below conveys the recursive flavor of such an algorithm using standard recursive least squares over an assumed basis expansion; it is a stand-in, not the paper's exact update.
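```python
import numpy as np

# Standard recursive least squares over an illustrative basis expansion:
# coefficients are corrected sample by sample instead of recomputing the
# batch pseudo-inverse, which is what suits real-time online identification.
def basis(x):
    return np.array([1.0, x, x**2, np.sin(x)])  # illustrative basis

theta = np.zeros(4)       # basis coefficients
P = np.eye(4) * 1e3       # large initial "covariance" = weak prior

def rls_update(x, y, theta, P):
    phi = basis(x)
    k = P @ phi / (1.0 + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct with prediction error
    P = P - np.outer(k, phi @ P)           # rank-one update of P
    return theta, P

rng = np.random.default_rng(0)
for _ in range(500):
    x = rng.uniform(-2.0, 2.0)
    y = 0.5 + 2.0 * x - x**2 + 0.3 * np.sin(x)  # unknown system to identify
    theta, P = rls_update(x, y, theta, P)
print(np.round(theta, 3))  # converges to [0.5, 2.0, -1.0, 0.3]
```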

9.
Drawing on the structural characteristics of functional networks and the global search capability of genetic programming, the concept of generalized basis functions is proposed. The generalized basis functions are learned through an improved genetic-programming encoding, and the fitness function is designed with least squares, thereby determining the best approximating structural model of the functional network. Four numerical simulation examples show that the method is effective and feasible and generalizes well.

10.
A Self-Learning Controller Based on a Fuzzy Radial Basis Function Neural Network
A new self-learning neural network controller based on fuzzy radial basis functions (RBF) is proposed and applied to an electro-hydraulic servo system. Because RBF networks and fuzzy inference systems are functionally equivalent, the network's center values and the number of basis functions are chosen by a fuzzy empirical-value method. Unlike typical self-learning neural network controllers, the RBF controller takes the system's dynamic error as the network input and learns the dynamic inverse of the entire system, so control performance improves markedly. Simulations and experiments on an electro-hydraulic position servo system show that the scheme effectively improves the system's control accuracy and adaptive capability.

11.
Many numerical algorithms derived from classical function-approximation theory share common drawbacks: heavy computation, poor adaptability, and demanding requirements on models and data, which limit their practical use. Neural networks can compute the relationship between complex inputs and outputs, and thus have strong function-approximation capability. This paper presents the structure and learning procedure of the radial basis function neural network (RBFNN), focusing on its application to function approximation, solving systems of nonlinear equations, and scattered-data interpolation. Numerical examples are given using the MATLAB neural network toolbox and compared with BP networks. The results show that the RBFNN is a powerful tool for numerical computation; compared with traditional methods it is simple to program and practical, and packaged as software it would be of real value in engineering and scientific research. As one concrete instance of the function-approximation and scattered-data uses named here, the sketch below performs Gaussian-RBF interpolation (in Python rather than MATLAB; centers at the sample points, width picked by hand).
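```python
import numpy as np

# Gaussian-RBF interpolation of scattered 1-D data, with one center per
# sample point; sigma is an assumed, hand-picked width.
def rbf_fit(X, y, sigma):
    G = np.exp(-((X[:, None] - X[None, :]) / sigma) ** 2)  # Gram matrix
    return np.linalg.solve(G, y)                           # exact at the nodes

def rbf_eval(Xq, X, w, sigma):
    return np.exp(-((Xq[:, None] - X[None, :]) / sigma) ** 2) @ w

X = np.linspace(0.0, 2.0 * np.pi, 15)
y = np.sin(X)                                  # scattered samples of sin
w = rbf_fit(X, y, sigma=0.8)

Xq = np.linspace(0.0, 2.0 * np.pi, 200)
err = np.max(np.abs(rbf_eval(Xq, X, w, sigma=0.8) - np.sin(Xq)))
print(f"max approximation error: {err:.4f}")   # small between the nodes
```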

12.
Although artificial neural networks have taken their inspiration from natural neurological systems, they have largely ignored the genetic basis of neural functions. Indeed, evolutionary approaches have mainly assumed that neural learning is associated with the adjustment of synaptic weights. The goal of this paper is to use evolutionary approaches to find suitable computational functions that are analogous to natural sub-components of biological neurons and demonstrate that intelligent behavior can be produced as a result of this additional biological plausibility. Our model allows neurons, dendrites, and axon branches to grow or die so that synaptic morphology can change and affect information processing while solving a computational problem. The compartmental model of a neuron consists of a collection of seven chromosomes encoding distinct computational functions inside the neuron. Since the equivalent computational functions of neural components are very complex and in some cases unknown, we have used a form of genetic programming known as Cartesian genetic programming (CGP) to obtain these functions. We start with a small random network of soma, dendrites, and neurites that develops during problem solving by repeatedly executing the seven chromosomal programs that have been found by evolution. We have evaluated the learning potential of this system in the context of a well-known single agent learning problem, known as Wumpus World. We also examined the harder problem of learning in a competitive environment for two antagonistic agents, in which both agents are controlled by independent CGP computational networks (CGPCN). Our results show that the agents exhibit interesting learning capabilities.

13.
Methods of stabilization as applied to Hopfield-type continuous neural networks with a unique equilibrium point are considered. These methods permit the design of stable networks where the elements of the interconnection matrix and nonlinear activation functions of separate neurons vary with time. For stabilization with a variable interconnection matrix it is suggested that a new second layer of neurons be introduced to the initial single-layer network and some additional connections be added between the new and old layers. This approach gives us a system with a unique equilibrium point that is globally asymptotically stable, i.e. the entire space serves as the domain of attraction of this point, and the stability does not depend on the interconnection matrix of the system. In the case of the variable activation functions, some results from a recent investigation of the absolute stability problem for neural networks are presented, along with some recommendations.

14.
In recent years, both multilayer perceptrons and networks of spiking neurons have been used in applications ranging from detailed models of specific cortical areas to image processing. A more challenging application is to find solutions to functional equations in order to gain insights into underlying phenomena. Finding the roots of real-valued monotonically increasing function mappings is the solution to a particular class of functional equation. Furthermore, spiking neural network approaches to solving problems described by functional equations may be a useful tool for providing important insights into how different regions of the brain may coordinate signaling within and between modalities, thus providing a possible basis for constructing a theory of brain function. In this letter, we present for the first time a spiking neural network architecture, based on integrate-and-fire units and delays, that is capable of calculating the functional or iterative root of nonlinear functions by solving a particular class of functional equation.

15.
A model of a human neural knowledge processing system is presented that suggests the following. First, an entity in the outside world tends to be locally encoded in neural networks so that the conceptual information structure is mirrored in its physical implementation. Second, the knowledge of problem solving is implemented in a quite implicit way in the internal structure of the neural network (a functional group of associated hidden neurons and their connections to entity neurons), not in individual neurons or connections. Third, the knowledge system is organized and implemented in a modular fashion in neural networks according to the local specialization of problem solving, where a module of the neural network implements an interrelated group of knowledge such as a schema, and different modules have similar processing mechanisms but differ in their input and output patterns. A neural network module can be tuned just as a schema structure can be adapted for changing environments. Three experiments were conducted to try to validate the suggested cognitive-engineering-based knowledge structure in neural networks through computer simulation. The experiments, which were based on a task of modulo arithmetic, provided some insights into the plausibility of the suggested model of a neural knowledge processing system.

16.
A Learning Algorithm for Process Neural Networks Based on Orthogonal Basis Expansion
In process neural networks, both the inputs and the connection weights may be time-varying functions, and the process neuron adds an aggregation operator over time, giving the network the ability to process information in both space and time. Considering the complexity of the process neural network's aggregation over time, this paper proposes a learning algorithm based on expansion in an orthogonal function basis. An appropriate orthogonal basis is chosen in the network's input function space, both the input functions and the network weight functions are expressed as expansions in this basis, and the orthogonality of the basis functions is exploited to simplify the process neuron's aggregation over time. In application, the algorithm reduces the computational complexity of process neural networks and improves learning efficiency and adaptability to practical problems. Its effectiveness is verified on a rotating-machinery fault-diagnosis problem and on simulating recovery ratios in oilfield development. The simplification can be seen directly: with an orthonormal basis, expanding x(t) = Σ aᵢφᵢ(t) and w(t) = Σ bᵢφᵢ(t) collapses the time aggregation ∫ x(t)w(t) dt to the dot product Σ aᵢbᵢ, as in the sketch below (cosine basis and target functions are illustrative).
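```python
import numpy as np

# With an orthonormal basis on [0, 1], the process neuron's aggregation
# over time, the integral of x(t)*w(t), equals the dot product of the two
# coefficient vectors. Basis and functions below are illustrative.
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def inner(f, g):
    # trapezoidal quadrature for the L2 inner product on [0, 1]
    return dt * (np.sum(f * g) - 0.5 * (f[0] * g[0] + f[-1] * g[-1]))

def phi(i):
    # orthonormal cosine basis: phi_0 = 1, phi_i = sqrt(2) * cos(i*pi*t)
    return np.ones_like(t) if i == 0 else np.sqrt(2.0) * np.cos(i * np.pi * t)

N = 8
x_t, w_t = np.exp(t), np.cos(t)                       # input and weight functions
a = np.array([inner(x_t, phi(i)) for i in range(N)])  # expansion of x(t)
b = np.array([inner(w_t, phi(i)) for i in range(N)])  # expansion of w(t)

print(inner(x_t, w_t))  # direct time aggregation, ~1.378
print(a @ b)            # dot product of coefficients: agrees to ~4 decimals
```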

17.
Interval Wavelet Neural Networks (I): Theory and Implementation
高协平, 张钹. 《软件学报》, 1998, 9(3): 217-221
This paper proposes a new theory for feedforward neural network learning: interval wavelet neural networks. The main features distinguishing this work from earlier ones are: (1) interval wavelet spaces are used as the network's underlying learning space, overcoming the mismatch between a network's basis space and the space to which the signal being learned belongs; (2) interval wavelet theory avoids the non-smoothness introduced when the learned signal is extended to fit the network's basis space, so fewer neurons are needed, an effect that is especially pronounced in high-dimensional learning; (3) the activation functions of the neural units need no longer all be the same function.

18.
Functional Networks
In this letter we present functional networks. Unlike neural networks, in these networks there are no weights associated with the links connecting neurons, and the internal neuron functions are not fixed but learnable. These functions are not arbitrary, but subject to strong constraints to satisfy the compatibility conditions imposed by the existence of multiple links going from the last input layer to the same output units. In fact, writing the values of the output units in different forms, by considering these different links, a system of functional equations is obtained. When this system is solved, the number of degrees of freedom of these initially multidimensional functions is considerably reduced. One example illustrates the process and shows that multidimensional functions can be reduced to functions with a single argument. To learn the resulting functions, a method based on minimizing a least squares error function is used, which, unlike the functions used in neural networks, has a single minimum.
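A minimal sketch of the least-squares learning step for a functional network of the separable form z = f1(x) + f2(y), with each neuron function written as a linear combination of known bases so the error is quadratic with a single minimum; the polynomial bases and the target are illustrative assumptions, not the letter's example.

```python
import numpy as np

# Separable functional network z = f1(x) + f2(y): each neuron function is
# a linear combination of known bases (illustrative polynomials), so the
# least-squares error is quadratic in the coefficients and has one minimum.
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
z = x**3 - x + y**2                      # target, representable in the bases

# f1 uses {1, x, x^2, x^3}; f2 uses {y, y^2, y^3}. f2's constant term is
# dropped: only f1 + f2 is observable, so a shared offset is unidentifiable.
Phi = np.column_stack([x**k for k in range(4)] + [y**k for k in range(1, 4)])
c, *_ = np.linalg.lstsq(Phi, z, rcond=None)

rmse = np.sqrt(np.mean((Phi @ c - z) ** 2))
print(np.round(c, 6))        # recovers [0, -1, 0, 1, 0, 1, 0]
print(f"RMSE: {rmse:.2e}")   # essentially zero
```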
