19 similar documents found; search took 234 ms.
1.
张俊 《数字社区&智能家居》2012,(1X):673-676
Functional networks are a generalization of neural networks. As with neural networks, no systematic design method yet exists that yields a near-optimal structure for a given problem. To address this, the idea of entropy clustering is used to design functional networks: because each neuron's basis functions and functional parameters coexist and influence one another, an optimal search over both realizes joint learning of the network structure and the functional parameters. A method for designing functional networks based on entropy clustering is proposed; it effectively improves the convergence accuracy of functional networks and yields a more reasonable network structure.
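As a rough illustration of the joint search this abstract describes, the sketch below enumerates candidate basis families for a single functional-network neuron and fits each family's coefficients by least squares, keeping the best pair. The entropy-clustering criterion itself is not reproduced, and the two basis families are illustrative assumptions:

```python
import numpy as np

# Candidate basis families for one functional-network neuron.
# (Illustrative stand-in for the paper's entropy-clustering search:
# we simply enumerate families and fit coefficients per family.)
BASES = {
    "poly": lambda x: np.vstack([x**k for k in range(4)]).T,
    "trig": lambda x: np.vstack([np.ones_like(x), np.sin(x),
                                 np.cos(x), np.sin(2 * x)]).T,
}

def fit_neuron(x, y):
    """Jointly choose a basis family and its coefficients by least squares."""
    best = None
    for name, phi in BASES.items():
        A = phi(x)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = float(np.mean((A @ coef - y) ** 2))
        if best is None or err < best[2]:
            best = (name, coef, err)
    return best

x = np.linspace(-np.pi, np.pi, 100)
y = np.sin(x) + 0.5 * np.cos(x)      # target is trigonometric
name, coef, err = fit_neuron(x, y)   # the trig family fits exactly
```

Here the structure choice (which family) and the parameters (the coefficients) are learned together, which is the flavor of the joint search the abstract refers to.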
2.
3.
Optimizing neuron function types in functional networks via genetic programming (total citations: 1; self-citations: 0; citations by others: 1)
Functional networks are a recently proposed and effective generalization of neural networks. Unlike neural networks, they handle general functional models: the neuron functions are not fixed but learnable, and there are no weights between the processing units. As with neural networks, no systematic design method yet exists that yields a near-optimal structure for a given problem. To address this, the design of the whole functional network is decomposed into the design of its neurons one at a time; within this framework, a genetic-programming-based method for designing individual neurons is proposed that optimizes the type of each neuron function. Simulation experiments show the method is effective and feasible, achieving satisfactory generalization with a smaller network.
4.
5.
6.
A constructive proof shows that, for a given polynomial function of order r, a three-layer functional network can be constructed explicitly that approximates the polynomial to arbitrary accuracy. The number of middle-layer neurons in the constructed network depends only on the order r of the polynomial basis functions and can be expressed in terms of r. This result offers theoretical guidance for the concrete construction and approximation behavior of functional networks with polynomial basis functions approximating arbitrary function classes.
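The constructive result can be mirrored in a few lines: a hypothetical three-layer network whose hidden layer holds exactly r+1 monomial basis neurons reproduces a degree-r polynomial, so the hidden-neuron count depends only on r. This is a sketch of the statement, not the paper's actual construction:

```python
import numpy as np

def polynomial_functional_net(coeffs):
    """Three-layer functional network realizing a degree-r polynomial exactly.
    Hidden layer: r + 1 neurons with basis functions x**k (k = 0..r);
    output layer: a linear combination using the polynomial's coefficients."""
    r = len(coeffs) - 1
    hidden = [lambda x, k=k: x**k for k in range(r + 1)]
    def net(x):
        return sum(c * h(x) for c, h in zip(coeffs, hidden))
    return net, r + 1  # the network and its hidden-neuron count

p = [1.0, -2.0, 0.0, 3.0]                     # p(x) = 1 - 2x + 3x^3
net, n_hidden = polynomial_functional_net(p)  # n_hidden = r + 1 = 4
```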
7.
8.
9.
Building on the structural features of functional networks and the global search capability of genetic programming, the concept of generalized basis functions is proposed. The generalized basis functions are learned through an improved encoding scheme for genetic programming, and the fitness function is designed using least squares, thereby determining the best-approximating structural model of the functional network. Four numerical simulation examples show that the method is effective and feasible and generalizes well.
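The least-squares fitness mentioned here can be sketched as follows; the candidate basis sets stand in for GP-evolved generalized bases, whose encoding is not modeled:

```python
import numpy as np

def lsq_fitness(basis_funcs, x, y):
    """Least-squares fitness for a candidate set of (generalized) basis
    functions: fit the coefficients linearly and return the residual error
    that a GP search would minimize. Sketch only; the GP encoding and
    operators are not shown."""
    A = np.column_stack([f(x) for f in basis_funcs])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.mean((A @ coef - y) ** 2))

x = np.linspace(0.1, 2.0, 50)
y = 2.0 * np.log(x) + 3.0
good = [np.log, lambda t: np.ones_like(t)]  # spans the target exactly
bad = [np.sin, np.cos]                      # cannot represent the target
```

A GP search over basis sets would rank `good` above `bad` because its least-squares residual is essentially zero.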
10.
11.
Many numerical algorithms derived from classical function approximation theory share common drawbacks: heavy computation, poor adaptability, and demanding requirements on models and data, which limit their practical use. Neural networks can compute the relationship between complex inputs and outputs and therefore have strong function-approximation capability. This paper presents the structure and learning procedure of the radial basis function neural network (RBFNN), focusing on its application to function approximation, solving systems of nonlinear equations, and interpolating scattered data; numerical examples built with the MATLAB Neural Network Toolbox are given and compared with BP networks. The results show that the RBFNN is a powerful tool for numerical computation: compared with traditional methods it is simple to program and practical, and packaged as software it would be of considerable value in engineering and scientific research.
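A minimal NumPy analogue of the exact-interpolation RBF design (the idea behind MATLAB's `newrbe`) illustrates the function-approximation use case; the centres, kernel width, and data are illustrative choices:

```python
import numpy as np

def rbf_fit(x, y, sigma=0.5):
    """Exact-interpolation Gaussian RBF network: one hidden unit centred on
    each training point, output weights solved by a linear system.
    Illustrative sketch of the RBFNN approach, not the paper's code."""
    def phi(a, c):
        # Gaussian kernel matrix between query points a and centres c
        return np.exp(-((a[:, None] - c[None, :]) ** 2) / (2.0 * sigma**2))
    w = np.linalg.solve(phi(x, x), y)
    return lambda q: phi(q, x) @ w

x = np.linspace(0.0, 2.0 * np.pi, 15)
y = np.sin(x)
model = rbf_fit(x, y)
train_err = float(np.max(np.abs(model(x) - y)))  # interpolates the data
```

Because the kernel matrix is square (one centre per sample), the network reproduces the training data exactly and interpolates smoothly in between.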
12.
Although artificial neural networks have taken their inspiration from natural neurological systems, they have largely ignored the genetic basis of neural functions. Indeed, evolutionary approaches have mainly assumed that neural learning is associated with the adjustment of synaptic weights. The goal of this paper is to use evolutionary approaches to find suitable computational functions that are analogous to natural sub-components of biological neurons and demonstrate that intelligent behavior can be produced as a result of this additional biological plausibility. Our model allows neurons, dendrites, and axon branches to grow or die so that synaptic morphology can change and affect information processing while solving a computational problem. The compartmental model of a neuron consists of a collection of seven chromosomes encoding distinct computational functions inside the neuron. Since the equivalent computational functions of neural components are very complex and in some cases unknown, we have used a form of genetic programming known as Cartesian genetic programming (CGP) to obtain these functions. We start with a small random network of soma, dendrites, and neurites that develops during problem solving by repeatedly executing the seven chromosomal programs that have been found by evolution. We have evaluated the learning potential of this system in the context of a well-known single agent learning problem, known as Wumpus World. We also examined the harder problem of learning in a competitive environment for two antagonistic agents, in which both agents are controlled by independent CGP computational networks (CGPCN). Our results show that the agents exhibit interesting learning capabilities.
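The core CGP encoding, stripped of the developmental neuron model, fits in a few lines: a genotype is a list of function-node triples evaluated feed-forward. This is a toy sketch, not the authors' CGPCN:

```python
import operator

# Minimal Cartesian genetic programming (CGP) evaluator. A genotype is a
# list of (function_index, input_a, input_b) nodes, indexed after the
# program inputs, plus a list of output indices.
FUNCS = [operator.add, operator.sub, operator.mul]

def cgp_eval(genome, outputs, inputs):
    vals = list(inputs)                    # vals[0..n-1] are program inputs
    for f_idx, a, b in genome:             # each node reads earlier values
        vals.append(FUNCS[f_idx](vals[a], vals[b]))
    return [vals[i] for i in outputs]

# Encode (x + y) * x, with inputs x = vals[0], y = vals[1]:
genome = [(0, 0, 1),   # node 2: x + y
          (2, 2, 0)]   # node 3: (x + y) * x
out = cgp_eval(genome, [3], [3.0, 4.0])   # → [21.0]
```

In the paper, seven such chromosomal programs, found by evolution, are executed repeatedly to grow and run the neural structure; here a single genome is evaluated once.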
13.
Evgeny E. Dudnikov, Cybernetics and Systems (控制论与系统), 2013, 44(4): 325-340
Methods of stabilization as applied to Hopfield-type continuous neural networks with a unique equilibrium point are considered. These methods permit the design of stable networks where the elements of the interconnection matrix and nonlinear activation functions of separate neurons vary with time. For stabilization with a variable interconnection matrix it is suggested that a new second layer of neurons be introduced to the initial single-layer network and some additional connections be added between the new and old layers. This approach gives us a system with a unique equilibrium point that is globally asymptotically stable, i.e. the entire space serves as the domain of attraction of this point, and the stability does not depend on the interconnection matrix of the system. In the case of the variable activation functions, some results from a recent investigation of the absolute stability problem for neural networks are presented, along with some recommendations.
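A simple numerical check of the kind of global stability discussed here: for dx/dt = -x + W·tanh(x) + b with the spectral norm of W below 1 (a standard sufficient condition, not the paper's two-layer construction), trajectories from different initial states converge to the same unique equilibrium:

```python
import numpy as np

def simulate_hopfield(W, b, x0, dt=0.01, steps=5000):
    """Euler integration of a continuous Hopfield-type network
    dx/dt = -x + W @ tanh(x) + b.  Since tanh is 1-Lipschitz, ||W||_2 < 1
    is a classical sufficient condition for a unique, globally
    asymptotically stable equilibrium (illustrative check only)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + W @ np.tanh(x) + b)
    return x

W = np.array([[0.2, -0.3],
              [0.1,  0.4]])        # spectral norm well below 1
b = np.array([0.5, -0.2])
xa = simulate_hopfield(W, b, [5.0, -5.0])
xb = simulate_hopfield(W, b, [-3.0, 2.0])  # different start, same limit
```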
14.
In recent years, both multilayer perceptrons and networks of spiking neurons have been used in applications ranging from detailed models of specific cortical areas to image processing. A more challenging application is to find solutions to functional equations in order to gain insights into underlying phenomena. Finding the roots of real-valued monotonically increasing function mappings is the solution to a particular class of functional equation. Furthermore, spiking neural network approaches to solving problems described by functional equations may be a useful tool for providing important insights into how different regions of the brain may co-ordinate signaling within and between modalities, thus providing a possible basis for constructing a theory of brain function. In this letter, we present for the first time a spiking neural network architecture based on integrate-and-fire units and delays that is capable of calculating the functional or iterative root of nonlinear functions by solving a particular class of functional equation.
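The underlying numerical problem named in the abstract, finding the root of a monotonically increasing real function, has a plain (non-spiking) reference solution by bisection; the spiking architecture itself is not reproduced here:

```python
def monotone_root(f, lo, hi, tol=1e-10):
    """Bisection root-finding for a monotonically increasing f on [lo, hi].
    A conventional reference implementation of the problem the spiking
    architecture addresses, not a spiking model."""
    assert f(lo) <= 0.0 <= f(hi)       # a root is bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r = monotone_root(lambda x: x**3 - 2.0, 0.0, 2.0)  # the cube root of 2
```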
15.
A model of a human neural knowledge processing system is presented that suggests the following. First, an entity in the outside world tends to be locally encoded in neural networks so that the conceptual information structure is mirrored in its physical implementation. Second, the knowledge of problem solving is implemented in a quite implicit way in the internal structure of the neural network (a functional group of associated hidden neurons and their connections to entity neurons), not in individual neurons or connections. Third, the knowledge system is organized and implemented in a modular fashion in neural networks according to the local specialization of problem solving, where a module of the neural network implements an inter-related group of knowledge such as a schema, and different modules have similar processing mechanisms but differ in their input and output patterns. A neural network module can be tuned just as a schema structure can be adapted for changing environments. Three experiments were conducted to try to validate the suggested cognitive-engineering-based knowledge structure in neural networks through computer simulation. The experiments, which were based on a task of modulo arithmetic, provided some insights into the plausibility of the suggested model of a neural knowledge processing system.
16.
Behaviour & Information Technology, 2012, 31(5): 403-418
17.
A learning algorithm for process neural networks based on orthogonal function basis expansion (total citations: 27; self-citations: 1; citations by others: 27)
Both the inputs and the connection weights of a process neural network may be time-varying functions, and each process neuron adds an aggregation operator over time, giving the network the capacity to process information in both time and space. Considering the complexity of the temporal aggregation performed by process neural networks, this paper proposes a learning algorithm based on expansion in an orthogonal function basis. A suitable orthogonal basis is chosen in the network's input function space, the input functions and the network's weight functions are both expressed as expansions in that basis, and the orthogonality of the basis functions is exploited to simplify the process neurons' temporal aggregation. Applications show that the algorithm reduces the computational complexity of process neural networks and improves learning efficiency and adaptability to practical problems. Its effectiveness is verified on a rotating-machinery fault-diagnosis problem and on simulating the recovery ratio during oil-reservoir development.
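The algebraic core of the algorithm can be sketched directly: when the input function and the weight function are expanded in an orthonormal basis (Legendre polynomials here, as an illustrative choice), the temporal aggregation integral collapses to a dot product of expansion coefficients:

```python
import numpy as np
from numpy.polynomial import legendre as L

def coeffs(f, n, quad=200):
    """Coefficients of f in the orthonormal Legendre basis on [-1, 1],
    e_k(t) = sqrt((2k+1)/2) * P_k(t), via Gauss-Legendre quadrature."""
    t, wq = L.leggauss(quad)
    return np.array([np.sum(wq * f(t) * np.sqrt(k + 0.5)
                            * L.legval(t, [0.0] * k + [1.0]))
                     for k in range(n)])

x_in = lambda t: np.sin(2.0 * t)   # time-varying input signal
w_fn = lambda t: t**3 - 0.5 * t    # time-varying weight function

a = coeffs(x_in, 12)
b = coeffs(w_fn, 12)
agg_coeff = float(a @ b)           # aggregation in coefficient space

t, wq = L.leggauss(200)
agg_direct = float(np.sum(wq * x_in(t) * w_fn(t)))  # direct integral <x, w>
```

Orthonormality gives Parseval's identity, so the O(quadrature) integral per neuron is replaced by an O(n) dot product once the coefficients are known, which is the simplification the abstract describes.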
18.
Interval wavelet neural networks (I): theory and implementation (total citations: 16; self-citations: 2; citations by others: 16)
This paper proposes a new theory for feedforward neural network learning: interval wavelet neural networks. Unlike earlier work, it has the following main features: (1) interval wavelet spaces serve as the network's learning basis space, overcoming the mismatch between a network's basis space and the space to which the signal being learned belongs; (2) by using interval wavelet theory, the non-smoothness previously introduced when the learned signal was extended to fit the network's basis space is avoided, so fewer neurons are needed, an effect that is especially pronounced in high-dimensional learning; (3) the activation functions of the neural units are no longer all the same function.
19.
Functional Networks (total citations: 19; self-citations: 0; citations by others: 19)
In this letter we present functional networks. Unlike neural networks, in these networks there are no weights associated with the links connecting neurons, and the internal neuron functions are not fixed but learnable. These functions are not arbitrary, but subject to strong constraints to satisfy the compatibility conditions imposed by the existence of multiple links going from the last input layer to the same output units. In fact, writing the values of the output units in different forms, by considering these different links, a system of functional equations is obtained. When this system is solved, the number of degrees of freedom of these initially multidimensional functions is considerably reduced. One example illustrates the process and shows that multidimensional functions can be reduced to functions with a single argument. To learn the resulting functions, a method based on minimizing a least squares error function is used, which, unlike the functions used in neural networks, has a single minimum.
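A hedged sketch of the learning step described here: for a separable model u(x, y) = f(x) + g(y), with the unknown neuron functions f and g expanded in small polynomial bases (illustrative choices), the fit is linear least squares and therefore has a single minimum, as the letter notes:

```python
import numpy as np

# Learning a separable functional network u(x, y) = f(x) + g(y).
# f and g are expanded in monomial bases; one shared constant column
# resolves the f/g constant-shift ambiguity.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 200)
Y = rng.uniform(-1.0, 1.0, 200)
U = X**2 + 3.0 * Y                  # ground truth: f(x) = x^2, g(y) = 3y

def design(x, y, deg=3):
    cols = [np.ones_like(x)]                      # shared constant term
    cols += [x**k for k in range(1, deg + 1)]     # basis for f
    cols += [y**k for k in range(1, deg + 1)]     # basis for g
    return np.column_stack(cols)

A = design(X, Y)
coef, *_ = np.linalg.lstsq(A, U, rcond=None)      # single-minimum fit
mse = float(np.mean((A @ coef - U) ** 2))
```

Because the unknowns enter linearly, the error surface is a convex quadratic, which is the "single minimum" property contrasted with neural-network training.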