Similar Documents
A total of 20 similar documents were found (search time: 828 ms).
1.
Exponential synchronization of stochastic chaotic delayed neural networks   (Total citations: 1; self-citations: 1; citations by others: 0)
This paper studies the exponential synchronization of neural networks with time-varying delays subject to stochastic perturbations. Based on Lyapunov stability theory combined with linear matrix inequality (LMI) techniques, a delayed state feedback controller is constructed so that the stochastically perturbed drive and response systems achieve exponential synchronization. New criteria for the exponential synchronization of stochastic delayed neural networks are given, and simulations verify the effectiveness of the proposed method.
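As a hedged illustration only (the symbols and model below are generic assumptions, not the paper's exact equations), the drive-response setup behind such synchronization results typically reads

\[ \dot{x}(t) = -Cx(t) + Af\big(x(t)\big) + Bf\big(x(t-\tau(t))\big), \]
\[ dy(t) = \big[-Cy(t) + Af\big(y(t)\big) + Bf\big(y(t-\tau(t))\big) + u(t)\big]\,dt + \sigma\big(t, e(t), e(t-\tau(t))\big)\,dw(t), \]

where e(t) = y(t) - x(t) is the synchronization error and w(t) a Brownian motion. A delayed state feedback controller of the form u(t) = K_1 e(t) + K_2 e(t-\tau(t)) is then chosen, via Lyapunov theory and LMIs, so that \mathbb{E}\|e(t)\|^2 \le M e^{-\lambda t} for some M, \lambda > 0, i.e. exponential synchronization in mean square.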

2.
Robust stability of a family of nonlinear stochastic delay systems   (Total citations: 4; self-citations: 0; citations by others: 4)
沈轶, 廖晓昕. 《自动化学报》 (Acta Automatica Sinica), 1999, 25(4): 537-542
This paper studies the exponential stability of a family of uncertain nonlinear stochastic delay systems and establishes delay-dependent sufficient criteria for their mean-square exponential stability and almost-sure exponential stability. These sufficient conditions are then applied to a class of uncertain stochastic delayed neural networks, yielding practical criteria for their exponential stability. A numerical example illustrates the effectiveness of the given criteria.
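For reference, the two stability notions named above are standard (generic notation, not the paper's): the trivial solution is mean-square exponentially stable if

\[ \limsup_{t \to \infty} \frac{1}{t} \log \mathbb{E}\,\|x(t;\xi)\|^{2} < 0, \]

and almost surely exponentially stable if

\[ \limsup_{t \to \infty} \frac{1}{t} \log \|x(t;\xi)\| < 0 \quad \text{almost surely.} \]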

3.
Global exponential stability of stochastic delayed neural networks   (Total citations: 2; self-citations: 0; citations by others: 2)
The asymptotic behavior of general stochastic systems is first discussed. Then, taking the characteristics of neural networks into account, Lyapunov's second method is applied to analyze the global exponential stability of a class of stochastic delayed neural networks, yielding new, easy-to-verify algebraic criteria for their almost-sure exponential stability. Simulation examples are also provided.

4.
Stability of nonlinear stochastic delay systems and applications   (Total citations: 1; self-citations: 1; citations by others: 0)
This paper first considers a family of uncertain nonlinear stochastic delay systems and establishes delay-independent sufficient criteria for their mean-square exponential stability and almost-sure exponential stability. These sufficient conditions are then applied to a class of uncertain stochastic delayed neural networks, yielding practical criteria for the exponential stability of such networks. The results generalize some recent results in the literature, and a numerical example illustrates the effectiveness of the given criteria.

5.
This paper discusses the existence, uniqueness, and global exponential stability of the equilibrium point of a class of generalized recurrent neural networks with time-varying delays. The model includes delayed Hopfield neural networks, delayed cellular neural networks, and delayed Cohen-Grossberg neural networks as special cases. Based on differential inequality techniques, Brouwer's fixed point theorem, and suitably constructed Lyapunov functions, new sufficient conditions are obtained that guarantee the existence, uniqueness, and global exponential stability of the equilibrium point. The new conditions do not require the activation functions to be differentiable, bounded, or monotonic, and they relax other restrictions as well. Two simulation examples demonstrate the effectiveness of the results.
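As a hedged sketch of the kind of generalized model meant here (generic symbols, not the paper's exact formulation), a delayed Cohen-Grossberg network has the form

\[ \dot{x}_i(t) = -a_i\big(x_i(t)\big)\Big[ b_i\big(x_i(t)\big) - \sum_{j=1}^{n} c_{ij} f_j\big(x_j(t)\big) - \sum_{j=1}^{n} d_{ij} g_j\big(x_j(t-\tau_{ij}(t))\big) - I_i \Big], \qquad i = 1,\dots,n, \]

and delayed Hopfield and cellular networks are recovered by specializing the amplification functions a_i and the behaved functions b_i (for example, a_i \equiv 1 and b_i linear).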

6.
When neural networks are used for optimization, the ideal situation is a single globally asymptotically stable equilibrium point that is approached at an exponential rate, which reduces the computation time required by the network. This paper studies the global asymptotic stability of recurrent neural networks with time-varying delays. The model under study is first transformed into a descriptor system; then the Lyapunov-Krasovskii stability theorem, linear matrix inequality (LMI) techniques, the S-procedure, and algebraic inequality methods are used to derive new sufficient conditions ensuring the asymptotic stability of time-varying delayed recurrent neural networks. The conditions are then applied to neural networks with constant delays and to delayed cellular neural networks, yielding corresponding global asymptotic stability conditions. Theoretical analysis and numerical simulation show that the results provide new stability criteria for delayed recurrent neural networks.
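The paper's own delay-dependent conditions are not reproduced here; as a minimal hedged sketch of how an LMI-based stability test is checked numerically, the following Python snippet (assuming the cvxpy package and made-up matrices A, B) verifies the classical delay-independent criterion for the linear delayed system x'(t) = A x(t) + B x(t - tau):

# A hedged sketch: checking a classical delay-independent LMI stability
# condition for the linear delayed system  x'(t) = A x(t) + B x(t - tau).
# Requires the cvxpy package; the matrices A, B below are made-up examples.
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 0.5], [0.3, -1.5]])   # hypothetical system matrix
B = np.array([[0.2, -0.1], [0.1, 0.3]])    # hypothetical delayed-term matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)

# The Lyapunov-Krasovskii functional V = x'Px + integral of x'Qx leads to:
# [[A'P + PA + Q, PB], [B'P, -Q]] < 0  with  P > 0, Q > 0.
M = cp.bmat([[A.T @ P + P @ A + Q, P @ B],
             [B.T @ P,             -Q   ]])

eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               M << -eps * np.eye(2 * n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible (delay-independent stability certified):",
      prob.status == cp.OPTIMAL)

Feasibility of the LMI, i.e. existence of P, Q > 0 making the block matrix negative definite, certifies asymptotic stability for every constant delay tau >= 0; delay-dependent conditions of the kind derived in the paper trade this simplicity for reduced conservatism.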

7.
Global exponential stability of impulsive delayed Hopfield neural networks   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper studies the global exponential stability of a class of delayed Hopfield neural networks under impulsive control. Using Lyapunov-Krasovskii stability theory and the Halanay inequality, a suitable Lyapunov functional is constructed and inequality techniques are used to derive a sufficient condition ensuring the global exponential stability of the delayed Hopfield neural network under impulsive control; the exponential convergence rate of the system is also estimated. A simplified sufficient condition is given to ease computation and verification. Finally, a numerical example with simulation confirms the effectiveness and feasibility of the conclusions.
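For reference, the Halanay inequality invoked above is the following classical result (generic notation): if

\[ D^{+}v(t) \le -a\,v(t) + b \sup_{t-\tau \le s \le t} v(s), \qquad t \ge t_0, \]

with constants a > b > 0, then

\[ v(t) \le \Big( \sup_{t_0-\tau \le s \le t_0} v(s) \Big) e^{-\lambda (t-t_0)}, \qquad t \ge t_0, \]

where \lambda > 0 is the unique positive root of \lambda = a - b e^{\lambda \tau}; this is what yields the exponential convergence rate estimate.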

8.
For the global asymptotic stability problem of a class of BAM neural networks with interval delays and stochastic disturbances, a new sufficient stability condition is obtained by constructing a suitable Lyapunov-Krasovskii functional, applying stochastic analysis and the free-weighting matrix method, and taking the range of the delay interval into account. The condition guarantees that the delayed BAM neural network is globally asymptotically stable in the mean square, and it applies to both fast and slow delays, giving it a wider range of applicability. Finally, a simulation example demonstrates the effectiveness of the theorem.
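A representative Lyapunov-Krasovskii functional for an interval time-varying delay \tau_1 \le \tau(t) \le \tau_2 (a hedged generic form, not the paper's exact functional) is

\[ V(x_t) = x^{T}(t)Px(t) + \int_{t-\tau_1}^{t} x^{T}(s)Q_1 x(s)\,ds + \int_{t-\tau_2}^{t-\tau_1} x^{T}(s)Q_2 x(s)\,ds + \int_{-\tau_2}^{-\tau_1}\!\int_{t+\theta}^{t} \dot{x}^{T}(s) R\, \dot{x}(s)\,ds\,d\theta, \]

with P, Q_1, Q_2, R positive definite; free-weighting matrices are then introduced when bounding \mathcal{L}V to avoid conservative estimates of the cross terms involving x(t-\tau(t)).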

9.
For the global asymptotic stability problem of a class of BAM neural networks with interval delays and stochastic disturbances, a new sufficient stability condition is obtained by constructing a suitable Lyapunov-Krasovskii functional, applying stochastic analysis and the free-weighting matrix method, and taking the range of the delay interval into account. The condition guarantees that the delayed BAM neural network is globally asymptotically stable in the mean square, and it applies to both fast and slow delays, giving it a wider range of applicability. Finally, a simulation example demonstrates the effectiveness of the theorem.

10.
Mean-square exponential stability analysis of the equilibrium point of stochastic cellular neural networks   (Total citations: 1; self-citations: 0; citations by others: 1)
The mean-square exponential stability of the equilibrium point of stochastic delayed neural networks with impulses is studied by means of the Lyapunov functional method. The main results are obtained with the help of inequalities and stochastic analysis theory. Finally, a numerical example demonstrates the effectiveness of the results.

11.
In this paper, we study impulsive stochastic Cohen–Grossberg neural networks with mixed delays. By establishing an L-operator differential inequality with mixed delays and using the properties of the M-cone and stochastic analysis techniques, we obtain some sufficient conditions ensuring the exponential p-stability of the impulsive stochastic Cohen–Grossberg neural networks with mixed delays. These results generalize several previously known results and remove some restrictions on the neural networks. Two examples are also discussed to illustrate the effectiveness of the obtained results.
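For reference, exponential p-stability as used above is the standard notion (generic notation): there exist constants M \ge 1 and \lambda > 0 such that

\[ \mathbb{E}\,\|x(t;\xi)\|^{p} \le M\, \mathbb{E}\,\|\xi\|^{p}\, e^{-\lambda (t-t_0)}, \qquad t \ge t_0, \]

for every admissible initial datum \xi; the case p = 2 is mean-square exponential stability.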

12.
This paper considers a stochastic neural network (SNN) with infinite delay. Some sufficient conditions for stochastic stability, stochastic asymptotic stability, and global stochastic asymptotic stability, respectively, are derived by means of the Lyapunov method, the Itô formula, and some inequalities. As a corollary, we show that if the neural network with infinite delay is stable under some conditions, then stochastic stability is maintained provided the environmental noises are small. Estimates of the allowable sizes of the environmental noises are also given. Finally, a three-dimensional SNN with infinite delay is analyzed, and some numerical simulations are presented to illustrate our results.
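For context, the Itô formula used in such Lyapunov arguments takes the following form (generic notation): for dx(t) = f(x_t, t)\,dt + g(x_t, t)\,dw(t) and a function V(x, t) that is twice continuously differentiable in x and once in t,

\[ dV\big(x(t),t\big) = \mathcal{L}V\,dt + V_x\big(x(t),t\big)\, g(x_t,t)\,dw(t), \qquad \mathcal{L}V = V_t + V_x f + \tfrac{1}{2}\operatorname{trace}\!\big[ g^{T} V_{xx}\, g \big], \]

so stability follows from suitably bounding \mathcal{L}V from above along solutions.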

13.
Recurrent neural networks have been successfully used for the analysis and prediction of temporal sequences. This paper is concerned with the convergence of a gradient-descent learning algorithm for training a fully recurrent neural network. In the literature, stochastic process theory has been used to establish convergence results of a probabilistic nature for the on-line gradient training algorithm, based on the assumption that a very large number of (or, in theory, infinitely many) training samples of the temporal sequences are available. In this paper, we consider the case in which only a limited number of training samples of the temporal sequences is available, so that the stochastic treatment of the problem is no longer appropriate. Instead, we use an off-line gradient training algorithm for the fully recurrent neural network, and we accordingly prove some convergence results of a deterministic nature. The monotonicity of the error function over the iterations is also guaranteed. A numerical example is given to support the theoretical findings.
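The following Python code is a hedged sketch of off-line (batch) gradient training of a small fully recurrent network on a single fixed, finite training sequence; the architecture, loss, data, and learning rate below are illustrative assumptions, not the paper's exact algorithm.

# A hedged sketch of off-line (batch) gradient training of a fully recurrent
# network: h_t = tanh(W h_{t-1} + U x_t), y_t = V h_t, with total error
# E = 0.5 * sum_t ||y_t - d_t||^2 over one fixed finite training sequence.
import numpy as np

rng = np.random.default_rng(0)
T, nx, nh, ny = 20, 2, 5, 1                     # sequence length and sizes
xs = rng.normal(size=(T, nx))                   # fixed (limited) training inputs
ds = np.sin(np.arange(T) / 3.0).reshape(T, ny)  # fixed targets

W = 0.1 * rng.normal(size=(nh, nh))
U = 0.1 * rng.normal(size=(nh, nx))
V = 0.1 * rng.normal(size=(ny, nh))
lr = 0.05                                       # small step size

def forward(W, U, V):
    hs = np.zeros((T + 1, nh))                  # hs[0] is the zero initial state
    ys = np.zeros((T, ny))
    for t in range(T):
        hs[t + 1] = np.tanh(W @ hs[t] + U @ xs[t])
        ys[t] = V @ hs[t + 1]
    return hs, ys

for epoch in range(200):
    hs, ys = forward(W, U, V)
    errs = ys - ds
    E = 0.5 * np.sum(errs ** 2)

    # Backpropagation through time over the whole (off-line) sequence.
    dW, dU, dV = np.zeros_like(W), np.zeros_like(U), np.zeros_like(V)
    dh_next = np.zeros(nh)                      # gradient w.r.t. h_{t+1} from later steps
    for t in reversed(range(T)):
        dV += np.outer(errs[t], hs[t + 1])
        dh = V.T @ errs[t] + dh_next            # total gradient w.r.t. h_{t+1}
        da = dh * (1.0 - hs[t + 1] ** 2)        # back through tanh
        dW += np.outer(da, hs[t])
        dU += np.outer(da, xs[t])
        dh_next = W.T @ da                      # pass back to h_t

    W -= lr * dW
    U -= lr * dU
    V -= lr * dV
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  error {E:.4f}")

With a sufficiently small learning rate the total error typically decreases monotonically across epochs, which mirrors the deterministic monotonicity property discussed in the abstract above.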

14.
The ability of a neural network to learn from experience can be viewed as closely related to its approximating properties. By assuming that the environment is essentially stochastic, it follows that neural networks should be able to approximate stochastic processes. The aim of this paper is to show that there exist classes of artificial neural networks capable of approximating prescribed stochastic processes, in the mean-square sense, with arbitrary accuracy. The networks so defined constitute a new model for neural processing and extend previous results concerning the approximating capabilities of artificial neural networks.

15.
In this paper, the exponential stabilization problem is investigated for a class of memristive time-varying delayed neural networks with stochastic disturbance via periodically intermittent state feedback control. First, a periodically intermittent state feedback control rule is designed for the exponential stabilization of stochastic memristive time-varying delayed neural networks. Then, by adopting appropriate Lyapunov-Krasovskii functionals in light of Lyapunov stability theory, some novel stabilization criteria are obtained to guarantee exponential stabilization of stochastic memristive time-varying delayed neural networks via periodically intermittent state feedback control. Compared with existing results on the stabilization of stochastic memristive time-varying delayed neural networks, the stabilization criteria obtained in this paper are not difficult to verify. Finally, an illustrative example is given to demonstrate the validity of the obtained results.
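For reference, a periodically intermittent state feedback law of the kind described above has the generic form (symbols assumed, not the paper's gains)

\[ u(t) = \begin{cases} K\,x(t), & t \in [kT,\; kT+\delta), \\ 0, & t \in [kT+\delta,\; (k+1)T), \end{cases} \qquad k = 0, 1, 2, \ldots, \]

where T > 0 is the control period and 0 < \delta < T the control width; the stabilization criteria then relate T, \delta, the gain K, and the delay bounds.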

16.
In this paper, a class of stochastic impulsive high-order BAM neural networks with time-varying delays is considered. By using the Lyapunov functional method, the LMI method, and mathematical induction, some sufficient conditions are derived for the global exponential stability, in the mean square, of the equilibrium point of the neural networks. It is believed that these results are significant and useful for the design and application of impulsive stochastic high-order BAM neural networks.

17.
First, we establish a stochastic LaSalle theorem for stochastic infinite-delay differential equations with Markovian switching, from which some criteria for attraction are obtained. Then, by employing the Lyapunov method and the LaSalle-type theorem established above, we obtain sufficient conditions ensuring the existence of an attractor and stochastic boundedness for stochastic infinite-delay neural networks with Markovian switching. Finally, an example is discussed to illustrate the effectiveness of the obtained results.

18.
This paper is concerned with the robust delay-dependent exponential stability of uncertain stochastic neural networks (SNNs) with mixed delays. Based on a novel Lyapunov-Krasovskii functional method, some new delay-dependent stability conditions are presented in terms of linear matrix inequalities, which guarantee that the uncertain stochastic neural networks with mixed delays are robustly exponentially stable. Numerical examples are given to illustrate the effectiveness of our results.

19.
The learning capability of neural networks is equivalent to modeling physical events that occur in the real environment. Several early works have demonstrated that neural networks belonging to some classes are universal approximators of deterministic input-output functions. Recent works extend the ability of neural networks to approximate random functions using a class of networks named stochastic neural networks (SNNs). In the language of system theory, the approximation of both deterministic and stochastic functions falls within the identification of nonlinear memoryless systems. However, all the results presented so far are restricted to the case of Gaussian stochastic processes (SPs) only, or to linear transformations that guarantee this property. This paper aims at investigating the ability of stochastic neural networks to approximate nonlinear input-output random transformations, thus widening the range of applicability of these networks to nonlinear systems with memory. In particular, this study shows that networks belonging to a class named non-Gaussian stochastic approximate identity neural networks (SAINNs) are capable of approximating the solutions of large classes of nonlinear random ordinary differential transformations. The effectiveness of this approach is demonstrated and discussed through some application examples.

20.
This paper deals with the problems of global exponential stability and stabilization for a class of uncertain discrete-time stochastic neural networks with interval time-varying delay. By using the linear matrix inequality method and the free-weighting matrix technique, we construct a new Lyapunov–Krasovskii functional and establish new sufficient conditions guaranteeing that the uncertain discrete-time stochastic neural networks with interval time-varying delay are globally exponentially stable in the mean square. Furthermore, we extend our consideration to the stabilization problem for a class of discrete-time stochastic neural networks. Based on the state feedback control law, some novel delay-dependent criteria for the robust exponential stabilization of a class of discrete-time stochastic neural networks with interval time-varying delay are established. The controller gains are designed to ensure the global robust exponential stability of the closed-loop systems. Finally, numerical examples illustrate the effectiveness of the theoretical results we have obtained.
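As a hedged illustration of the setting (generic symbols, not the paper's exact model), a discrete-time stochastic delayed network with interval time-varying delay and state feedback can be written as

\[ x(k+1) = A x(k) + B f\big(x(k)\big) + C g\big(x(k-\tau(k))\big) + D u(k) + \sigma\big(k, x(k), x(k-\tau(k))\big)\, w(k), \qquad \tau_1 \le \tau(k) \le \tau_2, \]

with u(k) = K x(k) and {w(k)} a scalar white-noise sequence; an LMI-based design then produces a gain K for which the closed loop is globally exponentially stable in the mean square.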
