Similar Documents
1.
We investigate the qualitative properties of a recurrent neural network (RNN) for minimizing a nonlinear, continuously differentiable, convex objective function over any given nonempty, closed, convex subset, bounded or unbounded, by exploiting some key inequalities in mathematical programming. The global existence and boundedness of the solution of the RNN are proved when the objective function is convex and has a nonempty constrained minimum set. Under the same assumption, the RNN is shown to be globally convergent in the sense that every trajectory of the RNN converges to some equilibrium point of the RNN. If the objective function itself is uniformly convex and its gradient is locally Lipschitz continuous, then the RNN is globally exponentially convergent in the sense that every trajectory converges to the unique equilibrium point exponentially. These qualitative properties make the network model well suited to convex minimization over any given nonempty, closed, convex subset, whether the constrained subset is bounded or not.
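
The abstract does not reproduce the network equations; a common RNN of this projection type uses the dynamics dx/dt = -x + P(x - α∇f(x)), where P is the Euclidean projection onto the constraint set. A minimal forward-Euler sketch, assuming an illustrative box constraint and quadratic objective (both hypothetical choices, not taken from the paper):

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def grad_f(x):
    # Illustrative convex objective f(x) = 0.5 * ||x - c||^2
    c = np.array([2.0, -3.0])
    return x - c

# Projection-type RNN: dx/dt = -x + P(x - alpha * grad f(x)),
# integrated with forward Euler.
alpha, dt = 0.5, 0.01
x = np.array([5.0, 5.0])          # initial point (may lie outside the set)
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
for _ in range(5000):
    x = x + dt * (-x + project_box(x - alpha * grad_f(x), lo, hi))

print(x)   # approaches the constrained minimizer, here [1, -1]
```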

2.
A neural network model is proposed for a class of nonsmooth, nonconvex optimization problems with both equality and inequality constraints. It is proved that when the objective function is bounded below, the solution trajectory of the network converges to the feasible region in finite time. Moreover, the equilibrium set of the network coincides with the critical point set of the optimization problem, and the network ultimately converges to that critical point set. Unlike traditional penalty-based neural network models, the proposed model requires no computation of penalty parameters. Finally, simulation experiments verify the effectiveness of the proposed model.

3.
We investigate the qualitative properties of a recurrent neural network (RNN) for solving general monotone variational inequality problems (VIPs), defined over a nonempty closed convex subset, which are assumed to have a nonempty solution set but need not be symmetric. The equilibrium equation of the RNN system coincides with the nonlinear projection equation of the VIP to be solved. We prove that the RNN system has a global and bounded solution trajectory starting at any given initial point in the above closed convex subset, which is positive invariant for the RNN system. For general monotone VIPs, we show by an example that the trajectory of the RNN system can converge to a limit cycle rather than an equilibrium when the monotone VIP is not symmetric. In contrast, for strictly monotone VIPs, every solution trajectory of the RNN system starting from the above closed convex subset converges to the unique equilibrium, which is also locally asymptotically stable in the sense of Lyapunov, whether the VIP is symmetric or not. For uniformly monotone VIPs, the aforementioned solution trajectory of the RNN system converges to the unique equilibrium exponentially.
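
The projection equation mentioned above characterizes the equilibria: x* solves the VIP exactly when x* = P(x* - αF(x*)) for any α > 0. A minimal sketch with an illustrative nonsymmetric but strictly monotone affine mapping (hypothetical data, not from the paper), using the fixed-point residual as a convergence check:

```python
import numpy as np

# Nonsymmetric but strictly monotone mapping F(x) = M x + q
# (the symmetric part of M is the identity, so F is strongly monotone).
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
q = np.array([-1.0, 2.0])
F = lambda x: M @ x + q

proj = lambda x: np.clip(x, 0.0, 2.0)   # Omega = [0, 2]^2, illustrative

# RNN whose equilibrium satisfies x = P(x - alpha * F(x)),
# i.e. the nonlinear projection equation of the VIP.
alpha, dt, x = 0.5, 0.01, np.array([2.0, 2.0])
for _ in range(10000):
    x = x + dt * (proj(x - alpha * F(x)) - x)

residual = np.linalg.norm(x - proj(x - alpha * F(x)))
print(x, residual)   # residual ~ 0; the VIP solution here is x* = [1, 0]
```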

4.
1 Introduction Optimization problems arise in a broad variety of scientific and engineering applications. For many practical engineering applications, real-time solutions of optimization problems are required. One possible and very pr…

5.
We propose a general recurrent neural-network (RNN) model for nonlinear optimization over a nonempty compact convex subset which includes the bound subset and spheroid subset as special cases. It is shown that the compact convex subset is a positive invariant and attractive set of the RNN system and that all the network trajectories starting from the compact convex subset converge to the equilibrium set of the RNN system. The above equilibrium set of the RNN system coincides with the optimum set of the minimization problem over the compact convex subset when the objective function is convex. The analysis of these qualitative properties for the RNN model is conducted by employing the properties of the projection operator of Euclidean space onto the general nonempty closed convex subset. A numerical simulation example is also given to illustrate the qualitative properties of the proposed general RNN model for solving an optimization problem over various compact convex subsets.
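
For the two special cases named above, the projection operator is available in closed form. A minimal sketch, with an illustrative objective and sets (not taken from the paper), showing box and ball projections driving the same projection-type dynamics:

```python
import numpy as np

# Closed-form Euclidean projections onto a box and a ball (illustrative).
def project_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def project_ball(x, center=np.zeros(2), r=1.0):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + r * d / n

# The same projection-type dynamics work for either set:
# dx/dt = P(x - alpha * grad f(x)) - x.
grad_f = lambda x: x - np.array([3.0, 0.0])   # f(x) = 0.5 * ||x - (3, 0)||^2
x, dt, alpha = np.array([0.0, 0.5]), 0.01, 0.5
for _ in range(5000):
    x = x + dt * (project_ball(x - alpha * grad_f(x)) - x)
print(x)   # converges to the boundary point (1, 0) of the unit ball
```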

6.
In this paper, we propose a recurrent neural network for solving nonlinear convex programming problems with linear constraints. The proposed neural network has a simpler structure and lower implementation complexity than existing neural networks for such problems. It is shown here that the proposed neural network is stable in the sense of Lyapunov and globally convergent to an optimal solution within finite time, provided the objective function is strictly convex. Unlike existing convergence results, the present results do not require a Lipschitz continuity condition on the objective function. Finally, examples are provided to show the applicability of the proposed neural network.

7.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable and the resulting network is more efficient. The network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming (QP) problems. For linear programming and QP problems with a unique solution or with infinitely many solutions, we prove rigorously that, from any initial point, every trajectory of the neural network converges to an optimal solution of the QP problem and its dual. The proposed network differs from existing networks based on the penalty or Lagrange methods, and the inequality constraints are handled properly. Simulation results show that the proposed neural network is feasible and efficient.
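
The abstract does not give the paper's E(x, y) or F(x, y); a minimal saddle-point gradient flow on the Lagrangian of an equality-constrained QP illustrates the primal-dual idea behind networks of this kind (illustrative problem data, not from the paper):

```python
import numpy as np

# QP: minimize 0.5 x^T Q x + c^T x  subject to  A x = b  (illustrative data)
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Saddle-point gradient flow on the Lagrangian L(x, y):
#   dx/dt = -grad_x L = -(Q x + c + A^T y)
#   dy/dt = +grad_y L = A x - b
dt = 0.01
x, y = np.zeros(2), np.zeros(1)
for _ in range(20000):
    dx = -(Q @ x + c + A.T @ y)
    dy = A @ x - b
    x, y = x + dt * dx, y + dt * dy

print(x, y)   # converges to x* = [0.5, 0.5] and dual y* = [-1.0]
```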

8.
This paper presents a gradient neural network model for solving convex nonlinear programming (CNP) problems. The main idea is to convert the CNP problem into an equivalent unconstrained minimization problem whose objective is an energy function. A gradient model is then defined directly using the derivatives of the energy function. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. It is also found that a larger scaling factor leads to a better convergence rate of the trajectory. The validity and transient behavior of the neural network are demonstrated on various examples.
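
A minimal sketch of the scaling-factor effect, assuming a quadratic-penalty energy function (an illustrative choice; the paper's energy function is not given in the abstract): the same gradient model dx/dt = -λ∇E(x) is integrated over the same time horizon with two scaling factors λ.

```python
# Energy function: convex objective plus a quadratic penalty on g(x) <= 0,
# E(x) = f(x) + (rho/2) * max(0, g(x))^2   (illustrative construction).
def grad_E(x, rho=10.0):
    grad_f = 2.0 * (x - 3.0)            # f(x) = (x - 3)^2
    g = x - 1.0                          # constraint x <= 1
    return grad_f + rho * max(0.0, g)    # derivative of the penalty term

def run(scale, t_end=0.2, dt=1e-3):
    """Gradient model dx/dt = -scale * grad E(x), forward Euler."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x -= dt * scale * grad_E(x)
    return x

# The penalized optimum here is x = 4/3 (a penalty approximation of x <= 1).
print(run(1.0), run(10.0))   # the larger scaling factor has essentially
                             # reached it; the smaller one is still en route
```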

9.
A neural network for solving convex nonlinear programming problems is proposed in this paper. The distinguishing features of the proposed network are that the primal and dual problems can be solved simultaneously, all necessary and sufficient optimality conditions are incorporated, and no penalty parameter is involved. Based on Lyapunov, LaSalle, and set-stability theories, we rigorously prove that, for an arbitrary initial point, the trajectory of the proposed network converges to the set of its equilibrium points, regardless of whether the convex nonlinear programming problem has a unique optimal solution or infinitely many. Numerical simulation results also show that the proposed network is feasible and efficient. In addition, a general method for transforming nonlinear programming problems into unconstrained problems is proposed.

10.
To find optimal solutions of constrained optimization problems whose objective functions are non-Lipschitz and whose feasible regions are defined by linear or nonlinear inequality constraints, a smoothed neural network model is constructed. The model is built by introducing a smoothing approximation technique that replaces the nonsmooth objective function with a corresponding smooth one, combined with the penalty function method. A detailed theoretical analysis proves that, whether the initial point lies inside or outside the feasible region, the solutions of the smoothed network are uniformly bounded and global, and that every accumulation point of the network is a stationary point of the original optimization problem. Finally, several simple simulation experiments confirm the correctness of the theory.
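
A minimal sketch of the smoothing idea, assuming the common smoothing of |u| by sqrt(u² + μ²) and a quadratic penalty (both illustrative; the paper's exact smoothing and penalty functions are not given here):

```python
import numpy as np

# Non-Lipschitz objective f(x) = |x - 1|**0.5, smoothed by replacing
# |u| with sqrt(u**2 + mu**2) before taking the square root.
def grad_smooth(x, mu=0.05):
    u = x - 1.0
    s = np.sqrt(u**2 + mu**2)            # smooth surrogate of |u|
    return 0.5 * s**(-0.5) * (u / s)     # chain rule on s**0.5

# Quadratic penalty for the feasible set {x : x >= 0} (illustrative):
# phi(x) = (rho/2) * max(0, -x)**2, so phi'(x) = -rho * max(0, -x).
def grad_penalty(x, rho=5.0):
    return -rho * max(0.0, -x)

x, dt = -2.0, 1e-3                        # start outside the feasible set
for _ in range(50000):
    x -= dt * (grad_smooth(x) + grad_penalty(x))
print(x)                                  # approaches the minimizer x = 1
```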

11.
For nonsmooth pseudoconvex optimization problems with inequality constraints, a novel recurrent neural network model based on differential inclusion theory is proposed. A penalty function that varies with the state vector is designed from the objective function and the constraints, so that the state vector always moves toward the feasible region, is guaranteed to enter it in finite time, and finally converges to an optimal solution of the original optimization problem. Two simulation experiments verify the effectiveness and accuracy of the network. Compared with existing neural networks, this is a new model with a simple structure that requires no computation of exact penalty parameters and, most importantly, does not require the feasible region to be bounded.

12.
In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

13.
In this paper, a neural network model is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve general convex nonlinear programming (GCNLP) problems. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the GCNLP problem. By employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact optimal solution of the original problem. Simulation results show that the proposed neural network is feasible and efficient.

14.
Neural network for quadratic optimization with bound constraints
A recurrent neural network is presented which performs quadratic optimization subject to bound constraints on each of the optimization variables. The network is shown to be globally convergent, and conditions on the quadratic problem and the network parameters are established under which exponential asymptotic stability is achieved. Through suitable choice of the network parameters, the system of differential equations governing the network activations is preconditioned in order to reduce its sensitivity to noise and to roundoff errors. The optimization method employed by the neural network is shown to fall into the general class of gradient methods for constrained nonlinear optimization and, in contrast with penalty function methods, is guaranteed to yield only feasible solutions.
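
A minimal projection-network sketch for the box-constrained QP (the preconditioning of the network parameters described above is omitted; problem data are illustrative). Each Euler step is a convex combination of two points in the box, so a feasible trajectory stays feasible, matching the guarantee above:

```python
import numpy as np

# Box-constrained QP: minimize 0.5 x^T Q x + c^T x over lo <= x <= hi.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])       # symmetric positive definite
c = np.array([-8.0, -6.0])
lo, hi = np.zeros(2), np.ones(2)

P = lambda x: np.clip(x, lo, hi)
x, dt, alpha = np.array([0.5, 0.5]), 0.01, 0.2   # feasible initial state
for _ in range(10000):
    # x_new = (1 - dt) * x + dt * P(...): stays in the box for dt <= 1.
    x = x + dt * (P(x - alpha * (Q @ x + c)) - x)
    assert np.all((lo - 1e-12 <= x) & (x <= hi + 1e-12))  # feasible (up to rounding)

print(x)   # KKT point of the bound-constrained QP, here [1, 1]
```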

15.
A neural network approach to the generalized eigenvalue problem is studied, and a continuous-time feedback network model for solving it is given. Using the LaSalle invariance principle, the quasi-global convergence of the network is analyzed and proved, which guarantees that the network solves the generalized eigenvalue problem exactly. The network also overcomes several basic defects of existing penalty-function-based networks for eigenvalue problems: first, the solution obtained by a penalty-based model may not be a true solution, and may not even be feasible; second, such models share an adjustable parameter for which no selection criterion is available; third, their stability cannot be guaranteed. The proposed model resolves these issues and has the desirable property that if the initial point is chosen in the feasible set of the problem, the network trajectory remains feasible forever and converges to a generalized eigenvector. Finally, numerical simulations demonstrate the reliability of the network and further confirm that it solves the generalized eigenvalue problem well.
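
The network equations are not reproduced in the abstract; a minimal Rayleigh-quotient gradient flow illustrates how a feasibility-preserving trajectory can converge to a generalized eigenvector (illustrative matrices, not from the paper):

```python
import numpy as np

# Generalized eigenproblem A v = lambda B v, with A symmetric and
# B symmetric positive definite (illustrative data).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])

def rayleigh(x):
    return (x @ A @ x) / (x @ B @ x)

# Gradient-type flow dx/dt = -(A x - R(x) B x): the Rayleigh quotient
# decreases along trajectories until A x = R(x) B x holds.
x, dt = np.array([1.0, 1.0]), 0.01
for _ in range(20000):
    x = x - dt * (A @ x - rayleigh(x) * (B @ x))
    x = x / np.sqrt(x @ B @ x)          # keep x on the set {x : x^T B x = 1}

print(rayleigh(x))                               # smallest eigenvalue, here 1.0
print(np.linalg.eigvals(np.linalg.inv(B) @ A))   # reference: [2.5, 1.0]
```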

16.
In this paper, a recurrent neural network for both convex and nonconvex equality-constrained optimization problems is proposed, which makes use of a cost gradient projection onto the tangent space of the constraints. The proposed neural network constructs a generically nonfeasible trajectory, satisfying the constraints only as t → ∞. Local convergence results are given that do not assume convexity of the optimization problem to be solved. Global convergence results are established for convex optimization problems. An exponential convergence rate is shown to hold both for the convex case and the nonconvex case. Numerical results indicate that the proposed method is efficient and accurate.

17.
This paper presents a recurrent neural network for solving nonconvex nonlinear optimization problems subject to nonlinear inequality constraints. First, the p-power transformation is exploited for local convexification of the Lagrangian function of the nonconvex nonlinear optimization problem. Next, the proposed neural network is constructed based on the Karush–Kuhn–Tucker (KKT) optimality conditions and the projection function. An important property of this neural network is that its equilibrium point corresponds to the optimal solution of the original problem. By utilizing an appropriate Lyapunov function, it is shown that the proposed neural network is stable in the sense of Lyapunov and convergent to the global optimal solution of the original problem. The sensitivity of the convergence to the scaling factors is also analysed. Compared with other existing neural networks for this problem, the proposed neural network offers high accuracy of the obtained solutions, fast convergence, and low complexity. Finally, simulation results are provided to show the benefits of the proposed model, which matches or outperforms existing models.
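
A minimal numeric check of the p-power convexification idea: for a positive function f, (f^p)'' = p f^(p-2) ((p-1) f'² + f f''), so a large enough p can make f^p convex at points where f itself is concave (illustrative f, not taken from the paper):

```python
import numpy as np

# Illustrative positive objective f(x) = 2 + sin(x) (> 0 everywhere).
f  = lambda x: 2.0 + np.sin(x)
d1 = lambda x: np.cos(x)          # f'
d2 = lambda x: -np.sin(x)         # f''

def second_derivative_fp(x, p):
    """(f**p)'' = p * f**(p-2) * ((p-1) * f'**2 + f * f'')."""
    return p * f(x)**(p - 2) * ((p - 1) * d1(x)**2 + f(x) * d2(x))

x = 2.0
print(d2(x))                       # < 0: f is concave at x
print(second_derivative_fp(x, 2))  # still negative: p = 2 is not enough
print(second_derivative_fp(x, 20)) # > 0: f**20 is locally convex at x
```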

18.
A novel neural network for nonlinear convex programming
In this paper, we present a neural network for solving the nonlinear convex programming problem in real time by means of the projection method. The main idea is to convert the convex programming problem into a variational inequality problem; a dynamical system and a convex energy function are then constructed for the resulting variational inequality problem. It is shown that the proposed neural network is stable in the sense of Lyapunov and converges to an exact optimal solution of the original problem. Compared with existing neural networks for solving the nonlinear convex programming problem, the proposed neural network requires no Lipschitz condition and no adjustable parameter, and its structure is simple. The validity and transient behavior of the proposed neural network are demonstrated by simulation results.

19.
Extremum-seeking control of state-constrained nonlinear systems
An extremum-seeking control problem is posed for a class of nonlinear systems with unknown dynamical parameters, whose states are subject to convex, pointwise inequality constraints. Using a barrier function approach, an adaptive method is proposed for generating setpoints online which converge to the feasible minimizer of a convex objective function containing the unknown dynamic parameters. A tracking controller regulates system states to the generated setpoint via state feedback, while maintaining feasibility of the state constraints. A simulation example demonstrates application of the method.
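
A minimal sketch of barrier-based setpoint generation (the adaptive treatment of unknown dynamic parameters and the tracking controller are omitted; problem data are illustrative): a decreasing barrier weight yields a sequence of strictly feasible setpoints approaching the constrained minimizer.

```python
# Objective f(x) = (x - 2)^2 with state constraint x <= 1, so x* = 1.
# Barrier-augmented objective: f(x) - mu * log(1 - x).
def grad(x, mu):
    return 2.0 * (x - 2.0) + mu / (1.0 - x)

x = 0.0                               # strictly feasible start
for mu in [1.0, 0.1, 0.01, 0.001]:    # decreasing barrier weight
    step = mu / 10.0                  # smaller steps as the barrier sharpens
    for _ in range(2000):             # inner gradient descent, warm-started
        x -= step * grad(x, mu)
    print(mu, x)                      # setpoints approach x* = 1 from inside
```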

20.
In this paper, a new neural network is presented for solving nonlinear convex programs with linear constraints. Under the condition that the objective function is convex, the proposed neural network is shown to be stable in the sense of Lyapunov and to converge globally to the optimal solution of the original problem. Several numerical examples show the effectiveness of the proposed neural network.
