Similar Literature
20 similar documents retrieved.
1.
Two classes of high-performance neural networks for solving linear and quadratic programming problems are given. We prove that the new systems converge globally to the solutions of the linear and quadratic programming problems. In a neural network model, suitable network parameters are usually difficult to specify. The proposed models overcome the numerical difficulties caused by neural networks that rely on network parameters, and obtain the desired approximate solutions of the linear and quadratic programming problems.

2.
Presents a new neural network that improves on existing neural networks for solving general linear programming problems. The network requires no parameter setting, uses only simple hardware in which no analog multipliers are required, and is proved to be completely stable and to converge to the exact solutions. Moreover, using this network the author can solve a linear programming problem and its dual simultaneously, and cope with problems with nonunique solutions whose solution set may be unbounded.

3.
A new neural network for solving linear and quadratic programming problems.   Total citations: 12 (self-citations: 0, citations by others: 12)
A new neural network for solving linear and quadratic programming problems is presented and is shown to be globally convergent. The new neural network improves on existing neural networks for solving these problems: it avoids the parameter tuning problem, it is capable of achieving the exact solutions, and it uses only simple hardware in which no analog multipliers for variables are required. Furthermore, the network solves both the primal problems and their dual problems simultaneously.

4.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with a unique solution or infinitely many solutions, we prove rigorously that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP problem and its dual. The proposed network differs from existing networks based on the penalty or Lagrange methods in that the inequality constraints are handled properly. Simulation results show that the proposed neural network is feasible and efficient.
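As a rough illustration of how gradient-based networks of this kind operate (a minimal sketch for illustration only, not the authors' exact model; the matrix Q, vector c, step size, and iteration count are made-up values), the following simulates a projection-type gradient network dx/dt = P(x - (Qx + c)) - x for a nonnegatively constrained QP, integrated by forward Euler; P is the projection onto the nonnegative orthant.

```python
def qp_projection_network(Q, c, x0, step=0.05, iters=5000):
    """Integrate dx/dt = P(x - (Qx + c)) - x by forward Euler and
    return the final state, for min (1/2)x'Qx + c'x s.t. x >= 0."""
    x = list(x0)
    n = len(x)
    for _ in range(iters):
        # Gradient of the energy (1/2)x'Qx + c'x at the current state.
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        # Projection of (x - grad) onto the nonnegative orthant.
        proj = [max(x[i] - grad[i], 0.0) for i in range(n)]
        # Euler step along the network dynamics.
        x = [x[i] + step * (proj[i] - x[i]) for i in range(n)]
    return x

# Example: minimize x1^2 - 2*x1 + x2^2 + 2*x2 subject to x >= 0.
# The unconstrained minimum is (1, -1); the constraint x2 >= 0 is
# active at the optimum, so the solution is (1, 0).
solution = qp_projection_network([[2.0, 0.0], [0.0, 2.0]], [-2.0, 2.0], [0.5, 0.5])
```

Note that the trajectory reaches the exact constrained optimum rather than an interior approximation, which is the behavior these abstracts emphasize.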

5.
In recent years, artificial neural networks have attracted considerable attention as candidates for novel computational systems. Computer scientists and engineers are developing neural networks as representational and computational models for problem solving: neural networks are expected to produce new solutions or alternatives to existing models. This paper demonstrates the flexibility of neural networks for modeling and solving diverse mathematical problems including Taylor series expansion, Weierstrass's first approximation theorem, linear programming with single and multiple objectives, and fuzzy mathematical programming. Neural network representations of such mathematical problems may make it possible to overcome existing limitations, to find new solutions or alternatives to existing models, and to achieve synergistic effects through hybridization.

6.
Neural network for solving extended linear programming problems.   Total citations: 5 (self-citations: 0, citations by others: 5)
A neural network for solving extended linear programming problems is presented and is shown to be globally convergent to exact solutions. The proposed neural network uses only simple hardware in which no analog multipliers for variables are required, and it has no parameter tuning problem. Finally, an application of the neural network to the L1-norm minimization problem is given.

7.
A two-phase optimization neural network   Total citations: 4 (self-citations: 0, citations by others: 4)
A novel two-phase neural network suitable for solving a large class of constrained and unconstrained optimization problems is presented. For both types of problems with solutions lying in the interior of the feasible region, the phase-one structure of the network alone is sufficient. When the solutions of constrained problems lie on the boundary of the feasible region, the proposed two-phase network is capable of achieving the exact solutions, in contrast to existing optimization neural networks, which obtain only approximate solutions. Furthermore, the network automatically provides the Lagrange multiplier associated with each constraint. Thus, for linear programming, the network solves both the primal problems and their dual problems simultaneously.

8.
田大钢 (Tian Dagang), 《自动化学报》 (Acta Automatica Sinica), 2003, 29(2): 219-226
Through a new dual formulation, a new and easily implemented neural network for solving linear programming problems is obtained, and the network is proved to be globally exponentially convergent, bringing the neural network approach to linear programming closer to completion.

9.
Most existing neural networks for solving linear variational inequalities (LVIs) with the mapping Mx + p require positive definiteness (or positive semidefiniteness) of M. In this correspondence, it is revealed that this condition is sufficient but not necessary for an LVI to be strictly monotone (or monotone) on its constrained set when equality constraints are present. It is then proposed to reformulate monotone LVIs with equality constraints into LVIs with inequality constraints only, which can then be solved by some existing neural networks. General projection neural networks are designed in this correspondence for solving the transformed LVIs. Compared with existing neural networks, the designed neural networks feature lower model complexity. Moreover, the neural networks are guaranteed to be globally convergent to solutions of the LVI under the condition that the linear mapping Mx + p is monotone on the constrained set. Because quadratic and linear programming problems are special cases of LVIs in terms of solutions, the designed neural networks can solve them efficiently as well. In addition, it is discovered that the designed neural network in a specific case turns out to be the primal-dual network for solving quadratic or linear programming problems. The effectiveness of the neural networks is illustrated by several numerical examples.

10.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.

11.
Neural networks (NNs) are well known as powerful computing tools for solving optimization problems. Owing to the massive number of computing units (neurons) and the parallel mechanism of the neural network approach, large-scale problems can be solved efficiently and an optimal solution can be obtained. In this paper, we introduce an improvement of the two-phase approach for solving fuzzy multiobjective linear programming problems with both fuzzy objectives and fuzzy constraints, and we propose a new neural network technique for solving such problems. The procedure and efficiency of this approach are shown with numerical simulations.

12.
Fuzzy random programming with equilibrium chance constraints   Total citations: 7 (self-citations: 0, citations by others: 7)
To model fuzzy random decision systems, this paper first defines three kinds of equilibrium chances via fuzzy integrals in the sense of Sugeno. A new class of fuzzy random programming problems based on equilibrium chances is then presented. Some convexity theorems for fuzzy random linear programming problems are also proved; these results provide methods for converting primal fuzzy random programming problems into equivalent stochastic convex programming problems such that both the primal problems and their equivalents have the same optimal solutions, allowing the techniques developed for stochastic convex programming to apply. After that, a solution approach that integrates simulation, a neural network, and a genetic algorithm is suggested for solving general fuzzy random programming problems. At the end of the paper, three numerical examples are provided. Since the equivalent stochastic programming problems of the three examples are very complex and nonconvex, the techniques of stochastic programming do not apply, so we solve them by the proposed hybrid intelligent algorithm. The results show that the algorithm is feasible and effective.

13.
We consider some general and practical cases of conflicts and design a neural network that is able to solve these basic conflict problems. For the preliminary definitions and concepts in game theory, neural networks, and optimization, such as payoff functions, stable solutions, and linear programming, see Fudenberg and Tirole (Game Theory, Massachusetts Institute of Technology, Cambridge, 1996), Gass (Linear Programming, McGraw-Hill, New York, 1958), and Hertz et al. (Introduction to the Theory of Neural Computation, Addison-Wesley, Redwood City, 1991).

14.
In this paper, a one-layer recurrent neural network with a discontinuous hard-limiting activation function is proposed for quadratic programming. This neural network is capable of solving a large class of quadratic programming problems. The state variables of the neural network are proven to be globally stable and the output variables are proven to be convergent to optimal solutions as long as the objective function is strictly convex on a set defined by the equality constraints. In addition, a sequential quadratic programming approach based on the proposed recurrent neural network is developed for general nonlinear programming. Simulation results on numerical examples and support vector machine (SVM) learning show the effectiveness and performance of the neural network.

15.
A neural network approach to job-shop scheduling   Total citations: 6 (self-citations: 0, citations by others: 6)
A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, in particular job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on quadratic energy cost functions, the authors propose using linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of solution quality and network complexity.

16.
In this paper we study three different classes of neural network models for solving linear programming problems. We investigate the following characteristics of each model: model complexity, complexity of individual neurons, and accuracy of solutions. Simulation examples are given to illustrate the dynamical behavior of each model.

17.
In this paper, linear and quadratic programming problems are solved using a novel recurrent artificial neural network. The new model is simpler than existing ones and converges very fast to the exact primal and dual solutions simultaneously. The model is based on a nonlinear dynamical system with arbitrary initial conditions. To keep the model economical, analog multipliers are avoided. The dynamical system is a time-dependent system of equations whose right-hand side is the gradient of a specific Lyapunov energy function. A block diagram of the proposed neural network model is given. A fourth-order Runge–Kutta method with controlled step size is used to solve the system numerically. Global convergence of the new model is proved, both theoretically and numerically. Numerical simulations show the fast convergence of the new model for problems with a unique solution or infinitely many solutions. The model converges to the exact solution regardless of the choice of starting point, i.e. inside, outside, or on the boundary of the feasible region.
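The idea of integrating a dynamical system whose right-hand side is the (negative) gradient of a Lyapunov energy function can be sketched as follows (an illustrative assumption, not this paper's model: a fixed-step classical Runge–Kutta scheme rather than the controlled-step variant, applied to the unconstrained energy E(x) = (1/2)x'Qx + c'x with made-up Q and c).

```python
def rk4_gradient_flow(Q, c, x0, h=0.1, steps=500):
    """Integrate dx/dt = -(Qx + c), i.e. gradient descent flow on the
    energy E(x) = (1/2)x'Qx + c'x, with the classical 4th-order
    Runge-Kutta method, and return the final state."""
    n = len(x0)

    def f(x):
        # Right-hand side: negative gradient of the energy function.
        return [-(sum(Q[i][j] * x[j] for j in range(n)) + c[i]) for i in range(n)]

    x = list(x0)
    for _ in range(steps):
        k1 = f(x)
        k2 = f([x[i] + 0.5 * h * k1[i] for i in range(n)])
        k3 = f([x[i] + 0.5 * h * k2[i] for i in range(n)])
        k4 = f([x[i] + h * k3[i] for i in range(n)])
        x = [x[i] + (h / 6.0) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(n)]
    return x

# Example: Q = [[3, 1], [1, 2]], c = [-5, -5]; the equilibrium solves
# Qx = -c, i.e. Qx = (5, 5), whose solution is x = (1, 2).
x_star = rk4_gradient_flow([[3.0, 1.0], [1.0, 2.0]], [-5.0, -5.0], [0.0, 0.0])
```

Because Q is positive definite, the flow converges to the unique minimizer from any starting point, mirroring the global-convergence claim in the abstract.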

18.
A neural network model for solving nonlinear programming problems with mixed constraints   Total citations: 1 (self-citations: 0, citations by others: 1)
陶卿 (Tao Qing), 任富兴 (Ren Fuxing), 孙德敏 (Sun Demin), 《软件学报》 (Journal of Software), 2002, 13(2): 304-310
By carefully constructing a Lyapunov function, a continuous neural network model with global convergence for solving optimization problems is proposed. The model has good functionality and performance and can solve nonlinear programming problems with both equality and inequality constraints. It generalizes the Newton steepest-descent method to constrained problems and can effectively improve the accuracy of solutions. Even for positive definite quadratic programming problems, it is structurally simpler than existing models.

19.
In this paper, a new recurrent neural network is proposed for solving convex quadratic programming (QP) problems. Compared with existing neural networks, the proposed one features global convergence property under weak conditions, low structural complexity, and no calculation of matrix inverse. It serves as a competitive alternative in the neural network family for solving linear or quadratic programming problems. In addition, it is found that by some variable substitution, the proposed network turns out to be an existing model for solving minimax problems. In this sense, it can be also viewed as a special case of the minimax neural network. Based on this scheme, a k-winners-take-all (k-WTA) network with O(n) complexity is designed, which is characterized by simple structure, global convergence, and capability to deal with some ill cases. Numerical simulations are provided to validate the theoretical results obtained. More importantly, the network design method proposed in this paper has great potential to inspire other competitive inventions along the same line.
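The k-WTA operation itself (selecting the k largest of n inputs) can be illustrated with a minimal sketch (an assumption for illustration, not the paper's O(n) network): a single dual "threshold" unit T evolves by dT/dt proportional to (number of inputs above T) - k, and at equilibrium exactly k inputs exceed T, so those are the winners.

```python
def k_wta(u, k, step=0.05, iters=200):
    """Return a 0/1 winner vector marking the k largest inputs, found by
    adjusting a scalar threshold T until exactly k inputs exceed it."""
    T = 0.0
    for _ in range(iters):
        above = sum(1 for ui in u if ui > T)
        # T rises while too many inputs are above it, falls while too few.
        T += step * (above - k)
    return [1 if ui > T else 0 for ui in u]

# Example with 5 inputs and k = 2: the two largest values are 0.9 and 0.7.
winners = k_wta([0.3, 0.9, 0.5, 0.1, 0.7], k=2)
```

This discrete iteration assumes distinct inputs with a gap larger than the step size around the k-th value; the recurrent networks in the abstract above are designed to handle the ill cases this naive sketch does not.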

20.
In this paper, we present a delayed neural network approach to solve linear projection equations. The Lyapunov-Krasovskii theory for functional differential equations and the linear matrix inequality (LMI) approach are employed to analyze the global asymptotic stability and global exponential stability of the delayed neural network. Compared with the existing linear projection neural network, theoretical results and illustrative examples show that the delayed neural network can effectively solve a class of linear projection equations and some quadratic programming problems.
