Similar Literature
20 similar documents found (search time: 578 ms).
1.
This paper deals with infinite horizon linear quadratic (LQ) differential games for discrete-time stochastic systems with both state- and control-dependent noise. The Popov-Belevitch-Hautus (PBH) criteria for exact observability and exact detectability of discrete-time stochastic systems are presented. By means of these criteria, we give the optimal strategies (Nash equilibrium strategies) and the optimal cost values for infinite horizon stochastic differential games. It is shown that the infinite horizon LQ stochastic differential games are associated with four coupled matrix-valued equations. Furthermore, an iterative algorithm is proposed to solve the four coupled equations. Finally, an example is given to demonstrate the results.
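As a loose illustration of the kind of iterative scheme such coupled equations call for, the Python sketch below runs a Gauss-Seidel-style iteration for a simplified, deterministic two-player discrete-time LQ game: the multiplicative-noise terms handled in the paper are dropped, all matrices are made-up placeholders, and the loop is a heuristic, not the paper's four-equation algorithm; convergence is not guaranteed in general.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical two-player discrete-time LQ game data (illustrative placeholders).
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B1 = np.array([[0.0], [1.0]])
B2 = np.array([[1.0], [0.0]])
Q1, Q2 = np.eye(2), 2.0 * np.eye(2)            # state weights of players 1 and 2
R11, R22 = np.eye(1), np.eye(1)                # own-control weights
R12, R21 = 0.1 * np.eye(1), 0.1 * np.eye(1)    # cross-control weights

P1, P2 = np.eye(2), np.eye(2)
K1, K2 = np.zeros((1, 2)), np.zeros((1, 2))
for _ in range(500):
    # Riccati-type best-response gain updates, one sweep per iteration.
    K1 = -np.linalg.solve(R11 + B1.T @ P1 @ B1, B1.T @ P1 @ (A + B2 @ K2))
    K2 = -np.linalg.solve(R22 + B2.T @ P2 @ B2, B2.T @ P2 @ (A + B1 @ K1))
    Acl = A + B1 @ K1 + B2 @ K2
    # Closed-loop cost matrices: P_i = Acl' P_i Acl + Q_i + K1' R_i1 K1 + K2' R_i2 K2.
    P1_new = solve_discrete_lyapunov(Acl.T, Q1 + K1.T @ R11 @ K1 + K2.T @ R12 @ K2)
    P2_new = solve_discrete_lyapunov(Acl.T, Q2 + K1.T @ R21 @ K1 + K2.T @ R22 @ K2)
    if max(np.abs(P1_new - P1).max(), np.abs(P2_new - P2).max()) < 1e-10:
        P1, P2 = P1_new, P2_new
        break
    P1, P2 = P1_new, P2_new

print("Candidate Nash feedback gains:", K1, K2)
```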

2.
This paper presents a solution to the discrete-time optimal control problem for stochastic nonlinear polynomial systems over linear observations and a quadratic criterion. The solution is obtained in two steps: first, the optimal control algorithm is developed for nonlinear polynomial systems assuming complete information is available when generating the control law; then, the state estimate equations for discrete-time stochastic nonlinear polynomial systems over linear observations are employed. The closed-form solution is finally obtained by substituting the state estimates into the control law. The designed optimal control algorithm can be applied to both distributed and lumped systems. To show the effectiveness of the proposed controller, an illustrative example is presented for a second-degree polynomial system, and the obtained results are compared to the optimal control for the linearized system.

3.
In this paper we consider the stochastic optimal control problem for discrete-time Markov jump linear systems with multiplicative noise. The performance criterion is assumed to be a linear combination of a quadratic part and a linear part in the state and control variables. The weighting matrices of the state and control in the quadratic part are allowed to be indefinite. We present a necessary and sufficient condition under which the problem is well posed and a state feedback solution can be derived from a set of coupled generalized Riccati difference equations interconnected with a set of coupled linear recursive equations. For the case in which the quadratic-term matrices are non-negative, this necessary and sufficient condition can be written in a more explicit way. The results are applied to a problem of portfolio optimization.
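For orientation only: the sketch below iterates the coupled Riccati difference equations of a plain finite-horizon Markov jump LQR problem (no multiplicative noise, nonnegative weights), the simplest special case of the recursions described above; all data are hypothetical.

```python
import numpy as np

# Hypothetical 2-mode Markov jump linear system (illustrative data only).
A = [np.array([[1.0, 0.2], [0.0, 1.0]]), np.array([[0.8, 0.0], [0.1, 0.9]])]
B = [np.array([[0.0], [1.0]]), np.array([[1.0], [0.5]])]
Q = [np.eye(2), 2.0 * np.eye(2)]
R = [np.eye(1), np.eye(1)]
Pr = np.array([[0.9, 0.1], [0.2, 0.8]])    # mode transition probabilities
N = 50                                      # horizon

P = [np.zeros((2, 2)), np.zeros((2, 2))]    # terminal weights P_i(N) = 0
gains = []
for k in range(N - 1, -1, -1):
    P_new, K_k = [], []
    for i in range(2):
        # Mode-coupled expectation E_i(P) = sum_j Pr[i, j] * P_j.
        Ei = sum(Pr[i, j] * P[j] for j in range(2))
        S = R[i] + B[i].T @ Ei @ B[i]
        K = -np.linalg.solve(S, B[i].T @ Ei @ A[i])
        # Coupled Riccati step: Q + A'EA - A'EB S^{-1} B'EA.
        P_new.append(Q[i] + A[i].T @ Ei @ A[i] + A[i].T @ Ei @ B[i] @ K)
        K_k.append(K)
    P = P_new
    gains.append(K_k)
gains.reverse()    # gains[k][i] is the feedback gain in mode i at time k
```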

4.
Discrete-time coupled algebraic Riccati equations that arise in the quadratic optimal control and H∞ control of Markovian jump linear systems are considered. First, the equations arising from the quadratic optimal control problem are studied; the cost matrices are only assumed to be Hermitian. Conditions for the existence of the maximal Hermitian solution are derived in terms of the concept of mean square stabilizability and a certain convex set being non-empty. A connection with convex optimization is established, leading to a numerical algorithm. A necessary and sufficient condition for the existence of a stabilizing solution (in the mean square sense) is derived, and sufficient conditions in terms of the usual observability and detectability tests for linear systems are also obtained. Finally, the coupled algebraic Riccati equations that arise from the H∞ control of discrete-time Markovian jump linear systems are analyzed, and an algorithm for deriving a stabilizing solution, if it exists, is given. These results generalize and unify several previous ones presented in the literature on discrete-time coupled Riccati equations of Markovian jump linear systems.
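The connection with convex optimization mentioned above can be illustrated, in the much simpler single-mode (no jump) case, by the well-known SDP characterization of the maximal solution of a standard discrete algebraic Riccati equation. The sketch below is only an analogy with made-up data and assumes a stabilizable pair (A, B); it is not the paper's coupled-equation algorithm.

```python
import numpy as np
import cvxpy as cp

# Illustrative single-mode data (no Markov jumps); placeholders only.
A = np.array([[1.0, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

P = cp.Variable((2, 2), symmetric=True)
# Riccati inequality written as a linear matrix inequality via a Schur complement.
M = cp.bmat([[R + B.T @ P @ B, B.T @ P @ A],
             [A.T @ P @ B,     Q + A.T @ P @ A - P]])
prob = cp.Problem(cp.Maximize(cp.trace(P)), [M >> 0])
prob.solve()
print("Candidate maximal solution of the DARE:\n", P.value)
```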

5.
This paper studies the classic linear quadratic regulation (LQR) problem for both continuous-time and discrete-time systems with multiple input delays. For discrete-time systems, the LQR problem for systems with a single input delay has been studied in the existing literature, whereas, to the best of our knowledge, no solution to the multiple input delay case is available. For continuous-time systems with multiple input delays, the LQR problem has been tackled via an infinite-dimensional system theory approach and a frequency/time domain approach. The objective of the present paper is to give an explicit solution to the LQR problem via a simple and intuitive approach. The main contributions of the paper include a fundamental result of duality between the LQR problem for systems with multiple input delays and a smoothing problem for an associated backward stochastic system. The duality allows us to obtain a solution to the LQR problem via standard projection in linear space. The LQR controller is simply constructed from the solution of one backward Riccati difference (for the discrete-time case) or differential (for the continuous-time case) equation of the same order as the plant (ignoring the delays).
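The delay-free building block referred to at the end, the backward Riccati difference equation of standard finite-horizon discrete-time LQR, fits in a few lines; the data below are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Standard finite-horizon discrete-time LQR via the backward Riccati
# difference equation (delay-free case; illustrative data only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
Qf = 10.0 * np.eye(2)       # terminal weight
N = 100

P = Qf
K = [None] * N
for k in range(N - 1, -1, -1):
    S = R + B.T @ P @ B
    K[k] = -np.linalg.solve(S, B.T @ P @ A)      # feedback law u_k = K[k] @ x_k
    P = Q + A.T @ P @ A + A.T @ P @ B @ K[k]     # Riccati recursion
```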

6.
An extension of the LQR/LQG methodology to systems with saturating actuators, referred to as SLQR/SLQG (where S stands for saturating), is obtained. The development is based on the method of stochastic linearization, whereby the saturation is replaced by a gain calculated as a function of the variance of the signal at its input. Using the stochastically linearized system and the Lagrange multiplier technique, solutions of the SLQR/SLQG problems are derived. These solutions are given by standard Riccati and Lyapunov equations coupled with two transcendental equations, which characterize both the variance of the signal at the saturation input and the Lagrange multiplier associated with the constrained minimization problem. It is shown that, under standard stabilizability and detectability conditions, these equations have a unique solution, which can be found by a simple bisection algorithm. When the level of saturation tends to infinity, these equations reduce to their standard LQR/LQG counterparts. In addition, the paper investigates the properties of closed-loop systems with SLQR/SLQG controllers and saturating actuators. In this regard, it is shown that SLQR/SLQG controllers ensure semi-global stability by an appropriate choice of a parameter in the performance criterion. Finally, the techniques developed are illustrated by a ship roll damping problem.
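As a rough illustration of the bisection idea (not the paper's SLQR/SLQG equations), the sketch below solves the scalar transcendental fixed point for the quasilinear gain of a saturating actuator under stochastic linearization, with the input variance supplied by a closed-loop variance (Lyapunov) equation; all numbers are made up.

```python
from math import erf, sqrt

# Scalar plant x+ = a*x + b*sat(u) + w with linear feedback u = -kfb*x.
# Under stochastic linearization, sat(.) is replaced by the gain
# N = erf(alpha / (sqrt(2)*sigma_u)), and sigma_u depends on N through
# the closed-loop variance equation, giving a transcendental fixed point.
a, b = 0.95, 0.1
kfb = 2.0            # hypothetical feedback gain
alpha = 1.0          # saturation level
sigma_w = 0.5        # std of the process noise w

def residual(N):
    acl = a - b * N * kfb
    var_x = sigma_w**2 / (1.0 - acl**2)      # requires |acl| < 1
    sigma_u = kfb * sqrt(var_x)
    return N - erf(alpha / (sqrt(2.0) * sigma_u))

lo, hi = 1e-6, 1.0                           # quasilinear gain lies in (0, 1]
for _ in range(60):                          # simple bisection
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print("quasilinear gain ~", 0.5 * (lo + hi))
```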

7.
In this note, we consider the finite-horizon quadratic optimal control problem for discrete-time Markovian jump linear systems driven by a wide-sense white noise sequence. We assume that the output variable and the jump parameters are available to the controller. It is desired to design a dynamic Markovian jump controller such that the closed-loop system minimizes the quadratic cost functional of the system over a finite horizon. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati difference equations, one associated with the optimal control problem when the state variable is available, and the other associated with the optimal filtering problem. This is a separation principle for the finite-horizon quadratic optimal control problem for discrete-time Markovian jump linear systems. When there is only one mode of operation, our results coincide with the traditional separation principle for the linear quadratic Gaussian control of discrete-time linear systems.
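In the single-mode case mentioned in the last sentence, the separation principle reduces to standard finite-horizon LQG: one backward Riccati recursion for control and one forward Riccati recursion for filtering, combined by certainty equivalence. A minimal sketch with placeholder data (not the jump-system construction of the note):

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-mode (no jumps) finite-horizon LQG; illustrative data only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.1]])          # control weights
W, V = 0.01 * np.eye(2), np.array([[0.04]])  # process / measurement noise covariances
N = 50

# Control Riccati recursion (backward in time).
P = Q.copy()
K = [None] * N
for k in range(N - 1, -1, -1):
    K[k] = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A + A.T @ P @ B @ K[k]

# Filtering Riccati recursion (forward in time) with the simulated plant.
x = np.array([1.0, 0.0])                     # true state
xhat = np.zeros(2)                           # filter estimate
S = np.eye(2)                                # estimation error covariance
for k in range(N):
    u = K[k] @ xhat                          # certainty-equivalent control
    y = C @ x + rng.multivariate_normal(np.zeros(1), V)
    # Measurement update.
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
    xhat = xhat + L @ (y - C @ xhat)
    S = (np.eye(2) - L @ C) @ S
    # Time update.
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), W)
    xhat = A @ xhat + B @ u
    S = A @ S @ A.T + W
```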

8.
The regulation of discrete-time, constant, linear stochastic systems with a quadratic performance criterion is considered. A fixed-dimensional, linear time-invariant controller is used. Algebraic matrix equations are derived whose solutions are the gains of the fixed-dimensional optimal controller.

9.
In this paper, the linear quadratic regulation problem for discrete-time systems with state delays and multiplicative noise is considered. A necessary and sufficient condition for the problem to admit a unique solution is given. Under this condition, the optimal feedback control and the optimal cost are presented via a set of coupled difference equations. The approach is based on the maximum principle, and the key technique is to establish relations between the costate and the state.

10.
Consider a discrete-time nonlinear system with random disturbances appearing in both the real plant and the output channel, where the randomly perturbed output is measurable. An iterative procedure based on the linear quadratic Gaussian optimal control model is developed for solving the optimal control problem of this stochastic system. The optimal state estimate provided by Kalman filtering theory and the optimal control law obtained from the linear quadratic regulator problem are integrated into the dynamic integrated system optimisation and parameter estimation algorithm. Despite model-reality differences, the iterative solutions of the model-based optimal control problem converge to the solution of the original optimal control problem of the discrete-time nonlinear system. An illustrative example is solved using the proposed method, and the results show the effectiveness of the algorithm.

11.
The transformation of digital optimal control problems, involving continuous-time linear systems with white stochastic parameters and quadratic integral criteria, into discrete-time equivalents is considered. The system parameters have time-varying statistics. The observations available at the sampling instants are in general nonlinear and corrupted by discrete-time noise. The equivalent discrete-time system has white stochastic parameters. Expressions are derived for the first and second moments of these parameters and for the parameters of the equivalent discrete-time sum criterion; these expressions are explicit in the parameters and statistics of the original digital optimal control problem. A numerical algorithm to compute these expressions is presented. For each sampling interval, the algorithm computes the expressions recursively, forward in time, using successive equidistant evaluations of the matrices that determine the original digital optimal control problem. The algorithm is illustrated with three examples. If the observations at the sampling instants are linear and corrupted by multiplicative and/or additive discrete-time white noise, then, using recent results, full- and reduced-order controllers that solve the equivalent discrete-time optimal control problem can be computed.

12.
This paper investigates the linear quadratic regulation (LQR) problem for discrete-time systems with multiplicative noise. In the existing literature, the multiplicative noise is usually assumed to be scalar. Motivated by recent applications in networked control systems and MIMO communication technology, we consider multi-channel multiplicative noise represented by a diagonal matrix. We first show that the finite horizon LQR problem can be solved using a generalized Riccati equation. We then prove the convergence of the generalized Riccati equation under the conditions of stabilization and exact observability, and obtain the solution to the infinite horizon LQR problem. Finally, we provide a numerical example to demonstrate the proposed approach.
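A hedged sketch of the kind of generalized Riccati iteration involved, written for a generic system x_{k+1} = (A + sum_i w_i A_i) x_k + (B + sum_i w_i B_i) u_k with zero-mean, unit-variance, mutually independent noise channels; the matrices are made-up placeholders, and the loop simply iterates the recursion until it settles (which the paper guarantees under stabilization and exact observability).

```python
import numpy as np

# Illustrative placeholder data for a two-channel multiplicative-noise system.
A = np.array([[1.0, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
Ai = [0.1 * np.eye(2), np.array([[0.0, 0.05], [0.0, 0.0]])]   # state-noise channels
Bi = [np.array([[0.0], [0.1]]), np.zeros((2, 1))]             # input-noise channels
Q, R = np.eye(2), np.eye(1)

P = np.zeros((2, 2))
for _ in range(2000):
    # Noise-augmented quadratic terms of the generalized Riccati operator.
    Hxx = A.T @ P @ A + sum(Aj.T @ P @ Aj for Aj in Ai)
    Hxu = A.T @ P @ B + sum(Aj.T @ P @ Bj for Aj, Bj in zip(Ai, Bi))
    Huu = R + B.T @ P @ B + sum(Bj.T @ P @ Bj for Bj in Bi)
    P_new = Q + Hxx - Hxu @ np.linalg.solve(Huu, Hxu.T)
    if np.abs(P_new - P).max() < 1e-12:
        P = P_new
        break
    P = P_new

K = -np.linalg.solve(Huu, Hxu.T)    # stationary feedback u = K x
```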

13.
This paper investigates a distributed linear quadratic regulation (LQR) controller design method for discrete-time homogeneous scalar systems. Based on optimal centralised control theory, a condition for the existence of a distributed optimal controller is first proposed. It is shown that the globally optimal distributed controller depends on the structure of the penalty matrix. Such results can be used in consensus problems and to determine under which communication topology (not necessarily all-to-all) the optimal distributed controller exists. When the proposed condition does not hold, a suboptimal design method is proposed, based on the decomposition of discrete algebraic Riccati equations and the robustness of local controllers. The computational complexity and communication load for each subsystem depend only on the number of its neighbours.

14.
A new approach to the solution of the regulator problem for linear discrete-time dynamical systems with non-Gaussian disturbances is proposed. This approach generalizes a previous result concerning the definition of the quadratic optimal regulator. It consists of the definition of a polynomial optimal algorithm of order ν for the solution of the linear quadratic non-Gaussian stochastic regulator problem for systems with partial state information. The validity of the separation principle is also proved in this case. Numerical simulations show the high performance of the proposed method compared with classical linear regulation techniques.

15.
A standard assumption in traditional (deterministic and stochastic) optimal (minimizing) linear quadratic regulator (LQR) theory is that the control weighting matrix in the cost functional is strictly positive definite. In the deterministic case, this assumption is in fact necessary for the problem to be well posed, because positive definiteness is required to make it a convex optimization problem. However, it has recently been shown that in the stochastic case, when the diffusion term depends on the control, the control weighting matrix may have negative eigenvalues and yet the problem remains well posed. In this paper, the completely observed stochastic LQR problem with integral quadratic constraints is studied. Sufficient conditions for the well-posedness of this problem are given. Indeed, we show that in certain cases these conditions may be satisfied even when the control weighting matrices in the cost and in all of the constraint functionals have negative eigenvalues. It is revealed that the seemingly nonconvex problem (with indefinite control weights) can actually be a convex one by virtue of the uncertainty in the system. Finally, when these conditions are satisfied, the optimal control is explicitly derived using results from duality theory.

16.
A finite horizon linear quadratic (LQ) optimal control problem is studied for a class of discrete-time linear fractional systems (LFSs) affected by multiplicative, independent random perturbations. Based on the dynamic programming technique, two methods are proposed for solving this problem. The first one appears to be new and uses a linear, expanded-state model of the LFS; the LQ optimal control problem reduces to a similar one for stochastic linear systems, and the solution is obtained by solving Riccati equations. The second method appeals to the principle of optimality and provides an algorithm for the computation of the optimal control and cost by using the fractional system directly. As expected, in both cases the optimal control is a linear function of the state and can be computed by a computer program. A numerical example and comparative simulations of the optimal trajectory demonstrate the effectiveness of the two methods. Further simulations are provided for different values of the fractional order.

17.
The optimal projection equations obtained in [2,3] for reduced-order, discrete-time state estimation are generalized to include the effects of state- and measurement-dependent noise to provide a model of parameter uncertainty. In contrast to the single matrix Riccati equation arising in the full-order (Kalman filter) case, the optimal steady-state reduced-order discrete-time estimator is characterized by three matrix equations (one modified Riccati equation and two modified Lyapunov equations) coupled by both an oblique projection and stochastic effects.

18.
In this paper, three optimal linear formation control algorithms are proposed for first-order linear multi-agent systems from a linear quadratic regulator (LQR) perspective, with cost functions consisting of both interaction energy cost and individual energy cost, since both the collective objective (such as formation or consensus) and the individual goal of each agent are important for the overall system. First, we propose an optimal formation algorithm for first-order multi-agent systems without initial physical couplings. The optimal control parameter matrix of the algorithm is the solution to an algebraic Riccati equation (ARE), and it is shown that this matrix is the sum of a Laplacian matrix and a positive definite diagonal matrix. Next, for physically interconnected multi-agent systems, the optimal formation algorithm is presented, and the corresponding parameter matrix is obtained from the solution to a group of quadratic equations in one unknown. Finally, if the communication topology between agents is fixed, the local feedback gain is obtained from the solution to a quadratic equation in one unknown, derived from the derivative of the cost function with respect to the local feedback gain. Numerical examples are provided to validate the effectiveness of the proposed approaches and to illustrate the geometric performance of the multi-agent systems.
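For the ARE-based case, a minimal sketch of the computation for single-integrator agents, with a graph-Laplacian-plus-diagonal state weight chosen purely for illustration (this is not the paper's exact cost construction):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Single-integrator agents x_i' = u_i stacked as x' = u (A = 0, B = I).
# The state penalty mixes an interaction part (graph Laplacian) with an
# individual part (diagonal); all numbers are illustrative placeholders.
n = 4
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)   # path-graph Laplacian
D = np.diag([1.0, 0.5, 0.5, 1.0])               # individual energy weights
Q = L + D
R = np.eye(n)

A = np.zeros((n, n))
B = np.eye(n)
P = solve_continuous_are(A, B, Q, R)            # here the ARE reduces to P @ P = Q
K = -np.linalg.solve(R, B.T @ P)                # optimal feedback u = K x
print(np.round(K, 3))
```

With A = 0, B = I and R = I the ARE collapses to P @ P = Q, so the feedback is the negative matrix square root of the penalty matrix; the Laplacian-plus-diagonal structure of the optimal gain described above comes from the paper's specific cost construction, not from this toy example.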

19.
A variational approach is taken to derive optimality conditions for a discrete-time linear quadratic adaptive stochastic optimal control problem. These conditions lead to an algorithm for computing optimal control laws which differs from the dynamic programming algorithm.

20.
In this paper, we study a linear-quadratic optimal control problem for mean-field stochastic differential equations driven by a Poisson random martingale measure and a one-dimensional Brownian motion. First, the existence and uniqueness of the optimal control are obtained by the classic convex variation principle. Second, by the duality method, the optimality system, also called the stochastic Hamilton system, which turns out to be a linear, fully coupled mean-field forward-backward stochastic differential equation with jumps, is derived to characterize the optimal control. Third, applying a decoupling technique, we establish the connection between two Riccati equations and the stochastic Hamilton system, and then prove that the optimal control has a state feedback representation.
