Similar Documents
20 similar documents found.
1.
This work concerns the optimal regulation of single-input–single-output nonminimum-phase nonlinear processes. The problem of calculation of an ISE-optimal, statically equivalent, minimum-phase output for nonminimum-phase compensation is formulated using Hamilton–Jacobi theory and the normal form representation of the nonlinear system. A Newton–Kantorovich iteration is developed for the solution of the pertinent Hamilton–Jacobi equations, which involves solving a Zubov equation at each step of the iteration. The method is applied to the problem of controlling a nonisothermal CSTR with Van de Vusse kinetics, which exhibits nonminimum-phase behaviour.
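For orientation, the normal form referenced above is the standard one for a SISO system of relative degree r (a generic statement of the representation, not the paper's specific construction):

```latex
\dot{\xi}_i = \xi_{i+1}, \quad i = 1,\dots,r-1, \qquad
\dot{\xi}_r = \alpha(\xi,\eta) + \beta(\xi,\eta)\,u, \qquad
\dot{\eta} = q(\xi,\eta), \qquad y = \xi_1 .
```

The zero dynamics $\dot{\eta} = q(0,\eta)$ being unstable is what makes the process nonminimum-phase, and is what the statically equivalent minimum-phase output is designed to work around.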

2.
In this paper we consider nonautonomous optimal control problems of infinite horizon type, whose control actions are given by L1-functions. We verify that the value function is locally Lipschitz. The equivalence between dynamic programming inequalities and Hamilton–Jacobi–Bellman (HJB) inequalities for proximal sub- and supergradients is proven. Using this result we show that the value function is a Dini solution of the HJB equation. We obtain a verification result for the class of Dini sub-solutions of the HJB equation and also prove a minimax property of the value function with respect to the sets of Dini semi-solutions of the HJB equation. We introduce the concept of viscosity solutions of the HJB equation on the infinite horizon and prove the equivalence between this and the concept of Dini solutions. In the Appendix we provide an existence theorem.
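For reference, a common form of the discounted infinite-horizon HJB equation (a generic statement under standard assumptions, not the paper's exact nonautonomous formulation):

```latex
\rho\, V(t,x) \;=\; \partial_t V(t,x) \;+\; \inf_{u \in U}\Big\{\, \ell(t,x,u) + \nabla_x V(t,x)\cdot f(t,x,u) \,\Big\},
```

where $\rho > 0$ is the discount rate; the Dini and viscosity notions discussed above give meaning to this equation when V is only locally Lipschitz.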

3.
4.
A deterministic optimal control problem is solved for a control-affine non-linear system with a non-quadratic cost function. We algebraically solve the Hamilton–Jacobi equation for the gradient of the value function, which eliminates the need to solve the Hamilton–Jacobi partial differential equation explicitly. We interpret the value function in terms of the control Lyapunov function, and then provide the stabilizing controller and the stability margins. Furthermore, we derive an optimal controller for a control-affine non-linear system using the state-dependent Riccati equation (SDRE) method; this method yields an optimal controller similar to the one obtained from the algebraic method. We also find the optimal controller when the cost function is of exponential-of-integral form, which is known as risk-sensitive (RS) control. Finally, we show that the SDRE and RS methods give equivalent optimal controllers for non-linear deterministic systems. Examples demonstrate the proposed methods.
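As a concrete illustration of the SDRE step, here is a minimal sketch for a hypothetical two-state system; the state-dependent coefficient (SDC) factorization A(x) and all numerical values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical SDC factorization x_dot = A(x) x + B(x) u:
# the nonlinearity is folded into the state-dependent matrix A(x).
def A_of_x(x):
    return np.array([[0.0, 1.0],
                     [-1.0 - x[0]**2, -0.5]])

def B_of_x(x):
    return np.array([[0.0],
                     [1.0]])

Q = np.eye(2)            # state weight (assumed)
R = np.array([[1.0]])    # control weight (assumed)

def sdre_control(x):
    """Solve the state-dependent Riccati equation at the current state
    and apply the pointwise LQR law u = -R^{-1} B(x)^T P(x) x."""
    A, B = A_of_x(x), B_of_x(x)
    P = solve_continuous_are(A, B, Q, R)
    return -np.linalg.solve(R, B.T @ P @ x)

# Forward-Euler closed-loop simulation from an assumed initial state.
x, dt = np.array([1.0, 0.0]), 0.01
for _ in range(2000):
    x = x + dt * (A_of_x(x) @ x + B_of_x(x) @ sdre_control(x))
print(x)   # should approach the origin if the SDC pair stays stabilizable
```

The Riccati equation is re-solved at every state along the trajectory, which is what distinguishes SDRE from a one-shot LQR design about the origin.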

5.
This paper provides a solution to a new problem of global robust control for uncertain nonlinear systems. A new recursive design of stabilizing feedback control is proposed in which inverse optimality is achieved globally through the selection of generalized state-dependent scaling. The inverse optimal control law can always be designed such that its linearization is identical to the optimal control of the linearized system with respect to a prescribed quadratic cost functional. Like other backstepping methods, this design always succeeds for systems in strict-feedback form. The significance of the result stems from the fact that our controllers achieve a desired level of ‘global’ robustness that is prescribed a priori. By uniting locally optimal robust control and global robust control with global inverse optimality, one can obtain global control laws with reasonable robustness without solving Hamilton–Jacobi equations directly.
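The basic inverse-optimality mechanism behind such designs, in its simplest form (a generic fact, not the paper's scaled construction): given a control Lyapunov function V for $\dot{x} = f(x) + g(x)u$ and a state-dependent weight $R(x) > 0$, the feedback

```latex
u^{*}(x) \;=\; -\tfrac{1}{2}\, R(x)^{-1} \big(L_g V(x)\big)^{\top}
```

minimizes $J = \int_0^\infty \big(\ell(x) + u^\top R(x)\,u\big)\,dt$ with the state penalty $\ell(x) = -L_f V(x) + \tfrac{1}{4}\, L_g V(x)\, R(x)^{-1} \big(L_g V(x)\big)^{\top}$ constructed a posteriori; the freedom to choose R(x) is what the generalized state-dependent scaling exploits.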

6.
This paper introduces a valuation model of international pricing in the presence of political risk. Shipments between countries incur shipping costs, and the country-specific production processes are modelled as diffusion processes. The political risk is modelled as a continuous-time jump process that affects the drift of the returns in the politically unstable countries. The valuation model gives rise to a singular stochastic control problem that is analyzed numerically. The fundamental tools come from the theory of viscosity solutions of the associated Hamilton–Jacobi–Bellman equation, which turns out to be a system of integro-differential variational inequalities with gradient constraints.

7.
It is known that the so-called nonlinear H∞ control problem is locally solvable if the corresponding problem for the linearized system can be solved by linear feedback. In this paper we prove that this condition also suffices to solve a global H∞ control problem, for a fairly large class of nonlinear systems, if one is free to choose a state-dependent weight on the control input. Using a two-way (backward and forward) recursive induction argument, we simultaneously construct, starting from a solution of the algebraic Riccati equation, a global solution of the Hamilton–Jacobi–Isaacs partial differential equation arising in nonlinear H∞ control, as well as a state feedback control law that achieves global disturbance attenuation with internal stability for the nonlinear systems.
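For reference, the Hamilton–Jacobi–Isaacs equation in question has the standard form for $\dot{x} = f(x) + g_1(x)w + g_2(x)u$ with penalty $\|h(x)\|^2 + \|u\|^2$ and attenuation level $\gamma$ (a generic statement, not the paper's weighted version):

```latex
\nabla V(x)\, f(x)
\;+\; \tfrac{1}{2}\, \nabla V(x) \Big( \tfrac{1}{\gamma^{2}}\, g_1(x) g_1(x)^{\top} - g_2(x) g_2(x)^{\top} \Big) \nabla V(x)^{\top}
\;+\; \tfrac{1}{2}\, h(x)^{\top} h(x) \;\le\; 0,
```

with the attenuating feedback $u^{*}(x) = -g_2(x)^{\top} \nabla V(x)^{\top}$; roughly, a state-dependent control weight modifies the $g_2 g_2^{\top}$ term, which is the extra freedom exploited above.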

8.
This paper is concerned with a stochastic linear-quadratic (LQ) problem in an infinite time horizon with multiplicative noises both in the state and the control. A distinctive feature of the problem under consideration is that the cost weighting matrices for the state and the control are allowed to be indefinite. A new type of algebraic Riccati equation – called a generalized algebraic Riccati equation (GARE) – is introduced which involves a matrix pseudo-inverse and two additional algebraic equality/inequality constraints. It is then shown that the well-posedness of the indefinite LQ problem is equivalent to a linear matrix inequality (LMI) condition, whereas the attainability of the LQ problem is equivalent to the existence of a “stabilizing solution” to the GARE. Moreover, all possible optimal controls are identified via the solution to the GARE. Finally, it is proved that the solution to the GARE can be obtained via solving a convex optimization problem called semidefinite programming.
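For flavor, here is a hedged sketch of the SDP route in the much simpler deterministic, definite-cost special case, where the maximal solution of the LMI below solves the standard algebraic Riccati equation (the paper's GARE adds multiplicative-noise terms, a pseudo-inverse, and extra equality/inequality constraints not shown here; all matrices are assumptions).

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # assumed system matrices
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])        # definite weights (special case)

P = cp.Variable((2, 2), symmetric=True)
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ B],
               [B.T @ P, R]])
# Symmetrize explicitly so CVXPY accepts the PSD constraint.
prob = cp.Problem(cp.Maximize(cp.trace(P)), [(lmi + lmi.T) / 2 >> 0])
prob.solve()
print(P.value)   # maximal P: the stabilizing ARE solution (up to tolerance)
```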

9.
In this article, optimal control problems of differential equations with delays are investigated, for which the associated Hamilton–Jacobi–Bellman (HJB) equations are nonlinear partial differential equations with delays. This type of HJB equation has not been previously studied and is difficult to solve because the state equations do not possess smoothing properties. We introduce a new notion of viscosity solutions and identify the value functional of the optimal control problems as the unique solution to the associated HJB equations. An analytical example is given as an application.

10.
This paper deals with the regularity of solutions of the Hamilton–Jacobi inequality which arises in H∞ control. It shows by explicit counterexamples that there are gaps between the existence of continuous and locally Lipschitz (positive definite and proper) solutions, and between Lipschitz and continuously differentiable ones. On the other hand, it is shown that it is always possible to smooth out solutions, provided that an infinitesimal increase in gain is allowed.

11.
We introduce the optimal control problem associated with ultradiffusion processes as a stochastic-differential-equation-constrained optimization of the expected system performance over the set of feasible trajectories. The associated Bellman function is characterized as the solution to a Hamilton–Jacobi equation evaluated along an optimal process. For an important class of ultradiffusion processes, we define the value function in terms of the time and the natural state variables. Approximation solvability is shown, and an application to mathematical finance demonstrates the applicability of the paradigm. In particular, we utilize a method-of-lines finite element method to approximate the value function of a European-style call option in a market subject to asset liquidity risk (including limit orders) and brokerage fees.
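To make the method-of-lines idea concrete, here is a hedged sketch that prices a plain Black–Scholes European call by semi-discretizing in the asset price with finite differences (the paper uses finite elements and adds liquidity and fee terms, all omitted here; every number is an assumption).

```python
import numpy as np
from scipy.integrate import solve_ivp

r, sigma, K, T = 0.05, 0.2, 100.0, 1.0          # assumed market data
S = np.linspace(0.0, 4 * K, 201)                 # price grid
h = S[1] - S[0]

def rhs(tau, V):
    """Method of lines: Black-Scholes PDE in time-to-maturity tau,
    central differences in S, asymptotic boundary conditions."""
    dV = np.zeros_like(V)
    Vs = (V[2:] - V[:-2]) / (2 * h)
    Vss = (V[2:] - 2 * V[1:-1] + V[:-2]) / h**2
    dV[1:-1] = (0.5 * sigma**2 * S[1:-1]**2 * Vss
                + r * S[1:-1] * Vs - r * V[1:-1])
    dV[0] = 0.0                                  # V(0, tau) = 0
    dV[-1] = r * K * np.exp(-r * tau)            # V ~ S - K e^{-r tau}
    return dV

payoff = np.maximum(S - K, 0.0)                  # terminal condition
sol = solve_ivp(rhs, (0.0, T), payoff, method="BDF")
print(np.interp(K, S, sol.y[:, -1]))             # at-the-money price
```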

12.
An approach to solving finite-time-horizon suboptimal feedback control problems for partial differential equations is proposed, based on solving dynamic programming equations on adaptive sparse grids. A semi-discrete optimal control problem is introduced and the feedback control is derived from the corresponding value function. The value function can be characterized as the solution of an evolutionary Hamilton–Jacobi–Bellman (HJB) equation which is defined over a state space whose dimension equals that of the underlying semi-discrete system. Besides a low-dimensional semi-discretization, it is important to solve the HJB equation efficiently to address the curse of dimensionality. We propose to apply a semi-Lagrangian scheme using spatially adaptive sparse grids. Sparse grids allow the discretization of the value functions in (higher) space dimensions since the curse of dimensionality of full-grid methods arises to a much smaller extent. For additional efficiency an adaptive grid-refinement procedure is explored. The approach is illustrated for the wave equation, and an extension to equations of Schrödinger type is indicated. We present several numerical examples studying the effect that the parameters characterizing the sparse grid have on the accuracy of the value function and the optimal trajectory.
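A hedged one-dimensional sketch of the semi-Lagrangian step (on a regular grid, standing in for the paper's adaptive sparse grids; the dynamics, cost, and discount are assumptions): the scheme follows the characteristic for one time step and interpolates the value function.

```python
import numpy as np

lam, dt = 1.0, 0.05                         # discount rate, SL time step
xs = np.linspace(-2.0, 2.0, 201)            # regular grid (not sparse)
us = np.linspace(-1.0, 1.0, 41)             # discretized controls
V = np.zeros_like(xs)

# Fixed-point iteration on
#   V(x) = min_u { dt*l(x,u) + e^{-lam*dt} V(x + dt*f(x,u)) }
# for the assumed problem x_dot = u, |u| <= 1, running cost x^2 + u^2.
for _ in range(1000):
    V_new = np.full_like(V, np.inf)
    for u in us:
        x_next = np.clip(xs + dt * u, xs[0], xs[-1])   # characteristic step
        cand = dt * (xs**2 + u**2) + np.exp(-lam * dt) * np.interp(x_next, xs, V)
        V_new = np.minimum(V_new, cand)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new
print(V[np.searchsorted(xs, 0.0)])          # value near the origin (~0)
```

On sparse grids the same update is applied, with `np.interp` replaced by hierarchical sparse-grid interpolation, which is where the curse of dimensionality is mitigated.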

13.
An investment problem is considered with a dynamic mean–variance (M–V) portfolio criterion under discontinuous prices described by jump-diffusion processes. Some investment strategies are restricted in the study; this M–V portfolio problem with restrictions leads to a stochastic optimal control model. The corresponding stochastic Hamilton–Jacobi–Bellman equation of the problem with linear and nonlinear constraints is derived. Numerical algorithms are presented for finding the optimal solution in this article. Finally, a computational experiment illustrates the proposed methods by comparison with the M–V portfolio problem without constraints.

14.
In this paper, a new formulation for the optimal tracking control problem (OTCP) of continuous-time nonlinear systems is presented. This formulation extends the integral reinforcement learning (IRL) technique, a method for solving optimal regulation problems, to learn the solution to the OTCP. Unlike existing solutions to the OTCP, the proposed method does not require knowledge or identification of the system drift dynamics, and it takes the input constraints into account a priori. An augmented system composed of the error system dynamics and the command generator dynamics is used to introduce a new nonquadratic discounted performance function for the OTCP, which encodes the input constraints into the optimization problem. A tracking Hamilton–Jacobi–Bellman (HJB) equation associated with this nonquadratic performance function is derived which gives the optimal control solution. An online IRL algorithm is presented to learn the solution to the tracking HJB equation without knowing the system drift dynamics. Convergence to a near-optimal control solution and stability of the whole system are shown under a persistence-of-excitation condition. Simulation examples are provided to show the effectiveness of the proposed method.
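The flavor of the IRL update can be seen in this hedged sketch for the unconstrained LQR special case (the tracking, constraint, and actor–critic machinery of the paper are omitted, and all matrices are assumptions). The drift matrix A is used only to generate simulation data; the learning equations never touch it.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # drift: unknown to the learner
B = np.array([[0.0], [1.0]])                # input dynamics assumed known
Q, R = np.eye(2), np.array([[1.0]])
K = np.zeros((1, 2))                        # initial stabilizing policy (A is stable here)
Tw, dt = 0.5, 0.001                         # reinforcement interval, sim step

def phi(x):                                 # quadratic basis for V(x) = x'Px
    return np.array([x[0]**2, 2 * x[0] * x[1], x[1]**2])

for it in range(10):                        # policy iteration
    rows, targets = [], []
    for x0 in [np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.]),
               np.array([1., -1.]), np.array([0.5, 2.])]:
        x, cost = x0.copy(), 0.0
        for _ in range(int(Tw / dt)):       # roll out one interval
            u = -K @ x
            cost += dt * (x @ Q @ x + u @ R @ u)
            x = x + dt * (A @ x + B @ u)
        # IRL Bellman identity: V(x0) - V(x(Tw)) = integral cost
        rows.append(phi(x0) - phi(x))
        targets.append(cost)
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    P = np.array([[w[0], w[1]], [w[1], w[2]]])
    K = np.linalg.solve(R, B.T @ P)         # policy improvement, model-free in A
print(P)   # converges toward the ARE solution of (A, B, Q, R)
```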

15.
Optimal controllers guarantee many desirable properties including stability and robustness of the closed-loop system. Unfortunately, the design of optimal controllers is generally very difficult because it requires solving an associated Hamilton–Jacobi–Bellman equation. In this paper we develop a new approach that allows the formulation of some nonlinear optimal control problems whose solution can be stated explicitly as a state-feedback controller. The approach is based on using Young's inequality to derive explicit conditions by which the solution of the associated Hamilton–Jacobi–Bellman equation is simplified. This allows us to formulate large families of nonlinear optimal control problems with closed-form solutions. We demonstrate this by developing optimal controllers for a Lotka–Volterra system.
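The inequality in question, for the record: for $a, b \ge 0$ and conjugate exponents $p, q > 1$ with $1/p + 1/q = 1$,

```latex
ab \;\le\; \frac{a^{p}}{p} \;+\; \frac{b^{q}}{q}.
```

In HJB manipulations it is typically used to dominate cross terms between the state and the control by separately tunable powers of each, which is roughly how the equation can be reduced to explicitly checkable conditions.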

16.
The solution of a stochastic control problem depends on the underlying model. The actual real-world model may not be known precisely, and so one solves the problem for a hypothetical model that is in general different from, but close to, the real one; the optimal (or nearly optimal) control of the hypothetical model is then used as the solution for the real problem. In this paper, we assume that what is not precisely known is the underlying probability measure that determines the distribution of the random quantities driving the model. We investigate two ways to derive a bound on the suboptimality of the optimal control of the hypothetical problem when this control is used in the real problem. Both bounds are in terms of the Radon–Nikodym derivative of the underlying real-world measure with respect to the hypothetical one. We finally investigate how the bounds compare to each other.

17.
H∞-like control for nonlinear stochastic systems
In this paper we develop an H∞-type theory, from the dissipation point of view, for a large class of continuous-time stochastic nonlinear systems. In particular, we introduce the notion of stochastic dissipative systems, analogously to the familiar notion of dissipation associated with deterministic systems, and utilize it as a basis for the development of our theory. Having discussed certain properties of stochastic dissipative systems, we consider time-varying nonlinear systems for which we establish a connection between what is called the L2-gain property and the solution to a certain Hamilton–Jacobi inequality (HJI), which may be viewed as a bounded real lemma for stochastic nonlinear systems. The time-invariant case with infinite horizon is also considered, where for this case we synthesize a worst-case-based stabilizing controller; stability here is understood in the mean-square sense. In the stationary case, the problem of robust state feedback control is considered for norm-bounded uncertainties. A solution is then derived in terms of linear matrix inequalities.

18.
In a series of papers, we proved theorems characterizing the value function in exit time optimal control as the unique viscosity solution of the corresponding Bellman equation that satisfies appropriate side conditions. The results applied to problems which satisfy a positivity condition on the integral of the Lagrangian. This positive integral condition assigned a positive cost to remaining outside the target on any interval of positive length. In this note, we prove a new theorem which characterizes the exit time value function as the unique bounded-from-below viscosity solution of the Bellman equation that vanishes on the target. The theorem applies to problems satisfying an asymptotic condition on the trajectories, including cases where the positive integral condition is not satisfied. Our results are based on an extended version of Barbălat's lemma. We apply the theorem to variants of the Fuller problem and other examples where the Lagrangian is degenerate.
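The standard statement of the lemma referenced above:

```latex
\text{If } \varphi : [0,\infty) \to \mathbb{R} \text{ is uniformly continuous and }
\lim_{t \to \infty} \int_0^{t} \varphi(s)\, ds \text{ exists and is finite, then }
\lim_{t \to \infty} \varphi(t) = 0.
```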

19.
A sufficient condition for solving an optimal control problem is to solve the Hamilton–Jacobi–Bellman (HJB) equation. However, finding a value function that satisfies the HJB equation for a nonlinear system is challenging. For optimal control problems where a cost function is provided a priori, previous efforts have utilized feedback linearization methods, which assume exact model knowledge, or have developed neural network (NN) approximations of the HJB value function. The result in this paper uses the implicit learning capabilities of the RISE (Robust Integral of the Sign of the Error) control structure to learn the dynamics asymptotically. Specifically, a Lyapunov stability analysis is performed to show that the RISE feedback term asymptotically identifies the unknown dynamics, yielding semi-global asymptotic tracking. In addition, it is shown that the system converges to a state-space system that has a quadratic performance index which is optimized by an additional control element. An extension is included to illustrate how an NN can be combined with the previous results. Experimental results are given to demonstrate the proposed controllers.

20.
We consider a class of finite time horizon optimal control problems for continuous time linear systems with a convex cost, convex state constraints and non-convex control constraints. We propose a convex relaxation of the non-convex control constraints, and prove that the optimal solution of the relaxed problem is also an optimal solution for the original problem, which is referred to as the lossless convexification of the optimal control problem. The lossless convexification enables the use of interior point methods of convex optimization to obtain globally optimal solutions of the original non-convex optimal control problem. The solution approach is demonstrated on a number of planetary soft landing optimal control problems.
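A hedged discretized sketch of the relaxation idea on a 2-D double-integrator "soft landing" toy problem (all dynamics and numbers are illustrative assumptions; cvxpy is assumed as the modeling layer). The nonconvex annulus constraint ρ₁ ≤ ‖u‖ ≤ ρ₂ is replaced by the convex pair ‖u‖ ≤ s, ρ₁ ≤ s ≤ ρ₂.

```python
import numpy as np
import cvxpy as cp

N, dt, g = 60, 0.5, np.array([0.0, -1.0])   # horizon, step, gravity (assumed)
rho1, rho2 = 0.5, 3.0                        # thrust bounds (assumed)

x = cp.Variable((N + 1, 2))   # position
v = cp.Variable((N + 1, 2))   # velocity
u = cp.Variable((N, 2))       # thrust acceleration
s = cp.Variable(N)            # slack bounding ||u||

cons = [x[0] == np.array([0.0, 50.0]), v[0] == np.array([5.0, 0.0]),
        x[N] == np.zeros(2), v[N] == np.zeros(2)]
for k in range(N):
    cons += [x[k + 1] == x[k] + dt * v[k],            # Euler dynamics
             v[k + 1] == v[k] + dt * (u[k] + g),
             cp.norm(u[k]) <= s[k],                    # convex relaxation of
             s[k] >= rho1, s[k] <= rho2]               # rho1 <= ||u|| <= rho2

prob = cp.Problem(cp.Minimize(dt * cp.sum(s)), cons)   # fuel-like objective
prob.solve()
# Under the cited theory the relaxation is tight at the optimum:
# ||u_k|| == s_k, so the original nonconvex constraint is recovered.
print(prob.status, np.linalg.norm(u.value, axis=1).min())
```

Because the relaxed problem is a second-order cone program, an interior-point solver returns a globally optimal solution, which tightness then certifies for the original nonconvex problem.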
