Similar Literature (20 results)
1.
2.
Stochastic differential game techniques are applied to compare the performance of a medium-range air-to-air missile for three different thrust-mass profiles. The measure of the performance of the missile is the probability that it will reach a lock-on point with a favorable range of guidance and flight parameters during a fixed time interval [0, t_f].

3.
In this paper, we study an inverse optimal problem in discrete-time stochastic control. We give necessary and sufficient conditions for a solution to a system of stochastic difference equations to be the solution of a certain optimal control problem. Our results extend the work of Dechert to the stochastic case. In particular, we present a stochastic version of an important principle in welfare economics.

4.
In this paper, necessary conditions of optimality, in the form of a maximum principle, are obtained for singular stochastic control problems. This maximum principle is derived for a state process satisfying a general stochastic differential equation in which the coefficient associated with the control process may depend on the state, extending earlier results in the literature.

5.
In this paper, we discuss an application of the Stochastic Dual Dynamic Programming (SDDP) type algorithm to nested risk-averse formulations of Stochastic Optimal Control (SOC) problems. We propose a construction of a statistical upper bound for the optimal value of risk-averse SOC problems, which outlines an approach to a long-standing open problem in that area of research. The bound holds for a large class of convex and monotone conditional risk mappings. Finally, we demonstrate the validity of the statistical upper bound on a real-life stochastic hydro-thermal planning problem.

6.
In this paper we describe the algorithm OPTCON which has been developed for the optimal control of nonlinear stochastic models. It can be applied to obtain approximate numerical solutions of control problems where the objective function is quadratic and the dynamic system is nonlinear. In addition to the usual additive uncertainty, some or all of the parameters of the model may be stochastic variables. The optimal values of the control variables are computed in an iterative fashion: First, the time-invariant nonlinear system is linearized around a reference path and approximated by a time-varying linear system. Second, this new problem is solved by applying Bellman's principle of optimality. The resulting feedback equations are used to project expected optimal state and control variables. These projections then serve as a new reference path, and the two steps are repeated until convergence is reached. The algorithm has been implemented in the statistical programming system GAUSS. We derive some mathematical results needed for the algorithm and give an overview of the structure of OPTCON. Moreover, we report on some tentative applications of OPTCON to two small macroeconometric models for Austria.
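As a hedged illustration of the iteration described above (a sketch under assumed choices, not the authors' OPTCON/GAUSS implementation), the Python fragment below treats the expected path deterministically (additive noise replaced by its mean, no stochastic parameters), assumes a quadratic tracking objective with illustrative weights Q, R and targets x_star, u_star, linearizes the dynamics x_{t+1} = f(x_t, u_t) by finite differences, solves the resulting time-varying affine-quadratic problem by a backward Bellman recursion, and rolls the feedback law forward to obtain the next reference path.

    import numpy as np

    def jacobians(f, x, u, eps=1e-6):
        """Finite-difference Jacobians A = df/dx, B = df/du at (x, u)."""
        n, m = x.size, u.size
        fx = f(x, u)
        A, B = np.zeros((n, n)), np.zeros((n, m))
        for i in range(n):
            dx = np.zeros(n); dx[i] = eps
            A[:, i] = (f(x + dx, u) - fx) / eps
        for j in range(m):
            du = np.zeros(m); du[j] = eps
            B[:, j] = (f(x, u + du) - fx) / eps
        return A, B

    def optcon_like(f, Q, R, x_star, u_star, x0, T, max_iter=50, tol=1e-8):
        """Sequential linearization for min sum_t (x_t-x*)'Q(x_t-x*) + (u_t-u*)'R(u_t-u*)."""
        xs = np.tile(x0, (T + 1, 1))             # reference state path
        us = np.tile(u_star, (T, 1))             # reference control path
        for _ in range(max_iter):
            # Step 1: linearize around the reference path -> affine time-varying system.
            As, Bs, cs = [], [], []
            for t in range(T):
                A, B = jacobians(f, xs[t], us[t])
                As.append(A); Bs.append(B)
                cs.append(f(xs[t], us[t]) - A @ xs[t] - B @ us[t])
            # Step 2: backward Bellman recursion for the affine-quadratic problem.
            P, p = Q, -Q @ x_star                # terminal value (x - x*)'Q(x - x*)
            Ks, ks = [None] * T, [None] * T
            for t in reversed(range(T)):
                A, B, c = As[t], Bs[t], cs[t]
                M = R + B.T @ P @ B
                Ks[t] = -np.linalg.solve(M, B.T @ P @ A)
                ks[t] = np.linalg.solve(M, R @ u_star - B.T @ (P @ c + p))
                Abar, d = A + B @ Ks[t], B @ ks[t] + c
                p = -Q @ x_star + Ks[t].T @ R @ (ks[t] - u_star) + Abar.T @ (P @ d + p)
                P = Q + Ks[t].T @ R @ Ks[t] + Abar.T @ P @ Abar
            # Step 3: project the expected path forward under the feedback law.
            xs_new, us_new = np.zeros_like(xs), np.zeros_like(us)
            xs_new[0] = x0
            for t in range(T):
                us_new[t] = Ks[t] @ xs_new[t] + ks[t]
                xs_new[t + 1] = f(xs_new[t], us_new[t])
            converged = np.max(np.abs(xs_new - xs)) < tol
            xs, us = xs_new, us_new
            if converged:
                break
        return xs, us

Under these assumptions one might call, for instance, xs, us = optcon_like(lambda x, u: x + 0.1 * (u - x**3), Q=np.eye(1), R=np.eye(1), x_star=np.array([0.5]), u_star=np.array([0.0]), x0=np.array([1.0]), T=30).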

7.
8.
We prove a duality theorem for the stochastic optimal control problem with a convex cost function and show that the minimizer satisfies a class of forward–backward stochastic differential equations. As an application, we give an approach, from the duality theorem, to h-path processes for diffusion processes.

9.
This paper deals with a stochastic optimal control problem where the randomness is essentially concentrated in the stopping time terminating the process. If the stopping time is characterized by an intensity depending on the state and control variables, one can reformulate the problem equivalently as an infinite-horizon optimal control problem. Applying dynamic programming and minimum principle techniques to this associated deterministic control problem yields specific optimality conditions for the original stochastic control problem. It is also possible to characterize extremal steady states. The model is illustrated by an example related to the economics of technological innovation. This research has been supported by NSERC-Canada, Grants 36444 and A4952; by FCAR-Québec, Grant 88EQ3528, Actions Structurantes; and by MESS-Québec, Grant 6.1/7.4(28).
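A hedged sketch of the reformulation the abstract alludes to, in notation assumed here rather than taken from the paper: suppose the state follows deterministic dynamics dx/dt = f(x, u) and the process is stopped at a random time τ whose intensity (hazard rate) is λ(x(t), u(t)), so that the survival probability is S(t) = exp(-∫_0^t λ ds). Then the expected cost up to the stopping time can be rewritten as an ordinary infinite-horizon deterministic functional:

    S(t) = \exp\!\Big(-\int_0^t \lambda\big(x(s),u(s)\big)\,ds\Big),
    \mathbb{E}\Big[\int_0^{\tau} g\big(x(t),u(t)\big)\,dt + \Phi\big(x(\tau)\big)\Big]
        = \int_0^{\infty} S(t)\,\Big[g\big(x(t),u(t)\big) + \lambda\big(x(t),u(t)\big)\,\Phi\big(x(t)\big)\Big]\,dt,

so the intensity plays the role of a state- and control-dependent discount rate, and dynamic programming or minimum-principle arguments can then be applied to the right-hand side as to any infinite-horizon deterministic problem.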

10.
The purpose of this paper is to establish the first and second order necessary conditions for stochastic optimal controls in infinite dimensions. The control system is governed by a stochastic evolution equation, in which both drift and diffusion terms may contain the control variable and the set of controls is allowed to be nonconvex. Only one adjoint equation is introduced to derive the first order necessary optimality condition either by means of the classical variational analysis approach or, under an additional assumption, by using differential calculus of set-valued maps. More importantly, in order to avoid the essential difficulty with the well-posedness of higher order adjoint equations, using again the classical variational analysis approach, only the first and the second order adjoint equations are needed to formulate the second order necessary optimality condition, in which the solutions to the second order adjoint equation are understood in the sense of the relaxed transposition.

11.
In this work, the following problem is considered. A rigid body moves immersed in a fluid of infinite depth. Its goal is to hit another body floating on the surface of the fluid. Using stochastic control methods, a sequence of subproblems concerning the guidance and control of the immersed rigid body is dealt with.

12.
In this paper we study mathematically and computationally optimal control problems for stochastic elliptic partial differential equations. The control objective is to minimize the expectation of a tracking cost functional, and the control is of the deterministic, distributed type. The main analytical tool is the Wiener-Itô chaos or the Karhunen-Loève expansion. Mathematically, we prove the existence of an optimal solution; we establish the validity of the Lagrange multiplier rule and obtain a stochastic optimality system of equations; we represent the input data in their Wiener-Itô chaos expansions and deduce the deterministic optimality system of equations. Computationally, we approximate the optimality system through the discretizations of the probability space and the spatial space by the finite element method; we also derive error estimates in terms of both types of discretizations.
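To make the pipeline concrete, here is a small Python sketch under stated assumptions; it is not the authors' formulation. The Wiener-Itô chaos expansion and the finite element discretization are replaced by a truncated KL-type expansion of the diffusion coefficient sampled by Monte Carlo and a 1D finite-difference grid, and the deterministic distributed control minimizing the sample average of the tracking functional is obtained from the resulting linear optimality system. All names and numerical values are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # 1D model problem on (0,1): -(a(x,w) y')' = f, y(0) = y(1) = 0, with a
    # deterministic distributed control f and objective
    #   J(f) = E[ 0.5 * ||y - y_d||^2 ] + 0.5 * alpha * ||f||^2.
    n = 99                          # interior grid points
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    y_d = np.sin(np.pi * x)         # target state (illustrative)
    alpha = 1e-4
    K, N = 4, 200                   # expansion terms, Monte Carlo samples

    def coeff(xi):
        """Truncated KL-type expansion of the random diffusion coefficient (assumed form)."""
        a = 1.0 + np.zeros_like(x)
        for k in range(K):
            a += (0.3 / (k + 1)) * xi[k] * np.sin((k + 1) * np.pi * x)
        return np.maximum(a, 0.1)   # keep the coefficient uniformly positive

    def stiffness(a):
        """Finite-difference matrix for -(a y')' with homogeneous Dirichlet conditions."""
        a_half = np.empty(n + 1)            # coefficient at cell interfaces
        a_half[1:n] = 0.5 * (a[:-1] + a[1:])
        a_half[0], a_half[n] = a[0], a[-1]
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = (a_half[i] + a_half[i + 1]) / h**2
            if i > 0:
                A[i, i - 1] = -a_half[i] / h**2
            if i < n - 1:
                A[i, i + 1] = -a_half[i + 1] / h**2
        return A

    # Sample-average optimality system for the deterministic control:
    #   [ (1/N) sum_i S_i^T S_i + alpha I ] f = (1/N) sum_i S_i^T y_d,  with S_i = A_i^{-1}.
    H = alpha * np.eye(n)
    b = np.zeros(n)
    for _ in range(N):
        S = np.linalg.inv(stiffness(coeff(rng.standard_normal(K))))
        H += (S.T @ S) / N
        b += (S.T @ y_d) / N
    f_opt = np.linalg.solve(H, b)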

13.
We investigate regularity conditions in optimal control problems with mixed constraints of a general geometric type, in which a closed non-convex constraint set appears. A closely related question concerns the derivation of necessary optimality conditions under some regularity conditions on the constraints. By imposing strong and weak regularity conditions on the constraints, we provide necessary optimality conditions in the form of a Pontryagin maximum principle for the control problem with mixed constraints. The optimality conditions obtained here turn out to be more general than earlier results, even in the case when the constraint set is convex. The proofs of our main results are based on a series of technical lemmas which are gathered in the Appendix.

14.
The optimal control problem with state constraints is examined. An alternative to the available approaches to the study of this problem is proposed. The maximum principle and second-order necessary conditions are proved.

15.
Optimization, 2012, 61(6): 833-849
A family of linear-quadratic optimal control problems with pointwise mixed state-control constraints governed by linear elliptic partial differential equations is considered. All data depend on a vector parameter of perturbations. Lipschitz stability with respect to perturbations of the optimal control, the state and adjoint variables, and the Lagrange multipliers is established.

16.
On the existence of optimal solutions in a stochastic control model
An existence result for a stochastic control model with chance constraints, obtained by Christopeit (Ref. 1), is considerably generalized by combining a standard isometry property of Wiener integrals with a well-known lower semicontinuity result for integral functionals.

17.
In this paper, we consider a class of nonstationary control systems. We obtain some conditions for stationary optimal controls of such systems. The results are then applied to a linear-quadratic system.

18.
We study optimal control of stochastic Volterra integral equations (SVIEs) with jumps by using Hida-Malliavin calculus. We give conditions under which there exist unique solutions of such equations. We then prove both a sufficient maximum principle (a verification theorem) and a necessary maximum principle via Hida-Malliavin calculus. As an application, we solve a problem of optimal consumption from a cash flow modelled by an SVIE.

19.
A practical industrial process is usually a dynamic process involving uncertainty. Stochastic constraints can be used for industrial process modeling when system state and/or control input constraints cannot be strictly satisfied. Thus, optimal control of switched systems with stochastic constraints can be used to address practical industrial process problems with different operating modes. In general, obtaining an analytical solution of the optimal control problem is very difficult due to the discrete nature of the switching law and the complexity of the stochastic constraints. To obtain a numerical solution, this problem is formulated as a constrained nonlinear parameter selection problem (CNPSP) based on a relaxation transformation (RT) technique, an adaptive sample approximation (ASA) method, a smooth approximation (SA) technique, and a control parameterization (CP) method. Following that, a penalty function-based random search (PFRS) algorithm is designed for solving the CNPSP, based on a novel search rule-based penalty function (NSRPF) method and a novel random search (NRS) algorithm. The convergence results show that the proposed method is globally convergent. Finally, an optimal control problem in automobile test-driving with gear shifts (ATGS) is extended with stochastic constraints to illustrate the effectiveness of the proposed method. Numerical results show that, compared with other typical methods, the proposed method is less conservative and obtains stable and robust performance under small perturbations of the initial system state. In addition, to balance computational effort against numerical solution accuracy, a tolerance-setting method is also provided via numerical analysis.
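As a rough, assumed illustration of that pipeline (control parameterization, a relaxed switching variable, a sample approximation of the stochastic constraint, a penalty function, and a random search), the Python sketch below solves a toy two-mode switched system; it is not the ATGS model and does not reproduce the paper's RT/ASA/SA/NSRPF/NRS constructions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy switched system (illustrative): mode 0: dx/dt = -x + u, mode 1: dx/dt = -2x + 0.5u.
    # Control u and a relaxed mode weight w are piecewise constant on N intervals of [0, T];
    # a stochastic state constraint x(t) <= x_max (uncertain initial state) is handled by a
    # sample-average penalty.
    T, N, steps = 2.0, 10, 10                   # horizon, control intervals, Euler substeps
    dt = T / (N * steps)
    x_max, rho = 1.2, 50.0                      # constraint level, penalty weight
    x0_samples = 0.5 + 0.05 * rng.standard_normal(50)

    def simulate(theta, x0):
        """Explicit-Euler rollout of the parameterized, relaxed switched system."""
        u, w = theta[:N], np.clip(theta[N:], 0.0, 1.0)
        x, traj = x0, []
        for k in range(N):
            for _ in range(steps):
                f0, f1 = -x + u[k], -2.0 * x + 0.5 * u[k]
                x = x + dt * ((1 - w[k]) * f0 + w[k] * f1)   # relaxed mode selection
                traj.append(x)
        return x, np.array(traj)

    def penalized_objective(theta):
        """Average terminal tracking cost plus a penalty on the averaged constraint violation."""
        cost, violation = 0.0, 0.0
        for x0 in x0_samples:
            xT, traj = simulate(theta, x0)
            cost += (xT - 1.0) ** 2
            violation += np.maximum(traj - x_max, 0.0).sum() * dt
        m = len(x0_samples)
        return cost / m + rho * violation / m

    # Plain random search over the parameter vector (a crude stand-in for the PFRS scheme).
    best = np.full(2 * N, 0.5)
    best_val = penalized_objective(best)
    step = 0.3
    for it in range(200):
        cand = best + step * rng.standard_normal(best.size)
        val = penalized_objective(cand)
        if val < best_val:
            best, best_val = cand, val
        if (it + 1) % 50 == 0:
            step *= 0.5                         # shrink the search radius over time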

20.
Stochastic optimal control of internal hierarchical labor markets
This paper develops an optimal control model for a graded manpower system where the demand for manpower is uncertain. The organization's objective is to minimize the discounted costs of operating the manpower system, including excess demand costs. The stock of workers in various grades can be adjusted in two ways. The first method is outside hiring flows, which is the usual control variable used in previous research. The second method is to control the transition rates between grades of the hierarchy, an instrument not previously studied. Incorporating the transition rates into the control variables creates time lags in the control process. The resulting problem is solved numerically using an approximation for the time-lagged control variables. The numerical example is based on the Air Force officer hierarchy. The model is used to examine such issues as the desirability of granting tenure to workers who are not promoted to the highest grade and the effects of length-of-service and demand uncertainty on manpower policy.
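A heavily simplified Python sketch of this kind of model, offered only as an assumed illustration: a three-grade discrete-time system with stationary outside hiring and promotion rates as decision variables, random demand, and a discounted cost combining wages and excess-demand penalties, evaluated by a sample average and optimized by a derivative-free method. It ignores the time-lag structure the paper treats, and all grades, rates, and costs are made-up numbers.

    import numpy as np
    from scipy.optimize import minimize

    T, beta = 40, 0.95                          # planning horizon, discount factor
    s0 = np.array([100.0, 60.0, 30.0])          # initial stock of workers in each grade
    quit_rate = np.array([0.10, 0.08, 0.05])    # exogenous wastage per grade
    wage = np.array([1.0, 1.5, 2.2])
    excess_cost = 5.0                           # cost per unit of unmet demand
    demand_mean, demand_sd, n_scenarios = 200.0, 20.0, 100

    def discounted_cost(theta):
        """Sample-average discounted cost of the stationary policy
        theta = [outside hiring into grade 1, promotion rate 1->2, promotion rate 2->3]."""
        rng = np.random.default_rng(7)          # common random numbers across evaluations
        hire = np.array([max(theta[0], 0.0), 0.0, 0.0])
        promo = np.clip(theta[1:], 0.0, 0.5)
        total = 0.0
        for _ in range(n_scenarios):
            s, disc = s0.copy(), 1.0
            for _ in range(T):
                demand = demand_mean + demand_sd * rng.standard_normal()
                total += disc * (wage @ s + excess_cost * max(demand - s.sum(), 0.0))
                out = np.array([promo[0] * s[0], promo[1] * s[1], 0.0])   # promoted out
                inflow = np.array([0.0, out[0], out[1]])                  # promoted in
                s = (1.0 - quit_rate) * s - out + inflow + hire
                disc *= beta
        return total / n_scenarios

    res = minimize(discounted_cost, x0=np.array([15.0, 0.10, 0.10]), method="Nelder-Mead")
    hire_opt, promo_opt = res.x[0], res.x[1:]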
