Similar Documents
10 similar documents found; search time: 0 ms
1.
ABSTRACT

This paper deals with a partial-information stochastic optimal control problem for general controlled mean-field systems driven by Teugels martingales associated with a Lévy process having moments of all orders, together with an independent Brownian motion. The coefficients of the system depend on the state of the solution process, on its probability law, and on the control variable. We establish a set of necessary conditions, in the form of a Pontryagin maximum principle, for the optimal control. We also give additional conditions under which the necessary optimality conditions become sufficient. The proof is based on differentiation with respect to the probability law, using Lions derivatives and a corresponding Itô formula. As an application, a conditional mean-variance portfolio selection problem in an incomplete market, where the system is governed by a Gamma process, is studied to illustrate the theoretical results.
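The paper above treats mean-variance portfolio selection in continuous time with conditional (mean-field) dynamics. As a much simpler point of reference, the classical single-period Markowitz problem — minimize portfolio variance w'Σw subject to a target mean return w'μ = r and full investment w'1 = 1 — has a closed-form solution via the KKT system. The sketch below is this elementary deterministic version only, with hypothetical numbers; it is not the conditional, Lévy-driven formulation of the paper.

```python
import numpy as np

# Single-period Markowitz problem: minimize w' Sigma w subject to
# w' mu = r_target and w' 1 = 1, solved via the KKT (Lagrange) system.
# All numerical data below is hypothetical, for illustration only.
def min_variance_weights(mu, Sigma, r_target):
    n = len(mu)
    ones = np.ones(n)
    # Stationarity: 2*Sigma*w + l1*mu + l2*1 = 0, plus the two constraints.
    K = np.zeros((n + 2, n + 2))
    K[:n, :n] = 2 * Sigma
    K[:n, n] = mu
    K[:n, n + 1] = ones
    K[n, :n] = mu
    K[n + 1, :n] = ones
    rhs = np.concatenate([np.zeros(n), [r_target, 1.0]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                     # portfolio weights

mu = np.array([0.05, 0.10, 0.15])      # hypothetical mean returns
Sigma = np.diag([0.04, 0.09, 0.16])    # hypothetical (diagonal) covariance
w = min_variance_weights(mu, Sigma, r_target=0.10)
print(w, w @ mu, w.sum())              # weights hit the target mean and sum to 1
```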

2.
In this paper, we are concerned with the approximate controllability of stochastic differential systems driven by Teugels martingales associated with a Lévy process. We derive approximate controllability under coefficients satisfying certain non-Lipschitz conditions, which include the classical Lipschitz condition as a special case. The desired result is established by means of the standard Picard iteration.
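The Picard iteration invoked above is a fixed-point scheme for the integral form of the equation: iterate x_{k+1}(t) = x_0 + ∫_0^t f(s, x_k(s)) ds until the sequence converges. A minimal deterministic stand-in (not the stochastic, Teugels-martingale setting of the paper) on the ODE x' = x, x(0) = 1, whose solution is e^t:

```python
import numpy as np

# Picard iteration for x(t) = x0 + \int_0^t f(s, x(s)) ds,
# illustrated on the deterministic ODE x' = x, x(0) = 1 (solution e^t).
def picard(f, x0, t_grid, n_iter):
    x = np.full_like(t_grid, x0, dtype=float)      # initial guess: constant x0
    for _ in range(n_iter):
        integrand = f(t_grid, x)
        # cumulative trapezoidal integral of the integrand over t_grid
        integral = np.concatenate([[0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t_grid))])
        x = x0 + integral                          # next Picard iterate
    return x

t = np.linspace(0.0, 1.0, 1001)
x = picard(lambda s, y: y, 1.0, t, n_iter=25)
print(abs(x[-1] - np.e))                           # error at t = 1 should be tiny
```

Each iteration reproduces one more term of the Taylor series of e^t, which is the contraction-mapping convergence the existence proofs rely on.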

3.
ABSTRACT

In this paper, we introduce a new class of backward doubly stochastic differential equations (BDSDEs for short), called mean-field backward doubly stochastic differential equations (MFBDSDEs for short), driven by Itô-Lévy processes, and study partial-information optimal control problems for backward doubly stochastic systems driven by Itô-Lévy processes of mean-field type, in which the coefficients depend not only on the solution processes but also on their expected values. First, using the contraction mapping method, we prove the existence and uniqueness of solutions to this kind of MFBDSDE. Then, by the method of convex variation and a duality technique, we establish a sufficient and necessary stochastic maximum principle for the stochastic system. Finally, we illustrate the theoretical results with an application to a stochastic linear-quadratic optimal control problem for a mean-field backward doubly stochastic system driven by Itô-Lévy processes.

4.
In this paper, we prove the existence and uniqueness of a solution for a class of multi-valued stochastic differential equations driven by G-Brownian motion (MSDEGs) by means of the Yosida approximation method. Moreover, we set up an optimality principle for the stochastic control problem and prove that the value function of the control problem is the unique viscosity solution of a class of nonlinear partial differential variational inequalities.

5.
In this paper, we consider risk-sensitive optimal control and differential games for stochastic differential delayed equations driven by Brownian motion. The problems are related to robust stochastic optimization with delay, due to the inherent feature of the risk-sensitive objective functional. For both problems, by using the logarithmic transformation of the associated risk-neutral problem, the necessary and sufficient conditions for the risk-sensitive maximum principle are obtained. We show that these conditions are characterized in terms of the variational inequality and the coupled anticipated backward stochastic differential equations (ABSDEs). The coupled ABSDEs consist of the first-order adjoint equation and an additional scalar ABSDE, where the latter is induced by the nonsmooth nonlinear transformation of the adjoint process of the associated risk-neutral problem. For applications, we consider the risk-sensitive linear-quadratic control and game problems with delay, and the optimal consumption and production game, for which we obtain explicit optimal solutions.
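The risk-sensitive objective mentioned above has the form (1/θ) log E[exp(θ·cost)], which penalizes variability: for a Gaussian cost X ~ N(μ, σ²) it evaluates exactly to μ + θσ²/2, and this identity is what the logarithmic transformation to the risk-neutral problem exploits. A scalar Monte Carlo check of that identity (the numbers are illustrative, not from the paper):

```python
import numpy as np

# Risk-sensitive functional (1/theta) * log E[exp(theta * X)].
# For X ~ N(mu, sigma^2) the exact value is mu + theta*sigma^2/2,
# i.e. the mean plus a variance penalty scaled by the risk parameter theta.
rng = np.random.default_rng(0)
mu, sigma, theta = 1.0, 2.0, 0.3       # illustrative parameters
x = rng.normal(mu, sigma, size=1_000_000)
mc = np.log(np.mean(np.exp(theta * x))) / theta   # Monte Carlo estimate
exact = mu + theta * sigma**2 / 2                 # closed-form Gaussian value
print(mc, exact)
```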

6.
In this paper, optimal control problems for both multi-stage and continuous-time linear singular systems are considered. The singular systems are assumed to be regular and impulse-free. First, a recurrence equation is derived from Bellman's principle of optimality in dynamic programming. By applying this recurrence equation, bang-bang optimal controls are obtained for control problems with linear objective functions subject to two types of multi-stage singular systems. Second, employing the principle of optimality, an equation of optimality is proposed for solving the optimal control problem subject to a class of continuous-time singular systems; the optimal control problem can be simplified by solving this equation. Two numerical examples and a dynamic input-output model are presented to show the effectiveness of the results.
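The bang-bang structure mentioned above arises because, with linear dynamics and a linear objective, the value function in the Bellman recursion stays linear in the state, so the maximizing bounded control always sits at an extreme point. A minimal scalar sketch (ordinary, non-singular dynamics x_{k+1} = a·x_k + b·u_k with |u_k| ≤ 1 and terminal reward c·x_N; all numbers hypothetical, simpler than the singular systems in the paper):

```python
# Bellman backward recursion for x_{k+1} = a*x_k + b*u_k, |u_k| <= 1,
# maximizing the terminal reward c*x_N.  The value function stays linear,
# V_k(x) = p_k*x + q_k, so the optimizer is bang-bang: u_k* = sign(p_{k+1}*b).
def bang_bang_dp(a, b, c, N):
    p, q = c, 0.0                  # V_N(x) = c*x
    controls = []
    for _ in range(N):             # backward in time
        u = 1.0 if p * b >= 0 else -1.0   # extreme point of [-1, 1]
        q = q + p * b * u          # constant term gains |p*b|
        p = a * p                  # linear term propagates through the dynamics
        controls.append(u)
    controls.reverse()             # controls[k] is the optimal u_k
    return controls, p, q          # V_0(x) = p*x + q

controls, p0, q0 = bang_bang_dp(a=0.9, b=0.5, c=1.0, N=4)
print(controls)                    # every entry is +1 or -1 (bang-bang)
```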

7.
8.
In this paper we consider the problem of optimal regulation of large space structures in the presence of flexible appendages. For simplicity of presentation, we consider a spacecraft consisting of a rigid bus and a flexible beam. The complete dynamics of the system is given by a coupled set of ordinary and partial differential equations. We show that the solution of this hybrid system is defined in a product space of appropriate finite- and infinite-dimensional spaces. We develop necessary conditions for determining the control torque and forces for optimal regulation of attitude maneuvers of the satellite along with simultaneous suppression of elastic vibrations of the flexible beam.

9.
Stochastic model predictive control hinges on the online solution of a stochastic optimal control problem. This paper presents a computationally efficient solution method for stochastic optimal control of nonlinear systems subject to (time-varying) stochastic disturbances and (time-invariant) probabilistic model uncertainty in initial conditions and parameters. To this end, new methods are presented for the joint propagation of time-varying and time-invariant probabilistic uncertainty and for the nonconservative approximation of joint chance constraints (JCCs) on the system state. The proposed uncertainty propagation method relies on generalized polynomial chaos and conditional probability rules to obtain tractable expressions for the state mean and covariance matrix. A moment-based surrogate is presented for JCC approximation to circumvent construction of the full probability distribution of the state, or the use of integer variables as required by the sample average approximation. The proposed solution method is illustrated on a nonlinear semibatch reactor case study in the presence of probabilistic model uncertainty and stochastic disturbances. It is shown to be significantly superior to a standard random sampling method for stochastic optimal control in terms of computational requirements. Furthermore, the moment-based surrogate for the JCC is shown to be substantially less conservative than the widely used distributionally robust Cantelli-Chebyshev inequality for chance constraint approximation.
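The conservatism comparison at the end of the abstract is easy to see for a single constraint P(g(x) ≤ 0) ≥ 1 − ε when only the mean μ_g and standard deviation σ_g of g(x) are known. The distributionally robust Cantelli-Chebyshev bound requires μ_g + κ·σ_g ≤ 0 with κ = sqrt((1 − ε)/ε), valid for any distribution, whereas a Gaussian moment surrogate uses the much smaller quantile κ = Φ⁻¹(1 − ε). A sketch of just this back-off comparison (not the full JCC method of the paper):

```python
from math import sqrt
from statistics import NormalDist

# Moment-based back-offs for P(g(x) <= 0) >= 1 - eps given only (mu_g, sigma_g):
#   Cantelli-Chebyshev (distribution-free): kappa = sqrt((1 - eps)/eps)
#   Gaussian surrogate:                     kappa = Phi^{-1}(1 - eps)
eps = 0.05
kappa_cantelli = sqrt((1 - eps) / eps)       # ~4.36 for eps = 0.05
kappa_gauss = NormalDist().inv_cdf(1 - eps)  # ~1.64 for eps = 0.05
print(kappa_cantelli, kappa_gauss)           # Cantelli is far more conservative
```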

10.
We introduce adaptive policies for discrete-time, infinite-horizon stochastic control systems x_{t+1} = F(x_t, a_t, ξ_t), t = 0, 1, …, with a discounted reward criterion, where the disturbance process {ξ_t} is a sequence of i.i.d. random elements with unknown distribution. These policies are shown to be asymptotically optimal, and for each of them we obtain (almost surely) uniform approximations of the optimal reward function.


