Similar Documents
19 similar documents found.
1.
Discounted semi-Markov decision processes with nonnegative costs    Cited by: 1 (self-citations: 0, citations by others: 1)
黄永辉  郭先平 《数学学报》2010,53(3):503-514
This paper considers discounted semi-Markov decision processes with a countable state space and nonnegative costs. A continuous-time semi-Markov decision process is first constructed from a given semi-Markov decision kernel and policy; the minimal nonnegative solution method is then used to prove that the value function satisfies the optimality equation and that an ε-optimal stationary policy exists. Conditions for the existence of optimal policies and some of their properties are further given. Finally, a value iteration algorithm and a numerical example are presented.
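As a rough illustration of the value iteration step mentioned in this abstract, here is a minimal Python sketch for a finite-state, finite-action discounted cost model with a constant discount factor. The semi-Markov and countable-state aspects of the paper are not modelled, and the names `P`, `cost` and `beta` are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def value_iteration(P, cost, beta=0.9, tol=1e-8, max_iter=10_000):
    """Value iteration for a finite discounted-cost MDP (illustrative sketch).

    P[a, x, y] -- probability of moving from state x to y under action a
    cost[a, x] -- nonnegative one-step cost of action a in state x
    beta       -- discount factor in (0, 1)
    Returns an approximation of the optimal value function and a greedy policy.
    """
    V = np.zeros(P.shape[1])            # start from the zero function
    for _ in range(max_iter):
        # Q[a, x] = cost[a, x] + beta * sum_y P[a, x, y] * V[y]
        Q = cost + beta * P @ V
        V_new = Q.min(axis=0)           # minimize cost over actions
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V, Q.argmin(axis=0)          # value function and greedy stationary policy
```

Starting from the zero function and iterating upward is in the same spirit as the minimal-nonnegative-solution argument for nonnegative costs; with beta in (0, 1) the iteration is also a contraction, which is what guarantees convergence in this sketch.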

2.
Under their respective conditions, references [1]–[3] discussed non-homogeneous discounted Markov decision models and conditions for the existence of ε(≥0)-optimal policies. References [4] and [5], assuming countable state and action sets, discussed homogeneous discounted Markov decision models with unbounded rewards that are relatively bounded in absolute mean. With the state set still countable but the action set arbitrary, this paper establishes a non-homogeneous discounted Markov decision model corresponding to [4], gives a finite-stage approximation of the model, establishes the optimality equation, and proves the existence of ε(>0)-optimal Markov policies as well as, when the action set is finite, of optimal …
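For orientation only, the kind of finite-stage backward recursion on which such approximations rest can be written, for a time-homogeneous discounted model with bounded rewards, as below; the paper's non-homogeneous setting would index r and p by the stage as well, and this notation is assumed rather than quoted from the paper.

```latex
\[
V_N \equiv 0, \qquad
V_n(x) = \sup_{a \in A(x)} \Big\{ r(x,a) + \beta \sum_{y} p(y \mid x, a)\, V_{n+1}(y) \Big\},
\quad n = N-1, \dots, 0,
\]
\[
\text{and if } |r| \le M, \text{ then } \quad
\| V_0 - V^{*} \|_\infty \;\le\; \frac{\beta^{N} M}{1-\beta},
\]
```

so the N-stage value converges to the infinite-horizon value as N grows.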

4.
林元烈 《数学学报》1992,35(1):8-19
This paper is the first to study, under the condition that the family of transition rate matrices is a general family of Q-matrices (not necessarily conservative nor uniformly bounded), the continuous-time discounted moment-optimality model (M_k-CTMDP) with countable state space and countable decision set. A discrete-time quasi-discounted moment-optimality model with state- and decision-dependent discounting (β_k-GTMDP) is proposed, and the relationship between the two models is revealed. Under a stationary policy f^∞, concise expressions are given showing that the k-th moment vector μ_k(f) of the discounted total reward satisfies kαμ_k(f) = k r(f)(?)μ_{k-1}(f) + Q(f)μ_k(f) and μ_k(f) = k P^{min}(kα, f)(r(f)(?)μ_{k-1}(f)). A very weak sufficient condition, together with a solution method, is given under which the moment reward is the unique bounded solution of the moment-optimality equations, and necessary and sufficient conditions for the existence of moment-optimal policies, along with several of their properties, are also given. The results are significant for the development and application of MDP theory, and also for the study and application of a class of integral-type functionals of jump Markov processes.
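To make the moment recursion concrete in a simpler setting, here is a small Python sketch for the discrete-time analogue: the k-th moments of the total discounted reward of a finite Markov reward chain under a fixed policy, obtained by expanding E[(r(x) + βZ')^k] with the binomial theorem and solving one linear system per moment order. This is not the continuous-time formula of the paper, and the names `P`, `r` and `beta` are illustrative assumptions.

```python
import numpy as np
from math import comb

def discounted_reward_moments(P, r, beta, k_max):
    """Moments m_k(x) = E[Z(x)^k] of the total discounted reward
    Z(x) = r(x) + beta * Z(X_1), X_1 ~ P(x, .), under a fixed policy.

    P[x, y] -- transition matrix induced by the fixed stationary policy
    r[x]    -- one-step reward in state x
    beta    -- discount factor in (0, 1)
    """
    n = P.shape[0]
    I = np.eye(n)
    moments = [np.ones(n)]                              # m_0 = 1
    for k in range(1, k_max + 1):
        # (I - beta^k P) m_k = sum_{j<k} C(k, j) beta^j r^(k-j) * (P m_j)
        rhs = sum(comb(k, j) * beta**j * r**(k - j) * (P @ moments[j])
                  for j in range(k))
        moments.append(np.linalg.solve(I - beta**k * P, rhs))
    return moments
```

For k = 1 this reduces to the familiar m_1 = (I − βP)^{-1} r, and each higher moment is obtained from the lower ones, which mirrors the role of μ_{k-1}(f) in the expressions above.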

5.
This paper discusses properties of ε-optimal policies for discounted semi-Markov decision programming with Lippman-type unbounded rewards. Section 2 proves that if a policy π* = (π_0^*, π_1^*, …) is ε-optimal, then for any natural number n the policy (π_0^*, π_1^*, …, π_{n+}^*) is (1-β^n)^{-1}ε-optimal, and that if a policy π* = (f_0, f_1, …, f_n, π_{n+1}, …) is ε-optimal, then the policy f_n^∞ is ε_n-optimal for some ε_n. Section 3 discusses the composition and decomposition of policies, and Section 4 gives a necessary and sufficient condition for a policy π* to be optimal and a sufficient condition for it to be ε-optimal.

6.
董泽清  刘克 《中国科学A辑》1985,28(11):975-985
This paper studies the structure of optimal policies for discounted semi-Markov decision programming with Lippman-type unbounded rewards (abbreviated URSMDP). We prove that, for any given policy, if it is α-discount optimal, then the corresponding randomized stationary policy is also discount optimal for the same α; for any given integer n ≥ 1, we also give sufficient conditions under which the policy (under appropriate histories) is α-discount optimal; and any randomized α-discount optimal stationary policy can be decomposed into a convex combination of several deterministic stationary optimal policies (for the same α). This gives a fairly complete solution to the structure problem of optimal policies for this model.

7.
This paper discusses successive approximation methods for discounted Markov decision programming whose rewards are of the unbounded type in [1], including the usual successive approximation method and successive approximation in which the countable-state problem is approximated by finite-state problems; the convergence of both methods and bound estimates for the latter are discussed.
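For reference, the standard contraction-based error estimate behind the usual successive approximation scheme, stated here for a bounded-reward discounted model with discount factor β; the unbounded-reward setting of [1] would use a weighted norm instead, and this bound is not quoted from the paper.

```latex
% Successive approximation V_{n+1} = T V_n with the discounted optimality
% operator T, a beta-contraction in the supremum norm for bounded rewards:
\[
\| V_n - V^{*} \|_\infty \;\le\; \frac{\beta^{\,n}}{1-\beta}\, \| V_1 - V_0 \|_\infty ,
\qquad n \ge 1 .
\]
```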

8.
This paper studies constrained discounted semi-Markov decision programming, that is, the constrained optimization problem of maximizing the discounted expected reward subject to a constraint on a discounted expected cost. Assuming a countable state set and compact nonempty Borel action sets, a necessary and sufficient condition for p-constrained optimal policies is given, and the existence of a p-constrained optimal policy is proved under suitable assumptions.

9.
This paper discusses the discounted model of Markov decision programming (MDP) with a countable state space, a countable decision space, a family of substochastic transition rates, and a bounded reward function, and gives a test criterion for identifying non-ε-optimal policies.

10.
For average-criterion Markov decision processes with a countable state set, nonempty decision sets, and unbounded rewards, this paper proposes a new set of conditions under which an (ε-)optimal stationary policy exists and, whenever the sum in the optimality inequality is well defined, the optimality inequality also holds.

11.
In this paper we study the optimal assignment of tasks to agents in a call center. For this type of problem, typically, no single deterministic and stationary (i.e., state independent and easily implementable) policy yields the optimal control, and mixed strategies are used. Besides finding the optimal mixed strategy, we propose to optimize the performance over the set of “balanced” deterministic periodic non-stationary policies. We provide a stochastic approximation algorithm that allows us to find the optimal balanced policy by means of Monte Carlo simulation. As illustrated by numerical examples, the optimal balanced policy outperforms the optimal mixed strategy.
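To illustrate the contrast between a mixed strategy and a “balanced” deterministic periodic policy, here is a small hypothetical sketch: both target the same long-run action frequencies, but the periodic policy realizes them deterministically by spreading actions evenly over a cycle. The largest-deficit rule used below is chosen purely for illustration; it is not claimed to be the construction or the stochastic approximation algorithm of the paper.

```python
import random

def mixed_action(probs, rng=random):
    """Mixed (randomized stationary) rule: draw an action i.i.d. with the given probabilities."""
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]

def balanced_cycle(probs, cycle_length):
    """Deterministic periodic rule: one cycle whose empirical action frequencies
    match `probs` as closely as possible, with each action spread evenly over
    the cycle (largest-deficit rule)."""
    counts = [round(p * cycle_length) for p in probs]
    while sum(counts) > cycle_length:                    # fix rounding so counts
        counts[counts.index(max(counts))] -= 1           # sum to the cycle length
    while sum(counts) < cycle_length:
        counts[counts.index(min(counts))] += 1
    schedule, used = [], [0] * len(probs)
    for t in range(1, cycle_length + 1):
        # pick the action that is most behind its target share c * t / cycle_length
        deficits = [c * t / cycle_length - u for c, u in zip(counts, used)]
        a = deficits.index(max(deficits))
        schedule.append(a)
        used[a] += 1
    return schedule

# Example: target frequencies 0.5 / 0.3 / 0.2 over a cycle of length 10
print(balanced_cycle([0.5, 0.3, 0.2], 10))    # -> [0, 1, 2, 0, 0, 1, 0, 2, 1, 0]
```

The cycle visits action 0 five times, action 1 three times and action 2 twice, spread roughly evenly over the period, whereas the mixed rule only matches these frequencies on average.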

12.
In this paper, we discuss Markovian decision programming with recursive vector-rewards and give an algorithm to find optimal policies. We prove that: (1) there is a Markovian optimal policy for the nonstationary case; (2) there is a stationary optimal policy for the stationary case.

13.
Policy iteration is a well-studied algorithm for solving stationary Markov decision processes (MDPs). It has also been extended to robust stationary MDPs. For robust nonstationary MDPs, however, an “as is” execution of this algorithm is not possible because it would call for an infinite amount of computation in each iteration. We therefore present a policy iteration algorithm for robust nonstationary MDPs, which performs finitely implementable approximate variants of policy evaluation and policy improvement in each iteration. We prove that the sequence of cost-to-go functions produced by this algorithm monotonically converges pointwise to the optimal cost-to-go function; the policies generated converge subsequentially to an optimal policy.
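As background for the policy iteration scheme discussed here, a minimal sketch of the classical algorithm for a finite, stationary, discounted-cost MDP. The robust and nonstationary ingredients of the paper, and its finitely implementable approximate evaluation and improvement steps, are not reproduced; all names are illustrative assumptions.

```python
import numpy as np

def policy_iteration(P, cost, beta=0.95):
    """Classical policy iteration for a finite discounted-cost MDP.

    P[a, x, y] -- transition probabilities, cost[a, x] -- one-step costs,
    beta       -- discount factor in (0, 1).
    Returns the optimal value function and an optimal deterministic policy.
    """
    n_actions, n_states, _ = P.shape
    states = np.arange(n_states)
    policy = np.zeros(n_states, dtype=int)              # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - beta * P_pi) V = c_pi exactly.
        P_pi = P[policy, states, :]                      # row x is P[policy[x], x, :]
        c_pi = cost[policy, states]
        V = np.linalg.solve(np.eye(n_states) - beta * P_pi, c_pi)
        # Policy improvement: greedy step with respect to V.
        Q = cost + beta * P @ V                          # shape (n_actions, n_states)
        greedy = Q.argmin(axis=0)
        # Keep the current action where it is already greedy (avoids cycling on ties).
        new_policy = np.where(np.isclose(Q[policy, states], Q.min(axis=0)), policy, greedy)
        if np.array_equal(new_policy, policy):
            return V, policy                             # no strict improvement: optimal
        policy = new_policy
```

Each iteration evaluates the current policy exactly by solving a linear system and then improves it greedily; for a finite model the loop terminates after finitely many improvements.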

14.
Continuous time Markovian decision models with countable state space are investigated. The existence of an optimal stationary policy is established for the expected average return criterion. It is shown that the expected average return can be expressed as an expected discounted return of a related Markovian decision process. A policy iteration method is given which converges to an optimal deterministic policy; the policy so obtained is shown to be optimal over all Markov policies.

15.
This paper studies the risk minimization problem in semi-Markov decision processes with denumerable states. The criterion to be optimized is the risk probability (or risk function) that a first passage time to some target set does not exceed a threshold value. We first characterize such risk functions and the corresponding optimal value function, and prove that the optimal value function satisfies the optimality equation by using a successive approximation technique. Then, we present some properties of optimal policies, and further give conditions for the existence of optimal policies. In addition, a value iteration algorithm and a policy improvement method for obtaining respectively the optimal value function and optimal policies are developed. Finally, two examples are given to illustrate the value iteration procedure and the essential characterization of the risk function.

16.
We consider continuous-time Markov decision processes in Polish spaces. The performance of a control policy is measured by the expected discounted reward criterion associated with state-dependent discount factors. All underlying Markov processes are determined by the given transition rates, which are allowed to be unbounded, and the reward rates may have neither upper nor lower bounds. By using the dynamic programming approach, we establish the discounted reward optimality equation (DROE) and the existence and uniqueness of its solutions. Under suitable conditions, we also obtain a discounted optimal stationary policy which is optimal in the class of all randomized stationary policies. Moreover, when the transition rates are uniformly bounded, we provide an algorithm to compute (or at least to approximate) the discounted reward optimal value function as well as a discounted optimal stationary policy. Finally, we use an example to illustrate our results. In particular, we first derive an explicit and exact solution to the DROE and an explicit expression of a discounted optimal stationary policy for this example.
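One common way to write a discounted reward optimality equation with state-dependent discount factors, shown here in countable-state notation with a sum in place of the integral used in the Polish-space setting; this form is an assumed illustration and is not quoted from the paper.

```latex
\[
\alpha(x)\, V(x) \;=\; \sup_{a \in A(x)} \Big\{ r(x,a) \;+\; \sum_{y \in S} q(y \mid x, a)\, V(y) \Big\},
\qquad x \in S,
\]
% alpha(x) > 0 : discount factor at state x,   r(x,a) : reward rate,
% q(. | x, a)  : transition rates with  q(x | x, a) = - sum_{y != x} q(y | x, a).
```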

17.
18.
Decision makers often need a performance guarantee that holds with some sufficiently high probability. Such problems can be modelled using a discrete-time Markov decision process (MDP) with a probability criterion for first achieving a target value. The objective is to find a policy that maximizes the probability of the total discounted reward exceeding a target value in the preceding stages. We show that our formulation cannot be described by former models with standard criteria. We provide the properties of the objective functions, optimal value functions and optimal policies. An algorithm for computing the optimal policies for the finite horizon case is given. In this stochastic stopping model, we prove that there exists an optimal deterministic and stationary policy and that the optimality equation has a unique solution. Using perturbation analysis, we approximate general models and prove the existence of an ε-optimal policy for finite state spaces. We give an example concerning the reliability of the satellite system.

19.
We consider the constrained optimization of a finite-state, finite-action Markov chain. In the adaptive problem, the transition probabilities are assumed to be unknown, and no prior distribution on their values is given. We consider constrained optimization problems in terms of several cost criteria which are asymptotic in nature. For these criteria we show that it is possible to achieve the same optimal cost as in the non-adaptive case. We first formulate a constrained optimization problem under each of the cost criteria and establish the existence of optimal stationary policies. Since the adaptive problem is inherently non-stationary, we suggest a class of Asymptotically Stationary (AS) policies, and show that, under each of the cost criteria, the costs of an AS policy depend only on its limiting behavior. This property implies that there exist optimal AS policies. A method for generating adaptive policies is then suggested, which leads to strongly consistent estimators for the unknown transition probabilities. A way to guarantee that these policies are also optimal is to couple them with the adaptive algorithm of [3]. This leads to optimal policies for each of the adaptive constrained optimization problems under discussion. This work was supported in part through United States-Israel Binational Science Foundation Grant BSF 85-00306.
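The strongly consistent estimators for the unknown transition probabilities referred to above are, in the simplest case, empirical transition frequencies. A tiny hypothetical sketch follows; the generation of adaptive policies and the coupling with the algorithm of [3] are not shown.

```python
from collections import defaultdict

class EmpiricalTransitionEstimator:
    """Count-based estimate of p(y | x, a) from an observed trajectory.

    By the strong law of large numbers, the estimates converge almost surely
    to the true transition probabilities for every pair (x, a) that is visited
    infinitely often, which an adaptive scheme must ensure, e.g. by exploration.
    """
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))   # (x, a) -> {y: count}

    def observe(self, x, a, y):
        self.counts[(x, a)][y] += 1

    def estimate(self, x, a, y):
        total = sum(self.counts[(x, a)].values())
        return self.counts[(x, a)][y] / total if total else 0.0
```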
