Similar Documents
 20 similar documents found (search time: 52 ms)
1.
Partially observable Markov decision processes (POMDPs) are solved by introducing a belief-state space that converts a non-Markovian problem into a Markovian one; their ability to describe the real world makes them an important branch of research on stochastic decision processes. This paper introduces the basic principles and decision procedure of POMDPs and proposes a partially observable Markov decision algorithm based on policy iteration and value iteration. Drawing on ideas from linear programming and dynamic programming, the algorithm alleviates the "curse of dimensionality" that arises when the belief-state space is large and obtains an approximately optimal Markov decision solution. Experimental data show that the algorithm is feasible and effective.
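
The belief-state construction and value-iteration backup mentioned above can be illustrated with a minimal Python sketch. It assumes a made-up two-state, two-action, two-observation POMDP (the tables T, O, R and the belief discretization are hypothetical, not the paper's model): it shows the Bayes belief update that makes the problem Markovian, and an approximate value-iteration backup over a discretized belief space.

```python
import numpy as np

# Toy POMDP: T[a, s, s'] transition probs, O[a, s', o] observation probs, R[a, s] rewards.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.6, 0.4], [0.4, 0.6]]])
R = np.array([[1.0, -1.0],
              [0.0, 0.5]])
gamma = 0.95

def belief_update(b, a, o):
    """Bayes filter: the belief state that makes the non-Markovian problem Markovian."""
    b_new = O[a, :, o] * (T[a].T @ b)
    return b_new / b_new.sum()

def value_iteration_on_grid(n_points=101, n_iters=200):
    """Approximate value iteration over a discretized 1-D belief simplex."""
    grid = np.linspace(0.0, 1.0, n_points)          # belief = P(state 0)
    V = np.zeros(n_points)
    for _ in range(n_iters):
        V_new = np.empty_like(V)
        for i, p in enumerate(grid):
            b = np.array([p, 1.0 - p])
            q = []
            for a in range(2):
                exp_r = b @ R[a]                     # expected immediate reward
                future = 0.0
                for o in range(2):
                    pr_o = (O[a, :, o] * (T[a].T @ b)).sum()   # P(o | b, a)
                    if pr_o > 1e-12:
                        nb = belief_update(b, a, o)
                        j = int(round(nb[0] * (n_points - 1)))  # nearest grid point
                        future += pr_o * V[j]
                q.append(exp_r + gamma * future)
            V_new[i] = max(q)
        V = V_new
    return grid, V
```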

2.
徐明  刘广钟 《计算机应用》2015,35(11):3047-3050
To address the spatio-temporal uncertainty caused by the low bandwidth and high latency of underwater acoustic sensor networks, together with the fact that the network state cannot be fully observed, a medium access control (MAC) protocol for underwater acoustic sensor networks based on partially observable Markov decision processes (POMDPs) is proposed. The protocol first divides each sensor node's link quality and residual energy into several discrete levels to represent the node's state information. The receiving node then predicts the channel occupancy probability from the history of channel-state observations and access actions, and derives an optimal channel-scheduling policy for the sending nodes; following the scheduling sequence in this policy, the sending nodes communicate with the receiving node in turn within their allocated time slots to transmit data packets. After communication completes, the relevant nodes estimate the state of the next time slot from statistics of the network transition probabilities. Simulation experiments show that, compared with traditional MAC protocols for underwater acoustic sensor networks, the POMDP-based MAC protocol improves the packet delivery rate and network throughput while reducing the network's energy consumption.

3.
In recent years, crowd animation has been widely studied and applied in robotics, film, games, and other fields, but traditional crowd-animation techniques involve complex motion planning or collision-avoidance operations and are computationally inefficient. This paper proposes a crowd-animation trajectory-generation algorithm based on Markov decision processes (MDPs) that produces collision-free trajectories for all agents without collision detection. An improved value iteration algorithm for computing the state values of the MDP is also proposed; experiments in a grid environment show that its computational efficiency is clearly higher than that of value iteration with a Euclidean-distance heuristic and of Dijkstra's algorithm. Crowd-animation simulations in three-dimensional (3D) scenes using the proposed trajectory-generation algorithm show that the crowd moves toward the goal without collisions and exhibits diverse motion.
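
As an illustration of the state-value computation the abstract refers to, here is a minimal Python sketch of plain value iteration on a 2-D grid followed by greedy trajectory extraction. The grid layout, reward, and parameters are hypothetical, and this is the standard algorithm rather than the paper's improved variant.

```python
import numpy as np

def grid_value_iteration(walls, goal, gamma=0.95, step_cost=-1.0, tol=1e-6):
    """Plain value iteration on a 2-D grid: V(s) = max_a [r + gamma * V(s')].
    walls: boolean array, True where a cell is blocked; goal: (row, col)."""
    rows, cols = walls.shape
    V = np.zeros((rows, cols))
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]       # deterministic moves, for simplicity
    while True:
        delta = 0.0
        for r in range(rows):
            for c in range(cols):
                if walls[r, c] or (r, c) == goal:
                    continue
                best = -np.inf
                for dr, dc in moves:
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols) or walls[nr, nc]:
                        nr, nc = r, c                # bumping into a wall: stay in place
                    best = max(best, step_cost + gamma * V[nr, nc])
                delta = max(delta, abs(best - V[r, c]))
                V[r, c] = best
        if delta < tol:
            return V

def greedy_trajectory(V, walls, start, goal, max_steps=500):
    """Follow the value function greedily; each agent can do this independently."""
    path, pos = [start], start
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(max_steps):
        if pos == goal:
            break
        r, c = pos
        candidates = [(r + dr, c + dc) for dr, dc in moves
                      if 0 <= r + dr < walls.shape[0] and 0 <= c + dc < walls.shape[1]
                      and not walls[r + dr, c + dc]]
        pos = max(candidates, key=lambda p: V[p])
        path.append(pos)
    return path
```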

4.
张汝波  孟雷  史长亭 《计算机应用》2015,35(8):2375-2379
To address the excessive repair cost in repairing software faults of autonomous underwater vehicles (AUVs) and the fact that the system environment is only partially observable, an AUV software fault-repair method based on micro-reboot techniques and a partially observable Markov decision process (POMDP) model is proposed. Exploiting the layered structure of AUV software, the method builds a three-layer reboot structure based on micro-reboot, which facilitates fine-grained self-repair micro-reboot strategies. Based on POMDP theory, a POMDP model for AUV software self-repair is given, and a point-based value iteration (PBVI) algorithm is used to solve it and generate a repair policy whose objective is to minimize the cumulative repair cost, so that the system can execute repair actions at low cost in a partially observable environment. Simulation results show that this method can handle AUV software faults caused by software aging and system calls, and that, compared with a two-layer micro-reboot policy and a fixed three-layer micro-reboot policy, it is clearly better in cumulative fault-repair time and operational stability.
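
For reference, the point-based value iteration (PBVI) backup named in the abstract can be sketched as follows. This is the generic PBVI backup over a fixed set of belief points, written in Python; the AUV repair-specific states, actions, and costs are not reproduced, and the tensors T, O, R are placeholders.

```python
import numpy as np

def pbvi_backup(B, Gamma, T, O, R, gamma):
    """One point-based backup over a fixed set of belief points B.
    T[a, s, s'] transition probs, O[a, s', o] observation probs, R[a, s] rewards;
    Gamma is the current list of alpha-vectors."""
    Gamma = np.asarray(Gamma)                         # shape (k, n_states)
    nA, nS, _ = T.shape
    nO = O.shape[2]
    new_Gamma = []
    for b in B:
        best_val, best_alpha = -np.inf, None
        for a in range(nA):
            # g[:, i](s) = gamma * sum_{s'} T[a, s, s'] * O[a, s', o] * alpha_i(s')
            alpha_a = R[a].copy()
            for o in range(nO):
                g = gamma * (T[a] * O[a, :, o][None, :]) @ Gamma.T   # (nS, k)
                alpha_a += g[:, np.argmax(b @ g)]     # best projected vector for this belief
            val = b @ alpha_a
            if val > best_val:
                best_val, best_alpha = val, alpha_a
        new_Gamma.append(best_alpha)
    return new_Gamma
```

A typical use starts from Gamma = [np.zeros(n_states)] and repeats the backup until the values at the belief points stop improving.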

5.
Two abstraction models of Markov decision processes
Introducing Markov decision processes at an abstract level allows complex Markov decision processes to be expressed concisely and declaratively, addressing the problem of representing the large state spaces that conventional Markov decision processes (MDPs) encounter in practice. This paper introduces the basic concepts of two types of abstract MDPs, structured and aggregated, together with exact and approximate algorithms for finding optimal policies in typical abstract MDPs, including one algorithm that differs fundamentally from conventional MDPs: extending the Bellman equation to abstract state spaces. It also summarizes their research history and offers some prospects for their development, so as to give readers a thorough, comprehensive, and well-focused understanding.
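
For context, the conventional Bellman optimality equation that these abstraction methods generalize to abstract state spaces is, in standard form,

\[
V^{*}(s) \;=\; \max_{a \in A}\Big[\, R(s,a) \;+\; \gamma \sum_{s' \in S} P(s' \mid s, a)\, V^{*}(s') \,\Big],
\]

where the abstract-MDP variants replace the concrete states s, s' by abstract states and adapt P and R accordingly.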

6.
With the development of the Internet of Things, large-scale data transmission increases network load and data offloading time. To optimize offloading time, this paper proposes an offloading-node selection strategy based on Markov decision processes (MDPs) in an edge-computing environment. A network model supporting cellular and WiFi communication is first constructed, consisting of terminal devices, offloading nodes, and a central cloud; an MDP model is then built from the positions of the terminal devices and the amount of data to be uploaded, ...

7.
The introduction of logical Markov decision processes and relational Markov decision processes makes it possible to express complex Markov decision processes concisely and declaratively. This paper first introduces the concepts of logical and relational Markov decision processes, and then focuses on algorithms that differ fundamentally from those for ordinary MDPs: (1) conversion methods that rely on RL over the ground state space; (2) methods that extend the Bellman equation to abstract state spaces; (3) methods that search for approximately optimal policies using a policy-bias space. Finally, it summarizes the current state of research and offers some prospects for future development.

8.
Using the Markov chain of (1), this paper designs a FORTRAN program for computing the average duration of stock-price rises and falls and, using Markov decision theory, implements a practical program for optimal stock buy/sell strategies.
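
The original program is in FORTRAN and is not reproduced here; as an illustration of the same kind of computation, the following Python sketch estimates run lengths and mean first-passage times for a hypothetical two-state up/down chain (the transition matrix is made up).

```python
import numpy as np

# Hypothetical two-state chain for daily stock movement: 0 = "up", 1 = "down".
P = np.array([[0.6, 0.4],
              [0.3, 0.7]])

def expected_run_length(P, state):
    """Expected number of consecutive days the chain stays in `state`
    (geometric sojourn time: 1 / (1 - P[state, state]))."""
    return 1.0 / (1.0 - P[state, state])

def mean_first_passage(P, target):
    """Expected number of steps to reach `target` from every other state,
    solving m_i = 1 + sum_{k != target} P[i, k] * m_k."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]
    m = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    return dict(zip(others, m))

print("average length of an 'up' run:", expected_run_length(P, 0))
print("mean time to reach 'up':", mean_first_passage(P, 0))
```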

9.
Research on decentralized communication decisions in multi-agent systems
郑延斌  郭凌云  刘晶晶 《计算机应用》2012,32(10):2875-2878
Communication is the most effective and direct means of coordination and cooperation in multi-agent systems (MAS), but its cost limits its use. To reduce the amount of communication during MAS coordination, a heuristic algorithm is proposed in which an agent communicates only those observations that improve the team's expected reward. Experimental results show that this selection of communicated information uses communication bandwidth efficiently and helps improve system performance.

10.
Advances in adaptive decision-making for Markov decision processes
Building on an introduction to general Markov decision processes, this paper analyzes the basic ideas, concrete algorithmic implementations, and corresponding conclusions of the main current adaptive decision methods for Markov processes, summarizes the characteristics of existing adaptive decision algorithms, and points out problems that remain to be solved.

11.
This paper presents a method for learning decision theoretic models of human behaviors from video data. Our system learns relationships between the movements of a person, the context in which they are acting, and a utility function. This learning makes explicit that the meaning of a behavior to an observer is contained in its relationship to actions and outcomes. An agent wishing to capitalize on these relationships must learn to distinguish the behaviors according to how they help the agent to maximize utility. The model we use is a partially observable Markov decision process, or POMDP. The video observations are integrated into the POMDP using a dynamic Bayesian network that creates spatial and temporal abstractions amenable to decision making at the high level. The parameters of the model are learned from training data using an a posteriori constrained optimization technique based on the expectation-maximization algorithm. The system automatically discovers classes of behaviors and determines which are important for choosing actions that optimize over the utility of possible outcomes. This type of learning obviates the need for labeled data from expert knowledge about which behaviors are significant and removes bias about what behaviors may be useful to recognize in a particular situation. We show results in three interactions: a single player imitation game, a gestural robotic control problem, and a card game played by two people.

12.
Artificial Intelligence, 2007, 171(8-9): 453-490
This study extends the framework of partially observable Markov decision processes (POMDPs) to allow their parameters, i.e., the probability values in the state transition functions and the observation functions, to be imprecisely specified. It is shown that this extension can reduce the computational costs associated with the solution of these problems. First, the new framework, POMDPs with imprecise parameters (POMDPIPs), is formulated. We consider (1) the interval case, in which each parameter is imprecisely specified by an interval that indicates possible values of the parameter, and (2) the point-set case, in which each probability distribution is imprecisely specified by a set of possible distributions. Second, a new optimality criterion for POMDPIPs is introduced. As in POMDPs, the criterion is to regard a policy, i.e., an action-selection rule, as optimal if it maximizes the expected total reward. The expected total reward, however, cannot be calculated precisely in POMDPIPs, because of the parameter imprecision. Instead, we estimate the total reward by adopting arbitrary second-order beliefs, i.e., beliefs in the imprecisely specified state transition functions and observation functions. Although there are many possible choices for these second-order beliefs, we regard a policy as optimal as long as there is at least one of such choices with which the policy maximizes the total reward. Thus there can be multiple optimal policies for a POMDPIP. We regard these policies as equally optimal, and aim at obtaining one of them. By appropriately choosing which second-order beliefs to use in estimating the total reward, computational costs incurred in obtaining such an optimal policy can be reduced significantly. We provide an exact solution algorithm for POMDPIPs that does this efficiently. Third, the performance of such an optimal policy, as well as the computational complexity of the algorithm, are analyzed theoretically. Last, empirical studies show that our algorithm quickly obtains satisfactory policies to many POMDPIPs.

13.
In a spoken dialog system, determining which action a machine should take in a given situation is a difficult problem because automatic speech recognition is unreliable and hence the state of the conversation can never be known with certainty. Much of the research in spoken dialog systems centres on mitigating this uncertainty and recent work has focussed on three largely disparate techniques: parallel dialog state hypotheses, local use of confidence scores, and automated planning. While in isolation each of these approaches can improve action selection, taken together they currently lack a unified statistical framework that admits global optimization. In this paper we cast a spoken dialog system as a partially observable Markov decision process (POMDP). We show how this formulation unifies and extends existing techniques to form a single principled framework. A number of illustrations are used to show qualitatively the potential benefits of POMDPs compared to existing techniques, and empirical results from dialog simulations are presented which demonstrate significant quantitative gains. Finally, some of the key challenges to advancing this method – in particular scalability – are briefly outlined.

14.
We consider the problem of control of hierarchical Markov decision processes and develop a simulation based two-timescale actor-critic algorithm in a general framework. We also develop certain approximation algorithms that require less computation and satisfy a performance bound. One of the approximation algorithms is a three-timescale actor-critic algorithm, while the other is a two-timescale algorithm that operates in two separate stages. All our algorithms recursively update randomized policies using the simultaneous perturbation stochastic approximation (SPSA) methodology. We briefly present the convergence analysis of our algorithms. We then present numerical experiments on a problem of production planning in semiconductor fabs on which we compare the performance of all algorithms together with policy iteration. Algorithms based on certain Hadamard matrix based deterministic perturbations are found to show the best results.
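
The SPSA update that these actor-critic schemes rely on can be sketched in a few lines. The following Python fragment is a generic SPSA recursion on a placeholder objective; the gain schedules and the toy quadratic are illustrative, not the paper's production-planning setup.

```python
import numpy as np

def spsa_gradient(J, theta, c):
    """One SPSA gradient estimate of J at theta: perturb all coordinates
    simultaneously with a random +/-1 (Rademacher) vector delta."""
    delta = np.random.choice([-1.0, 1.0], size=theta.shape)
    return (J(theta + c * delta) - J(theta - c * delta)) / (2.0 * c * delta)

def spsa_minimize(J, theta0, n_iters=500, a=0.1, c=0.1):
    """Basic SPSA recursion theta_{k+1} = theta_k - a_k * g_k with decaying gains."""
    theta = np.array(theta0, dtype=float)
    for k in range(1, n_iters + 1):
        a_k = a / k ** 0.602          # commonly used SPSA gain schedules
        c_k = c / k ** 0.101
        theta -= a_k * spsa_gradient(J, theta, c_k)
    return theta

# Toy usage: minimize a quadratic (a stand-in for the negative average reward).
theta_hat = spsa_minimize(lambda th: np.sum((th - 3.0) ** 2), np.zeros(4))
```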

15.
16.
Opacity is a generic security property that has been defined on (non-probabilistic) transition systems and later on Markov chains with labels. For a secret predicate, given as a subset of runs, and a function describing the view of an external observer, the value of interest for opacity is a measure of the set of runs disclosing the secret. We extend this definition to the richer framework of Markov decision processes, where non-deterministic choice is combined with probabilistic transitions, and we study related decidability problems with partial or complete observation hypotheses for the schedulers. We prove that all questions are decidable with complete observation and ω-regular secrets. With partial observation, we prove that all quantitative questions are undecidable but the question whether a system is almost surely non-opaque becomes decidable for a restricted class of ω-regular secrets, as well as for all ω-regular secrets under finite-memory schedulers.

17.
This communique presents an algorithm called “value set iteration” (VSI) for solving infinite horizon discounted Markov decision processes with finite state and action spaces as a simple generalization of value iteration (VI) and as a counterpart to Chang’s policy set iteration. A sequence of value functions is generated by VSI based on manipulating a set of value functions at each iteration and it converges to the optimal value function. VSI preserves convergence properties of VI while converging no slower than VI and in particular, if the set used in VSI contains the value functions of independently generated sample-policies from a given distribution and a properly defined policy switching policy, a probabilistic exponential convergence rate of VSI can be established. Because the set used in VSI can contain the value functions of any policies generated by other existing algorithms, VSI is also a general framework of combining multiple solution methods.
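
Under one plausible reading of the description above, the set manipulation amounts to taking a pointwise maximum of the current iterate with candidate value functions (e.g. values of sample policies, which lie below the optimum) before each Bellman backup. The Python sketch below implements that reading; it reduces to plain VI when the set is empty, but it is not necessarily the paper's exact operator.

```python
import numpy as np

def bellman_backup(V, P, R, gamma):
    """Standard optimal Bellman operator for a finite MDP.
    P[a, s, s'] transition probs, R[a, s] expected rewards."""
    Q = R + gamma * np.einsum('ast,t->as', P, V)   # Q[a, s]
    return Q.max(axis=0)

def value_set_iteration(P, R, gamma, value_set, n_iters=1000, tol=1e-8):
    """Hedged sketch of the VSI idea: before each backup, replace the current
    iterate by its pointwise maximum with a set of candidate value functions
    (e.g. values of sample policies). With an empty set this is exactly VI."""
    n_states = P.shape[1]
    V = np.zeros(n_states)
    for _ in range(n_iters):
        candidates = np.vstack([V] + list(value_set)) if value_set else V[None, :]
        V_new = bellman_backup(candidates.max(axis=0), P, R, gamma)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```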

18.
Weighted Markov decision processes (MDPs) have long been used to model quantitative aspects of systems in the presence of uncertainty. However, much of the literature on such MDPs takes a monolithic approach, by modelling a system as a particular MDP; properties of the system are then inferred by analysis of that particular MDP. In contrast in this paper we develop compositional methods for reasoning about weighted MDPs, as a possible basis for compositional reasoning about their quantitative behaviour. In particular we approach these systems from a process algebraic point of view. For these we define a coinductive simulation-based behavioural preorder which is compositional in the sense that it is preserved by structural operators for constructing weighted MDPs from components.

19.
This paper considers a Markov decision process in Borel state and action spaces with the aggregated (or iterated) coherent risk measure to be minimised. For this problem, we establish the Bellman optimality equation as well as the value and policy iteration algorithms, and show the existence of a deterministic stationary optimal policy. The cost function, while being allowed to be unbounded from below (in the sense that its negative part needs to be bounded by some nonnegative real-valued, possibly unbounded weight function), can be arbitrarily unbounded from above and possibly infinitely valued.

20.
For a countable-state Markov decision process we introduce an embedding which produces a finite-state Markov decision process. The finite-state embedded process has the same optimal cost, and moreover, it has the same dynamics as the original process when restricting to the approximating set. The embedded process can be used as an approximation which, being finite, is more convenient for computation and implementation.
