Related Articles
20 related articles found (search time: 15 ms)
1.
This paper proposes a stochastic approach for optimizing the control parameters (the probabilities of crossover and mutation) of genetic algorithms (GAs). The genetic search is modelled as a controlled Markovian process whose transitions depend on the control parameters. Based on a given performance index of the populations, a stochastic optimization problem is formulated for the control of the GA parameters and analysed over the controlled Markovian process generated by the genetic search. The optimal values of the control parameters are found from a recursive estimate, obtained by introducing a stochastic gradient of the performance index and using a stochastic approximation algorithm. The algorithm is thus able to estimate the stochastic gradient and adapt the control parameters in the direction of descent. A non-stationary Markov model is developed to investigate the asymptotic convergence properties of the proposed genetic algorithm, and it is proved that the proposed algorithm converges asymptotically. Numerical results on classical test functions demonstrate the potential of the proposed algorithm.
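As a rough illustration of the idea (not the paper's exact recursion), the sketch below runs a simple real-coded GA on the sphere function and adapts the crossover and mutation probabilities with a simultaneous-perturbation stochastic-approximation step on a population performance index; the GA operators, gain sequences, and parameter bounds are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                          # classical test function to minimise
    return np.sum(x ** 2, axis=-1)

def ga_generation(pop, pc, pm, sigma=0.1):
    """One GA generation: tournament selection, uniform crossover, Gaussian mutation."""
    n, d = pop.shape
    fit = sphere(pop)
    i, j = rng.integers(n, size=(2, n))                       # tournament selection
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    mates = parents[rng.permutation(n)]
    mask = (rng.random((n, d)) < 0.5) & (rng.random((n, 1)) < pc)   # crossover w.p. pc
    children = np.where(mask, mates, parents)
    mut = rng.random((n, d)) < pm                              # per-gene mutation w.p. pm
    return children + mut * rng.normal(0.0, sigma, (n, d))

def performance(pop):
    return -np.mean(sphere(pop))        # population performance index (higher is better)

theta = np.array([0.7, 0.05])           # control parameters [pc, pm] to be adapted
pop = rng.uniform(-5, 5, (40, 10))
for k in range(1, 201):
    a_k, c_k = 0.05 / k ** 0.6, 0.1 / k ** 0.3                # stochastic-approximation gains
    delta = rng.choice([-1.0, 1.0], size=2)                   # simultaneous perturbation
    plus = np.clip(theta + c_k * delta, 0.01, 1.0)
    minus = np.clip(theta - c_k * delta, 0.01, 1.0)
    jp = performance(ga_generation(pop, *plus))
    jm = performance(ga_generation(pop, *minus))
    grad = (jp - jm) / (2 * c_k * delta)                      # stochastic gradient estimate
    theta = np.clip(theta + a_k * grad, 0.01, 1.0)            # adapt (pc, pm)
    pop = ga_generation(pop, *theta)                          # evolve with adapted parameters
    if k % 50 == 0:
        print(f"gen {k:3d}  pc={theta[0]:.3f}  pm={theta[1]:.3f}  J={performance(pop):.4f}")
```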

2.
Petri nets are a useful tool for the performance, dependability, and performability analysis of complex systems. Their modeling power increases further if non-exponentially distributed events are considered. However, the inclusion of non-exponential distributions destroys the memoryless property and requires specifying how the marking process is conditioned on its past history. We consider, in particular, the class of stochastic Petri nets whose marking process can be mapped into a Markov regenerative process. An adequate mathematical framework is developed for this class of Markov regenerative stochastic Petri nets (MRSPNs). A unified approach for the solution of MRSPNs in which different preemption policies can be defined within the same model is presented. The solution is provided both in steady state and under transient conditions. An example concludes the paper.

3.
It is well known that stochastic control systems can be viewed as Markov decision processes (MDPs) with continuous state spaces. In this paper, we propose applying the policy iteration approach of MDPs to the optimal control problem for stochastic systems. We first provide an optimality equation based on performance potentials and develop a policy iteration procedure. We then apply policy iteration to the jump linear quadratic problem and obtain the coupled Riccati equations for its optimal solution. The approach is applicable to linear as well as nonlinear systems and can be implemented on-line on real-world systems without identifying the full system structure and parameters.
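For intuition, here is a minimal policy-iteration sketch on a small finite MDP, i.e. a discrete stand-in for the continuous-state problems discussed above; the transition probabilities and rewards are invented for illustration.

```python
import numpy as np

# A toy MDP: 3 states, 2 actions. P[a, s, s'] are transition probabilities,
# R[a, s] is the expected one-step reward (illustrative numbers).
P = np.array([[[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.2, 0.8]],
              [[0.5, 0.5, 0.0],
               [0.0, 0.5, 0.5],
               [0.0, 0.0, 1.0]]])
R = np.array([[1.0, 0.5, 0.0],
              [0.8, 1.2, 0.1]])
gamma = 0.95
n_states = 3

policy = np.zeros(n_states, dtype=int)              # start with action 0 everywhere
while True:
    # policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
    P_pi = P[policy, np.arange(n_states)]
    R_pi = R[policy, np.arange(n_states)]
    V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
    # policy improvement: greedy with respect to the Q-values
    Q = R + gamma * P @ V                            # shape (n_actions, n_states)
    new_policy = Q.argmax(axis=0)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("optimal policy:", policy, " values:", np.round(V, 3))
```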

4.
Aircraft collision avoidance and trajectory optimization are key problems in an air transportation system. This paper focuses on solving them with a stochastic optimal control approach. The major contribution is a stochastic optimal control algorithm that dynamically adjusts and optimizes the aircraft trajectory while accounting for random wind dynamics and convective weather areas of changing size. Although the system is modeled by a stochastic differential equation, the optimal feedback control can be computed as the solution of a partial differential equation, namely an elliptic Hamilton-Jacobi-Bellman equation. This equation is solved numerically using a Markov chain approximation approach, and a comparison of three iterative methods and two optimization search methods is presented. Simulations show that the proposed method performs better in reducing the conflict probability in the system and is feasible for real applications.
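A minimal sketch of the Markov chain approximation idea on a toy one-dimensional problem (minimising an exit-time cost for dx = u dt + σ dW), using the standard Kushner-Dupuis upwind transition probabilities and plain value iteration; the dynamics, cost, and grid are assumptions for the example and are unrelated to the paper's aircraft/weather model.

```python
import numpy as np

# Markov chain approximation for: minimise E[ integral of (x^2 + u^2) dt ]
# subject to dx = u dt + sigma dW, until x exits (-1, 1). Zero exit cost.
sigma, h = 0.5, 0.05
x = np.arange(-1.0, 1.0 + h / 2, h)            # grid, boundaries included
controls = np.linspace(-2.0, 2.0, 21)          # discretised control set
V = np.zeros_like(x)                           # value function, 0 on the boundary

for sweep in range(20000):                     # value iteration to a fixed point
    best = np.full_like(x, np.inf)
    for u in controls:
        D = sigma ** 2 + h * abs(u)            # normalisation of the local chain
        p_up = (sigma ** 2 / 2 + h * max(u, 0.0)) / D
        p_dn = (sigma ** 2 / 2 + h * max(-u, 0.0)) / D
        dt = h ** 2 / D                        # interpolation interval of the chain
        cand = (x[1:-1] ** 2 + u ** 2) * dt + p_up * V[2:] + p_dn * V[:-2]
        best[1:-1] = np.minimum(best[1:-1], cand)
    V_new = V.copy()
    V_new[1:-1] = best[1:-1]
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("sweeps:", sweep, " approximate value at x = 0:", round(V[len(x) // 2], 4))
```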

5.
6.
This paper considers the robust stochastic stability and PI tracking control problem for Markov jump systems with both input delay and an unknown nonlinear function. Based on the traditional PI control strategy, a new controller design scheme is proposed for nonlinear time-delay Markov jump systems that achieves multiple control objectives, including robust stochastic stability and tracking performance. Using Lyapunov stability theory and LMI algorithms, a sufficient condition for the solvability of the robust stochastic stability and tracking control problem is obtained. The desired controller with PI structure is then designed, ensuring that the resulting closed-loop system is robustly stochastically stable and that the system state achieves the desired tracking performance. Finally, a numerical example illustrates the effectiveness of the proposed results.

7.
A hidden Markov model for the traffic congestion control problem in transmission control protocol (TCP) networks is developed, and the question of observability of this system is posed. Of specific interest are the dependence of observability on the congestion control law and the interaction between observability ideas and the effectiveness of feedback control. Analysis proceeds with a survey of observability concepts and an extension of some available definitions for linear and nonlinear stochastic systems. The key idea is to link the improvement of state estimator performance to the conditioning on the output data sequence. The observability development proceeds from linear deterministic systems to linear Gaussian systems, nonlinear systems, etc., with backwards compatibility to deterministic ideas. The principal concepts relate to the entropy decrease of scalar functions of the state, which in the linear case are describable in terms of covariance matrices. A feature of nonlinear systems is that the estimator properties may affect the closed-loop control performance. Results are derived linking stochastic reconstructibility to strict improvement of the optimal closed-loop control performance over open-loop control for the hidden Markov model. The entropy provides a means to quantify and thus order simulation results for a simplified TCP network. Motivated by the link between feedback control and reconstructibility, the entropy formulation is also explored as a means to discriminate between different control strategies for improving estimator performance. This approach has connections to dual-adaptive control ideas, where the control has the simultaneous and opposing goals of regulating the system and of exciting the system to prevent estimator divergence.
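The following sketch illustrates the entropy-decrease idea on an assumed two-state hidden Markov model: a Bayes (forward) filter conditions on the output sequence, and the average entropy of the filtered state distribution is compared with the entropy of the unconditioned stationary distribution. The model parameters are illustrative, not a TCP congestion model.

```python
import numpy as np

rng = np.random.default_rng(1)

# A small two-state HMM: A[i, j] = P(x_{k+1}=j | x_k=i), B[i, y] = P(y_k=y | x_k=i).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# simulate a state/observation trajectory
T, x, ys = 500, 0, []
for _ in range(T):
    x = rng.choice(2, p=A[x])
    ys.append(rng.choice(2, p=B[x]))

# forward (Bayes) filter: posterior over the hidden state given the outputs
pi, H = np.array([0.5, 0.5]), []
for y in ys:
    pred = pi @ A                      # one-step prediction
    post = pred * B[:, y]              # condition on the new observation
    pi = post / post.sum()
    H.append(entropy(pi))

# conditioning on the output sequence reduces the state uncertainty on average
w, v = np.linalg.eig(A.T)
stat = np.real(v[:, np.argmax(np.real(w))])
stat = stat / stat.sum()
print("entropy of the stationary (unconditioned) distribution:", round(entropy(stat), 3))
print("average entropy of the filtered distribution          :", round(float(np.mean(H)), 3))
```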

8.
We consider the dynamic control of two queues competing for the services of one server. The problem is to design a server time allocation strategy when the sizes of the queues are not observable. The performance criterion used is total expected aggregate delay. The server is assumed to observe arrivals but not departures. This problem is formulated as a stochastic optimal control problem with partial observations. The framework we adopt is that of stochastic control in discrete time and countable "state space." The observations are modeled as discrete-time, 0-1 point processes with rates that are influenced by a Markov chain. Examples from computer control of urban traffic are given to illustrate the practical motivation behind the present work and to relate it to our earlier work on the subject. A particular feature of the formulation is that the observations are influenced by transitions of the state of the Markov chain. The classical tools of the simple Bayes rule and dynamic programming suffice for the analysis. In particular, we show that the "one-step" predicted density for the state of the Markov chain, given the point process observations, is a sufficient statistic for control. This framework is then applied to the specific problem of two queues competing for the services of one server. We obtain explicit solutions for the finite-time expected aggregate delay problem. The implications of these results for practical applications, as well as implementation aspects of the resulting optimal control laws, are discussed.

9.
Restoration of binary texture images based on mean-field annealing
汪涛  俞瑞钊 《计算机学报》1994,17(8):618-623
Based on the mean-field annealing technique, this paper proposes an estimation and restoration algorithm for binary texture images. The texture image is described as the combined result of a Markov random field model and a noise process, and the algorithm recursively alternates between model parameter estimation and image restoration. Its core is a statistical relaxation search algorithm; the mean-field method converts the statistical relaxation equations into a set of deterministic equations, which effectively improves computational efficiency. Experimental results on noisy binary texture images demonstrate the effectiveness of the algorithm.
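A minimal sketch of the mean-field annealing step for a simplified setting (an Ising-type MRF prior with known coupling and i.i.d. Gaussian noise, without the recursive parameter-estimation stage described in the abstract); all parameter values and the block "texture" are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# ground-truth binary image (+1 / -1) with a simple block pattern, plus Gaussian noise
clean = -np.ones((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0.0, 1.0, clean.shape)

beta, sigma2 = 1.0, 1.0            # MRF coupling and noise variance (assumed known)
m = np.tanh(noisy / sigma2)        # mean-field magnetisations, initialised from the data

for T in np.geomspace(4.0, 0.1, 30):       # annealing schedule on the temperature
    for _ in range(10):                     # mean-field fixed-point sweeps
        nb = (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
              np.roll(m, 1, 1) + np.roll(m, -1, 1))       # 4-neighbour sum
        field = beta * nb + noisy / sigma2                 # local field: prior + data term
        m = np.tanh(field / T)                             # deterministic mean-field update

restored = np.where(m > 0, 1.0, -1.0)
print("pixel error rate:", float(np.mean(restored != clean)))
```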

10.
For the dual-delay problem in networked control systems, where delays occur both between sensor and controller and between controller and actuator, a state-feedback control strategy based on a Markov model is proposed. Compared with the traditional way of applying Markov stochastic processes, this strategy uses two Markov chains, one describing each delay, and through state feedback the stochastic system is described as a discrete Markov jump system with four random parameters. Sufficient conditions for the stability of this system are obtained using Lyapunov finite-time stability theory, and a feasible feedback matrix is obtained via linear matrix inequalities (LMIs). Numerical simulation results further demonstrate the effectiveness of the strategy.

11.
刘义才  刘斌  张永  李维刚 《控制与决策》2017,32(9):1565-1573
For networked control systems with random network delays and packet dropouts on both sides of the loop, the packet-dropout behaviour is first described by a Markov stochastic process. A discrete-time jump system model with parameter uncertainties is then established via system augmentation. Under the condition that some, or even all, of the elements of the Markov chain's transition probability matrix are unknown, a time-varying controller that depends on the dropout characteristics and guarantees mean-square stability of the system is designed using Lyapunov stability theory and stochastic analysis. Finally, a numerical example demonstrates the effectiveness of the proposed method.

12.
This paper studies the stochastic stability and H∞ control problem for a class of networked control systems with data packet dropouts and partially unknown transition probabilities. A networked control system with packet dropouts between the sensor and the controller and between the controller and the actuator is modelled as a jump system with four subsystems, whose switching follows a Markov jump process with partially unknown transition probabilities. Sufficient conditions for the stochastic stability of such systems are obtained using the Lyapunov stability theorem and linear matrix inequality techniques, and a design method for the corresponding H∞ state-feedback controller is given. Numerical simulation results verify the correctness and effectiveness of the proposed method.

13.
Performance analysis of distributed real-time databases
In a distributed process control system, information about the behavior of physical processes is usually collected and stored in a real-time database that can be remotely accessed by human operators. In this paper we propose an analytic approach to compute the response-time distribution of operator consoles in a distributed process control environment. The technique we develop is based on Markov regenerative processes (MRGPs) and is described with the assistance of deterministic and stochastic Petri nets (DSPNs). We construct exact models for the performance analysis of centralized and decentralized database architectures. However, due to the limitations of the exact solution, we also propose an approximate solution, which is then used to study the response-time distributions of large systems.

14.
Stochastic model checking of continuous-time Markov processes
Functional correctness and satisfiable performance are two very important aspects of the trustworthiness requirements of complex systems. Combining qualitative verification with quantitative analysis, functional verification and performance analysis of complex concurrent systems are carried out so that system trustworthiness can be evaluated in a unified way. Continuous-time Markov decision processes (CTMDPs) can uniformly characterise important features of complex systems such as probabilistic choice, stochastic timing, and nondeterminism. This paper proposes using a CTMDP as the model for both qualitative verification and quantitative analysis, transforming the functional verification and performance analysis of a complex system into the computation of reachability probabilities in the CTMDP, and proves the correctness of the verification procedure; model checking is finally carried out with the model checker MRMC (Markov Reward Model Checker). Theoretical analysis shows that the proposed verification requirements for the CTMDP model are necessary and that the verification approach and method are feasible.
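As a small illustration of reducing verification questions to reachability probabilities, the sketch below computes maximum reachability probabilities by value iteration on a tiny discrete-time MDP abstraction (not a full CTMDP, and not connected to MRMC); the model data are invented.

```python
import numpy as np

# Maximum probability of reaching a goal set in a small MDP (illustrative
# numbers). P[a, s, s'] are the transition probabilities.
P = np.array([[[0.0, 0.7, 0.3],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]],
              [[0.2, 0.3, 0.5],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]]])
goal = np.array([False, True, False])      # state 1 is the goal, state 2 is a trap

p = goal.astype(float)                     # reachability probability, 1 on the goal
for _ in range(1000):                      # value iteration for max reachability
    p_new = np.max(P @ p, axis=0)          # best action in every state
    p_new[goal] = 1.0                      # goal states reach with probability 1
    if np.max(np.abs(p_new - p)) < 1e-12:
        break
    p = p_new

print("max reachability probabilities:", np.round(p, 4))
```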

15.
We examine the parallel execution of a class of stochastic algorithms called Markov chain Monte Carlo (MCMC) algorithms. We focus on MCMC algorithms in the context of image processing, using Markov random field models. Our parallelisation approach is based on several concurrently running instances of the same stochastic algorithm, each dealing with the whole data set. Firstly, we show that the speed-up of the parallel algorithm is limited because of the statistical properties of the MCMC algorithm, and we examine coupled MCMC as a remedy for this problem. Secondly, we exploit the parallel execution to monitor the convergence of the stochastic algorithms in a statistically reliable manner. This new convergence measure for MCMC algorithms performs well and is an improvement on known convergence measures. We also link our findings with recent work in the statistical theory of MCMC.
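A minimal sketch of the parallel-chains idea: several instances of the same random-walk Metropolis sampler are run from overdispersed starting points, and their output is monitored with the standard Gelman-Rubin diagnostic (used here as a generic stand-in, not the new convergence measure proposed in the paper); the target density and tuning constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    """Unnormalised log-density of a toy target (mixture of two Gaussians)."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def metropolis_chain(n_steps, x0, step=1.0):
    """A single random-walk Metropolis chain."""
    xs = np.empty(n_steps)
    x, lp = x0, log_target(x0)
    for k in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:        # accept/reject
            x, lp = prop, lp_prop
        xs[k] = x
    return xs

# several instances of the same algorithm, started from overdispersed points
chains = np.stack([metropolis_chain(5000, x0) for x0 in (-10.0, -3.0, 3.0, 10.0)])
chains = chains[:, 2500:]                              # discard burn-in

# Gelman-Rubin potential scale reduction factor as a convergence monitor
m, n = chains.shape
W = chains.var(axis=1, ddof=1).mean()                  # within-chain variance
B = n * chains.mean(axis=1).var(ddof=1)                # between-chain variance
R_hat = np.sqrt(((n - 1) / n * W + B / n) / W)
print("R-hat:", round(float(R_hat), 3), "(values near 1 indicate convergence)")
```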

16.
This paper presents a stochastic modelling framework based on stochastic automata networks (SANs) for the analysis of complex biochemical reaction networks. Our approach takes into account the discrete character of the quantities of components (i.e. the individual populations of the chemical species involved) and the inherently probabilistic nature of microscopic molecular collisions. Moreover, as with process calculi that have recently been applied to systems biology, the SAN approach has the advantage of a modular design process that is well suited to abstraction. The associated composition operator leads to an elegant and compact representation of the underlying continuous-time Markov chain in the form of a Kronecker product. SANs have been used extensively in the performance analysis of computer systems, and a large variety of numerical and simulative analysis algorithms exist. We illustrate that describing a biochemical reaction network by means of a SAN offers promising opportunities to gain insight into the quantitative behaviour of biological systems while taking advantage of the benefits of a compositional modelling approach.
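A minimal sketch of the Kronecker representation for the simplest case of a SAN with independent local events (no synchronising transitions): the composed CTMC generator is a sum of terms I ⊗ … ⊗ Q_k ⊗ … ⊗ I. The local generators below are illustrative, not a real biochemical network.

```python
import numpy as np
from functools import reduce

# Two independent stochastic automata (illustrative rates): each local
# generator Q_k has rows summing to zero.
Q1 = np.array([[-1.0,  1.0],
               [ 2.0, -2.0]])          # e.g. a species switching on/off
Q2 = np.array([[-0.5,  0.5,  0.0],
               [ 0.0, -1.5,  1.5],
               [ 3.0,  0.0, -3.0]])    # e.g. a three-level population

def san_generator(local_generators):
    """Generator of the composed CTMC as a sum of Kronecker terms
    I (x) ... (x) Q_k (x) ... (x) I (local transitions only)."""
    dims = [q.shape[0] for q in local_generators]
    size = int(np.prod(dims))
    Q = np.zeros((size, size))
    for k, Qk in enumerate(local_generators):
        factors = [np.eye(d) for d in dims]
        factors[k] = Qk
        Q += reduce(np.kron, factors)
    return Q

Q = san_generator([Q1, Q2])
print("composed state space size:", Q.shape[0])       # 2 * 3 = 6

# stationary distribution: solve pi Q = 0 with pi summing to one
A = np.vstack([Q.T, np.ones(Q.shape[0])])
b = np.append(np.zeros(Q.shape[0]), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("stationary distribution:", np.round(pi, 4))
```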

17.
Stochastic Petri nets are a tool for system design and analysis that supports both qualitative and quantitative analysis of a system. To use stochastic Petri nets effectively for quantitative performance analysis, conversion rules between a stochastic Petri net model and a Markov chain are summarised and implemented, based on the algorithm for transforming the former into the latter. The conversion rules introduce evolution and merging rules during transition firing and convert the stochastic Petri net model into a Markov chain. The resulting Markov chain can then be used to quantitatively analyse several performance measures of the stochastic Petri net model. Experimental results show that the conversion rules are correct and feasible.
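A minimal sketch of the conversion idea, under the usual assumption that all transitions are exponentially timed: the reachability graph of a small illustrative SPN is built by breadth-first search and turned into a CTMC generator, whose steady-state distribution then yields performance measures. The net and rates are invented; the paper's specific evolution and merging rules are not reproduced.

```python
import numpy as np
from collections import deque

# A small stochastic Petri net (illustrative): a cyclic two-place net.
# Each transition: (input place counts, output place counts, firing rate).
transitions = [
    (np.array([1, 0]), np.array([0, 1]), 2.0),   # t1: move a token from p0 to p1
    (np.array([0, 1]), np.array([1, 0]), 1.0),   # t2: move it back
]
m0 = np.array([2, 0])                            # initial marking: 2 tokens in p0

# build the reachability graph by breadth-first exploration of markings
markings, index, edges = [m0], {tuple(m0): 0}, []
queue = deque([m0])
while queue:
    m = queue.popleft()
    for inp, out, rate in transitions:
        if np.all(m >= inp):                     # transition enabled in marking m
            m2 = m - inp + out
            if tuple(m2) not in index:
                index[tuple(m2)] = len(markings)
                markings.append(m2)
                queue.append(m2)
            edges.append((index[tuple(m)], index[tuple(m2)], rate))

# assemble the CTMC generator from the reachability graph
n = len(markings)
Q = np.zeros((n, n))
for i, j, rate in edges:
    Q[i, j] += rate
    Q[i, i] -= rate

# steady-state marking distribution as an example performance measure
A = np.vstack([Q.T, np.ones(n)])
pi, *_ = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)
for m, p in zip(markings, np.round(pi, 4)):
    print("marking", m, "probability", p)
```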

18.
胡超芳  宗群  孙连坤 《计算机工程》2010,36(24):102-103
A network controller subject to bandwidth constraints is designed. A Poisson process with time-dependent intensity is used to form a stochastic communication-logic scheduling policy, so that the system state is updated only a finite number of times. Exploiting the Markov jump nature of the resulting system and the properties of the update instants, the controller is co-designed with the scheduling policy. Simulation results show that introducing the stochastic communication logic reduces the number of state updates, lessens the impact of limited network bandwidth on control performance, and improves the dynamic performance of the system.

19.
On designing of sliding-mode control for stochastic jump systems
In this note, we consider the problems of stochastic stability and sliding-mode control for a class of linear continuous-time systems with stochastic jumps, in which the jumping parameters are modeled as a continuous-time, discrete-state homogeneous Markov process with right-continuous trajectories taking values in a finite set. Using a linear matrix inequality (LMI) approach, sufficient conditions are proposed to guarantee the stochastic stability of the underlying system. Then, a reaching-motion controller is designed such that the resulting closed-loop system can be driven onto the desired sliding surface in finite time. It is shown that the sliding-mode control problem for Markovian jump systems is solvable if a set of coupled LMIs has a solution. A numerical example is given to show the potential of the proposed techniques.
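As a sketch of the stochastic-stability part only (not the sliding-surface or reaching-law design), the code below checks the standard coupled Lyapunov condition for a continuous-time Markov jump linear system by solving the coupled equations as one linear system and testing whether the solutions are positive definite; the mode matrices and generator are assumptions, not taken from the paper.

```python
import numpy as np

# Stochastic stability test for dx = A_{r(t)} x dt, where r(t) is a Markov
# process with generator Pi. The system is stochastically stable iff the
# coupled Lyapunov equations
#   A_i' P_i + P_i A_i + sum_j Pi[i, j] P_j = -Q_i,   Q_i > 0,
# admit positive-definite solutions P_i (illustrative data below).
A = [np.array([[-1.0, 0.5],
               [ 0.0, -2.0]]),
     np.array([[-0.5, 1.0],
               [ 0.0, -1.5]])]
Pi = np.array([[-3.0, 3.0],
               [ 4.0, -4.0]])          # generator of the mode process
n, N = 2, 2
Q = [np.eye(n) for _ in range(N)]      # any positive-definite right-hand sides

# vectorise: vec(A_i' P_i + P_i A_i) = (I kron A_i' + A_i' kron I) vec(P_i)
I = np.eye(n)
M = np.zeros((N * n * n, N * n * n))
rhs = np.zeros(N * n * n)
for i in range(N):
    blk = slice(i * n * n, (i + 1) * n * n)
    M[blk, blk] += np.kron(I, A[i].T) + np.kron(A[i].T, I)
    for j in range(N):
        M[blk, slice(j * n * n, (j + 1) * n * n)] += Pi[i, j] * np.eye(n * n)
    rhs[blk] = -Q[i].reshape(-1)

P = np.linalg.solve(M, rhs)
P_mats = [P[i * n * n:(i + 1) * n * n].reshape(n, n) for i in range(N)]
stable = all(np.all(np.linalg.eigvalsh((Pm + Pm.T) / 2) > 0) for Pm in P_mats)
print("coupled Lyapunov solutions positive definite (stochastically stable):", stable)
```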

20.
李顺祥  田彦涛 《控制工程》2004,11(4):325-328
Based on the observation that the discrete-state dynamics of a hybrid system and the states of a Markov chain are both discrete, a class of hybrid systems whose discrete-state dynamics evolve as a Markov chain is proposed. Compared with traditional hybrid systems, this class of systems can capture the randomness of a hybrid system's discrete dynamic behaviour and can describe dynamics that change under random influences such as external environmental constraints and internal unexpected events. Based on the stability definitions for dynamic systems and stochastic process theory, a definition of stochastic stability for Markov switched linear systems is given, the stochastic stability problem of such systems is analysed, and necessary and sufficient conditions for determining stochastic stability are provided.
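For the discrete-time counterpart of such systems (x_{k+1} = A_{θ_k} x_k with θ_k a Markov chain), a well-known necessary and sufficient mean-square stability test is a spectral-radius condition on the second-moment dynamics; the sketch below applies it to invented data and is not the specific criterion derived in the paper.

```python
import numpy as np

# Mean-square stability test for x_{k+1} = A_{theta_k} x_k, where theta_k is a
# Markov chain with transition matrix Prob (illustrative data).
A = [np.array([[0.6, 0.3],
               [0.0, 0.5]]),
     np.array([[1.1, 0.0],
               [0.2, 0.7]])]           # mode 2 is unstable on its own
Prob = np.array([[0.4, 0.6],
                 [0.8, 0.2]])          # Prob[i, j] = P(theta_{k+1}=j | theta_k=i)

n, N = A[0].shape[0], len(A)
# second-moment dynamics: z_{k+1} = Lambda z_k, with
# Lambda = (Prob^T kron I) * blockdiag(A_i kron A_i)
blocks = np.zeros((N * n * n, N * n * n))
for i, Ai in enumerate(A):
    s = slice(i * n * n, (i + 1) * n * n)
    blocks[s, s] = np.kron(Ai, Ai)
Lambda = np.kron(Prob.T, np.eye(n * n)) @ blocks

rho = max(abs(np.linalg.eigvals(Lambda)))
print("spectral radius of the second-moment operator:", round(float(rho), 4))
print("mean-square stable:", rho < 1)
```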
