Similar Articles
20 similar articles found (search time: 0 ms)
1.
A stochastic control scheme is developed for scalar, discrete-time, linear-dynamic systems driven by Cauchy distributed process and measurement noises. When addressing the optimal control problem for such systems, the standard quadratic cost criteria cannot be used. In this study we introduce a new objective function that is functionally similar to the Cauchy probability density function. The performance index, defined as the expectation of this objective function with respect to the Cauchy densities, exists. The dynamic programming solution to the fixed and finite horizon optimal control problem that uses this performance index appears to be intractable. Therefore, a moving horizon optimal model predictive control problem is implemented, for which the conditional expected value of the objective function and its gradients can be computed in closed form and without assumptions such as certainty equivalence. Numerical results are shown for this m-step model predictive optimal controller and compared to a similar Linear–Exponential–Gaussian model predictive controller. An essential difference between the Cauchy and Gaussian controllers when applied to a system with Cauchy noises is that, while the Gaussian controller is linear and reacts strongly to all noise pulses, the Cauchy controller can differentiate between measurement and process noise pulses, ignoring the former while responding to the latter. This property of the Cauchy controller occurs when an impulsive measurement noise is more likely than an impulsive process noise. The Cauchy and Gaussian controllers react similarly when applied to a system with Gaussian noises, demonstrating the robustness of the proposed control scheme.
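The reason the quadratic criterion fails here is that Cauchy noise has no finite variance, so a quadratic cost has no finite expectation. A minimal sketch of the contrast, assuming an objective of the form log(1 + (x/γ)²), the negative log of an unnormalised Cauchy density; the exact form and scale in the paper may differ:

```python
import math

def quadratic_cost(x):
    return x * x

def cauchy_like_cost(x, gamma=1.0):
    # Negative log of an (unnormalised) Cauchy density: grows only
    # logarithmically in |x|, so its expectation exists under Cauchy noise.
    return math.log(1.0 + (x / gamma) ** 2)

# A 100x larger deviation costs 10,000x more under the quadratic criterion,
# but only about 13x more under the Cauchy-like criterion.
q_small, q_large = quadratic_cost(1.0), quadratic_cost(100.0)
c_small, c_large = cauchy_like_cost(1.0), cauchy_like_cost(100.0)
print(q_large / q_small)  # 10000.0
print(c_large / c_small)  # ~13.3
```

This bounded penalty on large deviations is what lets a controller built on such a criterion shrug off isolated impulsive measurement outliers.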

2.
This paper is concerned with the high-performance robust control of discrete-time linear time-invariant (LTI) systems with semi-algebraic uncertainty regions. It is assumed that a robustly stabilizing static controller is given whose gain depends polynomially on the uncertain variables. The problem of tuning this parameter-dependent gain with respect to a prescribed quadratic cost function is formulated as a sum-of-squares (SOS) optimization. This method leads to a near-optimal controller whose performance is better than that of the initial controller. It is shown that the results derived in the present work encompass the ones obtained in a recent paper. The efficacy of the results is elucidated by an example.

3.
The problem of minimizing a weighted sum of the input and output variances of a linear scalar system is considered. This is viewed as an optimization problem with the parameters of the regulator as unknowns, and it is proved that if the regulator is flexible enough, then every local minimum of this problem is a global minimum. This result is useful if a gradient method is used to find the optimal regulator.
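The claim can be illustrated on a toy scalar instance, assuming y = x, unit process-noise variance, and a pure gain regulator u = −kx (all illustrative choices, not the paper's general setup): a plain gradient method lands on the same minimizer as an exhaustive grid search.

```python
import numpy as np

# Scalar plant x[k+1] = a*x[k] + u[k] + e[k], Var(e) = 1, feedback u = -k*x.
# The stationary state variance P solves P = (a-k)^2 * P + 1, so
# P = 1 / (1 - (a-k)^2) for |a-k| < 1.
# Cost: J(k) = rho*Var(u) + Var(y) = (rho*k^2 + 1) * P  (taking y = x).
a, rho = 0.9, 0.5

def cost(k):
    cl = a - k
    if abs(cl) >= 1.0:
        return np.inf        # closed loop unstable: infinite variance
    return (rho * k * k + 1.0) / (1.0 - cl * cl)

# Plain gradient descent with a central-difference derivative.
k, step, h = 0.5, 0.01, 1e-6
for _ in range(5000):
    g = (cost(k + h) - cost(k - h)) / (2 * h)
    k -= step * g

# Cross-check against a fine grid search over the stabilizing gains:
# the local minimum found by descent is the global one.
grid = np.linspace(a - 0.999, a + 0.999, 20001)
k_grid = grid[np.argmin([cost(v) for v in grid])]
print(abs(k - k_grid) < 1e-3)  # True
```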

4.
ABSTRACT

In this paper, the preview control problem for a class of linear continuous-time stochastic systems with multiplicative noise is studied based on the augmented error system method. First, a deterministic assistant system is introduced, and the original system is transformed into the assistant system. An integrator is then employed to ensure that the output of the closed-loop system tracks the reference signal accurately. Second, the augmented error system, which includes the integrator vector, the control vector and the reference signal, is constructed from the transformed system. As a result, the tracking problem is transformed into the optimal control problem for the augmented error system, and the optimal control input is obtained by the dynamic programming method. This control input is taken as the preview controller of the original system. For a linear stochastic system with multiplicative noise, the difficulty that an augmented error system cannot be constructed by the derivation method is resolved in this paper, and the existence and uniqueness of the solution of the Riccati equation corresponding to the stochastic augmented error system are discussed. Numerical simulations show that the preview controller designed in this paper is very effective.

5.
In this paper, the optimal control is derived for one-dimensional stochastic linear fractional systems defined in terms of the Riemann–Liouville fractional derivative. It is assumed that the state is completely observable and that full state information is available. The formulation leads to a set of stochastic fractional forward and backward equations in the Riemann–Liouville sense. The proposed method has been verified via numerical simulations, which show the effectiveness of the fractional stochastic optimal control algorithm.

6.
The aim of the present paper is to provide an optimal solution to the H2 state-feedback and output-feedback control problems for stochastic linear systems subject both to Markov jumps and to multiplicative white noise. It is proved that in the state-feedback case the optimal solution is a static gain, which is also optimal in the class of all higher-order controllers. In the output-feedback case the optimal H2 controller has the same order as the given stochastic system. The realization of the optimal controllers depends on the stabilizing solutions of appropriate systems of Riccati-type coupled equations. An effective, convergent iterative algorithm to compute these stabilizing solutions is also presented. The paper gives an illustrative numerical example allowing the results obtained by the proposed design approach to be compared with those presented in the recent control literature.
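As a rough illustration of computing stabilizing Riccati solutions iteratively, the sketch below runs a plain fixed-point iteration in the special case of a single Markov mode and no multiplicative noise, where the coupled system collapses to the standard discrete algebraic Riccati equation (the matrices are illustrative, not from the paper):

```python
import numpy as np

# Fixed-point iteration on the discrete-time Riccati equation
#   P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])          # discretized double integrator
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = np.zeros((2, 2))
for _ in range(2000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # gain at current P
    P = A.T @ P @ A - A.T @ P @ B @ K + Q               # Riccati map

# At convergence, P is a fixed point: the Riccati residual vanishes.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
residual = A.T @ P @ A - A.T @ P @ B @ K + Q - P
print(np.max(np.abs(residual)) < 1e-8)  # True
```

With Markov jumps, one such iteration runs per mode, coupled through the jump rates; the convergence mechanism is the same.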

7.
The stochastic regulation problem for linear systems with state- and control-dependent noise and a noisy linear output equation is considered. The optimal quadratic cost output-feedback control law in a class of linear controllers is found. This problem was first addressed in the early 1970s and solved, in the complete information case, by Wonham. In this paper we give the solution of the problem in the incomplete information case, that is, for a linear output equation corrupted by Gaussian noise. Moreover, a different method is used here, giving the solution in a more direct way even in the complete information case.

8.
When designing optimal controllers for any system, it is often the case that the true state of the system is unknown to the controller. Imperfect state information must be taken into account in the controller’s design in order to preserve its optimality. The same is true when performing reachability calculations. To estimate the probability that the state of a stochastic system reaches, or stays within, some set of interest in a given time horizon, it is necessary to find a controller that drives the system to that set with maximum probability, given the controller’s knowledge of the true state of the system. To date, little work has been done on stochastic reachability calculations with partially observable states. The work that has been done relies on converting the reachability optimization problem to one with an additive cost function, for which theoretical results are well known. Our approach is to preserve the multiplicative cost structure when deriving a sufficient statistic that reduces the problem to one of perfect state information. Our transformation includes a change of measure that simplifies the distribution of the sufficient statistic conditioned on its previous value. We develop a dynamic programming recursion for the solution of the equivalent perfect information problem, proving that the recursion is valid, that an optimal solution exists, and that it yields the same solution as the original problem. We also show that our results are equivalent to those for the reformulated additive cost problem, so that such a reformulation is not required.

9.
We consider a reach–avoid specification for a stochastic hybrid dynamical system defined as reaching a goal set at some finite time, while avoiding an unsafe set at all previous times. In contrast with earlier works which consider the target and avoid sets as deterministic, we consider these sets to be probabilistic. An optimal control policy is derived which maximizes the reach–avoid probability. Special structure on the stochastic sets is exploited to make the computation tractable for large space dimensions.

10.
This paper deals with the design of a controller capable of tracking any realisable reference trajectory while rejecting measurement noise. We consider discrete-time, time-varying, multi-input multi-output stable linear systems and a proportional-integral-derivative (PID) controller. A novel recursive algorithm estimating the time-varying PID gains is proposed; its development is based on minimising a stochastic performance index. The implementation of the proposed algorithm is described, and boundedness of trajectories and convergence characteristics are established for a discretised continuous-time model. Simulation results are included to illustrate the performance capabilities of the proposed algorithm.
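A stripped-down sketch of the underlying loop, with fixed illustrative gains in place of the recursively estimated time-varying gains, and a simple stable first-order plant (both are assumptions, not the paper's setup):

```python
# Minimal discrete-time PID loop on a stable first-order plant.
dt = 0.1
kp, ki, kd = 2.0, 1.0, 0.1   # illustrative fixed gains, not from the paper

def simulate(steps=500, ref=1.0):
    y, integ, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = ref - y
        integ += err * dt                 # integral action
        deriv = (err - prev_err) / dt     # derivative action
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        # Plant: y[k+1] = 0.9*y[k] + 0.1*u[k]  (stable first-order system)
        y = 0.9 * y + 0.1 * u
    return y

# Integral action drives the steady-state tracking error to zero.
print(abs(simulate() - 1.0) < 1e-3)  # True
```

The paper's contribution is replacing such fixed gains with online estimates that adapt as the plant varies in time.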

11.
T. Sasagawa, J.L. Willems, Automatica, 1996, 32(12): 1741–1747
For deterministic time-invariant linear systems, stability results are quite simple. For stochastic systems, however, even linear ones, they are rather complicated. In this paper, some results on second mean stability (mean-square stability) of time-invariant linear systems with multiplicative noise are summarized, and a parametrization method for obtaining an exact bound for pth mean stability (p ≥ 2) via second mean stability is stated. Moreover, relations between pth mean stabilities for various values of p are given. On the basis of these relations, a simpler method for obtaining sufficient conditions for pth mean stability is also given, though the resulting sufficient bound is, of course, more conservative. Comparative studies of the various conditions are made using examples.
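The gap between deterministic stability and mean-square stability is already visible in the simplest scalar case (an illustration of the concept, not the paper's parametrization method):

```python
# Scalar system with multiplicative noise: x[k+1] = (a + s*w[k]) * x[k],
# with w[k] i.i.d., zero mean, unit variance.  The second moment obeys
#   m[k+1] = (a^2 + s^2) * m[k],
# so mean-square stability requires a^2 + s^2 < 1: the deterministic part
# can be stable (|a| < 1) while the stochastic system is not.
def mean_square_stable(a, s):
    return a * a + s * s < 1.0

print(mean_square_stable(0.9, 0.1))   # True:  0.81 + 0.01 < 1
print(mean_square_stable(0.9, 0.5))   # False: 0.81 + 0.25 >= 1
```

Higher pth moments impose progressively stricter conditions, which is why relating pth mean stability back to the tractable second-moment case is useful.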

12.
A well-known result in linear control theory is the so-called “small gain” theorem: given two plants with transfer matrix functions T1 and T2 in H∞ such that ‖T1‖∞ < γ and ‖T2‖∞ < 1/γ, coupling T2 to T1 via u2 = y1 and u1 = y2 yields an internally stable closed-loop system. The aim of the present paper is to describe a corresponding result for stochastic systems with state-dependent white noise.

13.
This paper discusses the infinite time horizon nonzero-sum linear quadratic (LQ) differential games of stochastic systems governed by Itô's equation with state- and control-dependent noise. First, the nonzero-sum LQ differential games are formulated by applying the results of stochastic LQ problems. Second, under the assumption of mean-square stabilizability of the stochastic systems, necessary and sufficient conditions for the existence of the Nash strategy are presented by means of four coupled stochastic algebraic Riccati equations. Moreover, in order to demonstrate the usefulness of the obtained results, the stochastic H2/H∞ control with state-, control- and external-disturbance-dependent noise is discussed as an immediate application.

14.
A multistage stochastic programming formulation is presented for monthly production planning of a hydro-thermal system. Stochasticity from variations in water reservoir inflows and fluctuations in demand for electric energy is considered explicitly. The problem can be solved efficiently via nested Benders decomposition. The solution is implemented in a model predictive control setup, and the performance of this control technique is demonstrated in simulations. Tuning parameters, such as the prediction horizon and the shape of the stochastic programming tree, are identified and their effects are analyzed.

15.
Computational models for the neural control of movement must take into account the properties of sensorimotor systems, including the signal-dependent intensity of the noise and the transmission delay affecting the signal conduction. For this purpose, this paper presents an algorithm for model-based control and estimation of a class of linear stochastic systems subject to multiplicative noise affecting the control and feedback signals. The state estimator based on Kalman filtering is allowed to take into account the current feedback to compute the current state estimate. The optimal feedback control process is adapted accordingly. The resulting estimation error is smaller than the estimation error obtained when the current state must be predicted based on the last feedback signal, which reduces variability of the simulated trajectories. In particular, the performance of the present algorithm is good in a range of feedback delay that is compatible with the delay induced by the neural transmission of the sensory inflow.
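The benefit of letting the estimator use the current feedback can be sketched with a plain scalar Kalman filter (constant, illustrative noise intensities; the paper's model additionally has signal-dependent noise and transmission delay):

```python
# Scalar Kalman filter: compare the error variance of the "filtered"
# estimate (which incorporates the current measurement) with the one-step
# "predicted" estimate (which uses only past measurements).
a, q, r = 0.95, 0.1, 0.2   # dynamics gain, process and measurement noise variances

p = 1.0                    # initial error variance
for _ in range(100):
    p_pred = a * a * p + q            # predict: variance before the measurement
    k_gain = p_pred / (p_pred + r)    # Kalman gain
    p = (1.0 - k_gain) * p_pred       # update: variance after the measurement

# Incorporating the current measurement always reduces the error
# variance relative to pure prediction.
p_pred = a * a * p + q
p_filt = (1.0 - p_pred / (p_pred + r)) * p_pred
print(p_filt < p_pred)  # True
```

The same comparison underlies the paper's result: estimating from the current feedback rather than predicting from the last one shrinks the estimation error and hence trajectory variability.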

16.
Xinmin, Huanshui, Lihua, Automatica, 2009, 45(9): 2067–2073
This paper considers the stochastic LQR problem for systems with input delay and stochastic parameter uncertainties in the state and input matrices. The problem is known to be difficult due to the presence of interactions among the delayed input channels and the stochastic parameter uncertainties in the channels. The key to our approach is to convert the LQR control problem into an optimization one in a Hilbert space for an associated backward stochastic model and then obtain the optimal solution to the stochastic LQR problem by exploiting the dynamic programming approach. Our solution is given in terms of two generalized Riccati difference equations (RDEs) of the same dimension as that of the plant.

17.
This paper discusses the design of the optimal preview controller for a linear continuous-time stochastic control system over a finite-time horizon, using the augmented error system method. First, an assistant system is introduced for state shifting. Then, to overcome the difficulty that the state equation of the stochastic control system cannot be differentiated because of Brownian motion, an integrator is introduced. Thus, the augmented error system, which contains the integrator vector, the control input, the reference signal, the error vector and the state of the system, is constructed. This transforms the tracking problem of optimal preview control for the linear stochastic system into the optimal output tracking problem for the augmented error system. Using dynamic programming from stochastic control theory, the optimal controller with previewable signals for the augmented error system, which coincides with the controller of the original system, is obtained. Finally, numerical simulations show the effectiveness of the controller.

18.
In this paper, we deal with discrete-time linear periodic/time-invariant systems with polytopic-type uncertainties and propose a new linear matrix inequality (LMI)-based method for robust state-feedback controller synthesis. In stark contrast with existing approaches that are confined to memoryless static controller synthesis, we explore dynamical controller synthesis and reveal a particular periodically time-varying memory state-feedback controller (PTVMSFC) structure that allows LMI-based synthesis. In the context of robust controller synthesis, we prove rigorously that the proposed design method encompasses the well-known extended-LMI-based static controller synthesis methods as particular cases. Through numerical experiments, we demonstrate that the suggested design method is indeed effective in achieving less conservative results, under both periodic and time-invariant settings. We finally derive a viable test to verify that the designed robust PTVMSFC is “exact” in the sense that it attains the best achievable robust performance. This exactness verification test works fine in practice, and we will show via a numerical example that exact robust control is indeed attained by designing PTVMSFCs, even for such a problem where the standard memoryless static state-feedback fails.

19.
In this work, probabilistic reachability over a finite horizon is investigated for a class of discrete time stochastic hybrid systems with control inputs. A suitable embedding of the reachability problem in a stochastic control framework reveals that it is amenable to two complementary interpretations, leading to dual algorithms for reachability computations. In particular, the set of initial conditions providing a certain probabilistic guarantee that the system will keep evolving within a desired ‘safe’ region of the state space is characterized in terms of a value function, and ‘maximally safe’ Markov policies are determined via dynamic programming. These results are of interest not only for safety analysis and design, but also for solving those regulation and stabilization problems that can be reinterpreted as safety problems. The temperature regulation problem presented in the paper as a case study is one such case.
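The value-function recursion behind such safety computations can be sketched on a toy finite-state, autonomous Markov chain (a simplified stand-in for the controlled stochastic hybrid systems treated in the paper):

```python
import numpy as np

# Backward dynamic programming for probabilistic safety:
#   V_N(x) = 1_safe(x),   V_k(x) = 1_safe(x) * sum_y P[x, y] * V_{k+1}(y),
# so V_0(x) = P(stay in the safe set for N steps | x_0 = x).
P = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.6,  0.1 ],
              [0.0, 0.0,  1.0 ]])    # state 2 is absorbing and unsafe
safe = np.array([1.0, 1.0, 0.0])     # safe set = {0, 1}
N = 10

V = safe.copy()
for _ in range(N):
    V = safe * (P @ V)

# Cross-check: the same probability is the row sum of the safe-to-safe
# sub-matrix raised to the N-th power.
Psub = P[:2, :2]
V_check = np.linalg.matrix_power(Psub, N).sum(axis=1)
print(np.allclose(V[:2], V_check))  # True
```

With control inputs, the sum over successors is replaced by a maximum over actions, yielding the maximally safe Markov policy at each step.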

20.
Congestion control as a stochastic control problem with action delays (cited 6 times: 0 self-citations, 6 external)
Eitan, Tamer, R., Automatica, 1999, 35(12): 1937–1950
We consider the design of explicit rate-based congestion control for high-speed communication networks and show that this can be formulated as a stochastic control problem where the controls of different users enter the system dynamics with different delays. We discuss the existence, derivation and the structure of the optimal controller, as well as of suboptimal controllers of the certainty-equivalent type — a terminology that is precisely defined in the paper for the specific context of the congestion control problem considered. We consider, in particular, two certainty-equivalent controllers which are easy to implement, and show that they are stabilizing, i.e., they lead to bounded infinite-horizon average cost, and stable queue dynamics. Further, these controllers perform well in simulations.
