Similar Literature
20 similar records found
1.
A stochastic optimal control method for nonlinear hysteretic systems under externally and/or parametrically random excitations is presented and illustrated with an example of a hysteretic column system. A hysteretic system subject to random excitation is first replaced by a nonlinear non-hysteretic stochastic system. An Itô stochastic differential equation for the total energy of the system as a one-dimensional controlled diffusion process is derived by using the stochastic averaging method of energy envelope. A dynamical programming equation is then established based on the stochastic dynamical programming principle and solved to yield the optimal control force. Finally, the responses of the uncontrolled and controlled systems are evaluated to determine the control efficacy. It is shown by numerical results that the proposed stochastic optimal control method is more effective and efficient than other optimal control methods.
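As a generic illustration of the scheme just described (a hedged sketch in assumed notation, not equations taken from the paper), the energy-envelope averaging typically reduces the controlled system to a one-dimensional Itô equation for the total energy H, to which a dynamical programming equation of ergodic-control type is attached:

\[ dH = \big[\, \overline{m}(H) + \langle u\,\dot q \rangle \,\big]\, dt + \overline{\sigma}(H)\, dB(t), \]

\[ \min_{u} \Big\{ f(H,u) + \big[\, \overline{m}(H) + \langle u\,\dot q \rangle \,\big]\, \frac{dV}{dH} + \tfrac{1}{2}\, \overline{\sigma}^{2}(H)\, \frac{d^{2}V}{dH^{2}} \Big\} = \gamma, \]

where \(\overline{m}\) and \(\overline{\sigma}\) are the averaged drift and diffusion coefficients, \(\langle u\,\dot q \rangle\) is the time-averaged power input of the control, \(f(H,u)\) a running cost, \(V\) the value function and \(\gamma\) the optimal average cost; minimizing the bracket with respect to u yields the optimal control force.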

2.
The optimal bounded control of stochastically excited systems with Duhem hysteretic components for maximizing system reliability is investigated. The Duhem hysteretic force is transformed into energy-dependent damping and stiffness by the energy dissipation balance technique, so that the controlled system is transformed into an equivalent non-hysteretic system. Stochastic averaging is then implemented to obtain the Itô stochastic equation associated with the total energy of the vibrating system, appropriate for evaluating system responses. Dynamical programming equations for maximizing system reliability are formulated by the dynamical programming principle. The optimal bounded control is derived from the maximization condition in the dynamical programming equation. Finally, the conditional reliability function and mean first-passage failure time of the optimally controlled Duhem systems are numerically solved from the Kolmogorov equations. The proposed procedure is illustrated with a representative example.
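A hedged sketch of the energy dissipation balance idea mentioned above (generic form with assumed symbols, not the paper's own equations): the hysteretic restoring force z(q, \dot q) is replaced by energy-dependent equivalent stiffness and damping,

\[ z(q,\dot q) \;\approx\; k_{e}(H)\, q + c_{e}(H)\, \dot q , \]

where \(k_{e}(H)\) is chosen so that the equivalent spring stores the same potential energy over a quasi-cycle as the hysteretic component, and \(c_{e}(H)\) so that the equivalent damper dissipates the same energy per quasi-cycle (the area of the hysteresis loop).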

3.
A strategy is proposed based on the stochastic averaging method for quasi non-integrable Hamiltonian systems and the stochastic dynamical programming principle. The proposed strategy can be used to design nonlinear stochastic optimal controls that minimize the response of quasi non-integrable Hamiltonian systems subject to Gaussian white noise excitation. By using the stochastic averaging method for quasi non-integrable Hamiltonian systems, the equations of motion of a controlled quasi non-integrable Hamiltonian system are reduced to a one-dimensional averaged Itô stochastic differential equation. By using the stochastic dynamical programming principle, the dynamical programming equation for minimizing the response of the system is formulated. The optimal control law is derived from the dynamical programming equation and the bounded control constraints. The response of optimally controlled systems is predicted through solving the FPK equation associated with the Itô stochastic differential equation. An example is worked out in detail to illustrate the application of the proposed control strategy.

4.
A nonlinear stochastic optimal control strategy for minimizing the first-passage failure of quasi integrable Hamiltonian systems (multi-degree-of-freedom integrable Hamiltonian systems subject to light damping and weak random excitations) is proposed. The equations of motion for a controlled quasi integrable Hamiltonian system are reduced to a set of averaged Itô stochastic differential equations by using the stochastic averaging method. Then, the dynamical programming equations and their associated boundary and final-time conditions for the control problems of maximization of reliability and mean first-passage time are formulated. The optimal control law is derived from the dynamical programming equations and the control constraints. The final dynamical programming equations for these control problems are determined, and their relationships to the backward Kolmogorov equation governing the conditional reliability function and the Pontryagin equation governing the mean first-passage time are separately established. The conditional reliability function and the mean first-passage time of the controlled system are obtained by solving the final dynamical programming equations or their equivalent Kolmogorov and Pontryagin equations. An example is presented to illustrate the application and effectiveness of the proposed control strategy.
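For orientation only (a generic one-dimensional sketch with assumed notation, not the paper's equations): once the optimal control is inserted, the conditional reliability function R(t | H_0) and the mean first-passage time \mu(H_0) of the averaged Hamiltonian process are governed by a backward Kolmogorov equation and a Pontryagin equation of the form

\[ \frac{\partial R}{\partial t} = \overline{m}(H_{0})\, \frac{\partial R}{\partial H_{0}} + \tfrac{1}{2}\, \overline{\sigma}^{2}(H_{0})\, \frac{\partial^{2} R}{\partial H_{0}^{2}}, \qquad R(0\,|\,H_{0}) = 1, \quad R(t\,|\,H_{c}) = 0, \]

\[ \overline{m}(H_{0})\, \frac{d\mu}{dH_{0}} + \tfrac{1}{2}\, \overline{\sigma}^{2}(H_{0})\, \frac{d^{2}\mu}{dH_{0}^{2}} = -1, \qquad \mu(H_{c}) = 0, \]

where H_c is an assumed critical energy level defining first-passage failure and \(\overline{m}\), \(\overline{\sigma}\) are the averaged drift and diffusion coefficients of the controlled system.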

5.
Zhu, W. Q., Deng, M. L. Nonlinear Dynamics, 2004, 35(1): 81-100
A strategy for designing optimal bounded control to minimize the response of quasi non-integrable Hamiltonian systems is proposed based on the stochastic averaging method for quasi non-integrable Hamiltonian systems and the stochastic dynamical programming principle. The equations of motion of a controlled quasi non-integrable Hamiltonian system are first reduced to a one-dimensional averaged Itô stochastic differential equation for the Hamiltonian by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. Then, the dynamical programming equation for the control problem of minimizing the response of the averaged system is formulated based on the dynamical programming principle. The optimal control law is derived from the dynamical programming equation and control constraints without solving the equation. The response of optimally controlled systems is predicted through solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the completely averaged Itô equation. Finally, two examples are worked out in detail to illustrate the application and effectiveness of the proposed control strategy.
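As a hedged illustration of how such a bounded control law follows without solving the dynamical programming equation (generic notation assumed here, not quoted from the paper): if the control on the i-th degree of freedom is bounded by |u_i| \le b_i and the value function V(H) is increasing in H, minimizing the control-dependent term of the dynamical programming equation gives the bang-bang law

\[ u_{i}^{*} = -\, b_{i}\, \operatorname{sgn}\!\Big( \frac{\partial V}{\partial H}\, \frac{\partial H}{\partial p_{i}} \Big) = -\, b_{i}\, \operatorname{sgn}(\dot q_{i}), \]

i.e. each bounded actuator always opposes the corresponding generalized velocity; the last equality assumes \(\partial V / \partial H > 0\) and \(\partial H / \partial p_{i} = \dot q_{i}\).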

6.
A bounded optimal control strategy for strongly non-linear systems under non-white wide-band random excitation with actuator saturation is proposed. First, the stochastic averaging method for controlled strongly non-linear systems under wide-band random excitation is introduced using generalized harmonic functions. Then, the dynamical programming equation for the saturated control problem is formulated from the partially averaged Itô equation based on the dynamical programming principle. The optimal control, consisting of an unbounded optimal control and a bounded bang-bang control, is determined by solving the dynamical programming equation. Finally, the response of the optimally controlled system is predicted by solving the reduced Fokker-Planck-Kolmogorov (FPK) equation associated with the completely averaged Itô equation. An example is given to illustrate the proposed control strategy. Numerical results show that the proposed control strategy has high control effectiveness and efficiency and that chattering is reduced significantly compared with the bang-bang control strategy.

7.
In this paper, two different control strategies designed to alleviate the response of quasi partially integrable Hamiltonian systems subjected to stochastic excitation are proposed. First, by using the stochastic averaging method for quasi partially integrable Hamiltonian systems, an n-DOF controlled quasi partially integrable Hamiltonian system with stochastic excitation is converted into a set of partially averaged Itô stochastic differential equations. Then, the dynamical programming equation associated with the partially averaged Itô equations is formulated by applying the stochastic dynamical programming principle. In the first control strategy, the optimal control law is derived from the dynamical programming equation and the control constraints without solving the dynamical programming equation. In the second control strategy, the optimal control law is obtained by solving the dynamical programming equation. Finally, the responses of both the controlled and uncontrolled systems are predicted through solving the Fokker-Planck-Kolmogorov equation associated with the fully averaged Itô equations. An example is worked out to illustrate the application and effectiveness of the two proposed control strategies.

8.
An optimal vibration control strategy for partially observable nonlinear quasi Hamiltonian systems with actuator saturation is proposed. First, a controlled partially observable nonlinear system is converted into a completely observable linear control system of finite dimension based on the theorem due to Charalambous and Elliott. Then the partially averaged Itô stochastic differential equations and the dynamical programming equation associated with the completely observable linear system are derived by using the stochastic averaging method and the stochastic dynamical programming principle, respectively. The optimal control law is obtained by solving the final dynamical programming equation. The results show that the proposed control strategy has high control effectiveness and control efficiency.

9.
Zhu, W. Q., Deng, M. L., Huang, Z. L. Nonlinear Dynamics, 2003, 33(2): 189-207
The optimal bounded control of quasi-integrable Hamiltonian systems with wide-band random excitation for minimizing their first-passage failure is investigated. First, a stochastic averaging method for multi-degree-of-freedom (MDOF) strongly nonlinear quasi-integrable Hamiltonian systems with wide-band stationary random excitations using generalized harmonic functions is proposed. Then, the dynamical programming equations and their associated boundary and final-time conditions for the control problems of maximizing reliability and maximizing mean first-passage time are formulated based on the averaged Itô equations by applying the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and control constraints. The relationship between the dynamical programming equations and the backward Kolmogorov equation for the conditional reliability function and the Pontryagin equation for the conditional mean first-passage time of the optimally controlled system is discussed. Finally, the conditional reliability function, the conditional probability density and the mean of the first-passage time of an optimally controlled system are obtained by solving the backward Kolmogorov equation and the Pontryagin equation. The application of the proposed procedure and the effectiveness of the control strategy are illustrated with an example.

10.
A stochastic optimal control strategy for a slightly sagged cable using support motion in the cable axial direction is proposed. The nonlinear equation of in-plane cable motion is derived and reduced to the equations for the first two modes of cable vibration by using the Galerkin method. The partially averaged Itô equation for the controlled system energy is further derived by applying the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The dynamical programming equation for the controlled system energy with a performance index is established by applying the stochastic dynamical programming principle, and a stochastic optimal control law is obtained through solving the dynamical programming equation. A bilinear controller designed by the direct method of Lyapunov is introduced for comparison. The comparison between the two controllers shows that the proposed stochastic optimal control strategy is superior to the bilinear control strategy in terms of control effectiveness and efficiency.

11.
An optimal bounded control strategy for smart structure systems, modeled as controlled Hamiltonian systems with random excitations and noisy observations, is proposed. The basic dynamic equations for a smart structure system with smart sensors and actuators are first given. The nonlinear stochastic control system with noisy observations is then obtained from the simplified smart structure system, and the system is expressed by generalized Hamiltonian equations with control, random excitation and dissipative forces. The optimal control problem for nonlinear stochastic systems with noisy observations consists of two coupled parts: optimal state estimation and optimal response control based on the estimated states. According to the separation theorem, the probability density of the optimally estimated system is in general infinite-dimensional. The proposed optimal control strategy gives an approximate separate solution. First, the system state is estimated from the observations by the extended Kalman filter, which yields an estimated nonlinear system with controls and stochastic excitations whose probability density is finite-dimensional. Second, the dynamical programming equation for the estimated system is determined based on the stochastic dynamical programming principle. The control boundedness due to actuator saturation is considered, and the optimal bounded control law is obtained from the programming equation with the bounded control constraint. The optimal control depends on the estimated system state, which is determined from the noisy observations. The proposed optimal bounded control strategy is finally applied to a single-degree-of-freedom nonlinear stochastic system with control and noisy observation. Remarkable vibration control effectiveness is illustrated with numerical results. Thus the proposed optimal bounded control strategy is promising for application to nonlinear stochastic smart structure systems with noisy observations.
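For the estimation step only, a generic continuous-time extended Kalman filter is sketched below, with all symbols assumed rather than taken from the paper: for a state equation \(\dot Z = f(Z,u) + G\,\xi(t)\) and observation \(Y = h(Z) + \eta(t)\), the state estimate \(\hat Z\) and error covariance \(P\) evolve as

\[ \dot{\hat Z} = f(\hat Z, u) + K(t)\, \big[\, Y - h(\hat Z) \,\big], \qquad K(t) = P\, H^{\mathsf T} R^{-1}, \]

\[ \dot P = F P + P F^{\mathsf T} + G\, Q\, G^{\mathsf T} - P\, H^{\mathsf T} R^{-1} H\, P, \qquad F = \frac{\partial f}{\partial Z}\Big|_{\hat Z}, \quad H = \frac{\partial h}{\partial Z}\Big|_{\hat Z}, \]

where Q and R are the intensity matrices of the excitation and observation noises; the bounded optimal control is then computed from the estimate \(\hat Z\).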

12.
Nonlinear stochastic optimal control of quasi-Hamiltonian systems
This paper reviews the research achievements of the past decade or so on the theory, methods and applications of nonlinear stochastic optimal control of quasi-Hamiltonian systems, including: the basic strategies of nonlinear stochastic optimal control based on the stochastic averaging method for quasi-Hamiltonian systems and the stochastic dynamical programming principle, namely response-minimization control, stochastic stabilization, control for minimizing first-passage failure, and control with a target probability density; the further studies on optimal control of partially observable systems, bounded control, time-delayed control, semi-active control and minimax control carried out for engineering applications, together with a comprehensive nonlinear stochastic optimal control strategy that takes these practical issues into account; and several applications of nonlinear stochastic optimal control to hysteretic systems, fractional-order systems and so on. The background related to these studies is introduced, and problems deserving further research are pointed out.

13.
A procedure for designing optimal bounded control to minimize the response of quasi-integrable Hamiltonian systems is proposed based on the stochastic averaging method for quasi-integrable Hamiltonian systems and the stochastic dynamical programming principle. The equations of motion of a controlled quasi-integrable Hamiltonian system are first reduced to a set of partially averaged Itô stochastic differential equations by using the stochastic averaging method for quasi-integrable Hamiltonian systems. Then, the dynamical programming equation for the control problem of minimizing the response of the averaged system is formulated based on the dynamical programming principle. The optimal control law is derived from the dynamical programming equation and control constraints without solving the dynamical programming equation. The response of optimally controlled systems is predicted through solving the Fokker-Planck-Kolmogorov equation associated with the fully averaged Itô equations. Finally, two examples are worked out in detail to illustrate the application and effectiveness of the proposed control strategy.

14.
A stochastic minimax semi-active control strategy for multi-degree-of-freedom (MDOF) strongly nonlinear systems under combined harmonic and wide-band noise excitations is proposed. First, a stochastic averaging procedure for controlled uncertain strongly nonlinear systems is introduced using generalized harmonic functions, and the control forces produced by magnetorheological (MR) dampers are split into a passive part and an active part. Then, a worst-case optimal control strategy is derived by solving a stochastic differential game problem. The worst-case disturbances and the optimal semi-active controls are obtained by solving the Hamilton–Jacobi–Isaacs (HJI) equations with the constraints of disturbance bounds and MR damper dynamics. Finally, the responses of the optimally controlled MDOF nonlinear systems are predicted by solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the fully averaged Itô equations. Two examples are worked out in detail to illustrate the proposed control strategy. The effectiveness of the proposed control strategy is verified against results from Monte Carlo simulation.
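A hedged sketch of the worst-case (minimax) formulation, in generic notation assumed here rather than quoted from the paper: for the averaged Hamiltonian process with bounded disturbance w, |w| \le w_0, and semi-active control u, the ergodic Hamilton–Jacobi–Isaacs equation takes the saddle-point form

\[ \min_{u \in U}\; \max_{|w| \le w_{0}} \Big\{ f(H,u,w) + \overline{m}(H,u,w)\, \frac{dV}{dH} + \tfrac{1}{2}\, \overline{\sigma}^{2}(H)\, \frac{d^{2}V}{dH^{2}} \Big\} = \lambda, \]

where the inner maximization yields the worst-case disturbance, the outer minimization (restricted by the MR damper dynamics) yields the semi-active control, and \(\lambda\) is the optimal average cost.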

15.
A new bounded optimal control strategy for multi-degree-of-freedom (MDOF) quasi nonintegrable-Hamiltonian systems with actuator saturation is proposed. First, an n-degree-of-freedom (n-DOF) controlled quasi nonintegrable-Hamiltonian system is reduced to a partially averaged Itô stochastic differential equation by using the stochastic averaging method for quasi nonintegrable-Hamiltonian systems. Then, a dynamical programming equation is established by using the stochastic dynamical programming principle, from which the optimal control law consisting of optimal unbounded control and bang–bang control is derived. Finally, the response of the optimally controlled system is predicted by solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the fully averaged Itô equation. An example of two controlled nonlinearly coupled Duffing oscillators is worked out in detail. Numerical results show that the proposed control strategy has high control effectiveness and efficiency and that chattering is reduced significantly compared with the bang–bang control strategy.

16.
Using the state-space-split (SSS) method, an approximate solution method is established for the probability density function (PDF) of a Duffing nonlinear system incorporating the Bouc-Wen hysteresis model under Gaussian white noise excitation, and the variation of its stochastic dynamic response is analyzed. First, the Bouc-Wen hysteresis model is introduced into the Duffing nonlinear system, and the effects of the geometric nonlinearity and the material nonlinearity of the nonlinear system on the dynamic response are considered separately. Subsequently, the model is ...
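For reference, a commonly used form of the Bouc-Wen hysteresis model named above (a generic textbook form, not necessarily the exact parametrization of the paper): the hysteretic variable z accompanying the displacement x evolves according to

\[ \dot z = A\, \dot x - \beta\, |\dot x|\, |z|^{\,n-1} z - \gamma\, \dot x\, |z|^{\,n}, \]

and the restoring force is the weighted combination \(\alpha k x + (1-\alpha) k z\) of elastic and hysteretic parts; A, \beta, \gamma, n, \alpha and k are model parameters (all symbols here are introduced only for illustration).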

17.
A stochastic fractional optimal control strategy for quasi-integrable Hamiltonian systems with fractional derivative damping is proposed. First, equations of the controlled system are reduced to a set of partially averaged Itô stochastic differential equations for the energy processes by applying the stochastic averaging method for quasi-integrable Hamiltonian systems, and a stochastic fractional optimal control problem (FOCP) of the partially averaged system for the quasi-integrable Hamiltonian system with fractional derivative damping is formulated. Then the dynamical programming equation for the ergodic control of the partially averaged system is established by using the stochastic dynamical programming principle and solved to yield the fractional optimal control law. Finally, an example is given to illustrate the application and effectiveness of the proposed control design procedure.
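As background for the fractional derivative damping named above (a standard definition, with the order symbol \(\alpha\) assumed here for illustration), the Caputo fractional derivative of order \(0 < \alpha < 1\) commonly used to model such damping is

\[ {}^{C}D^{\alpha} x(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{\dot x(\tau)}{(t-\tau)^{\alpha}}\, d\tau , \]

so that the damping force depends on the whole velocity history rather than on the instantaneous velocity alone.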

18.
Hysteretic systems belong to a class of typical strongly nonlinear systems; the hysteretic force depends not only on the instantaneous deformation of the system but also on its deformation history. Although the random vibration of hysteretic systems has been studied extensively, no exact closed-form solution for the probability density function of the random response of hysteretic systems has been obtained so far. In this paper, an approximate closed-form solution for the probability density function of the stationary response of the Bouc-Wen hysteretic system under Gaussian white noise excitation is obtained by an iterative weighted residual method. First, the stationary Gaussian probability density function of the system is obtained by the equivalent linearization method; this is then used to construct a weighting function, and the weighted residual method is applied to obtain a non-Gaussian probability density function of the system in exponential-polynomial form; finally, an iterative procedure is introduced to progressively optimize the weighting function and improve the accuracy of the computed results. The stationary response of a steel-fiber ceramsite concrete structure under random seismic excitation is taken as an example, in which the parameters of the Bouc-Wen model are identified from quasi-static test data by the least-squares method. Compared with Monte Carlo simulation results, the results of the equivalent linearization method are of poor accuracy; the results of the weighted residual method exhibit nonlinear features, but their accuracy is still unsatisfactory; the approximate closed-form solution obtained by the iterative weighted residual method agrees very well with the Monte Carlo simulation results; for cases of stronger random excitation, the progressive iterative weighted residual method is computationally efficient and the resulting analytical solution is of high accuracy. The results show that the obtained approximate closed-form solution not only has important practical value in civil engineering but can also serve as a benchmark for checking the accuracy of other methods for predicting the random response of nonlinear systems.
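A hedged sketch of the weighted residual construction described above, in assumed generic notation: the stationary PDF is approximated by an exponential-polynomial form

\[ p(\mathbf{x}) \approx C \exp\!\Big( \sum_{i} a_{i}\, \phi_{i}(\mathbf{x}) \Big), \]

where the \(\phi_i\) are polynomial basis functions and C is a normalization constant; substituting this ansatz into the reduced stationary FPK equation leaves a residual \(R(\mathbf{x}; \mathbf{a})\), and the coefficients \(a_i\) are determined from the weighted residual conditions

\[ \int W_{j}(\mathbf{x})\, R(\mathbf{x}; \mathbf{a})\, d\mathbf{x} = 0, \qquad j = 1, 2, \ldots, \]

with weighting functions \(W_j\) built from the Gaussian PDF given by equivalent linearization and then updated iteratively.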

19.
A procedure for designing a feedback control to asymptotically stabilize in probability a quasi non-integrable Hamiltonian system is proposed. First, a one-dimensional averaged Itô stochastic differential equation for the controlled Hamiltonian is derived from the given equations of motion of the system by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. Second, a dynamical programming equation for an ergodic control problem with an undetermined cost function is established based on the stochastic dynamical programming principle and solved to yield the optimal control law. Third, the asymptotic stability in probability of the system is analysed by examining the sample behaviors of the completely averaged Itô differential equation at its two boundaries. Finally, the cost function and the optimal control forces are determined by the requirement of stabilizing the system. Two examples are given to illustrate the application of the proposed procedure and the effect of control on the stability of the system.

20.
Zhu, W. Q., Huang, Z. L. Nonlinear Dynamics, 2003, 33(2): 209-224
A procedure for designing a feedback control to asymptotically stabilize, with probability one, a quasi-partially integrable Hamiltonian system is proposed. First, the averaged stochastic differential equations for the r controlled first integrals are derived from the equations of motion of a given system by using the stochastic averaging method for quasi-partially integrable Hamiltonian systems. Second, a dynamical programming equation for the ergodic control problem of the averaged system with undetermined cost function is established based on the dynamical programming principle. The optimal control law is derived from minimizing the dynamical programming equation with respect to control. Third, the asymptotic stability with probability one of the optimally controlled system is analyzed by evaluating the maximal Lyapunov exponent of the completely averaged Itô equations for the r first integrals. Finally, the cost function and optimal control forces are determined by the requirements of stabilizing the system. An example is worked out in detail to illustrate the application of the proposed procedure and the effect of optimal control on the stability of the system.
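As a hedged reminder of the stability criterion invoked above (a generic definition, with notation assumed for illustration): the maximal Lyapunov exponent of the completely averaged Itô equations for the first integrals \(I = (I_1, \ldots, I_r)\) may be taken as

\[ \lambda_{\max} = \lim_{t \to \infty} \frac{1}{t}\, \ln \lVert I(t) \rVert , \]

and the optimally controlled system is asymptotically stable with probability one when \(\lambda_{\max} < 0\); the undetermined cost function is then fixed by requiring this condition to hold.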
