Similar Literature
20 similar documents found.
1.
In this paper, two different control strategies designed to alleviate the response of quasi partially integrable Hamiltonian systems subjected to stochastic excitation are proposed. First, by using the stochastic averaging method for quasi partially integrable Hamiltonian systems, an n-DOF controlled quasi partially integrable Hamiltonian system with stochastic excitation is converted into a set of partially averaged Itô stochastic differential equations. Then, the dynamical programming equation associated with the partially averaged Itô equations is formulated by applying the stochastic dynamical programming principle. In the first control strategy, the optimal control law is derived from the dynamical programming equation and the control constraints without solving the dynamical programming equation. In the second control strategy, the optimal control law is obtained by solving the dynamical programming equation. Finally, the responses of both the controlled and uncontrolled systems are predicted through solving the Fokker-Planck-Kolmogorov equation associated with the fully averaged Itô equations. An example is worked out to illustrate the application and effectiveness of the two proposed control strategies.
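For orientation, the objects named above have, in a generic non-resonant setting, roughly the following form; the notation below is illustrative and not taken from the paper. Stochastic averaging gives partially averaged Itô equations for the r independent first integrals H = (H_1, ..., H_r) under the control u, and the stochastic dynamical programming principle gives an equation for the value function V of a finite-horizon cost with cost rate f:

\[
dH_i = \Big[\bar m_i(\mathbf H) + \Big\langle \tfrac{\partial H_i}{\partial p_j}\,u_j \Big\rangle\Big]\,dt + \bar\sigma_{ik}(\mathbf H)\,dB_k(t), \qquad i = 1,\dots,r,
\]
\[
\frac{\partial V}{\partial t} + \min_{\mathbf u}\Big\{ f(\mathbf H,\mathbf u) + \Big\langle \tfrac{\partial H_i}{\partial p_j}\,u_j \Big\rangle \frac{\partial V}{\partial H_i}\Big\} + \bar m_i\,\frac{\partial V}{\partial H_i} + \frac12\,\bar b_{ik}(\mathbf H)\,\frac{\partial^2 V}{\partial H_i\,\partial H_k} = 0,
\]

where \langle\cdot\rangle denotes the averaging operation, repeated indices are summed, and \bar b_{ik} = \bar\sigma_{ij}\bar\sigma_{kj}. In the first strategy the minimization over u is carried out in closed form under the control constraints, so V never has to be computed; in the second strategy the equation for V is solved.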

2.
A procedure for designing a feedback control to asymptotically stabilize in probability a quasi non-integrable Hamiltonian system is proposed. First, a one-dimensional averaged Itô stochastic differential equation for the controlled Hamiltonian is derived from the given equations of motion of the system by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. Second, a dynamical programming equation for an ergodic control problem with undetermined cost function is established based on the stochastic dynamical programming principle and solved to yield the optimal control law. Third, the asymptotic stability in probability of the system is analysed by examining the sample behaviors of the completely averaged Itô differential equation at its two boundaries. Finally, the cost function and the optimal control forces are determined by the requirement of stabilizing the system. Two examples are given to illustrate the application of the proposed procedure and the effect of control on the stability of the system.

3.
A nonlinear stochastic optimal control strategy for minimizing the first-passage failure of quasi integrable Hamiltonian systems (multi-degree-of-freedom integrable Hamiltonian systems subject to light damping and weak random excitations) is proposed. The equations of motion for a controlled quasi integrable Hamiltonian system are reduced to a set of averaged Itô stochastic differential equations by using the stochastic averaging method. Then, the dynamical programming equations and their associated boundary and final time conditions for the control problems of maximizing reliability and maximizing mean first-passage time are formulated. The optimal control law is derived from the dynamical programming equations and the control constraints. The final dynamical programming equations for these control problems are determined, and their relationships to the backward Kolmogorov equation governing the conditional reliability function and the Pontryagin equation governing the mean first-passage time are separately established. The conditional reliability function and the mean first-passage time of the controlled system are obtained by solving the final dynamical programming equations or their equivalent Kolmogorov and Pontryagin equations. An example is presented to illustrate the application and effectiveness of the proposed control strategy.
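In one-dimensional form, with the averaged Hamiltonian H as the single slow variable, the two equations referred to at the end have the following generic structure (the notation is illustrative, not the paper's): the backward Kolmogorov equation for the conditional reliability R(t | H_0) and the Pontryagin equation for the mean first-passage time \mu(H_0) of the optimally controlled system read

\[
\frac{\partial R}{\partial t} = \bar m(H_0)\,\frac{\partial R}{\partial H_0} + \frac12\,\bar\sigma^2(H_0)\,\frac{\partial^2 R}{\partial H_0^2},
\qquad R(0\mid H_0)=1,\quad R(t\mid H_c)=0,
\]
\[
\bar m(H_0)\,\frac{d\mu}{d H_0} + \frac12\,\bar\sigma^2(H_0)\,\frac{d^2\mu}{d H_0^2} = -1,
\qquad \mu(H_c)=0,
\]

where H_c is the critical (failure) level of the Hamiltonian, and \bar m and \bar\sigma^2 are the drift and diffusion coefficients of the fully averaged equation with the optimal control substituted in.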

4.
A new bounded optimal control strategy for multi-degree-of-freedom (MDOF) quasi nonintegrable Hamiltonian systems with actuator saturation is proposed. First, an n-degree-of-freedom (n-DOF) controlled quasi nonintegrable Hamiltonian system is reduced to a partially averaged Itô stochastic differential equation by using the stochastic averaging method for quasi nonintegrable Hamiltonian systems. Then, a dynamical programming equation is established by using the stochastic dynamical programming principle, from which the optimal control law consisting of optimal unbounded control and bang–bang control is derived. Finally, the response of the optimally controlled system is predicted by solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the fully averaged Itô equation. An example of two controlled nonlinearly coupled Duffing oscillators is worked out in detail. Numerical results show that the proposed control strategy has high control effectiveness and efficiency and that chattering is reduced significantly compared with the bang–bang control strategy.
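A minimal sketch of how such a combined law can be realized numerically is given below. The coefficient names and the quadratic control cost are assumptions introduced for illustration, not quantities defined in the paper.

```python
import numpy as np

def bounded_optimal_control(dV_dH, dH_dp, r_inv, u_max):
    """Sketch: for a quadratic control cost, the unconstrained minimizer of the
    dynamical programming equation is linear in dV/dH; clipping it at the
    actuator limit gives the unbounded optimal control inside the bound and
    bang-bang control on the bound."""
    u_unbounded = -0.5 * r_inv * dH_dp * dV_dH     # unconstrained minimizer
    return np.clip(u_unbounded, -u_max, u_max)     # saturate at the actuator limit

# Illustrative call with made-up values of the local gradients.
print(bounded_optimal_control(dV_dH=2.0, dH_dp=1.5, r_inv=4.0, u_max=1.0))
```

Whenever the unconstrained minimizer exceeds the saturation level, the returned force sits on the bound, which is where the bang–bang behavior, and hence the potential for chattering, comes from.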

5.
The asymptotic Lyapunov stability with probability one of multi-degree-of-freedom quasi-linear systems subject to multi-time-delayed feedback control and multiplicative (parametric) excitation by wide-band random processes is studied. First, the multi-time-delayed feedback control forces are expressed approximately in terms of the system state variables without time delay, and the system is converted into an ordinary quasi-linear system. Then, the averaged Itô stochastic differential equations are derived by using the stochastic averaging method for quasi-linear systems, and the expression for the largest Lyapunov exponent of the linearized averaged Itô equations is derived. Finally, the necessary and sufficient condition for the asymptotic Lyapunov stability with probability one of the trivial solution of the original system is obtained approximately by requiring the largest Lyapunov exponent to be negative. An example is worked out in detail to illustrate the application and validity of the proposed procedure and to show the effect of the time delay in feedback control on the largest Lyapunov exponent and the stability of the system.
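The sign test in the last step can also be cross-checked by direct simulation. The sketch below estimates the largest Lyapunov exponent of a linear Itô system dX = AX dt + BX dW by Euler–Maruyama integration with periodic renormalization; the matrices are illustrative assumptions, not the linearized averaged equations of the paper.

```python
import numpy as np

def largest_lyapunov_exponent(A, B, dt=1e-3, n_steps=200_000, seed=0):
    """Monte Carlo estimate of the largest Lyapunov exponent of the linear
    Ito system dX = A X dt + B X dW, integrated by Euler-Maruyama with
    periodic renormalization of the state norm."""
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        x = x + (A @ x) * dt + (B @ x) * dw   # Euler-Maruyama step
        norm = np.linalg.norm(x)
        log_growth += np.log(norm)
        x = x / norm                          # renormalize to avoid overflow/underflow
    return log_growth / (n_steps * dt)

# Illustrative (assumed) matrices: a lightly damped oscillator with
# multiplicative noise; asymptotic stability with probability one
# corresponds to a negative estimate.
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0, 0.0], [0.3, 0.0]])
print(largest_lyapunov_exponent(A, B))
```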

6.
Zhu, W. Q., Huang, Z. L. Nonlinear Dynamics, 2003, 33(2): 209-224.
A procedure for designing a feedback control to asymptotically stabilize, with probability one, a quasi-partially integrable Hamiltonian system is proposed. First, the averaged stochastic differential equations for the controlled r first integrals are derived from the equations of motion of a given system by using the stochastic averaging method for quasi-partially integrable Hamiltonian systems. Second, a dynamical programming equation for the ergodic control problem of the averaged system with undetermined cost function is established based on the dynamical programming principle. The optimal control law is derived from minimizing the dynamical programming equation with respect to control. Third, the asymptotic stability with probability one of the optimally controlled system is analyzed by evaluating the maximal Lyapunov exponent of the completely averaged Itô equations for the r first integrals. Finally, the cost function and optimal control forces are determined by the requirements of stabilizing the system. An example is worked out in detail to illustrate the application of the proposed procedure and the effect of optimal control on the stability of the system.

7.
Zhu, W. Q., Deng, M. L. Nonlinear Dynamics, 2004, 35(1): 81-100.
A strategy for designing optimal bounded control to minimize the response of quasi non-integrable Hamiltonian systems is proposed based on the stochastic averaging method for quasi non-integrable Hamiltonian systems and the stochastic dynamical programming principle. The equations of motion of a controlled quasi non-integrable Hamiltonian system are first reduced to a one-dimensional averaged Itô stochastic differential equation for the Hamiltonian by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. Then, the dynamical programming equation for the control problem of minimizing the response of the averaged system is formulated based on the dynamical programming principle. The optimal control law is derived from the dynamical programming equation and control constraints without solving the equation. The response of optimally controlled systems is predicted through solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the completely averaged Itô equation. Finally, two examples are worked out in detail to illustrate the application and effectiveness of the proposed control strategy.

8.
A bounded optimal control strategy for strongly non-linear systems under non-white wide-band random excitation with actuator saturation is proposed. First, the stochastic averaging method is introduced for controlled strongly non-linear systems under wide-band random excitation using generalized harmonic functions. Then, the dynamical programming equation for the saturated control problem is formulated from the partially averaged Itô equation based on the dynamical programming principle. The optimal control, consisting of the unbounded optimal control and the bounded bang-bang control, is determined by solving the dynamical programming equation. Finally, the response of the optimally controlled system is predicted by solving the reduced Fokker-Planck-Kolmogorov (FPK) equation associated with the completely averaged Itô equation. An example is given to illustrate the proposed control strategy. Numerical results show that the proposed control strategy has high control effectiveness and efficiency and that chattering is reduced significantly compared with the bang-bang control strategy.
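For reference, generalized harmonic functions of the kind used in this sort of averaging are commonly written as follows; this is a generic form for an oscillator with a symmetric potential U, given here for illustration rather than as the paper's specific system:

\[
X(t) = A(t)\cos\Phi(t), \qquad \dot X(t) = -A(t)\,\nu(A,\Phi)\sin\Phi(t), \qquad \Phi(t)=\phi(t)+\theta(t),
\]
\[
\nu(A,\Phi) = \sqrt{\frac{2\,[\,U(A)-U(A\cos\Phi)\,]}{A^2\sin^2\Phi}},
\]

so that the amplitude A(t) and phase angle \theta(t) vary slowly while the instantaneous frequency \nu depends on both; averaging over \Phi then produces the partially averaged Itô equation mentioned above.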

9.
A time-delayed stochastic optimal bounded control strategy for strongly non-linear systems under wide-band random excitations with actuator saturation is proposed based on the stochastic averaging method and the stochastic maximum principle. First, the partially averaged Itô equation for the system amplitude is derived by using the stochastic averaging method for strongly non-linear systems. The time-delayed feedback control force is approximated by a control force without time delay based on the periodically random behavior of the displacement and velocity of the system. The partially averaged Itô equation for the system energy is derived from that for the system amplitude by using the Itô formula and the relation between system amplitude and system energy. Then, the adjoint equation and maximum condition of the partially averaged control problem are derived based on the stochastic maximum principle. The saturated optimal control force is determined from the maximum condition by solving the forward–backward stochastic differential equations (FBSDEs). For infinite time-interval ergodic control, the adjoint variable is a stationary process and the FBSDE is reduced to an ordinary differential equation. Finally, the stationary probability density of the Hamiltonian and other response statistics of the optimally controlled system are obtained by solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the fully averaged Itô equation of the controlled system. For comparison, the optimal control forces obtained from the time-delayed bang–bang control and the control without considering time delay are also presented. An example is worked out to illustrate the proposed procedure and its advantages.
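Schematically, with the averaged system energy H as the state, the forward–backward pair and the maximum condition referred to above have the following one-dimensional form; sign and cost conventions vary, and the notation here is illustrative rather than the paper's:

\[
dH = b(H,u)\,dt + \sigma(H)\,dB(t) \quad\text{(forward, partially averaged)},
\]
\[
d\lambda = -\frac{\partial}{\partial H}\big[\,b(H,u)\,\lambda + \sigma(H)\,q - f(H,u)\,\big]\,dt + q\,dB(t) \quad\text{(adjoint, backward)},
\]
\[
u^{*} = \arg\max_{|u|\le u_{\max}} \big[\,b(H,u)\,\lambda - f(H,u)\,\big],
\]

where f is the running cost and (\lambda, q) solves the backward equation; since \sigma does not depend on u, it drops out of the maximum condition, and the saturation bound u_max makes the resulting force clip at the actuator limit. For ergodic control \lambda becomes stationary and the pair reduces to an ordinary differential equation, as stated above.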

10.
A strategy is proposed based on the stochastic averaging method for quasi non-integrable Hamiltonian systems and the stochastic dynamical programming principle. The proposed strategy can be used to design nonlinear stochastic optimal control to minimize the response of quasi non-integrable Hamiltonian systems subject to Gaussian white noise excitation. By using the stochastic averaging method for quasi non-integrable Hamiltonian systems, the equations of motion of a controlled quasi non-integrable Hamiltonian system are reduced to a one-dimensional averaged Itô stochastic differential equation. By using the stochastic dynamical programming principle, the dynamical programming equation for minimizing the response of the system is formulated. The optimal control law is derived from the dynamical programming equation and the bounded control constraints. The response of optimally controlled systems is predicted through solving the FPK equation associated with the Itô stochastic differential equation. An example is worked out in detail to illustrate the application of the proposed control strategy.

11.
The non-linear stochastic optimal control of quasi non-integrable Hamiltonian systems for minimizing their first-passage failure is investigated. A controlled quasi non-integrable Hamiltonian system is reduced to a one-dimensional controlled diffusion process of the averaged Hamiltonian by using the stochastic averaging method for quasi non-integrable Hamiltonian systems. The dynamical programming equations and their associated boundary and final time conditions for the problems of maximizing reliability and maximizing mean first-passage time are formulated. The optimal control law is derived from the dynamical programming equations and the control constraints. The dynamical programming equations for the maximum reliability problem and the maximum mean first-passage time problem are finalized, and their relationships to the backward Kolmogorov equation for the reliability function and the Pontryagin equation for the mean first-passage time, respectively, are pointed out. The boundary condition at zero Hamiltonian is discussed. Two examples are worked out to illustrate the application and effectiveness of the proposed procedure.

12.
A procedure for designing optimal bounded control to minimize the response of quasi-integrable Hamiltonian systems is proposed based on the stochastic averaging method for quasi-integrable Hamiltonian systems and the stochastic dynamical programming principle. The equations of motion of a controlled quasi-integrable Hamiltonian system are first reduced to a set of partially averaged Itô stochastic differential equations by using the stochastic averaging method for quasi-integrable Hamiltonian systems. Then, the dynamical programming equation for the control problem of minimizing the response of the averaged system is formulated based on the dynamical programming principle. The optimal control law is derived from the dynamical programming equation and control constraints without solving the dynamical programming equation. The response of optimally controlled systems is predicted through solving the Fokker-Planck-Kolmogorov equation associated with the fully averaged Itô equations. Finally, two examples are worked out in detail to illustrate the application and effectiveness of the proposed control strategy.

13.
A new procedure for designing optimal bounded control of stochastically excited multi-degree-of-freedom (MDOF) nonlinear viscoelastic systems is proposed based on the stochastic averaging method and the stochastic maximum principle. First, the system is formulated as a quasi-integrable Hamiltonian system with viscoelastic terms, and each viscoelastic term is replaced approximately by an elastic restoring force and a visco-damping force based on the randomly periodic behavior of the motion of the quasi-integrable Hamiltonian system. Thus, a stochastically excited MDOF nonlinear viscoelastic system is converted into an equivalent quasi-integrable Hamiltonian system without viscoelastic terms. Then, by applying stochastic averaging, the system is further reduced to a partially averaged system of lower dimension. The adjoint equation and maximum condition for the optimal control problem of the partially averaged system are derived by using the stochastic maximum principle, and the optimal bounded control force is determined from the maximum condition. Finally, the probability density and statistics of the stationary response of the optimally controlled system are obtained by solving the Fokker–Planck–Kolmogorov (FPK) equation associated with the fully averaged Itô equation of the controlled system. An example is worked out to illustrate the proposed procedure and its effectiveness.
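As an illustration of the replacement step, consider a viscoelastic force with an exponential relaxation kernel; the kernel and its parameters \gamma, \alpha are assumptions made for this sketch, not the paper's model. If the motion is approximately harmonic with frequency \omega, X(t) \approx A\cos(\omega t + \varphi), then for large t

\[
\gamma\int_0^t e^{-\alpha(t-s)}\,X(s)\,ds \;\approx\; \frac{\gamma\alpha}{\alpha^2+\omega^2}\,X(t) \;-\; \frac{\gamma}{\alpha^2+\omega^2}\,\dot X(t),
\]

i.e. the memory term splits into an elastic restoring force proportional to X and a visco-damping force proportional to \dot X, which is what allows the system to be treated as a quasi-integrable Hamiltonian system without viscoelastic terms.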

14.
A new procedure for designing optimal bounded control of quasi-nonintegrable Hamiltonian systems with actuator saturation is proposed based on the stochastic averaging method for quasi-nonintegrable Hamiltonian systems and the stochastic maximum principle. First, the stochastic averaging method for controlled quasi-nonintegrable Hamiltonian systems is introduced. The original control problem is converted into one for a partially averaged equation of the system energy together with a partially averaged performance index. Then, the adjoint equation and the maximum condition of the partially averaged control problem are derived based on the stochastic maximum principle. The bounded optimal control forces are obtained from the maximum condition by solving the forward–backward stochastic differential equations (FBSDEs). For infinite time-interval ergodic control, the adjoint variable is a stationary process, and the FBSDE is reduced to an ordinary differential equation. Finally, the stationary probability density of the Hamiltonian and other response statistics of the optimally controlled system are obtained by solving the Fokker–Planck–Kolmogorov equation associated with the fully averaged Itô equation of the controlled system. For comparison, the bang–bang control is also presented. An example of a two-degree-of-freedom quasi-nonintegrable Hamiltonian system is worked out to illustrate the proposed procedure and its effectiveness. Numerical results show that the proposed control strategy has higher control efficiency and a less discontinuous control force than the corresponding bang–bang control, at the price of slightly lower control effectiveness.

15.
The response of quasi-integrable Hamiltonian systems with delayed feedback bang–bang control subject to Gaussian white noise excitation is studied by using the stochastic averaging method. First, a quasi-Hamiltonian system with delayed feedback bang–bang control subjected to Gaussian white noise excitation is formulated and transformed into the Itô stochastic differential equations for a quasi-integrable Hamiltonian system with feedback bang–bang control without time delay. Then the averaged Itô stochastic differential equations for the latter system are derived by using the stochastic averaging method for quasi-integrable Hamiltonian systems, and the stationary solution of the averaged Fokker–Planck–Kolmogorov (FPK) equation associated with the averaged Itô equations is obtained for both the nonresonant and resonant cases. Finally, two examples are worked out in detail to illustrate the application and effectiveness of the proposed method and the effect of time-delayed feedback bang–bang control on the response of the systems.
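The delay-elimination step rests on the (randomly) periodic character of the response. For a single mode with average frequency \omega, the standard approximation of this type reads (a generic form, with \omega an assumption of the sketch)

\[
X(t-\tau) \;\approx\; X(t)\cos\omega\tau \;-\; \frac{\dot X(t)}{\omega}\sin\omega\tau, \qquad
\dot X(t-\tau) \;\approx\; \dot X(t)\cos\omega\tau \;+\; \omega X(t)\sin\omega\tau,
\]

so a bang–bang force depending on the delayed state becomes a function of the instantaneous state, after which the usual stochastic averaging for quasi-integrable Hamiltonian systems applies.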

16.
Symmetries and Conserved Quantities of Stochastic Dynamical Control Systems
A new definition is given for both exact and quasi symmetries of Itô and Stratonovich dynamical control systems. The determining systems for the symmetries of these systems are obtained and their relation is discussed. It is shown that conserved quantities can be found from both exact and quasi symmetries of stochastic dynamical control systems, which include Hamiltonian control systems as a special case. Systems that can be controlled via conserved quantities are investigated. The results are applied to the control of an N-species stochastic Lotka-Volterra system.

17.
A stochastic optimal control strategy for a slightly sagged cable using support motion in the cable axial direction is proposed. The nonlinear equation of in-plane cable motion is derived and reduced to the equations for the first two modes of cable vibration by using the Galerkin method. The partially averaged Itô equation for the controlled system energy is further derived by applying the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The dynamical programming equation for the controlled system energy with a performance index is established by applying the stochastic dynamical programming principle, and a stochastic optimal control law is obtained through solving the dynamical programming equation. A bilinear controller based on the direct method of Lyapunov is introduced for comparison. The comparison between the two controllers shows that the proposed stochastic optimal control strategy is superior to the bilinear control strategy, with higher control effectiveness and efficiency.

18.
An n-degree-of-freedom Hamiltonian system with r (1 < r < n) independent first integrals which are in involution is called a partially integrable Hamiltonian system, and a partially integrable Hamiltonian system subject to light damping and weak stochastic excitations is called a quasi partially integrable Hamiltonian system. In the present paper, the averaged Itô and Fokker-Planck-Kolmogorov (FPK) equations for quasi partially integrable Hamiltonian systems in both the non-resonant and resonant cases are derived. It is shown that the number of averaged Itô equations and the dimension of the averaged FPK equation of a quasi partially integrable Hamiltonian system are equal to the number of independent first integrals in involution plus the number of resonant relations of the associated Hamiltonian system. The technique to obtain the exact stationary solution of the averaged FPK equation is presented. The largest Lyapunov exponent of the averaged system is formulated, based on which the stochastic stability and bifurcation of the original quasi partially integrable Hamiltonian systems can be determined. Examples are given to illustrate the applications of the proposed stochastic averaging method for quasi partially integrable Hamiltonian systems in response prediction and stability determination, and the results are verified by digital simulation.
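In the non-resonant case the averaged equations described above take, schematically, the form (generic notation)

\[
dH_i = \bar m_i(\mathbf H)\,dt + \bar\sigma_{ik}(\mathbf H)\,dB_k(t), \qquad i = 1,\dots,r,
\]
\[
\frac{\partial p}{\partial t} = -\frac{\partial}{\partial H_i}\big[\bar m_i\,p\big] + \frac12\,\frac{\partial^2}{\partial H_i\,\partial H_k}\big[\bar b_{ik}\,p\big], \qquad \bar b_{ik}=\bar\sigma_{ij}\bar\sigma_{kj},
\]

with r Itô equations and an r-dimensional FPK equation; in the resonant case with \alpha resonant relations, \alpha combinations of the angle variables are retained as additional slow variables, so the dimension becomes r + \alpha, in agreement with the count stated above.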

19.
The first-passage failure of quasi non-integrable generalized Hamiltonian systems is studied. First, generalized Hamiltonian systems are reviewed briefly. Then, the stochastic averaging method for quasi non-integrable generalized Hamiltonian systems is applied to obtain the averaged Itô stochastic differential equations, from which the backward Kolmogorov equation governing the conditional reliability function and the Pontryagin equation governing the conditional mean of the first-passage time are established. The conditional reliability function and the conditional mean of the first-passage time are obtained by solving these equations together with suitable initial and boundary conditions. Finally, an example of a power system under Gaussian white noise excitation is worked out in detail, and the analytical results are confirmed by Monte Carlo simulation of the original system.

20.
The approximate transient response of quasi integrable Hamiltonian systems under Gaussian white noise excitations is investigated. First, the averaged Itô equations for the independent motion integrals and the associated Fokker-Planck-Kolmogorov (FPK) equation governing the transient probability density of the independent motion integrals of the system are derived by applying the stochastic averaging method for quasi integrable Hamiltonian systems. Then, an approximate solution of the transient probability density of the independent motion integrals is obtained by applying the Galerkin method to solve the FPK equation. The approximate transient solution is expressed as a series in terms of properly selected base functions with time-dependent coefficients. The transient probability densities of displacements and velocities can be derived from that of the independent motion integrals. Three examples are given to illustrate the application of the proposed procedure. It is shown that the results for the three examples obtained by using the proposed procedure agree well with those from Monte Carlo simulation of the original systems.
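The Galerkin step can be summarized as follows; this is a generic sketch for a scalar slow variable H with averaged FPK operator \mathcal L, and the basis \varphi_k is an assumption rather than the paper's choice:

\[
p(H,t) \;\approx\; \sum_{k=1}^{N} c_k(t)\,\varphi_k(H), \qquad
\frac{\partial p}{\partial t} = \mathcal L\,p,
\]
\[
\sum_{k=1}^{N}\dot c_k(t)\int \varphi_j\,\varphi_k\,dH \;=\; \sum_{k=1}^{N} c_k(t)\int \varphi_j\,\mathcal L\varphi_k\,dH, \qquad j = 1,\dots,N,
\]

i.e. a linear system of ordinary differential equations M\,\dot{\mathbf c} = K\,\mathbf c for the time-dependent coefficients, with \mathbf c(0) fixed by the initial probability density; the densities of displacements and velocities then follow by transformation from the density of the motion integrals, as stated above.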
