Similar Literature
20 similar documents found.
1.
Optimal control theory is employed to derive explicitly the optimal (profit‐maximizing) price of a durable new product over time. The sales rate dynamics depends on the product price and on the unsold portion of the market. Specifically, the hazard rate (i.e. the probability of a purchase by a new customer) increases linearly as the price decreases. It is shown that both the price and sales rate decline over time for finite horizon problems with or without discounting. In the discounted infinite horizon case, the price remains constant over time. When there is no discounting in the infinite horizon case, there does not exist an optimal solution. However, it is possible in this case to attain a profit level which is arbitrarily close to the theoretical, albeit unattainable, maximum profit level. Economic interpretations are provided for the various quantities that arise in the course of solving the optimal pricing problem. Copyright © 2003 John Wiley & Sons, Ltd.
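A minimal sketch of the type of pricing model described, with notation assumed here rather than taken from the paper: let $x(t)$ be cumulative sales, $N$ the market potential, and $p(t)$ the price, with the hazard rate linear in price,

$$\dot{x}(t) = \bigl(a - b\,p(t)\bigr)\bigl(N - x(t)\bigr), \qquad \max_{p(\cdot)} \int_0^T e^{-rt}\, p(t)\,\dot{x}(t)\,dt, \qquad a, b > 0,$$

where $r \ge 0$ is the discount rate and $T$ may be finite or infinite.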

2.
In solving optimal control problems, the conventional dynamic programming method often requires interpolations to determine the optimal control law. As a consequence, interpolation errors often degrade the accuracy of the conventional dynamic programming method. In view of this problem, this paper introduces an inverse dynamics-based dynamic programming method to eliminate the interpolation requirement for systems with linear dynamics. Simulation results show that the proposed approach provides more reliable solutions than the conventional dynamic programming method. Copyright © 1998 John Wiley & Sons, Ltd.
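A toy sketch of the underlying idea (not the paper's algorithm): for a scalar linear system, the control that connects two state grid points is recovered exactly from the inverse dynamics, so the backward recursion never interpolates the cost-to-go. All numbers below are illustrative assumptions.

```python
import numpy as np

# Hypothetical scalar linear system x_{k+1} = a*x_k + b*u_k with stage cost
# q*x^2 + r*u^2. Grid points are placed on the state at every stage, and the
# control that connects two grid points exactly is recovered from the inverse
# dynamics, so no interpolation of the cost-to-go is ever needed.
a, b, q, r = 0.9, 0.5, 1.0, 0.1
N = 20                                   # number of stages
grid = np.linspace(-2.0, 2.0, 81)        # common state grid for all stages

J = q * grid**2                          # terminal cost on the grid
policy = np.zeros((N, grid.size))        # optimal control at each grid point

for k in range(N - 1, -1, -1):
    # control that moves each current grid point x to each next grid point x'
    U = (grid[None, :] - a * grid[:, None]) / b          # shape (n_x, n_x')
    cost = q * grid[:, None]**2 + r * U**2 + J[None, :]
    best = cost.argmin(axis=1)
    policy[k] = U[np.arange(grid.size), best]
    J = cost[np.arange(grid.size), best]

print(policy[0][grid.size // 2])         # optimal control at x = 0, stage 0
```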

3.
A mobile electronic device needs to periodically connect to a stationary receiver, but the information to transfer is minimal. One such example is the electronic bracelet used in house arrest, where the main purpose is to inform the receiver that the person is in the house. Because the mobile device does not know its current distance from the receiver, it has an incentive to first send a low‐strength signal to conserve its battery energy. If the low‐strength signal fails to reach the receiver, the mobile device then gradually increases its signal strength until a successful connection occurs. By formulating the problem as a dynamic program, we characterize the structure of the optimal probing policy and develop an algorithm to compute it. We also consider a discrete approximation that can be easily implemented in practice. Numerical examples show promising improvement of the derived policy over naive heuristic policies, and that the derived policy is robust when there are small errors in estimating the distribution of the distance between the mobile device and the receiver. Copyright © 2010 John Wiley & Sons, Ltd.
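A hedged sketch of one discretized version of the probing problem (the power levels, the required-power distribution, and the assumption that the largest level always succeeds are illustrative, not from the paper): the state of the dynamic program is how many probe levels are already known to have failed.

```python
import numpy as np

# Discretized sketch: a probe at power levels[j] succeeds iff the unknown
# required power falls at or below it. Distribution and levels are assumed.
levels = np.linspace(0.1, 1.0, 10)        # candidate probe powers (assumed)
prob = np.ones(levels.size) / levels.size
cdf = np.cumsum(prob)                     # P(required power <= levels[j])

memo = {}

def expected_energy(i):
    """Expected remaining energy when levels[:i] are already known to fail."""
    if i == levels.size:
        return 0.0
    if i in memo:
        return memo[i]
    tail = 1.0 - (cdf[i - 1] if i > 0 else 0.0)   # P(not yet reachable)
    best = float("inf")
    for j in range(i, levels.size):               # choose the next probe level
        p_fail = (1.0 - cdf[j]) / tail            # P(this probe also fails)
        best = min(best, levels[j] + p_fail * expected_energy(j + 1))
    memo[i] = best
    return best

print("expected energy of the optimal probing policy:", expected_energy(0))
```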

4.
In this paper, optimal control for a linear system with quadratic performance is obtained using genetic programming (GP). The goal is to find the optimal control with reduced computational effort using non‐traditional methods. The obtained GP solution is compared with the traditional Runge–Kutta method. To obtain the optimal control, the solution of the matrix Riccati differential equation is computed based on grammatical evolution. The accuracy of the GP solution is qualitatively better than that of traditional methods. An illustrative numerical example is presented for the proposed method. Copyright © 2008 John Wiley & Sons, Ltd.
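For reference, a minimal sketch of the traditional baseline mentioned above: backward integration of the matrix Riccati differential equation with classical RK4. The system matrices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Baseline: backward integration of the matrix Riccati differential equation
#   -dP/dt = A'P + PA - P B R^{-1} B' P + Q,  P(T) = S,
# with classical RK4 in reverse time tau = T - t. Matrices are illustrative.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
S = np.zeros((2, 2))                      # terminal weight P(T)
T, n_steps = 5.0, 500
h = T / n_steps

def f(P):                                 # dP/dtau in reverse time
    return A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q

P = S.copy()
for _ in range(n_steps):
    k1 = f(P)
    k2 = f(P + 0.5 * h * k1)
    k3 = f(P + 0.5 * h * k2)
    k4 = f(P + h * k3)
    P = P + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

K0 = np.linalg.solve(R, B.T @ P)          # optimal feedback gain at t = 0
print(K0)
```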

5.
The objective of this study is to apply the differential dynamic programming (DDP) technique of optimal control to heating, ventilating and air-conditioning (HVAC) systems and to compare its performance with a non-linear programming (NLP) technique using the sequential quadratic programming method. The DDP technique is briefly described and studied. Limitations of the technique are noted. Three cases of a system that has been treated previously in the literature are optimized by the two techniques and the computational times compared. The study shows DDP to be efficient compared with NLP for the example problems. NLP is, however, more robust and general and can treat constraints on the state variables directly. Further investigation is needed for larger-scale problems to fully explore the features of the two methods.

6.
In this study, we investigate the optimal control of a class of singularly perturbed linear stochastic systems with Markovian jumping parameters. After establishing an asymptotic structure for the stabilizing solution of the coupled stochastic algebraic Riccati equations, a parameter‐independent composite controller is derived. Furthermore, the cost degradation in a reduced‐order controller is discussed. Thus, the exactness of the proposed approximate control is examined for the first time. As an additional important contribution, a numerical algorithm for solving the coupled stochastic algebraic Riccati equations is proposed, and the feature of the resulting higher‐order controller is shown. Finally, a simple example is presented to demonstrate the validity of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.

7.
This paper describes the ANSI C/C++ computer program dsoa, which implements an algorithm for the approximate solution of dynamic system optimization problems. The algorithm is a direct method that can be applied to the optimization of dynamic systems described by index‐1 differential‐algebraic equations (DAEs). The types of problems considered include optimal control problems and parameter identification problems. Numerical techniques are employed to transform the dynamic system optimization problem into a parameter optimization problem by: (i) parameterizing the control input as piecewise constant on a fixed mesh, and (ii) approximating the DAEs using a linearly implicit Runge‐Kutta method. The resultant nonlinear programming (NLP) problem is solved via a sequential quadratic programming technique. The program dsoa is evaluated using 83 nontrivial optimal control problems that have appeared in the literature. Here we compare the performance of the algorithm using two different NLP problem solvers, and two techniques for computing the derivatives of the functions that define the problem. Copyright © 2009 John Wiley & Sons, Ltd.
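Not the dsoa code itself, but a toy illustration of the same direct recipe on an assumed double-integrator example: piecewise-constant control on a fixed mesh, an implicit Euler step as the simplest linearly implicit integrator, and the resulting NLP handed to an SQP-type solver (SciPy's SLSQP here).

```python
import numpy as np
from scipy.optimize import minimize

# Piecewise-constant control on a fixed mesh; the ODE is integrated with an
# implicit Euler step; the finite-dimensional NLP is solved by SLSQP.
# The double-integrator system and the target state are assumptions.
N, T = 20, 2.0                            # mesh intervals, time horizon
h = T / N
A = np.array([[0.0, 1.0], [0.0, 0.0]])    # double integrator
B = np.array([0.0, 1.0])
x_target = np.array([1.0, 0.0])

def simulate(u):
    x = np.zeros(2)
    for k in range(N):                    # one implicit Euler step per interval
        x = np.linalg.solve(np.eye(2) - h * A, x + h * B * u[k])
    return x

def objective(u):                         # control energy
    return h * np.sum(u**2)

def terminal_gap(u):                      # x(T) must reach the target exactly
    return simulate(u) - x_target

res = minimize(objective, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": terminal_gap})
print(res.success, res.fun)
```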

8.
In this paper, an optimal utilization model of a congested transport system with auto/transit parallel modes is formulated using optimal control theory. The model aims at maximizing the net economic benefit over the whole study time horizon. It is shown that at equilibrium, the mode choice at the aggregate demand level is governed by a multinomial exponential function, while for each mode, the generalized costs for all departure times that are actually used are identical. The generalized costs include the optimal variable fares and tolls imposed on transit mode and auto mode commuters, respectively; this transport pricing supports the system optimum as a user equilibrium. An iterative discrete time algorithm using the augmented Lagrangian method is proposed and illustrated with a numerical example. Copyright © 1999 John Wiley & Sons, Ltd.
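One common form of such a multinomial exponential (logit-type) split, written here with assumed notation rather than the paper's own: with $C_m$ the generalized cost of mode $m$ and $\theta > 0$ a sensitivity parameter,

$$P_m = \frac{\exp(-\theta C_m)}{\sum_{m' \in \{\text{auto},\,\text{transit}\}} \exp(-\theta C_{m'})},$$

so the share of each mode falls exponentially with its generalized cost.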

9.
In this paper we consider a production system consisting of one machine for which maintenance is performed on a periodic basis. When the machine is undergoing maintenance, the system is shut down and cannot produce. One part-type is produced and the demand rate is assumed to be constant. In order to make on-time deliveries, the objective is to follow the demand as closely as possible. However, the maintenance disruptions make the production deviate from the demand. We formulate the production flow control problem as an optimal control model and use Pontryagin's minimum principle to solve the special case of one up-down cycle. We then solve the general N-cycle problem based on the one-cycle solution.
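A hedged sketch of the kind of flow-control model described, with all symbols assumed here: let $x(t)$ be the surplus (cumulative production minus cumulative demand), $u(t)$ the production rate, and $d$ the constant demand rate,

$$\dot{x}(t) = u(t) - d, \qquad 0 \le u(t) \le \bar{u}\,\mathbb{1}\{\text{machine up at } t\}, \qquad \min_{u(\cdot)} \int_0^T x(t)^2\,dt,$$

so production is forced to zero during maintenance windows and the controller keeps the surplus as close to zero as possible.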

10.
In this study, we use a generalized policy iteration approximate dynamic programming (ADP) algorithm to design an optimal controller for a class of discrete‐time systems with actuator saturation. An integral function is proposed to manage the saturation nonlinearity in the actuators, and then the generalized policy iteration ADP algorithm is developed to deal with the optimal control problem. Compared with other algorithms, the developed ADP algorithm includes two iteration procedures. In the present control scheme, two neural networks are introduced to approximate the control law and the performance index function. Furthermore, numerical simulations illustrate the convergence and feasibility of the developed method.
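One integral functional commonly used in the ADP literature to handle actuator saturation, shown here for orientation only (the paper's own choice may differ): for a saturation bound $\lambda$ and a positive definite weight $R$,

$$W(u) = 2\int_0^{u} \bigl(\lambda \tanh^{-1}(v/\lambda)\bigr)^{\mathsf T} R\, dv,$$

whose minimization in the Hamiltonian yields a control of the form $u = -\lambda \tanh(\cdot)$, which automatically respects $|u| \le \lambda$.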

11.
In this paper, an event‐triggered heuristic dynamic programming algorithm for discrete‐time nonlinear systems with a novel triggering condition is studied. Different from traditional heuristic dynamic programming algorithms, the control law in this algorithm is only updated when the triggering condition is satisfied, which reduces the computational burden. Three neural networks are employed, namely a model network, an action network, and a critic network, which estimate the model function, the control law, and the value function, respectively. The main contribution of this algorithm is the novel triggering condition, which has a simpler form and requires fewer assumptions. Additionally, a stability proof for the discrete‐time systems using the Lyapunov technique is given. Finally, two simulations are shown to verify the effectiveness of the developed algorithm.
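A generic sketch of the event-triggered update pattern (the norm-based trigger, the gain, and the system matrices below are stand-in assumptions; the paper's own triggering condition has a different form): the control law is re-evaluated only at triggering instants and held constant in between.

```python
import numpy as np

# Generic event-triggered loop: the control is updated only when the trigger
# fires. Trigger, gain, and plant are illustrative assumptions.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([0.0, 0.1])
K = np.array([1.2, 2.0])                  # some stabilizing gain (assumed)
x = np.array([1.0, -0.5])
x_event = x.copy()                        # state at the last triggering instant
u = -K @ x_event
updates = 0

for k in range(200):
    if np.linalg.norm(x - x_event) > 0.05 * np.linalg.norm(x):   # trigger?
        x_event = x.copy()
        u = -K @ x_event                  # control is updated only on events
        updates += 1
    x = A @ x + B * u                     # plant evolves every step

print("control updates over 200 steps:", updates)
```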

12.
We describe a method for obtaining the optimal feedback solution to a constrained discrete-time linear-quadratic optimal control problem. The state and control vectors are one-dimensional, and the stochasticity is represented by a finite number of possible outcomes at each stage. The method is based on dynamic programming and exploits the fact that the optimal value function is piecewise quadratic and that the optimal feedback solution is piecewise linear. The special case of a linear criterion function is also considered. Approximation methods which allow a trade-off between computation time and solution accuracy are discussed. The method is applied to two cases, viz., hydropower scheduling and temperature control of a greenhouse. Comparative studies on these two cases were made with a mathematical programming formulation solved by a standard code. On the cases studied, the new method was found to be around 100 times faster and to display better solution stability. © 1997 John Wiley & Sons, Ltd.
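A sketch of why the piecewise structure arises, in assumed notation: with scalar dynamics $x_{k+1} = a x_k + b u_k + w_k$, a control constraint $u_k \in [u_{\min}, u_{\max}]$, and $w_k$ taking finitely many values, the backward recursion reads

$$V_k(x) = \min_{u \in [u_{\min},\,u_{\max}]} \; \mathbb{E}_{w}\bigl[\, q x^2 + r u^2 + V_{k+1}(a x + b u + w) \,\bigr].$$

If $V_{k+1}$ is piecewise quadratic, the expectation is a finite sum of piecewise quadratics, and minimizing a quadratic piece in $u$ over an interval returns either an interior stationary point (affine in $x$) or a boundary value; hence $V_k$ stays piecewise quadratic and the optimal feedback stays piecewise linear.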

13.
This paper presents optimal patterns of glider dynamic soaring utilizing wind gradients. A set of three‐dimensional point‐mass equations of motion is used and basic glider performance parameters are identified through normalizations of these equations. In particular, a single parameter is defined that represents the combined effects of air density, glider wing loading, and wind gradient slope. Glider dynamic soaring flights are formulated as non‐linear optimal control problems and three performance indices are considered. In the first formulation, the completion time of one cycle of dynamic soaring is minimized subject to glider equations of motion, limitations on glider flights, and appropriate terminal constraints that enforce a periodic dynamic soaring flight. In the second formulation, the final altitude after one cycle of dynamic soaring is maximized subject to similar constraints. In the third formulation, the least required wind gradient slope that can sustain an energy‐neutral dynamic soaring flight is determined. Different terminal constraints are used to produce basic, travelling, and loiter dynamic soaring patterns. These optimal control problems are converted into parameter optimization via a collocation approach and solved numerically with the software NPSOL. Different patterns of glider dynamic soaring are compared in terms of cycle completion time and altitude‐increasing capability. Effects of wind gradient slope and wind profile non‐linearity on dynamic soaring patterns are examined. Copyright © 2004 John Wiley & Sons, Ltd.

14.
The use of iterative dynamic programming employing systematic region contraction and accessible grid points is investigated for the optimal control of time-delay systems. When the grid points for the state variables are generated, the corresponding delayed variables at each time stage are also generated and stored in memory. Then, when applying dynamic programming, a linear approximation is used to obtain the initial profile for the delayed variables during integration. This procedure was tested with four problems of different complexity. In each case the optimal control policy is easily obtained and the results compare very favourably with those reported in the literature using other computational procedures.
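A minimal sketch of the delayed-variable handling idea on a toy linear time-delay system (the dynamics, delay, control values, and pre-history below are illustrative assumptions): the delayed state is read back from the stored trajectory by linear interpolation during integration.

```python
import numpy as np

# Toy time-delay system xdot(t) = -x(t) + 0.5*x(t - tau) + u(t).
# The delayed state is obtained from the stored trajectory by linear
# interpolation while the equation is integrated forward.
tau, dt, T = 0.5, 0.01, 5.0
times = np.arange(0.0, T + dt, dt)
x_hist = np.zeros_like(times)             # x(0) = 0; filled in as we integrate
u = 0.2 * np.ones_like(times)             # a piecewise-constant control profile

def delayed_state(t):
    if t - tau <= 0.0:
        return 0.0                        # constant pre-history (assumed)
    return np.interp(t - tau, times, x_hist)   # linear interpolation

for i in range(times.size - 1):           # explicit Euler, for illustration only
    t, x = times[i], x_hist[i]
    x_hist[i + 1] = x + dt * (-x + 0.5 * delayed_state(t) + u[i])

print(x_hist[-1])                         # state at the final time
```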

15.
The finite time horizon singular linear quadratic (LQ) optimal control problem is investigated for singular stochastic discrete‐time systems. The problem is transformed into a positive LQ problem for standard stochastic systems via two equivalent transformations. It is proved that the singular LQ optimal control problem is solvable under two reasonable rank conditions. Via the dynamic programming principle, the desired optimal controller is presented in matrix iterative form. A simulation example is provided to show the effectiveness of the proposed approaches. Copyright © 2012 John Wiley & Sons, Ltd.

16.
This paper deals with the optimal control of grid-connected Battery Energy Storage Systems (BESSs) operating for energy arbitrage. An important issue is that BESSs degrade over time, according to their use, and thus they are usable only for a limited number of cycles. Therefore, the time horizon of the optimization depends on the actual operation of the BESS. We focus on Li-ion batteries and use an empirical model to describe battery degradation. The BESS model includes an equivalent circuit for the battery and a simplified model for the power converter. In order to model the energy price variations, we use a linear stochastic model that includes the time-of-day effect. The problem of maximizing the revenues obtained over the BESS lifetime is formulated as a stochastic optimal control problem with a long, operation-dependent time horizon. First, we divide this problem into a finite set of subproblems, such that for each one of them the State of Health (SoH) of the battery is approximately constant. Next, we approximately reformulate every subproblem as the minimization of the ratio of two long-time average-cost criteria and use a value-iteration-type algorithm to derive the optimal policy. Finally, we present some numerical results and investigate the effects of the energy loss parameters, degradation parameters, and price dynamics on the optimal policy.

17.
A new and systematic approach to the problem of minimum effort ripple‐free dead‐beat (EFRFDB) control of the step response of a linear servomechanism is presented. A set of admissible discrete error feedback controllers is specified that complies with the general conditions for the design of ripple‐free dead‐beat (RFDB) controllers, regardless of the introduced degree of freedom, defined as the number of steps exceeding their minimum number. The solution is unique for the minimum number of steps, while increasing the number of steps enables an optimal choice from a competitive set of controllers via their parametrization in a finite‐dimensional space. As the objective function, the Chebyshev norm of an arbitrarily chosen linear projection of the control variable is used. A new, efficient algorithm is elaborated for all stable systems of the given class with an arbitrary degree of freedom. A parametrized solution in a finite space of polynomials is obtained by solving a standard mathematical programming problem, which simultaneously yields the maximization of the total position change of the servomechanism for a given number of steps and control effort limitation. A problem formulated in this way is subsequently used to solve the time‐optimal (minimum‐step) control of a servomechanism to a given steady‐state position with a specified limitation on control effort. The effect of EFRFDB control is illustrated and analysed on the example of a linear servomechanism with a torsion spring shaft, using the criteria of control effort and control difference effort. Copyright © 2001 John Wiley & Sons, Ltd.

18.
In this paper, we consider the linear‐quadratic control problem with an inequality constraint on the control variable. We derive the feedback form of the optimal control by means of the associated unconstrained linear‐quadratic control systems. Copyright © 2001 John Wiley & Sons, Ltd.

19.
In this paper, we apply two optimization methods to solve an optimal control problem of a linear neutral differential equation (NDE) arising in economics. The first one is a variational method, and the second follows a dynamic programming approach. Because of the infinite dimensionality of the NDE, the second method requires the reformulation of the latter as an ordinary differential equation in an appropriate abstract space. It is shown that the resulting Hamilton–Jacobi–Bellman equation admits a closed‐form solution, allowing for a much finer characterization of the optimal dynamics compared with the alternative variational method. The latter is clearly limited by the nontrivial nature of asymptotic analysis of NDEs. Copyright © 2011 John Wiley & Sons, Ltd.

20.
This communication presents a spectral method for solving time-varying linear quadratic optimal control problems. Legendre–Gauss–Lobatto nodes are used to construct the mth-degree polynomial approximation of the state and control variables. The derivative ẋ(t) of the state vector x(t) is approximated by the analytic derivative of the corresponding interpolating polynomial. The performance index approximation is based on Gauss–Lobatto integration. The optimal control problem is then transformed into a linear programming problem. The proposed technique is easy to implement, efficient and yields accurate results. Numerical examples are included and a comparison is made with an existing result.
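A small sketch of the quadrature machinery behind such a scheme (the degree and the sanity checks are assumptions for illustration): the Legendre–Gauss–Lobatto nodes are the endpoints ±1 together with the roots of P'_m, with weights 2/(m(m+1) P_m(x_i)^2).

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Legendre-Gauss-Lobatto nodes and weights for an assumed degree m, showing
# the quadrature behind the performance-index approximation.
m = 8
Pm = leg.Legendre.basis(m)
nodes = np.concatenate(([-1.0], Pm.deriv().roots(), [1.0]))
weights = 2.0 / (m * (m + 1) * Pm(nodes) ** 2)

# Sanity checks: the rule is exact for polynomials up to degree 2m - 1.
print(weights.sum())          # integral of 1 over [-1, 1]  -> 2.0
print(weights @ nodes**2)     # integral of x^2 over [-1, 1] -> 2/3
```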
