Similar Literature
20 similar records found (search time: 15 ms)
1.
2.
It is important to design robust and reliable systems by accounting for uncertainty and variability in the design process. However, performing optimization in this setting can be computationally expensive, requiring many evaluations of the numerical model to compute statistics of the system performance at every optimization iteration. This paper proposes a multifidelity approach to optimization under uncertainty that makes use of inexpensive, low-fidelity models to provide approximate information about the expensive, high-fidelity model. The multifidelity estimator is developed based on the control variate method to reduce the computational cost of achieving a specified mean square error in the statistic estimate. The method optimally allocates the computational load between the two models based on their relative evaluation cost and the strength of the correlation between them. This paper also develops an information reuse estimator that exploits the autocorrelation structure of the high-fidelity model in the design space to reduce the cost of repeatedly estimating statistics during the course of optimization. Finally, a combined estimator incorporates the features of both the multifidelity estimator and the information reuse estimator. The methods demonstrate 90% computational savings in an acoustic horn robust optimization example and practical design turnaround time in a robust wing optimization problem. Copyright © 2014 John Wiley & Sons, Ltd.
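For orientation, the generic two-model control variate mean estimator of the kind described above can be summarized as follows; this is a hedged sketch of the standard construction (symbols n, m, rho introduced here for illustration), not necessarily the exact estimator of the paper.

```latex
% Mean estimator for a high-fidelity output A using n high-fidelity samples
% and m >= n samples of a correlated low-fidelity output B:
\hat{s}_A = \bar{A}_n + \alpha\,\bigl(\bar{B}_m - \bar{B}_n\bigr),
\qquad
\alpha^{*} = \rho\,\frac{\sigma_A}{\sigma_B},
\qquad
\operatorname{Var}\bigl[\hat{s}_A\bigr]
  = \frac{\sigma_A^2}{n}\Bigl(1 - \bigl(1 - \tfrac{n}{m}\bigr)\rho^2\Bigr).
```

The variance reduction relative to plain Monte Carlo grows with the correlation coefficient rho between the two models, which is why the sample allocation trades off evaluation cost against correlation strength.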

3.
Design optimization is a computationally expensive process as it requires the assessment of numerous designs, and each such assessment may be based on expensive analyses (e.g. computational fluid dynamics or finite-element-based methods). One way to contain the computational time within affordable limits is to use computationally cheaper approximations (surrogates) in lieu of the actual analyses during the course of optimization. This article introduces a framework for design optimization using surrogates. The framework is built upon a stochastic, zero-order, population-based optimization algorithm, which is embedded with a modified elitism scheme to ensure convergence in the actual function space. The accuracy of the surrogate model is maintained via periodic retraining, and the number of data points required to create the surrogate model is identified by a k-means clustering algorithm. A comparison is provided between different surrogate models (Kriging, radial basis functions (Exact and Fixed) and Cokriging) using a number of mathematical test functions and engineering design optimization problems. The results clearly indicate that, for a given fixed number of actual function evaluations, the surrogate-assisted optimization model consistently performs better than a pure optimization model using actual function evaluations.
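A minimal sketch of the surrogate-management idea described above: representative training points are selected from an archive of evaluated designs via k-means, the surrogate is periodically rebuilt, and elite candidates are still checked on the true model. The use of scikit-learn/SciPy and all function names are illustrative assumptions, not the article's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.interpolate import RBFInterpolator

def select_training_points(archive_x, archive_f, k):
    """Pick k representative archive members via k-means (one per cluster)."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(archive_x)
    idx = [np.where(labels == c)[0][0] for c in range(k)]
    return archive_x[idx], archive_f[idx]

def retrain_surrogate(archive_x, archive_f, k=30):
    """Fit an RBF surrogate on a k-means-reduced training set."""
    x_tr, f_tr = select_training_points(np.asarray(archive_x),
                                        np.asarray(archive_f), k)
    return RBFInterpolator(x_tr, f_tr)

# Inside the evolutionary loop (sketch): every `period` generations the
# surrogate is rebuilt, and elites are re-evaluated on the true model so
# convergence is judged in the actual function space.
# if gen % period == 0:
#     surrogate = retrain_surrogate(archive_x, archive_f)
# f_approx = surrogate(population)                    # cheap ranking
# f_true   = [expensive_model(x) for x in elites]     # keeps elitism honest
```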

4.
Evolutionary algorithms cannot effectively handle computationally expensive problems because of the unaffordable computational cost incurred by a large number of fitness evaluations. Therefore, surrogates are widely used to assist evolutionary algorithms in solving these problems. This article proposes an improved surrogate-assisted particle swarm optimization (ISAPSO) algorithm, in which a hybrid particle swarm optimization (PSO) algorithm is combined with global and local surrogates. The global surrogate is used not only to predict fitness values, reducing the computational burden, but also as a global searcher that speeds up the global search of PSO via an efficient global optimization algorithm, while the local surrogate is constructed for a local search in the neighbourhood of the current optimal solution by locating the predicted optimum of the local surrogate. Empirical studies on 10 widely used benchmark problems and a real-world structural design optimization problem of a driving axle show that the ISAPSO algorithm is effective and highly competitive.
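A minimal sketch of the "local surrogate" step described above: fit a model in a neighbourhood of the current best, minimize it, and hand the predicted optimum to the expensive solver. The RBF model, the trust radius, and the SciPy optimizer are illustrative assumptions rather than the article's algorithm.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.interpolate import RBFInterpolator

def local_surrogate_step(x_best, archive_x, archive_f, expensive_model, radius=0.1):
    """Fit a local RBF around x_best, minimize it within the neighbourhood,
    and evaluate the predicted optimum with the expensive model."""
    archive_x = np.asarray(archive_x)
    archive_f = np.asarray(archive_f)
    # Keep only archive points inside the trust neighbourhood of x_best
    # (assumes enough neighbours exist to fit the interpolant).
    mask = np.linalg.norm(archive_x - x_best, axis=1) <= radius
    local = RBFInterpolator(archive_x[mask], archive_f[mask])
    # Search the surrogate inside the same neighbourhood (box bounds as a proxy).
    bounds = [(xi - radius, xi + radius) for xi in x_best]
    res = minimize(lambda x: local(x[None, :])[0], x_best, bounds=bounds)
    # The candidate and its true fitness are fed back to the PSO archive.
    return res.x, expensive_model(res.x)
```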

5.
In optimization under uncertainty for engineering design, the behavior of the system outputs due to uncertain inputs needs to be quantified at each optimization iteration, but this can be computationally expensive. Multifidelity techniques can significantly reduce the computational cost of Monte Carlo sampling methods for quantifying the effect of uncertain inputs, but existing multifidelity techniques in this context apply only to Monte Carlo estimators that can be expressed as a sample average, such as estimators of statistical moments. Information reuse is a particular multifidelity method that treats previous optimization iterations as lower fidelity models. This work generalizes information reuse to be applicable to quantities whose estimators are not sample averages. The extension makes use of bootstrapping to estimate the error of estimators and the covariance between estimators at different fidelities. Specifically, the horsetail matching metric and quantile function are considered as quantities whose estimators are not sample averages. In an optimization under uncertainty for an acoustic horn design problem, generalized information reuse demonstrated computational savings of over 60% compared with regular Monte Carlo sampling.
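The bootstrapping ingredient can be illustrated with a short sketch: resample paired high-/low-fidelity outputs with replacement to estimate the variance of a quantile estimator and the covariance between the estimators at the two fidelities. This is a generic illustration with assumed names, not the paper's code.

```python
import numpy as np

def bootstrap_quantile_stats(f_hi, f_lo, q=0.9, n_boot=1000, rng=None):
    """Bootstrap estimates of Var[q-quantile of f_hi] and
    Cov[quantile(f_hi), quantile(f_lo)] from paired samples."""
    rng = np.random.default_rng(rng)
    f_hi, f_lo = np.asarray(f_hi), np.asarray(f_lo)
    n = f_hi.size
    q_hi = np.empty(n_boot)
    q_lo = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample pairs with replacement
        q_hi[b] = np.quantile(f_hi[idx], q)
        q_lo[b] = np.quantile(f_lo[idx], q)
    cov = np.cov(q_hi, q_lo)
    return cov[0, 0], cov[0, 1]               # Var(hi-quantile), Cov(hi, lo)
```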

6.
The optimization of subsurface flow processes is important for many applications, including oil field operations and the geological storage of carbon dioxide. These optimizations are very demanding computationally due to the large number of flow simulations that must be performed and the typically large dimension of the simulation models. In this work, reduced-order modeling (ROM) techniques are applied to reduce the simulation time of complex large-scale subsurface flow models. The procedures all entail proper orthogonal decomposition (POD), in which a high-fidelity training simulation is run, solution snapshots are stored, and a singular value decomposition (SVD) is performed on the resulting data matrix. Additional recently developed ROM techniques are also implemented, including a snapshot clustering procedure and a missing point estimation technique to eliminate rows from the POD basis matrix. The implementation of the ROM procedures into a general-purpose research simulator is described. Extensive flow simulations involving water injection into a geologically complex 3D oil reservoir model containing 60,000 grid blocks are presented. The various ROM techniques are assessed in terms of their ability to reproduce high-fidelity simulation results for different well schedules and also in terms of the computational speedups they provide. The numerical solutions demonstrate that the ROM procedures can accurately reproduce the reference simulations and can provide speedups of up to an order of magnitude when compared with a high-fidelity model simulated using an optimized solver. Copyright © 2008 John Wiley & Sons, Ltd.
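The snapshot/POD construction follows a standard recipe: store solution snapshots, take an SVD, keep the leading left singular vectors, and project the system onto that basis. The sketch below shows this recipe for a generic linear(ized) system; matrix names and the energy criterion are illustrative, and the paper's nonlinear flow setting involves additional machinery not shown here.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """snapshots: (n_dof, n_snap) matrix of stored solution states.
    Returns the POD basis Phi capturing `energy` of the snapshot variance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    keep = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return U[:, :keep]

def reduced_solve(K, f, Phi):
    """Galerkin projection of K x = f onto the POD basis: solve a small
    reduced system and map the result back to the full dimension."""
    Kr = Phi.T @ K @ Phi
    fr = Phi.T @ f
    return Phi @ np.linalg.solve(Kr, fr)
```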

7.
This paper presents an efficient metamodel-building technique for solving collaborative optimization (CO) based on high-fidelity models. The proposed method is based on a metamodeling concept designed to simultaneously utilize computationally efficient (low-fidelity) and expensive (high-fidelity) models in an optimization process. A distinctive feature of the method is the use of the interaction between low- and high-fidelity models to construct high-quality metamodels at both the discipline level and the system level of the CO. The low-fidelity model is tuned so that it approaches the accuracy of the high-fidelity model while remaining computationally inexpensive. The tuned low-fidelity models are then used in the discipline-level optimization. At the system level, a model management strategy combined with the metamodeling technique is used to handle the computational cost of the equality constraints in CO. To determine the fidelity of the metamodels, the predictive estimation of model fidelity method is applied. The developed method is demonstrated on a 2D airfoil design problem involving tightly coupled high-fidelity structural and aerodynamic models. The results show that the proposed method significantly reduces computational cost and improves the convergence rate for solving the multidisciplinary optimization problem based on high-fidelity models.

8.
A number of multi-objective evolutionary algorithms have been proposed in recent years and many of them have been used to solve engineering design optimization problems. However, designs need to be robust for real-life implementation, i.e. performance should not degrade substantially under expected variations in the variable values or operating conditions. Solutions of constrained robust design optimization problems should not be too close to the constraint boundaries so that they remain feasible under expected variations. A robust design optimization problem is far more computationally expensive than a design optimization problem as neighbourhood assessments of every solution are required to compute the performance variance and to ensure neighbourhood feasibility. A framework for robust design optimization using a surrogate model for neighbourhood assessments is introduced in this article. The robust design optimization problem is modelled as a multi-objective optimization problem with the aim of simultaneously maximizing performance and minimizing performance variance. A modified constraint-handling scheme is implemented to deal with neighbourhood feasibility. A radial basis function (RBF) network is used as a surrogate model and the accuracy of this model is maintained via periodic retraining. In addition to using surrogates to reduce computational time, the algorithm has been implemented on multiple processors using a master–slave topology. The preliminary results of two constrained robust design optimization problems indicate that substantial savings in the actual number of function evaluations are possible while maintaining an acceptable level of solution quality.
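Written out, the bi-objective robust formulation described above takes roughly the following form; this is a hedged sketch in which the neighbourhood set Delta and the symbols are introduced here for illustration.

```latex
% Maximize expected performance, minimize its variability, and require
% feasibility over the whole neighbourhood Delta of each candidate design x:
\begin{aligned}
\min_{x}\;& \bigl(\, -\mu_f(x),\; \sigma_f(x) \,\bigr) \\
\text{s.t.}\;& g_j(x + \delta) \le 0 \quad \forall\, \delta \in \Delta,\;\; j = 1,\dots,m,
\end{aligned}
```

where mu_f and sigma_f denote the mean and standard deviation of the performance over the neighbourhood of x, estimated here via the surrogate rather than the expensive analyses.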

9.
Computer simulation models are ubiquitous in modern engineering design. In many cases, they are the only way to evaluate a given design with sufficient fidelity. Unfortunately, an added computational expense is associated with higher fidelity models. Moreover, the systems being considered are often highly nonlinear and may feature a large number of designable parameters. Therefore, it may be impractical to solve the design problem with conventional optimization algorithms. A promising approach to alleviate these difficulties is surrogate-based optimization (SBO). Among proven SBO techniques, the methods utilizing surrogates constructed from corrected physics-based low-fidelity models are, in many cases, the most efficient. This article reviews a particular technique of this type, namely, shape-preserving response prediction (SPRP), which works on the level of the model responses to correct the underlying low-fidelity models. The formulation and limitations of SPRP are discussed. Applications to several engineering design problems are provided.

10.
We consider the problem of optimal design of nano-scale heat conducting systems using topology optimization techniques. At such small scales the empirical Fourier's law of heat conduction no longer captures the underlying physical phenomena because the mean-free path of the heat carriers, phonons in our case, becomes comparable with, or even larger than, the feature sizes of considered material distributions. A more accurate model at nano-scales is given by kinetic theory, which provides a compromise between the inaccurate Fourier's law and precise, but too computationally expensive, atomistic simulations. We analyze the resulting optimal control problem in a continuous setting, briefly describing the computational approach to the problem based on discontinuous Galerkin methods, algebraic multigrid preconditioned generalized minimal residual method, and a gradient-based mathematical programming algorithm. Numerical experiments with our implementation of the proposed numerical scheme are reported. Copyright © 2008 John Wiley & Sons, Ltd.

11.
This paper presents a multi-agent search technique to design an optimal composite box-beam helicopter rotor blade. The search technique is particle swarm optimization, inspired by the choreography of a bird flock. The continuous geometry parameters (cross-sectional dimensions) and discrete ply angles of the box-beams are considered as design variables. The objective of the design problem is to achieve (a) a specified stiffness value and (b) maximum elastic coupling. The presence of maximum elastic coupling in the composite box-beam increases the aeroelastic stability of the helicopter rotor blade. The multi-objective design problem is formulated as a combinatorial optimization problem and solved using the particle swarm optimization technique. The optimal geometry and ply angles are obtained for a composite box-beam design with ply-angle discretizations of 10°, 15° and 45°. The performance and computational efficiency of the proposed particle swarm optimization approach are compared with various genetic-algorithm-based design approaches. The simulation results clearly show that the particle swarm optimization algorithm provides better solutions, in terms of performance and computational time, than the genetic-algorithm-based approaches.
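For reference, the canonical particle swarm update used by this class of methods is the inertia-weight form below; this is a generic statement, not necessarily the exact variant used in the article.

```latex
% Velocity and position update for particle i at iteration t:
v_i^{t+1} = w\, v_i^{t}
          + c_1 r_1 \bigl(p_i - x_i^{t}\bigr)
          + c_2 r_2 \bigl(g - x_i^{t}\bigr),
\qquad
x_i^{t+1} = x_i^{t} + v_i^{t+1},
```

where p_i is the particle's personal best, g the swarm's global best, w the inertia weight, c_1 and c_2 the acceleration coefficients, and r_1, r_2 are uniform random numbers in [0, 1]. Discrete variables such as ply angles are typically handled by rounding or mapping the continuous positions onto the allowed set.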

12.
Generally, in designing a nonlinear energy sink (NES), only uncertainties in the ground-motion parameters are considered and the unconditional mean of the performance metric is minimized. However, such an approach has two major limitations. First, ignoring the uncertainties in the system parameters can result in an inefficient design of the NES. Second, minimizing only the unconditional mean of the performance metric may result in large variance of the response because of the uncertainties in the system parameters. To address these issues, we focus on robust design optimization (RDO) of NES under uncertain system and hazard parameters. The RDO is solved as a bi-objective optimization problem in which the mean and the standard deviation of the performance metric are simultaneously minimized. This bi-objective problem is converted into a single-objective problem using the weighted-sum method. However, solving an RDO problem can be computationally expensive. We therefore use a novel machine learning technique, referred to as the hybrid polynomial correlated function expansion (H-PCFE), to solve the RDO problem efficiently. Moreover, we adopt an adaptive framework in which H-PCFE models trained at previous iterations are reused, which further reduces the computational cost. We illustrate that H-PCFE is computationally efficient and accurate compared with similar methods available in the literature. A numerical study showcasing the importance of incorporating the uncertain system parameters into the optimization procedure is presented, and the same example illustrates the importance of solving an RDO problem for NES design. Overall, accounting for the parameter uncertainties results in a more efficient design, and determining the NES parameters by solving an RDO problem results in a less sensitive design.
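The weighted-sum scalarization mentioned above can be written as follows; this is a sketch, and the normalizing constants are an assumption on our part rather than part of the abstract.

```latex
% Bi-objective RDO collapsed to a single objective with weight w in [0, 1]:
\min_{d}\;\; w\,\frac{\mu_J(d)}{\mu_0} \;+\; (1 - w)\,\frac{\sigma_J(d)}{\sigma_0},
```

where mu_J(d) and sigma_J(d) are the mean and standard deviation of the performance metric under the uncertain system and hazard parameters, and mu_0, sigma_0 are normalizing constants; sweeping w traces out the trade-off between average performance and robustness.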

13.
Topology optimization of large-scale structures is computationally expensive, notably because of the cost of solving the equilibrium equations at each iteration. Reduced-order models by projection, also known as reduced basis models, have been proposed in the past for alleviating this cost. We propose here a new method for coupling reduced basis models with topology optimization to improve the efficiency of topology optimization of large-scale structures. The novel approach is based on constructing the reduced basis on the fly, using previously calculated solutions of the equilibrium equations. The reduced basis is thus adaptively constructed and enriched, based on the convergence behavior of the topology optimization. A direct approach and an approach with adjusted sensitivities are described, and their algorithms are provided. The approaches are tested and compared on various 2D and 3D minimum compliance topology optimization benchmark problems. Computational cost savings of up to a factor of 12 are demonstrated using the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
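A minimal sketch of the on-the-fly reduced basis idea: reuse displacement solutions from previous optimization iterations as basis vectors and solve the projected equilibrium system, enriching the basis when the residual grows. The QR orthonormalization and the residual check are illustrative choices, not the paper's specific algorithm.

```python
import numpy as np

def reduced_equilibrium(K, f, previous_solutions):
    """Approximate the solution of K u = f using a basis built from
    previously computed equilibrium solutions (given as a list of vectors)."""
    Phi, _ = np.linalg.qr(np.column_stack(previous_solutions))  # orthonormal basis
    Kr = Phi.T @ K @ Phi                                        # reduced stiffness
    fr = Phi.T @ f                                              # reduced load
    u_approx = Phi @ np.linalg.solve(Kr, fr)
    residual = np.linalg.norm(K @ u_approx - f) / np.linalg.norm(f)
    # A large residual signals that the basis must be enriched with a new
    # full solve before the topology optimization continues.
    return u_approx, residual
```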

14.
Structural robust optimization problems are often solved via the so-called bi-level approach. This solution procedure often involves large computational effort, and sometimes its convergence properties are poor because of the non-smooth nature of the bi-level formulation. Another problem associated with the traditional bi-level approach is that the confidence of the robustness of the obtained solutions cannot be fully assured, at least theoretically. In the present paper, confidence single-level non-linear semidefinite programming (NLSDP) formulations for structural robust optimization problems under stiffness uncertainties are proposed. This is achieved by using tools from convex analysis such as the S-procedure and quadratic embedding. The resulting NLSDP problems are solved using the modified augmented Lagrange multiplier method, which has sound mathematical properties. Numerical examples show that confidence robust optimal solutions can be obtained effectively with the proposed approach. Copyright © 2010 John Wiley & Sons, Ltd.
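In its basic form, the S-procedure used in such reformulations is the following implication between quadratic forms (standard statement; how the paper embeds the stiffness uncertainty into it is not reproduced here).

```latex
% A quadratic implication certified by a single semidefinite condition:
\exists\,\tau \ge 0:\;\; F_0 - \tau F_1 \succeq 0
\;\;\Longrightarrow\;\;
\forall x:\;\bigl(x^\top F_1 x \ge 0 \;\Rightarrow\; x^\top F_0 x \ge 0\bigr).
```

This is what allows a worst-case (robust) constraint over an uncertainty set described by a quadratic inequality to be replaced by a single matrix inequality, yielding a single-level semidefinite formulation instead of a bi-level one.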

15.
Fluid–structure interactions (FSI) play a crucial role in many engineering fields. However, the computational cost associated with high-fidelity aeroelastic models currently precludes their direct use in industry, especially for strong interactions. The strongly coupled segregated problem that results from domain partitioning can be interpreted as an optimization problem for a fluid–structure interface residual. Multi-fidelity optimization techniques can therefore be applied directly to this problem in order to obtain the solution efficiently. In previous work it has already been shown that aggressive space mapping (ASM) can be used in this context. In this contribution, we extend that research towards the use of space mapping for FSI simulations. We investigate the performance of two other approaches, generalized space mapping and output space mapping, by applying them to both compressible and incompressible 2D problems. Moreover, an analysis of the influence of the applied low-fidelity model on the achievable speedup is presented. The results indicate that output space mapping is a viable alternative to ASM when applied in the context of solver coupling for partitioned FSI, showing performance similar to ASM and resulting in reductions in computational cost of up to 50% with respect to the reference quasi-Newton method. Copyright © 2015 John Wiley & Sons, Ltd.
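As a rough illustration only, one common additive form of output space mapping corrects the low-fidelity response with the high-/low-fidelity mismatch observed at the latest iterate; the paper's generalized and output space mapping variants may differ in detail, so the expression below is a hedged sketch with symbols introduced here.

```latex
% Surrogate response used in place of the high-fidelity interface operator,
% with x_k the most recent iterate at which both models were evaluated:
s_k(x) \;=\; f_{\mathrm{lo}}(x) \;+\; \bigl[\, f_{\mathrm{hi}}(x_k) - f_{\mathrm{lo}}(x_k) \,\bigr].
```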

16.
A non-gradient-based approach for topology optimization using a genetic algorithm is proposed in this paper. The genetic algorithm is assisted by a Kriging surrogate model to reduce the computational cost required for function evaluations. To validate the non-gradient-based topology optimization method in flow problems, this research focuses on two single-objective optimization problems, in which the objective functions are to minimize pressure loss and to maximize heat transfer of flow channels, and one multi-objective optimization problem that combines the two single-objective problems. The shape of the flow channels is represented by a level set function. The pressure loss and heat transfer performance of the channels are evaluated by the Building-Cube Method code, a Cartesian-mesh CFD solver. In the single-objective problems the proposed method produced topologies in agreement with previous studies, and in the multi-objective problem it achieved global exploration of non-dominated solutions. © 2016 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons Ltd.

17.
A fast, flexible, and robust simulation-based optimization scheme using an ANN surrogate model was developed, implemented, and validated. The optimization method uses a Genetic Algorithm (GA) coupled with an Artificial Neural Network (ANN) trained with a back-propagation algorithm. The developed optimization scheme was successfully applied to single-point aerodynamic optimization of a transonic turbine stator and multi-point optimization of a NACA65 subsonic compressor rotor in two-dimensional flow, both represented by 2D linear cascades. High-fidelity CFD flow simulations, which solve the Reynolds-averaged Navier-Stokes equations, were used to generate the database used in building the low-fidelity ANN model. The optimization objective is a weighted sum of the performance objectives penalized with the constraints; it was constructed so as to achieve better aerodynamic performance at the design point or over the full operating range by reshaping the blade profile. The latter is represented using NURBS functions, whose coefficients serve as the design variables. Parallelizing the CFD flow simulations reduced the turn-around computation time with close to 100% efficiency. The ANN model approximated the objective function accurately and reduced the optimization computing time tenfold. The chosen objective function and optimization methodology result in a significant and consistent improvement in blade performance.

18.
Response surface methods use least-squares regression analysis to fit low-order polynomials to a set of experimental data. It is becoming increasingly popular to apply response surface approximations for the purpose of engineering design optimization based on computer simulations. However, the substantial expense involved in obtaining enough data to build quadratic response approximations seriously limits the practical size of problems. Multifidelity techniques, which combine cheap low-fidelity analyses with more accurate but expensive high-fidelity solutions, offer means by which the prohibitive computational cost can be reduced. Two optimum design problems are considered, both pertaining to the fluid flow in diffusers. In both cases, the high-fidelity analyses consist of solutions to the full Navier-Stokes equations, whereas the low-fidelity analyses are either simple empirical formulas or flow solutions to the Navier-Stokes equations obtained using coarse computational meshes. The multifidelity strategy includes the construction of two separate response surfaces: a quadratic approximation based on the low-fidelity data, and a linear correction response surface that approximates the ratio of high- and low-fidelity function evaluations. The paper demonstrates that this approach may yield major computational savings.
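The two-surface construction described above can be summarized as follows; the notation is introduced here for illustration.

```latex
% Quadratic response surface fitted to the cheap low-fidelity data ...
q_{\mathrm{lo}}(x) \approx f_{\mathrm{lo}}(x),
\qquad
% ... combined with a linear correction surface fitted to the ratio of the
% few available high-fidelity evaluations to the low-fidelity ones:
\beta(x) = a_0 + a^{\top} x \approx \frac{f_{\mathrm{hi}}(x)}{f_{\mathrm{lo}}(x)},
\qquad
f_{\mathrm{hi}}(x) \;\approx\; \beta(x)\, q_{\mathrm{lo}}(x).
```

Because the correction is only linear, far fewer expensive high-fidelity evaluations are needed than would be required to fit a quadratic surface to the high-fidelity data directly.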

19.
Kai Long, Xuan Wang, Xianguang Gu. Engineering Optimization, 2018, 50(12): 2091-2107.
Transient heat conduction analysis involves extensive computational cost. This becomes more serious for multi-material topology optimization, in which many design variables are involved and hundreds of iterations are usually required for convergence. This article aims to provide an efficient quadratic approximation for multi-material topology optimization of transient heat conduction problems. Reciprocal-type variables, instead of relative densities, are introduced as design variables. The sequential quadratic programming approach with explicit Hessians can then be utilized as the optimizer for this computationally demanding optimization problem, by setting up a sequence of quadratic programs in which the thermal compliance and weight are explicitly approximated by first- and second-order Taylor series expansions in terms of the design variables. Numerical examples show clearly that the present approach achieves better performance, in terms of computational efficiency and iteration number, than the solid isotropic material with penalization method solved by the commonly used method of moving asymptotes. In addition, a more lightweight design can be achieved by using multi-phase materials for the transient heat conduction problem, which demonstrates the necessity of multi-material topology optimization.
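The explicit quadratic subproblem hinges on a second-order Taylor expansion of the thermal compliance in the reciprocal-type variables; schematically, with symbols introduced here for illustration:

```latex
% With reciprocal-type design variables y (e.g. y_i = 1/x_i) and
% \Delta y = y - y^{(k)}, each subproblem uses the explicit quadratic model
C(y) \;\approx\; C\!\bigl(y^{(k)}\bigr)
   + \nabla C\!\bigl(y^{(k)}\bigr)^{\top} \Delta y
   + \tfrac{1}{2}\, \Delta y^{\top} H^{(k)}\, \Delta y ,
```

which, together with an analogous expansion of the weight constraint, is handed to a standard sequential quadratic programming solver at iteration k.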

20.
Markov chain Monte Carlo approaches have been widely used for Bayesian inference. The drawback of these methods is that they can be computationally prohibitive, especially when complex models are analyzed. In such cases, variational methods may provide an efficient and attractive alternative. However, the variational methods reported to date are applicable to relatively simple models, and most are based on a factorized approximation to the posterior distribution. Here, we propose a variational approach that is capable of handling models that consist of a system of differential-algebraic equations and whose posterior approximation can be represented by a multivariate distribution. Under the proposed approach, the solution of the variational inference problem is decomposed into three steps: a maximum a posteriori optimization, which is facilitated by using an orthogonal collocation approach; a preprocessing step, which is based on the estimation of the eigenvectors of the posterior covariance matrix; and an expectation propagation optimization problem. To tackle multivariate integration, we employ quadratures derived from the Smolyak rule (sparse grids). Examples are reported to elucidate the advantages and limitations of the proposed methodology. The results are compared to the solutions obtained from a Markov chain Monte Carlo approach. It is demonstrated that significant computational savings can be gained using the proposed approach. This article has supplementary material online.
