Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
In deterministic computer experiments, it is often known that the output is a monotonic function of some of the inputs. In these cases, a monotonic metamodel will tend to give more accurate and interpretable predictions with less prediction uncertainty than a nonmonotonic metamodel. The widely used Gaussian process (GP) models are not monotonic. A recent article in Biometrika offers a modification that projects GP sample paths onto the cone of monotonic functions. However, their approach does not account for the fact that the GP model is more informative about the true function at locations near design points than at locations far away. Moreover, a grid-based method is used, which is memory intensive and gives predictions only at grid points. This article proposes a weighted projection approach that uses the information in the GP model more effectively, together with two computational implementations. The first is isotonic regression on a grid, while the second is projection onto a cone of monotone splines, which alleviates the problems faced by a grid-based approach. Simulations show that the monotone B-spline metamodel gives particularly good results. Supplementary materials for this article are available online.
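The grid-based projection idea behind such monotone metamodels can be sketched in a few lines: fit an ordinary GP, predict on a grid, and project the prediction onto the cone of monotone functions via isotonic regression. The sketch below is the plain, unweighted variant on a one-dimensional toy function, using scikit-learn; the article's weighted projection and monotone B-spline implementations refine this basic idea.

```python
# Minimal sketch: unweighted projection of a GP prediction onto monotone
# functions on a grid (not the article's weighted or B-spline variants).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.0, 1.0, size=(8, 1)), axis=0)   # design points
y_train = np.log(1.0 + 5.0 * x_train[:, 0])                     # monotone deterministic output

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-10)
gp.fit(x_train, y_train)

x_grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)   # prediction grid
mu = gp.predict(x_grid)                               # unconstrained GP mean (may wiggle)

# Project onto the cone of nondecreasing functions: isotonic regression on the grid.
mu_monotone = IsotonicRegression(increasing=True).fit_transform(x_grid[:, 0], mu)
```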

2.
The calibration of computer models using physical experimental data has received considerable interest in the last decade. Recently, multiple works have addressed the functional calibration of computer models, where the calibration parameters are functions of the observable inputs rather than taking a set of fixed values as traditionally treated in the literature. While much of the recent work on functional calibration has focused on estimation, sequential design for functional calibration remains an open question. Addressing the sequential design issue is thus the focus of this article. We investigate different sequential design approaches and show that the simple separate design approach has practical merit when designing for functional calibration. Analysis is carried out on multiple simulated and real-world examples.

3.
Sequential experiments composed of initial experiments and follow-up experiments are widely adopted for economical computer emulations. Many kinds of Latin hypercube designs with good space-filling properties have been proposed for designing the initial computer experiments. However, little work based on Latin hypercubes has focused on the design of the follow-up experiments. Although some constructions of nested Latin hypercube designs can be adapted to sequential designs, the size of the follow-up experiments needs to be a multiple of that of the initial experiments. In this article, a general method for constructing sequential designs of flexible size is proposed, which allows the combined designs to have good one-dimensional space-filling properties. Moreover, the sampling properties and a type of central limit theorem are derived for these designs. Several improvements of these designs are made to achieve better space-filling properties. Simulations are carried out to verify the theoretical results. Supplementary materials for this article are available online.
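As a rough illustration of the goal, and not the article's construction, the sketch below augments an existing design on [0,1]^d with a follow-up sample of arbitrary size by placing each new coordinate in a one-dimensional stratum that the initial design leaves empty, so that the combined design retains reasonable one-dimensional space-filling; the function name and details are illustrative.

```python
# Hedged sketch: flexible-size augmentation that targets one-dimensional
# stratification of the combined design (illustrative, not the article's method).
import numpy as np

def augment_design(initial, n_new, seed=None):
    rng = np.random.default_rng(seed)
    n_old, d = initial.shape
    n_tot = n_old + n_new                                # strata count for the combined design
    new_points = np.empty((n_new, d))
    for j in range(d):
        bins = np.clip((initial[:, j] * n_tot).astype(int), 0, n_tot - 1)
        empty = np.setdiff1d(np.arange(n_tot), bins)      # strata not hit by the initial design
        chosen = rng.permutation(empty)[:n_new]           # at least n_new strata are always empty
        new_points[:, j] = (chosen + rng.uniform(size=n_new)) / n_tot
    return np.vstack([initial, new_points])

initial = np.random.default_rng(1).uniform(size=(7, 3))   # stands in for an initial LHD
combined = augment_design(initial, n_new=4, seed=2)        # follow-up size need not be a multiple
```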

4.
Multivariate polynomials are increasingly being used to construct emulators of computer models for uncertainty quantification. For deterministic computer codes, interpolating polynomial metamodels should be used instead of noninterpolating ones for logical consistency and prediction accuracy. However, available methods for constructing interpolating polynomials only provide point predictions. There is no known method that can provide probabilistic statements about the interpolation error. Furthermore, there are few alternatives to grid designs and sparse grids for constructing multivariate interpolating polynomials. A significant disadvantage of these designs is the large gaps between allowable design sizes. This article proposes a stochastic interpolating polynomial (SIP) that seeks to overcome the problems discussed above. A Bayesian approach in which interpolation uncertainty is quantified probabilistically through the posterior distribution of the output is employed. This allows assessment of the effect of interpolation uncertainty on estimation of quantities of interest based on the metamodel. A class of transformed space-filling designs and a sequential design approach are proposed to efficiently construct the SIP with any desired number of runs. Simulations demonstrate that the SIP can outperform Gaussian process (GP) emulators. This article has supplementary material online.

5.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given accuracy level, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy levels. Simulation results and a real example using finite element analysis show that our method outperforms the expected improvement (EI) criterion, which is designed for single-accuracy experiments. Supplementary materials for this article are available online.

6.
Technometrics, 2013, 55(4): 527-541
Computer simulation is often used to study complex physical and engineering processes. Although a computer simulator often can be viewed as an inexpensive way to gain insight into a system, it can still be computationally costly. Much of the recent work on the design and analysis of computer experiments has focused on scenarios where the goal is to fit a response surface or to optimize the process. In this article we develop a sequential methodology for estimating a contour from a complex computer code. The approach uses a stochastic process model as a surrogate for the computer simulator. The surrogate model and its associated uncertainty are key components in a new criterion used to identify the computer trials aimed specifically at improving the contour estimate. The proposed approach is applied to the exploration of a contour for a network queuing system. Issues related to the practical implementation of the proposed approach are also addressed.
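As a rough illustration of the sequential idea, and not the article's actual criterion, the sketch below scores each candidate run by how uncertain the GP surrogate is there and how close its prediction is to the target contour level, then adds the best-scoring run; the toy simulator, candidate pool, and heuristic score are all assumptions.

```python
# Hedged sketch: pick the next run where the GP is both uncertain and predicted
# to be near the contour level a (a simple heuristic, not the article's criterion).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):                          # toy stand-in for the expensive code
    return np.sin(6.0 * x[:, 0]) + x[:, 1] ** 2

a = 0.5                                    # contour level of interest
rng = np.random.default_rng(0)
X = rng.uniform(size=(12, 2))              # initial design
y = simulator(X)

for _ in range(10):                        # sequential stage
    gp = GaussianProcessRegressor(kernel=RBF([0.2, 0.2]), alpha=1e-8).fit(X, y)
    cand = rng.uniform(size=(2000, 2))                         # candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    score = sd * norm.pdf((a - mu) / np.maximum(sd, 1e-12))    # high sd and mu close to a
    x_new = cand[np.argmax(score)]
    X = np.vstack([X, x_new])
    y = np.append(y, simulator(x_new.reshape(1, -1)))
```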

7.
Computer experiments are frequently used to study and improve a process. Optimizing such a process based directly on the computer model is costly, so an approximation of the computer model, or metamodel, is used. Efficient global optimization (EGO) is a sequential optimization method for computer experiments based on a Gaussian process model approximation to the computer model response. A long-standing problem in EGO is that it does not consider the uncertainty in the parameter estimates of the Gaussian process. Treating these estimates as if they were the true parameters leads to an improper assessment of the precision of the approximation, a precision that is crucial to assess not only in optimization but in metamodeling in general. One way to account for these uncertainties is to use bootstrapping, as studied by previous authors. Alternatively, other authors have suggested that a Bayesian approach may be the best way to incorporate parameter uncertainty into the optimization, but no fully Bayesian approach for EGO has been implemented in practice. In this paper, we present a fully Bayesian implementation of the EGO method. The proposed Bayesian EGO algorithm is validated through simulation of noisy nonlinear functions and compared with the standard EGO method and the bootstrapped EGO. We also apply the Bayesian EGO algorithm to the optimization of a stochastic computer model. It is shown how a Bayesian approach to EGO allows one to optimize any function of the posterior predictive density. Copyright © 2014 John Wiley & Sons, Ltd.
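For reference, the expected-improvement criterion at the heart of standard EGO has the closed form EI(x) = (y_min − μ(x))Φ(z) + σ(x)φ(z) with z = (y_min − μ(x))/σ(x). The sketch below computes it from a plug-in GP fit, which is exactly the treatment of parameter estimates that the fully Bayesian EGO in this paper improves on; the kernel choice and test function are illustrative.

```python
# Plug-in EI for standard EGO (the baseline the fully Bayesian EGO improves on).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(gp, X_cand, y_min):
    mu, sd = gp.predict(X_cand, return_std=True)
    sd = np.maximum(sd, 1e-12)
    z = (y_min - mu) / sd
    return (y_min - mu) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(10, 1))
y = (X[:, 0] ** 2) * np.sin(X[:, 0])                     # toy objective to minimize

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
X_cand = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)
ei = expected_improvement(gp, X_cand, y.min())
x_next = X_cand[np.argmax(ei)]                           # next run proposed by EGO
```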

8.
The construction of decision-theoretic Bayesian designs for realistically complex nonlinear models is computationally challenging, as it requires the optimization of analytically intractable expected utility functions over high-dimensional design spaces. We provide the most general solution to date for this problem through a novel approximate coordinate exchange algorithm. This methodology uses a Gaussian process emulator to approximate the expected utility as a function of a single design coordinate in a series of conditional optimization steps. It is flexible enough to address problems with any choice of utility function and a wide range of statistical models with different numbers of variables, numbers of runs, and randomization restrictions. In contrast to existing approaches to Bayesian design, the method can find multi-variable designs in large numbers of runs without resorting to asymptotic approximations to the posterior distribution or expected utility. The methodology is demonstrated on a variety of challenging examples of practical importance, including design for pharmacokinetic models and design for mixed models with discrete data. For many of these models, Bayesian designs are not currently available. Comparisons are made to results from the literature and to designs obtained from asymptotic approximations. Supplementary materials for this article are available online.
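A toy-scale sketch of the coordinate-exchange idea may help: for one design coordinate at a time, a Monte Carlo approximation of the expected utility is evaluated at a few trial values, a GP emulator smooths those noisy evaluations over a fine grid, and the coordinate is moved to the emulator's maximizer. The utility below (a pseudo-Bayesian D-optimality criterion for a one-factor quadratic model) and all sizes are placeholders, not the article's examples.

```python
# Hedged, toy-scale sketch of approximate coordinate exchange with a GP emulator
# of a noisy Monte Carlo expected utility (placeholder utility and sizes).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def expected_utility(design, n_mc=50, rng=None):
    # placeholder utility: expected log det of the information matrix of a
    # one-factor quadratic model, averaged over draws of an error variance
    rng = np.random.default_rng(rng)
    F = np.column_stack([np.ones_like(design), design, design ** 2])
    vals = [np.linalg.slogdet(F.T @ F / rng.gamma(2.0, 0.5))[1] for _ in range(n_mc)]
    return float(np.mean(vals))

def ace_pass(design, n_trial=8, seed=None):
    rng = np.random.default_rng(seed)
    design = np.asarray(design, dtype=float).copy()
    grid = np.linspace(-1.0, 1.0, 201).reshape(-1, 1)
    for i in range(len(design)):                     # one conditional 1-D optimization per coordinate
        trials = np.linspace(-1.0, 1.0, n_trial)
        utils = []
        for t in trials:
            trial_design = design.copy()
            trial_design[i] = t
            utils.append(expected_utility(trial_design, rng=rng))
        gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-2))
        gp.fit(trials.reshape(-1, 1), np.asarray(utils))   # emulate the noisy utility
        design[i] = grid[np.argmax(gp.predict(grid)), 0]   # move coordinate to emulated maximizer
    return design

design0 = np.random.default_rng(3).uniform(-1.0, 1.0, size=6)   # 6-run, one-factor design
design1 = ace_pass(design0, seed=4)
```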

9.
In the past two decades, more and more quality and reliability activities have been moving into the design of products and processes. The design and analysis of computer experiments, as a new frontier of the design of experiments, has become increasingly popular among modern companies for optimizing product and process conditions and producing high-quality yet low-cost products and processes. This article mainly focuses on the issue of constructing cheap metamodels as alternatives to expensive computer simulators and proposes a new metamodeling method based on the Gaussian stochastic process model, or Gaussian Kriging. Rather than a constant mean as in ordinary Kriging or a fixed mean function as in universal Kriging, the new method captures the overall trend of the performance characteristics of products and processes through a more accurate mean, by efficiently incorporating a scheme of sparseness-prior-based Bayesian inference into Kriging. Meanwhile, the mean model is able to adaptively exclude the unimportant effects that deteriorate the prediction performance. Results from empirical applications demonstrate that, compared with several benchmark methods in the literature, the proposed Bayesian method is not only much more effective in approximation but also very efficient in implementation, and hence more appropriate than the widely used ordinary Kriging for real-world empirical applications. Copyright © 2011 John Wiley & Sons, Ltd.

10.
Highly fractionated designs are often used for screening purposes in order to reduce the number of runs. In such fractional designs, the factor effects are confounded with interactions that are potentially active. Separately estimating the confounded effects is sometimes difficult, and the results may be inconclusive as to which effect is real. Foldover designs are usually used to resolve such ambiguities. In this article, a method based on overlapping designs is proposed that separates the confounded effects with a smaller number of runs. We illustrate this with an application.
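For contrast with the overlapping-design approach, the classical full-foldover remedy simply appends the mirror image of the fraction, doubling the run size; a minimal sketch on an illustrative 2^(3-1) fraction:

```python
# Classical full foldover of a 2^(3-1) resolution-III fraction (shown only for
# contrast with the article's overlapping-design method, which adds fewer runs).
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=2)))   # full factorial in A, B
frac = np.column_stack([base, base[:, 0] * base[:, 1]])        # add C = AB: main effects aliased with 2fis
foldover = -frac                                               # reverse every sign
combined = np.vstack([frac, foldover])                         # 8 runs; main effects now clear of 2fis
```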

11.
Robust parameter designs are widely used to produce products/processes that perform consistently well across various conditions known as noise factors. Recently, robust parameter design has been implemented in computer experiments. The conventional product-array structure becomes unsuitable because of its large number of runs and its reliance on polynomial modeling. In this article, we propose a new framework, robust parameter design via stochastic approximation (RPD-SA), to efficiently optimize robust parameter design criteria. It can be applied to general robust parameter design problems, but is particularly powerful in the context of computer experiments. It has the following four advantages: (1) fast convergence to the optimal product setting with fewer function evaluations; (2) incorporation of high-order effects of both design and noise factors; (3) adaptation to constrained, irregular regions of operability; (4) no requirement for a statistical analysis phase. In the numerical studies, we compare RPD-SA to Monte Carlo sampling with Newton–Raphson-type optimization. An “Airfoil” example is used to compare the performance of RPD-SA, conventional product array designs, and space-filling designs with the Gaussian process. The studies show that RPD-SA has preferable performance in terms of effectiveness, efficiency, and reliability.
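A rough sense of the stochastic-approximation idea, though not the article's RPD-SA algorithm itself, can be conveyed with an SPSA-style loop that draws a fresh noise realization at every iteration and nudges the control setting using a two-evaluation gradient estimate of a quadratic loss; the toy simulator, loss, and gain sequences below are all placeholders.

```python
# Hedged sketch: SPSA-style stochastic approximation for a robust-design loss
# (placeholder simulator and tuning constants, not the article's RPD-SA).
import numpy as np

def simulator(x, z):
    # toy computer model: response depends on control x (2-d) and noise z (2-d)
    return (x[0] - 1.0) ** 2 + x[1] * z[0] + 0.5 * (x[1] - z[1]) ** 2

def rpd_spsa(x0, target=0.0, n_iter=200, a=0.1, c=0.1, seed=None):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101           # standard SPSA gain sequences
        delta = rng.choice([-1.0, 1.0], size=x.size)       # random perturbation direction
        z = rng.normal(size=2)                             # fresh noise draw each iteration
        loss_plus = (simulator(x + ck * delta, z) - target) ** 2
        loss_minus = (simulator(x - ck * delta, z) - target) ** 2
        grad = (loss_plus - loss_minus) / (2.0 * ck) * (1.0 / delta)   # two-evaluation gradient
        x = x - ak * grad
    return x

x_robust = rpd_spsa(x0=[0.0, 0.0], seed=0)   # approximately robust control setting
```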

12.
Robust parameter design with computer experiments is becoming increasingly important for product design. Existing methodologies for this problem are mostly aimed at finding optimal control factor settings. However, in some cases, the objective of the experimenter may be to understand how the noise and control factors contribute to variation in the response. The functional analysis of variance (ANOVA) and variance decompositions of the response, in addition to the mean and variance models, help achieve this objective. Estimation of these quantities is not easy, and few methods are able to quantify the estimation uncertainty. In this article, we show that the use of an orthonormal polynomial model of the simulator leads to simple formulas for the functional ANOVA and variance decompositions, as well as the mean and variance models. We show that estimation uncertainty can be taken into account in a simple way by first fitting a Gaussian process model to the experiment data and then approximating it with the orthonormal polynomial model. This leads to a joint normal distribution for the polynomial coefficients that quantifies the estimation uncertainty. Supplementary materials for this article are available online.
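The simplification the authors exploit comes from orthonormality: when the metamodel is written in an orthonormal polynomial basis with respect to the input distribution, the total variance is the sum of the squared coefficients of the nonconstant terms, and each factor's or interaction's contribution is the sum over the terms that involve it. The sketch below illustrates this for two independent inputs with given expansion coefficients; in the article those coefficients would come from approximating a fitted GP.

```python
# Variance decomposition from an orthonormal polynomial expansion (illustrative
# coefficients; the article obtains them by approximating a fitted GP).
import numpy as np

# Model: y(x1, x2) = sum over (i, j) of c[i, j] * P_i(x1) * P_j(x2), where P_k is an
# orthonormal polynomial of degree k under the input distribution. Orthonormality
# makes every functional-ANOVA quantity a sum of squared coefficients.
c = np.zeros((3, 3))
c[0, 0] = 1.0      # constant term (does not contribute to variance)
c[1, 0] = 0.8      # main effect of x1, degree 1
c[2, 0] = 0.3      # main effect of x1, degree 2
c[0, 1] = 0.5      # main effect of x2, degree 1
c[1, 1] = 0.2      # x1-x2 interaction

sq = c ** 2
total_var = sq.sum() - sq[0, 0]          # all nonconstant terms
var_x1 = sq[1:, 0].sum()                 # terms involving x1 only
var_x2 = sq[0, 1:].sum()                 # terms involving x2 only
var_x1x2 = sq[1:, 1:].sum()              # genuine interaction terms

sobol_indices = np.array([var_x1, var_x2, var_x1x2]) / total_var
```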

13.
To identify robust settings of the control factors, it is very important to understand how they interact with the noise factors. In this article, we propose space-filling designs for computer experiments that are better able to accurately estimate the control-by-noise interactions. Moreover, existing space-filling designs focus on distributing points uniformly in the design space, which is not suitable for noise factors because they usually follow nonuniform distributions such as the normal distribution. This would suggest placing more points in the regions with high probability mass. However, noise factors also tend to have a smooth relationship with the response, and therefore placing more points toward the tails of the distribution is also useful for accurately estimating the relationship. These two opposing effects make the experimental design problem challenging. We propose optimal and computationally efficient solutions to this problem and demonstrate their advantages using simulated examples and a real industry example involving a manufacturing packing line. Supplementary materials for the article are available online.

14.
Routine applications of design of experiments (DOE) by non-mathematicians assume that each response value is static in nature, i.e. with an expected value that is constant for a given set of input factor settings. When this assumption is not valid, it is important to capture the dynamic characteristics of the response for effective process or system characterization, monitoring, and control. With the purpose of recognizing the self-changing nature of the response owing to factors other than those built into the DOE, and thereby gaining a better ability to shape the behavior of the response, this paper describes the reasoning and procedure needed for such ‘parametric responses’ via common techniques of mathematical modeling and optimization. The procedure is intuitive but essential and useful in DOE studies as these become increasingly popular among practitioners in the context of improvement projects such as those related to Six Sigma or stand-alone performance optimization initiatives. Copyright © 2013 John Wiley & Sons, Ltd.

15.
Multivariate testing is a popular method to improve websites, mobile apps, and email campaigns. A unique aspect of testing in the online space is that it needs to be conducted across multiple platforms, such as a desktop and a smartphone. The existing experimental design literature does not offer precise guidance for such a multi-platform context. In this article, we introduce a multi-platform design framework that allows us to measure the effect of the design factors for each platform and the interaction effect of the design factors with platforms. Substantively, the resulting designs are of great importance for testing digital campaigns across platforms. We illustrate this in an empirical email application to maximize engagement for a digital magazine. We introduce a novel “sliced effect hierarchy principle” and develop design criteria to generate factorial designs for multi-platform experiments. To help construct such designs, we prove a theorem that connects the proposed designs to the well-known minimum aberration designs. We find that the experimental versions made for one platform should be similar to those for other platforms. From the standpoint of real-world application, such homogeneous subdesigns are cheaper to implement. To assist practitioners, we provide an algorithm to construct the designs that we propose.

16.
In the development of 450 mm wafer etching machines, deterministic simulation is used extensively to model the physical and chemical environment inside the chamber, and the simulation results guide the detailed design of the equipment structure. To control the sampling scale of the simulation experiments and shorten the development cycle, this article presents a new sequential design method based on sampling density and nonlinearity. The method uses Monte Carlo sampling to obtain sampling-density information over the design space and adds sample points in regions of low sampling density. In addition, by exploring the neighborhood of each sample, gradient and nonlinearity information is obtained, and further sample points are added in highly nonlinear regions. Using the confinement-ring design model of a 450 mm etcher and the Goldstein-Price model as test cases, sampling was carried out with both Latin hypercube designs and the new sequential design method, and the results were compared in terms of surrogate-model accuracy and feature-capturing ability. The results show that, at the same accuracy, the new sequential design method effectively reduces the sampling scale and meets the needs of etching equipment design.
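A hedged sketch of the low-sampling-density half of such a strategy (the nonlinearity-driven refinement is omitted): local density is judged by the distance from Monte Carlo probe points to the current design, and the probe farthest from any existing sample is added; names and sizes are illustrative.

```python
# Hedged sketch: add points where sampling density is lowest, judged by the
# nearest-neighbor distance of Monte Carlo probes to the current design.
import numpy as np
from scipy.spatial.distance import cdist

def add_low_density_points(design, n_add, n_candidates=2000, seed=None):
    rng = np.random.default_rng(seed)
    design = np.asarray(design, dtype=float)
    d = design.shape[1]
    for _ in range(n_add):
        candidates = rng.uniform(size=(n_candidates, d))               # Monte Carlo probes
        nn_dist = cdist(candidates, design).min(axis=1)                 # distance to nearest sample
        design = np.vstack([design, candidates[np.argmax(nn_dist)]])    # fill the emptiest region
    return design

initial = np.random.default_rng(0).uniform(size=(10, 2))
augmented = add_low_density_points(initial, n_add=5, seed=1)
```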

17.
This article is motivated by a computer experiment conducted for optimizing residual stresses in the machining of metals. Although kriging is widely used in the analysis of computer experiments, it cannot be easily applied to model the residual stresses because they are obtained as a profile. The high dimensionality caused by this functional response introduces severe computational challenges in kriging. It is well known that if the functional data are observed on a regular grid, the computations can be simplified using an application of Kronecker products. However, the case of irregular grid is quite complex. In this article, we develop a Gibbs sampling-based expectation maximization algorithm, which converts the irregularly spaced data into a regular grid so that the Kronecker product-based approach can be employed for efficiently fitting a kriging model to the functional data. Supplementary materials are available online.
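The regular-grid shortcut that motivates the algorithm: when the functional responses are observed at the same profile points for every input site, the covariance of all observations factors as a Kronecker product, so linear solves can use the two small factors instead of the full matrix. A small illustration with squared-exponential correlations and toy sizes, not the article's residual-stress data:

```python
# Kronecker shortcut for kriging with functional responses on a common regular grid.
import numpy as np

def sq_exp_corr(points, length_scale, nugget=1e-6):
    diff = points[:, None] - points[None, :]
    return np.exp(-(diff / length_scale) ** 2) + nugget * np.eye(len(points))

x_sites = np.linspace(0.0, 1.0, 6)      # input sites (1-D for illustration)
t_grid = np.linspace(0.0, 1.0, 15)      # common profile grid across all sites

K_x = sq_exp_corr(x_sites, 0.3)         # 6 x 6 correlation across input sites
K_t = sq_exp_corr(t_grid, 0.1)          # 15 x 15 correlation along the profile

Y = np.sin(4.0 * t_grid)[:, None] * (1.0 + x_sites)[None, :]   # toy 15 x 6 profile data

# Solve kron(K_x, K_t) @ vec(alpha) = vec(Y) with only the small factors, using the
# vec identity (A ⊗ B) vec(V) = vec(B V Aᵀ), so alpha = K_t^{-1} Y K_x^{-1}.
alpha = np.linalg.solve(K_x, np.linalg.solve(K_t, Y).T).T

# Equivalent direct solve with the full 90 x 90 matrix, for comparison at toy size.
alpha_direct = np.linalg.solve(np.kron(K_x, K_t),
                               Y.flatten(order="F")).reshape(15, 6, order="F")
```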

18.
19.
Classical D-optimal design is used to create experimental designs for situations in which an underlying system model is known or assumed known. The D-optimal strategy can also be used to add experimental runs to an existing design. This paper presents a study of variable choices related to sequential D-optimal design and how those choices influence the D-efficiency of the resulting complete design. The variables studied are total sample size, initial experimental design size, step size, whether or not to include center points in the initial design, and complexity of the initial model assumption. The results indicate that increasing the total sample size improves the D-efficiency of the design, that less effort should be placed in the initial design, especially when the true underlying system model is not known, and that it is better to start by assuming a simpler model form rather than a complex one, provided that the experimenter can reach the true model form during the sequential experiments. Copyright © 2013 John Wiley & Sons, Ltd.
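A minimal sketch of sequential D-optimal augmentation under an assumed simple model (the candidate grid, initial design, and model are placeholders): each step adds the candidate point that maximizes the determinant of the updated information matrix X'X.

```python
# Greedy sequential D-optimal augmentation under an assumed first-order-plus-
# interaction model (a deliberately simple initial model, per the findings above).
import itertools
import numpy as np

def model_matrix(x):
    x = np.asarray(x, dtype=float)
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])

def logdet_information(design):
    X = model_matrix(design)
    return np.linalg.slogdet(X.T @ X)[1]      # log det(X'X); larger is more D-efficient

candidates = np.array(list(itertools.product(np.linspace(-1, 1, 5), repeat=2)))       # 5 x 5 grid
design = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0], [0.0, 0.0]])   # initial runs + center

for _ in range(4):                            # sequentially add 4 runs, one at a time
    scores = [logdet_information(np.vstack([design, c[None, :]])) for c in candidates]
    design = np.vstack([design, candidates[int(np.argmax(scores))]])
```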

20.
Laser deposition of materials represents a modern additive technology that has a number of advantages over other technologies for depositing metallic materials. Besides low energy input, a quality bond, and a minimal heat-affected zone, this technology is also characterized by good mechanical properties of the deposited material as a result of rapid cooling. Despite these prospects, the technology is still in the development phase. New materials and techniques for determining optimal process parameters are being introduced. In this article, we developed a system for modeling (predicting) the properties of the deposited material and used design of experiments (DOE) for laser cladding process parameter selection. Based on the experimental data obtained during the cladding process, models were built for predicting the volume and roughness of the deposited material. Genetic programming was used for modeling the process. Then, a set of several thousand possible combinations (settings) of the machine parameters was produced on the basis of the obtained models. The most appropriate machine (process) parameters were selected in terms of deposition speed, powder efficiency, and surface roughness. These parameters were determined by nondominated sorting. The results offer the operator of the machine a set of appropriate process parameters that enable the production of high-quality products.
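The final selection step described above amounts to extracting the nondominated (Pareto-optimal) settings from the table of predicted outcomes; a minimal sketch, assuming deposition speed and powder efficiency are maximized and surface roughness minimized, with illustrative data:

```python
# Nondominated (Pareto) filtering of predicted outcomes for candidate parameter sets.
# Convention: every column is oriented so that larger is better.
import numpy as np

def nondominated(points):
    """Return a boolean mask of rows not dominated by any other row (maximization)."""
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominates_i = np.all(points >= points[i], axis=1) & np.any(points > points[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(0)
# columns: deposition speed (max), powder efficiency (max), surface roughness (min)
predicted = rng.uniform(size=(5000, 3))
oriented = predicted * np.array([1.0, 1.0, -1.0])    # negate roughness so "larger is better"
pareto_settings = predicted[nondominated(oriented)]   # candidate settings offered to the operator
```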
