Similar Documents
20 similar documents found.
1.
This paper presents a novel hybrid polynomial dimensional decomposition (PDD) method for stochastic computing in high-dimensional complex systems. When a stochastic response possesses neither a strongly additive nor a strongly multiplicative structure, the existing additive and multiplicative PDD methods may not provide a sufficiently accurate probabilistic solution of such a system. To circumvent this problem, a new hybrid PDD method was developed that is based on a linear combination of an additive and a multiplicative PDD approximation, a broad range of orthonormal polynomial bases for Fourier-polynomial expansions of component functions, and a dimension-reduction or sampling technique for estimating the expansion coefficients. Two numerical problems involving mathematical functions or uncertain dynamic systems were solved to study how and when a hybrid PDD is more accurate and efficient than the additive or the multiplicative PDD. The results show that the univariate hybrid PDD method is slightly more expensive than the univariate additive or multiplicative PDD approximations, but it yields significantly more accurate stochastic solutions than the latter two methods. Therefore, the univariate truncation of the hybrid PDD is ideally suited to solving stochastic problems that may otherwise mandate expensive bivariate or higher-variate additive or multiplicative PDD approximations. Finally, a coupled acoustic-structural analysis of a pickup truck subjected to 46 random variables was performed, demonstrating the ability of the new method to solve large-scale engineering problems.
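A minimal numerical sketch of the blending idea behind a hybrid approximation: two univariate surrogates, one additive and one multiplicative, are combined linearly, with the weights fitted by least squares on a small sample. The toy response, the surrogates, and the weight-fitting step are illustrative assumptions, not the paper's exact hybrid PDD construction.

```python
# Hypothetical sketch: blend an additive and a multiplicative univariate
# surrogate of a toy response; the blending weights are fitted by least
# squares on a small validation sample.
import numpy as np

rng = np.random.default_rng(0)

def y_exact(x):                      # toy response with mixed structure
    return 1.0 + x[:, 0] + x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

def y_additive(x):                   # y0 + sum_i y_i(x_i)
    return 1.0 + x[:, 0] + x[:, 1]

def y_multiplicative(x):             # y0 * prod_i (1 + w_i(x_i))
    return 1.0 * (1.0 + x[:, 0]) * (1.0 + x[:, 1])

x_val = rng.normal(size=(200, 2))    # small validation sample
A = np.column_stack([y_additive(x_val), y_multiplicative(x_val)])
alpha, beta = np.linalg.lstsq(A, y_exact(x_val), rcond=None)[0]

def y_hybrid(x):                     # hybrid = linear combination of the two
    return alpha * y_additive(x) + beta * y_multiplicative(x)

x_test = rng.normal(size=(1000, 2))
for name, f in [("additive", y_additive), ("multiplicative", y_multiplicative),
                ("hybrid", y_hybrid)]:
    err = np.sqrt(np.mean((f(x_test) - y_exact(x_test)) ** 2))
    print(f"{name:15s} RMS error: {err:.3f}")
```

For this toy response the hybrid error is essentially zero, because the true function happens to be an equal-weight blend of the two structures; in general the weights simply trade off the two approximation errors.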

2.
This paper presents a polynomial dimensional decomposition (PDD) method for global sensitivity analysis of stochastic systems subject to independent random inputs following arbitrary probability distributions. The method involves Fourier-polynomial expansions of lower-variate component functions of a stochastic response by measure-consistent orthonormal polynomial bases, analytical formulae for calculating the global sensitivity indices in terms of the expansion coefficients, and dimension-reduction integration for estimating the expansion coefficients. Because PDD and the analysis-of-variance decomposition share an identical dimensional structure, the proposed method facilitates simple and direct calculation of the global sensitivity indices. Numerical results for the global sensitivity indices of smooth systems reveal significantly higher convergence rates of the PDD approximation than those of existing methods, including polynomial chaos expansion, random balance design, state-dependent parameter, improved Sobol's method, and sampling-based methods. For non-smooth functions, however, the convergence properties of the PDD solution deteriorate considerably, warranting further improvements. The computational complexity of the PDD method is polynomial, as opposed to exponential, thereby alleviating the curse of dimensionality to some extent.
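The "analytical formulae in terms of the expansion coefficients" can be made concrete with a tiny example: for an expansion in measure-consistent orthonormal polynomials, the variance is the sum of squared non-constant coefficients, and each global sensitivity index collects the squared coefficients whose basis functions involve exactly that subset of variables. The coefficients below are made up for illustration.

```python
# Global sensitivity indices read off directly from orthonormal-expansion
# coefficients; the coefficient values are hypothetical.
import numpy as np

coeffs = {
    (0, 0): 2.0,                        # constant term (no variance contribution)
    (1, 0): 0.8, (2, 0): 0.3,           # terms depending on x1 only
    (0, 1): 0.5, (0, 3): 0.1,           # terms depending on x2 only
    (1, 1): 0.4, (2, 1): 0.2,           # joint x1-x2 terms
}

variance = sum(c**2 for k, c in coeffs.items() if any(k))

def sensitivity_index(active_vars):
    """Sum of squared coefficients whose support is exactly `active_vars`."""
    total = sum(c**2 for k, c in coeffs.items()
                if {i for i, d in enumerate(k) if d > 0} == set(active_vars))
    return total / variance

print("S_1  =", round(sensitivity_index({0}), 4))
print("S_2  =", round(sensitivity_index({1}), 4))
print("S_12 =", round(sensitivity_index({0, 1}), 4))
```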

3.
This paper presents three new computational methods for calculating design sensitivities of statistical moments and reliability of high-dimensional complex systems subject to random input. The first method represents a novel integration of the polynomial dimensional decomposition (PDD) of a multivariate stochastic response function and score functions. Applied to the statistical moments, the method provides mean-square convergent analytical expressions of the design sensitivities of the first two moments of a stochastic response. The second and third methods, relevant to probability distribution or reliability analysis, exploit two distinct combinations built on PDD: the PDD-saddlepoint approximation (SPA) or PDD-SPA method, entailing SPA and score functions, and the PDD-Monte Carlo simulation (MCS) or PDD-MCS method, utilizing the embedded MCS of the PDD approximation and score functions. For all three methods, the statistical moments or failure probabilities and their design sensitivities are determined concurrently from a single stochastic analysis or simulation. Numerical examples, including a 100-dimensional mathematical problem, indicate that the new methods provide not only theoretically convergent or accurate design sensitivities but also computationally efficient solutions. A practical example involving robust design optimization of a three-hole bracket illustrates the usefulness of the proposed methods. Copyright © 2014 John Wiley & Sons, Ltd.
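A sketch of the score-function mechanism that lets a moment and its design sensitivity come from a single analysis: for a Gaussian input with mean mu, the derivative of E[y(X)] with respect to mu equals E[y(X) s(X)] with score s(X) = (X - mu)/sigma^2, so both quantities are estimated from the same Monte Carlo sample. The response y is a placeholder, and plain sampling stands in for the PDD machinery.

```python
# Score-function estimate of a mean and its sensitivity from one sample;
# the response and input parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 2.0, 0.5, 200_000
x = rng.normal(mu, sigma, size=n)

y = x**3 + np.sin(x)                       # placeholder stochastic response
score = (x - mu) / sigma**2                # d log N(x; mu, sigma) / d mu

mean_estimate = y.mean()                   # statistical moment ...
sensitivity_estimate = (y * score).mean()  # ... and its design sensitivity,
                                           # from the same sample

# Finite-difference check with common random numbers, for comparison only.
z, h = rng.standard_normal(n), 1e-3
def mean_y(m):
    xs = m + sigma * z
    return (xs**3 + np.sin(xs)).mean()
fd = (mean_y(mu + h) - mean_y(mu - h)) / (2 * h)

print("E[y]              :", round(mean_estimate, 3))
print("dE[y]/dmu (score) :", round(sensitivity_estimate, 3))
print("dE[y]/dmu (FD)    :", round(fd, 3))
```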

4.
A nonparametric probabilistic approach for modeling uncertainties in projection-based, nonlinear, reduced-order models is presented. When experimental data are available, this approach can also quantify uncertainties in the associated high-dimensional models. The main underlying idea is twofold: first, to substitute the deterministic reduced-order basis (ROB) with a stochastic counterpart; second, to construct the probability measure of the stochastic reduced-order basis (SROB) on a subset of a compact Stiefel manifold in order to preserve some important properties of a ROB. The stochastic modeling is performed so that the probability distribution of the constructed SROB depends on a small number of hyperparameters, which are determined by solving a reduced-order statistical inverse problem. The mathematical properties of this novel approach for quantifying model uncertainties are analyzed through theoretical developments and numerical simulations. Its potential is demonstrated through several example problems from computational structural dynamics. Copyright © 2016 John Wiley & Sons, Ltd.
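A crude illustration of one ingredient only: randomizing a deterministic ROB while keeping every realization's columns orthonormal, i.e. on the Stiefel manifold, here by QR re-orthonormalization of a perturbed basis controlled by a single dispersion parameter. The paper's actual probability measure on the manifold and its hyperparameter identification are far more structured than this sketch.

```python
# Hypothetical sketch: sample a "stochastic ROB" that stays on the Stiefel
# manifold (orthonormal columns) via QR re-orthonormalization.
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 5                                  # full and reduced dimensions
V, _ = np.linalg.qr(rng.normal(size=(n, m)))   # placeholder deterministic ROB

def sample_srob(V, dispersion, rng):
    """One realization of a stochastic ROB with orthonormal columns."""
    perturbed = V + dispersion * rng.normal(size=V.shape)
    Q, R = np.linalg.qr(perturbed)
    # Fix column signs so the realization stays close to V (QR sign ambiguity).
    return Q * np.sign(np.diag(R))

W = sample_srob(V, dispersion=0.1, rng=rng)
print("orthonormality error           :", np.linalg.norm(W.T @ W - np.eye(m)))
print("distance from deterministic ROB:", np.linalg.norm(W - V))
```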

5.
An anchored analysis of variance (ANOVA) method is proposed in this paper to decompose statistical moments. Compared with the standard ANOVA with mutually orthogonal component functions, the anchored ANOVA, with an arbitrary choice of the anchor point, loses orthogonality when the same measure is employed. However, an advantage of the anchored ANOVA lies in the considerably reduced number of deterministic solver evaluations, which makes the uncertainty quantification of real engineering problems much easier. Different from existing methods, the covariance decomposition of the output variance is used in this work to account for the interactions between non-orthogonal components, yielding an exact variance expansion and thus, with a suitable numerical integration method, a convergent strategy. This convergence is verified on academic test cases. In particular, the sensitivity of existing methods to the choice of anchor point is analyzed via the Ishigami case, and we show that the covariance decomposition is free of this issue. Moreover, with a truncated anchored ANOVA expansion, numerical results show that the proposed approach is less sensitive to the anchor point. Covariance-based sensitivity indices (SI) are also computed and compared with variance-based SI. Furthermore, we emphasize that the covariance decomposition can be generalized in a straightforward way to decompose higher-order moments; for academic problems, the results show that the method converges to the exact solution for both the skewness and the kurtosis. Finally, the proposed method is applied to a realistic case: estimating the uncertainties of chemical reactions in a hypersonic flow around a space vehicle during atmospheric reentry. Copyright © 2015 John Wiley & Sons, Ltd.
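A small numerical illustration, on a toy two-variable function with an arbitrary anchor point, of the covariance decomposition: because the exact anchored expansion satisfies Var(f) = sum over components of Cov(f_u, f), the covariance of each retained component with the full response measures its variance contribution even though the components are not mutually orthogonal, and a first-order truncation simply leaves part of the variance uncovered. The function, anchor, and sample size are assumptions made for illustration.

```python
# Anchored (cut-type) first-order components and their covariance-based
# variance contributions for a toy function; all choices are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def f(x1, x2):                         # toy response
    return np.sin(x1) + 0.5 * x2**2 + 0.3 * x1 * x2

c = np.array([0.7, -0.4])              # arbitrary anchor point

# First-order anchored components about the anchor.
f0 = f(c[0], c[1])
def f1(x1): return f(x1, c[1]) - f0
def f2(x2): return f(c[0], x2) - f0

x1 = rng.uniform(-np.pi, np.pi, 500_000)
x2 = rng.uniform(-np.pi, np.pi, 500_000)
y = f(x1, x2)

total_var = y.var()
cov1 = np.cov(f1(x1), y)[0, 1]         # covariance-based contribution of x1
cov2 = np.cov(f2(x2), y)[0, 1]         # covariance-based contribution of x2

print("total variance        :", round(total_var, 3))
print("covariance-based S1,S2:", round(cov1 / total_var, 3), round(cov2 / total_var, 3))
print("first-order coverage  :", round((cov1 + cov2) / total_var, 3))
```

The coverage printed at the end is below one because the second-order (interaction) component is omitted from the truncated expansion.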

6.
A dimensional decomposition method is presented for calculating the probabilistic characteristics of complex-valued eigenvalues and eigenvectors of linear, stochastic, dynamic systems. The method involves a function decomposition allowing lower-dimensional approximations of eigensolutions, Lagrange interpolation of the lower-dimensional component functions, and Monte Carlo simulation. Compared with the commonly used perturbation method, the method developed requires neither the assumption of small input variability nor the calculation of derivatives of eigensolutions. Results of numerical examples from linear stochastic dynamics indicate that the decomposition method provides excellent estimates of the moments and/or probability densities of eigenvalues and eigenvectors for various cases, including large statistical variations of the input. Copyright © 2007 John Wiley & Sons, Ltd.
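A minimal sketch of the univariate version of such a decomposition for a random eigenvalue: the lowest eigenvalue of a toy two-degree-of-freedom spring-mass system with two random stiffnesses is approximated by one-dimensional cuts through the mean point, each cut built by Lagrange interpolation of a handful of exact eigensolutions, and the inexpensive approximation is then sampled by Monte Carlo. The system, node placement, and input distributions are illustrative assumptions.

```python
# Univariate dimensional decomposition of the lowest eigenvalue of a toy
# 2-DOF system (unit masses), with Lagrange-interpolated 1-D cuts and
# Monte Carlo resampling of the approximation.
import numpy as np
from scipy.interpolate import lagrange

def lowest_eig(k1, k2):
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.min(np.linalg.eigvalsh(K))      # mass matrix = identity

mu = np.array([100.0, 50.0])                  # mean stiffnesses
nodes = np.linspace(0.7, 1.3, 5)              # 5 interpolation nodes per variable

# 1-D Lagrange interpolants of the eigenvalue along each coordinate cut.
cut1 = lagrange(nodes * mu[0], [lowest_eig(k, mu[1]) for k in nodes * mu[0]])
cut2 = lagrange(nodes * mu[1], [lowest_eig(mu[0], k) for k in nodes * mu[1]])
lam_mean = lowest_eig(mu[0], mu[1])

def lam_univariate(k1, k2):                   # univariate decomposition
    return cut1(k1) + cut2(k2) - lam_mean

rng = np.random.default_rng(4)
k1 = rng.lognormal(np.log(mu[0]), 0.1, 100_000)
k2 = rng.lognormal(np.log(mu[1]), 0.1, 100_000)

approx = lam_univariate(k1, k2)
exact = np.array([lowest_eig(a, b) for a, b in zip(k1[:2000], k2[:2000])])
print("approx mean/std:", approx.mean(), approx.std())
print("exact  mean/std (2000 samples):", exact.mean(), exact.std())
```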

7.
8.
A new generalized probabilistic approach to uncertainties is proposed for computational models in structural linear dynamics; it can be extended without difficulty to computational linear vibroacoustics and to computational nonlinear structural dynamics. This method allows the prior probability model of each type of uncertainty (model-parameter uncertainties and modeling errors) to be separately constructed and identified. The modeling errors are not taken into account with the usual output-prediction-error method, but with the recently introduced nonparametric probabilistic approach of modeling errors, which is based on random matrix theory. The theory, an identification procedure, and a numerical validation are presented. A chaos decomposition with random coefficients is then proposed to represent the prior probabilistic model of random responses. The random germ is related to the prior probability model of model-parameter uncertainties, while the random coefficients are related to the prior probability model of modeling errors and therefore depend on the random matrices introduced by the nonparametric probabilistic approach. A validation is presented. Finally, a future perspective is outlined for the case in which experimental data are available: the prior probability model of the random coefficients can be improved by constructing a posterior probability model using the Bayesian approach. Copyright © 2009 John Wiley & Sons, Ltd.

9.
As manufacturing transitions to real-time sensing, it becomes more important to handle multiple, high-dimensional (non-stationary) time series that generate thousands of measurements for each batch. Predictive models are often challenged by such high-dimensional data, and it is important to reduce the dimensionality for better performance. With thousands of measurements, even wavelet coefficients do not reduce the dimensionality sufficiently. We propose a two-stage method that uses energy statistics from a discrete wavelet transform to identify process variables and appropriate resolutions of wavelet coefficients in an initial (screening) model. Variable importance scores from a modern random forest classifier are exploited in this stage. Coefficients that correspond to the identified variables and resolutions are then selected for a second-stage predictive model. The approach is shown to provide good performance, along with interpretable results, in an example where multiple time series are used to indicate the need for preventive maintenance. In general, the two-stage approach can handle high dimensionality and still provide interpretable features linked to the relevant process variables and wavelet resolutions, which can be used for further analysis. Copyright © 2011 John Wiley & Sons, Ltd.
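A schematic of the two-stage idea using synthetic data in place of the batch sensor signals: stage one computes wavelet energy statistics per process variable and resolution (here with an assumed 'db4' wavelet and four decomposition levels) and ranks them with random-forest importance scores; stage two would retain only the coefficients behind the top-ranked (variable, resolution) pairs. The data, wavelet, and model settings are assumptions, not those of the paper.

```python
# Stage-one screening: wavelet energies per (variable, resolution) ranked by
# random-forest importance; the signals are synthetic stand-ins.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n_batches, n_vars, n_samples, level = 200, 4, 256, 4

# Synthetic batches: variable 2 carries a fault signature in class-1 batches.
labels = rng.integers(0, 2, n_batches)
signals = rng.normal(size=(n_batches, n_vars, n_samples))
signals[labels == 1, 2, 100:160] += 1.5

rows = []
for b in range(n_batches):
    row = []
    for v in range(n_vars):
        coeffs = pywt.wavedec(signals[b, v], "db4", level=level)
        row.extend(np.sum(c**2) for c in coeffs)      # energy per resolution
    rows.append(row)
X = np.array(rows)
names = [f"var{v}_res{r}" for v in range(n_vars) for r in range(level + 1)]

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
ranking = sorted(zip(forest.feature_importances_, names), reverse=True)
print("top-ranked (variable, resolution) energy features:")
for score, name in ranking[:5]:
    print(f"  {name}: {score:.3f}")
# Stage two (not shown) would keep only the wavelet coefficients behind the
# top-ranked features as inputs to the final predictive model.
```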

10.
11.
Reduced-order models that are able to approximate output quantities of interest of high-fidelity computational models over a wide range of input parameters play an important role in making large-scale optimal design, optimal control, and inverse problem applications tractable. We consider the problem of determining a reduced model of an initial value problem that spans all important initial conditions, and we pose the task of determining appropriate training sets for reduced-basis construction as a sequence of optimization problems. We show that, under certain assumptions, these optimization problems have an explicit solution in the form of an eigenvalue problem, yielding an efficient model reduction algorithm that scales well to systems with states of high dimension. Furthermore, tight upper bounds are given for the error in the outputs of the reduced models. The reduction methodology is demonstrated for a large-scale contaminant transport problem. Copyright © 2007 John Wiley & Sons, Ltd.

12.
13.
This work is a first attempt to address efficient stabilization of the high-dimensional advection-diffusion models encountered in computational physics. When addressing multidimensional models, mesh-based discretization fails because of the exponential increase in the number of degrees of freedom associated with a multidimensional mesh or grid, and alternative discretization strategies are needed. Separated representations, as used in the so-called proper generalized decomposition method, are an efficient alternative, as shown in our former works; however, to our knowledge, the issue of efficiently stabilizing multidimensional advection-diffusion equations has never been addressed. Thus, this work aims at extending some well-established stabilization strategies, widely used in the solution of 1D, 2D, or 3D advection-diffusion models, to models defined in high-dimensional spaces, sometimes involving tens of coordinates. Copyright © 2013 John Wiley & Sons, Ltd.

14.
A new algorithm is proposed for the computation of the spectral expansion of the eigenvalues and eigenvectors of a random non-symmetric matrix. The algorithm extends the deterministic inverse power method using a spectral discretization approach. The convergence and accuracy of the algorithm are studied for both symmetric and non-symmetric matrices. The method turns out to be efficient and robust compared with existing methods for the computation of the spectral expansion of random eigenvalues and eigenvectors. Copyright © 2006 John Wiley & Sons, Ltd.

15.
This paper proposes a risk-averse formulation for the problem of piezoelectric control of random vibrations of elastic structures. The proposed formulation, inspired by the notion of risk aversion in economics, is applied to the piezoelectric control of a Bernoulli-Euler beam subjected to uncertainties in its input data. To address the high computational burden associated with the presence of random fields in the model and the discontinuities involved in the cost functional and its gradient, a combination of a nonintrusive anisotropic polynomial chaos approach for uncertainty propagation with a Monte Carlo sampling method is proposed. In the first part, the well-posedness of the control problem is established by proving the existence of optimal controls. In the second part, an adaptive gradient-based method is proposed for the numerical resolution of the problem. Several experiments illustrate the performance of the proposed approach and the significant differences that may occur between the classical deterministic formulation of the problem and its stochastic risk-averse counterpart.

16.
The trend toward deep-water energy production has led to a growing use of plate anchors to moor floating production facilities. The effect of the inherent spatial variability of soil deposits on anchor uplift behaviour has so far received little attention, despite its important implications for anchor design. Spatial variability problems are commonly analysed by Monte Carlo simulation, but it is difficult to establish the probabilities of failure that are of interest in practice. In this paper, sparse polynomial chaos expansions (SPCEs) are used for moment and reliability analysis of plate anchors in spatially variable undrained clay. A novel two-stage methodology is proposed: in the first stage, an SPCE is constructed to meet a target global error, allowing statistical moments of the uplift capacity to be obtained; in the second stage, an active learning method is used to refine the SPCE for reliability analysis. Anchor uplift capacity is obtained by a finite element method, which is coupled with a random field representation of spatial variability. The effects of embedment depth and of the soil-anchor interface are investigated. The failure mechanism of the anchor is shown to have a significant effect on the statistical moments of the uplift capacity and on the probability of failure in relation to current design guidelines. To inform future design, factors of safety are presented for a range of failure probabilities.
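A stripped-down sketch of the workflow's two ingredients: a polynomial chaos surrogate of a capacity-like response fitted by least squares on a total-degree Hermite basis in two standard-normal variables, then reused inside Monte Carlo sampling for moments and a probability of failure. The limit state, inputs, and demand value are placeholders; no finite element model, random field, sparsity enforcement, or active-learning step appears here.

```python
# Least-squares polynomial chaos surrogate of a placeholder "capacity",
# then Monte Carlo moments and failure probability from the surrogate.
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermeval

degree, n_train, n_mc = 4, 300, 1_000_000
rng = np.random.default_rng(6)

def capacity(z):                             # placeholder uplift capacity
    return 10.0 + 1.5 * z[:, 0] + 0.8 * z[:, 1] + 0.4 * z[:, 0] * z[:, 1]

multi_indices = [m for m in product(range(degree + 1), repeat=2)
                 if sum(m) <= degree]

def basis_matrix(z):
    cols = []
    for a, b in multi_indices:
        ca = np.zeros(a + 1); ca[a] = 1.0    # probabilists' Hermite He_a
        cb = np.zeros(b + 1); cb[b] = 1.0
        cols.append(hermeval(z[:, 0], ca) * hermeval(z[:, 1], cb))
    return np.column_stack(cols)

z_train = rng.standard_normal((n_train, 2))
coef, *_ = np.linalg.lstsq(basis_matrix(z_train), capacity(z_train), rcond=None)

z_mc = rng.standard_normal((n_mc, 2))
surrogate = basis_matrix(z_mc) @ coef
demand = 8.0                                  # hypothetical design load
print("mean, std of capacity:", surrogate.mean(), surrogate.std())
print("P(failure) = P(capacity < demand):", np.mean(surrogate < demand))
```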

17.
Repeated or closely spaced eigenvalues and the corresponding eigenvectors of a matrix are usually very sensitive to a perturbation of the matrix, which makes capturing the behavior of these eigenpairs very difficult. A similar difficulty is encountered in solving the random eigenvalue problem when a matrix with random elements has a set of clustered eigenvalues in its mean. In addition, the methods for solving the random eigenvalue problem often differ in how they characterize the problem, which leads to different interpretations of the solution; the solutions obtained from different methods thus become mathematically incomparable. These two issues, the difficulty of solving and the non-unique characterization, are addressed here. A different approach is used: instead of tracking a few individual eigenpairs, the corresponding invariant subspace is tracked. The spectral stochastic finite element method is used for the analysis, with polynomial chaos expansion representing the random eigenvalues and eigenvectors; however, the main concept of tracking the invariant subspace remains largely independent of any such representation. The approach is successfully implemented in response prediction of a system with repeated natural frequencies. It is found that tracking only an invariant subspace can be sufficient to build a modal-based reduced-order model of the system. Copyright © 2012 John Wiley & Sons, Ltd.

18.
We present a model reduction approach to the solution of large-scale statistical inverse problems in a Bayesian inference setting. A key to the model reduction is an efficient representation of the non-linear terms in the reduced model. To achieve this, we present a formulation that employs masked projection of the discrete equations; that is, we compute an approximation of the non-linear term using a select subset of interpolation points. Further, through this formulation we show similarities among the existing techniques of gappy proper orthogonal decomposition, missing point estimation, and empirical interpolation via coefficient-function approximation. The resulting model reduction methodology is applied to a highly non-linear combustion problem governed by an advection-diffusion-reaction partial differential equation (PDE). Our reduced model is used as a surrogate for a finite element discretization of the non-linear PDE within the Markov chain Monte Carlo sampling employed by the Bayesian inference approach. In two spatial dimensions, we show that this approach yields accurate results while reducing the computational cost by several orders of magnitude. For the full three-dimensional problem, a forward solve using a reduced model that has high fidelity over the input parameter space is more than two million times faster than the full-order finite element model, making tractable the solution of the statistical inverse problem that would otherwise require many years of CPU time. Copyright © 2009 John Wiley & Sons, Ltd.
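The point-selection idea behind masked projection and empirical interpolation can be sketched with the standard discrete empirical interpolation greedy recipe: given a basis for snapshots of the nonlinear term, pick one row per basis vector so that the full vector can be recovered from those few entries. The snapshot set and basis size below are toy assumptions, and this is a generic illustration rather than the paper's exact formulation.

```python
# Greedy empirical-interpolation-style selection of interpolation rows for a
# nonlinear-term basis, applied to toy snapshot data.
import numpy as np

def select_interpolation_points(U):
    """Greedy selection of one interpolation row per basis vector of U."""
    n, m = U.shape
    points = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        Uj, P = U[:, :j], points
        # Coefficients that match the current basis at the chosen rows.
        c = np.linalg.solve(Uj[P, :], U[P, j])
        residual = U[:, j] - Uj @ c
        points.append(int(np.argmax(np.abs(residual))))
    return points

# Toy snapshots of a "nonlinear term" on a 1-D grid.
x = np.linspace(0, 1, 400)
snapshots = np.column_stack([np.exp(-50 * (x - mu) ** 2)
                             for mu in np.linspace(0.2, 0.8, 30)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
U = U[:, :8]                                   # reduced basis of the nonlinear term

pts = select_interpolation_points(U)
f = np.exp(-50 * (x - 0.47) ** 2)              # a new nonlinear-term vector
f_hat = U @ np.linalg.solve(U[pts, :], f[pts]) # reconstructed from 8 entries only
print("selected rows:", pts)
print("relative reconstruction error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))
```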

19.
This paper presents a two-dimensional floating random walk (FRW) algorithm for the solution of the non-linear Poisson-Boltzmann (NPB) equation. In the past, the FRW method had not been applied to the solution of the NPB equation, which can be attributed to the absence of analytical expressions for volumetric Green's functions; previous studies using the FRW method examined only the linearized Poisson-Boltzmann equation. No such linearization is needed for the present approach. Approximate volumetric Green's functions have been derived with the help of perturbation theory, and these expressions have been incorporated within the FRW framework. A unique advantage of this algorithm is that it requires no discretization of either the volume or the surface of the problem domains. Furthermore, each random walk is independent, so the computational procedure is highly parallelizable. In our previous work, we presented preliminary calculations for one-dimensional and quasi-one-dimensional benchmark problems. In this paper, we present the detailed formulation of a two-dimensional algorithm, along with extensive finite-difference validation on fully two-dimensional benchmark problems. The solution of the NPB equation has many interesting applications, including the modelling of plasma discharges, semiconductor device modelling, and the modelling of biomolecular structures and dynamics. Copyright © 2005 John Wiley & Sons, Ltd.
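The floating-random-walk machinery the paper builds on can be illustrated on the plain 2-D Laplace equation, where each walk jumps to a uniformly random point on the largest circle inside the domain until it nearly reaches the boundary; the paper's contribution, perturbation-derived volumetric Green's functions for the nonlinear Poisson-Boltzmann source term, is not reproduced in this sketch. The domain and boundary data are toy assumptions.

```python
# Floating random walk (walk on spheres) for the 2-D Laplace equation on the
# unit square, checked against a known harmonic solution u(x, y) = x * y.
import numpy as np

rng = np.random.default_rng(7)

def boundary_value(p):                # Dirichlet data of u(x, y) = x * y
    return p[0] * p[1]

def dist_to_boundary(p):              # unit square [0, 1]^2
    return min(p[0], 1 - p[0], p[1], 1 - p[1])

def walk_on_spheres(p0, eps=1e-4, max_steps=10_000):
    p = np.array(p0, dtype=float)
    for _ in range(max_steps):
        r = dist_to_boundary(p)
        if r < eps:                   # close enough: absorb on the boundary
            return boundary_value(p)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        p += r * np.array([np.cos(theta), np.sin(theta)])  # jump to the circle
    return boundary_value(p)

point = (0.3, 0.7)
estimate = np.mean([walk_on_spheres(point) for _ in range(5_000)])
print("FRW estimate :", round(estimate, 4))
print("exact u(x, y):", 0.3 * 0.7)
```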

20.
The modified random-to-pattern search (MRPS) algorithm, developed by the authors for global optimization, is applied to find the global optimum of the cost of a complex system subject to constraints on system reliability. The global optimum solutions obtained by MRPS are compared with those obtained by employing gradient techniques as well as random-search-based methods from the literature on the same problems. The results clearly indicate that the proposed MRPS algorithm is more robust and efficient in overcoming the difficulties associated with local optima and the need for a starting solution vector. Copyright © 1999 John Wiley & Sons, Ltd.
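To make the problem setting concrete, the sketch below solves a toy redundancy-allocation problem, minimizing system cost subject to a series-system reliability constraint, with a plain penalized random search as a baseline; it is not the authors' MRPS algorithm, and the component data are assumed.

```python
# Toy cost-minimization under a system-reliability constraint, solved with a
# plain penalized random search (baseline only, not MRPS).
import numpy as np

rng = np.random.default_rng(8)
r = np.array([0.80, 0.85, 0.90])       # component reliabilities (assumed)
c = np.array([3.0, 4.0, 5.0])          # component costs (assumed)
r_min = 0.99                           # required system reliability

def system_reliability(x):             # series system with x_i parallel units
    return np.prod(1.0 - (1.0 - r) ** x)

def penalized_cost(x):
    shortfall = max(0.0, r_min - system_reliability(x))
    return c @ x + 1e4 * shortfall     # heavy penalty on infeasibility

best_x, best_cost = None, np.inf
for _ in range(20_000):                # random search over 1..5 units per stage
    x = rng.integers(1, 6, size=3)
    cost = penalized_cost(x)
    if cost < best_cost:
        best_x, best_cost = x, cost

print("best allocation   :", best_x)
print("system cost       :", c @ best_x)
print("system reliability:", round(system_reliability(best_x), 4))
```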
