Similar Documents
 20 similar documents found (search time: 58 ms)
1.
New model fusion techniques based on spatial-random-process modeling are developed in this work for combining multi-fidelity data from simulations and experiments. Existing work in multi-fidelity modeling generally assumes a hierarchical structure in which the levels of fidelity of the simulation models can be clearly ranked. In contrast, we consider the nonhierarchical situation in which one wishes to incorporate multiple models whose levels of fidelity are unknown or cannot be differentiated (e.g., if the fidelity of the models changes over the input domain). We propose three new nonhierarchical multi-model fusion approaches with different assumptions or structures regarding the relationships between the simulation models and physical observations. One approach models the true response as a weighted sum of the multiple simulation models and a single discrepancy function. The other two approaches model the true response as the sum of one simulation model and a corresponding discrepancy function, and differ in their assumptions regarding the statistical behavior of the discrepancy functions, such as independence from the true response or a common spatial correlation function. The proposed approaches are compared via numerical examples and a real engineering application, and the effectiveness and relative merits of the different approaches are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
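The first (weighted-sum) approach can be sketched numerically. The two simulators, the true response, and the nearest-observation handling of the discrepancy below are all illustrative assumptions; the paper itself models the discrepancy with a spatial random process:

```python
import numpy as np

# Hypothetical low-fidelity simulators and true response (illustrative only)
def sim_a(x):
    return np.sin(x)

def sim_b(x):
    return np.sin(x) + 0.3 * x

def true_response(x):
    return 1.1 * np.sin(x) + 0.05 * x

rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 3.0, 12)          # sparse physical observations
y_obs = true_response(x_obs) + rng.normal(0.0, 0.01, x_obs.size)

# Weighted-sum fusion: regress the observations on the simulator outputs;
# the leftover residual plays the role of the single discrepancy function.
F = np.column_stack([sim_a(x_obs), sim_b(x_obs)])
w, *_ = np.linalg.lstsq(F, y_obs, rcond=None)
delta = y_obs - F @ w

def fused(x):
    # Nearest-observation carry-over of the discrepancy; a spatial random
    # process (e.g. a Gaussian process) would be used in the paper's setting.
    x = np.atleast_1d(x)
    idx = np.abs(x_obs[:, None] - x[None, :]).argmin(axis=0)
    return np.column_stack([sim_a(x), sim_b(x)]) @ w + delta[idx]
```

Because the true response here is an exact linear combination of the two simulators, the fitted weights recover that combination and the discrepancy reduces to observation noise.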

2.
In this work, a 2D finite element (FE) formulation for a multi-layer beam with an arbitrary number of layers, with an interconnection that allows for mixed-mode delamination, is presented. The layers are modelled as linear beams, while interface elements with an embedded cohesive-zone model are used for the interconnection. Because the interface elements are sandwiched between beam FEs and attached to their nodes, the only basic unknown functions of the system are two components of the displacement vector and a cross-sectional rotation per layer. Damage in the interface is modelled via a bi-linear constitutive law for a single delamination mode and a mixed-mode damage evolution law. Because in a numerical integration procedure the damage occurs only at discrete integration points (i.e. not continuously), the solution procedure experiences sharp snap-backs in the force–displacement diagram. A modified arc-length method is used to solve this problem. The present model is verified against commonly used models, which use 2D plane-strain FEs for the bulk material. Various numerical examples show that the multi-layer beam model presented gives accurate results using significantly fewer degrees of freedom than standard models from the literature. Copyright © 2015 John Wiley & Sons, Ltd.
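The arc-length idea used to traverse such snap-backs can be sketched in its standard spherical form (the specific modification used in the paper is not reproduced here): the equilibrium residual is augmented with a constraint on the combined displacement-load increment,

```latex
\mathbf{R}(\mathbf{u},\lambda) = \lambda\,\mathbf{F}_{\mathrm{ext}} - \mathbf{F}_{\mathrm{int}}(\mathbf{u}) = \mathbf{0},
\qquad
\Delta\mathbf{u}^{\mathsf{T}}\Delta\mathbf{u} + \psi^{2}\,\Delta\lambda^{2}\,\mathbf{F}_{\mathrm{ext}}^{\mathsf{T}}\mathbf{F}_{\mathrm{ext}} = \Delta l^{2},
```

where $\lambda$ is the load factor, $\psi$ a scaling parameter, and $\Delta l$ the prescribed arc-length increment; solving for $\Delta\lambda$ alongside $\Delta\mathbf{u}$ lets the path-following scheme pass limit points where load or displacement control alone would fail.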

3.
This work deals with the identification of constitutive parameters by inverse methodology. Two different approaches are presented and analysed: single-point analysis and FE analysis. The use of these two methodologies for evaluating objective functions in the identification process is still an open question, and interest in this field has been growing in the metal forming community. To discuss this issue, two constitutive models suitable for metals were used: an elastoplastic hardening model and an elastoplastic model with isotropic and kinematic hardening. The material parameters determined for the two models, the respective objective function values and the CPU time required to perform the simulations are presented and discussed.
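A minimal single-point-style identification can be sketched as a least-squares fit of a hardening law to an experimental stress-strain curve. The Ludwik law, the parameter values, and the assumption of a known yield stress below are illustrative, not the constitutive models used in the paper:

```python
import numpy as np

# Hypothetical Ludwik hardening law sigma = sigma_y + K * eps**n standing in
# for the "single-point" evaluation of a constitutive model.
sigma_y = 200.0                          # assumed known yield stress (MPa)
eps = np.linspace(0.01, 0.2, 20)         # plastic strain levels
sigma_exp = sigma_y + 500.0 * eps**0.3   # synthetic "experimental" curve

# Inverse identification of (K, n): the objective is the misfit between
# simulated and experimental stresses, linearised here by taking logs.
A = np.column_stack([np.ones_like(eps), np.log(eps)])
b = np.log(sigma_exp - sigma_y)
(logK, n), *_ = np.linalg.lstsq(A, b, rcond=None)
K = np.exp(logK)
```

With noisy data, or with an FE-based objective, the same misfit would be minimised iteratively (e.g. by Gauss-Newton) rather than in one linear solve.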

4.
Yan Cui & Fukang Zhu, TEST, 2018, 27(2): 428-452
Univariate integer-valued time series models, including integer-valued autoregressive (INAR) models and integer-valued generalized autoregressive conditional heteroscedastic (INGARCH) models, have been well studied in the literature, but there has been little progress on multivariate models. Although some multivariate INAR models have been proposed, they do not provide enough flexibility in modeling count data, such as the volatility of numbers of stock transactions. A bivariate Poisson INGARCH model was suggested by Liu (Some models for time series of counts, Dissertation, Columbia University, 2012), but it can only deal with positive cross-correlation between the two components. To remedy this defect, we propose a new bivariate Poisson INGARCH model, which is more flexible and allows for positive or negative cross-correlation. Stationarity and ergodicity of the new process are established. The maximum likelihood method is used to estimate the unknown parameters, and consistency and asymptotic normality of the estimators are given. A simulation study evaluates the estimators of the parameters of interest. Real and artificial data examples demonstrate the good performance of the proposed model relative to the existing model.
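The Poisson INGARCH recursion can be sketched in the univariate case for brevity (the paper's model is bivariate, with a dependence structure between the two count components that is not reproduced here):

```python
import numpy as np

def simulate_ingarch(omega, alpha, beta, n, seed=0):
    """Simulate a univariate Poisson INGARCH(1,1) path:
    X_t | past ~ Poisson(lam_t),  lam_t = omega + alpha*X_{t-1} + beta*lam_{t-1}.
    Requires alpha + beta < 1 for stationarity."""
    rng = np.random.default_rng(seed)
    lam = omega / (1.0 - alpha - beta)   # start at the stationary mean
    x = np.empty(n, dtype=int)
    prev = rng.poisson(lam)
    for t in range(n):
        lam = omega + alpha * prev + beta * lam
        x[t] = rng.poisson(lam)
        prev = x[t]
    return x

counts = simulate_ingarch(omega=1.0, alpha=0.3, beta=0.2, n=5000)
# Stationary mean of this parameterisation: omega / (1 - alpha - beta) = 2.0
```

Maximum likelihood estimation would maximise the sum of Poisson log-densities of `x[t]` given the recursively computed `lam_t`.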

5.
One of the basic assumptions of traditional univariate and multivariate control charts is that the data are independent in time. In many cases, however, the data are serially dependent (autocorrelated) and cross-correlated because of, for example, frequent sampling and process dynamics. It is well known that autocorrelation affects the false alarm rate and the shift-detection ability of traditional univariate control charts. However, how the false alarm rate and the shift-detection ability of the Hotelling T2 control chart are affected by various autocorrelation and cross-correlation structures for different magnitudes of shifts in the process mean is not fully explored in the literature. In this article, the performance of the Hotelling T2 control chart for different shift sizes and various autocorrelation and cross-correlation structures is compared based on the average run length using simulated data. Three different approaches to constructing the Hotelling T2 chart are studied for two different estimates of the covariance matrix: (i) ignoring the autocorrelation and using the raw data with theoretical upper control limits; (ii) ignoring the autocorrelation and using the raw data with adjusted control limits calculated through Monte Carlo simulations; and (iii) constructing the control chart for the residuals from a multivariate time series model fitted to the raw data. To limit the complexity, we use a first-order vector autoregressive process and focus mainly on bivariate data. © 2014 The Authors. Quality and Reliability Engineering International Published by John Wiley & Sons Ltd.
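Approach (iii) can be sketched for simulated bivariate VAR(1) data: fit the time series model, chart the residuals, and check the empirical false-alarm rate against the nominal one. The parameter values below are illustrative:

```python
import numpy as np
from numpy.linalg import inv

rng = np.random.default_rng(1)
Phi = np.array([[0.6, 0.1], [0.1, 0.6]])                    # autocorrelation
L_chol = np.linalg.cholesky(np.array([[1.0, 0.5],
                                      [0.5, 1.0]]))          # cross-correlation
n = 20000
x = np.zeros((n, 2))
for t in range(1, n):                                        # VAR(1) process
    x[t] = Phi @ x[t - 1] + L_chol @ rng.standard_normal(2)

# Approach (iii): fit a VAR(1) model and chart the one-step residuals.
Phi_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
resid = x[1:] - x[:-1] @ Phi_hat.T
S = np.cov(resid.T)
t2 = np.einsum("ij,jk,ik->i", resid, inv(S), resid)          # Hotelling T^2
```

Because the residuals of a correctly fitted VAR(1) model are approximately independent, their T² values follow the nominal chi-square reference closely, which is precisely the property the raw-data charts of approach (i) lose under autocorrelation.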

6.
This paper introduces a family of stationary multivariate spatial random fields with D scalar components that extend the scalar model of Gibbs random fields with local interactions (i.e., Spartan spatial random fields). We derive permissibility conditions for Spartan multivariate spatial random fields with a specific structure of local interactions. We also present explicit expressions for the respective matrix covariance functions obtained in the limit of infinite spectral cutoff in one, two and three spatial dimensions. Finally, we illustrate the proposed covariance models by means of simulated bivariate time series and two-dimensional random fields.

7.
The incremental harmonic balance method with multiple time variables is developed for the analysis of almost periodic oscillations in multi-degree-of-freedom dynamical systems with cubic non-linearities subjected to external multi-tone excitation. The method is formulated to treat non-autonomous as well as autonomous dynamical systems. The almost periodic oscillations that coexist with periodic oscillations in a rotating system model with a cubic restoring force and an electromagnetic eddy-current damper are analysed. Closed-form solutions based on generalized Fourier series containing two incommensurate frequencies are obtained in the case of a small non-dimensional stiffness ratio. The dependence of the almost periodic oscillations of the rotating system model on varying parameters is also analysed, with solutions computed through an augmentation process that includes a greater number of harmonics and combination frequencies. Copyright © 2003 John Wiley & Sons, Ltd.
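A generalized Fourier series with two incommensurate frequencies, as used for such almost periodic solutions, takes the standard ansatz form (a sketch only; truncation orders and the particular combination frequencies retained are problem-dependent):

```latex
x(t) \;\approx\; \sum_{j}\sum_{k}\Bigl[ a_{jk}\cos\bigl((j\omega_{1}+k\omega_{2})t\bigr) + b_{jk}\sin\bigl((j\omega_{1}+k\omega_{2})t\bigr) \Bigr],
\qquad \omega_{1}/\omega_{2}\notin\mathbb{Q},
```

so that the response never exactly repeats, while each term remains harmonic in the two time variables $\tau_{1}=\omega_{1}t$ and $\tau_{2}=\omega_{2}t$.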

8.
Reliability evaluation based on degradation data has received significant attention in recent years. However, existing works often assume that the degradation evolution over time is governed by a single stochastic process, which may not be realistic if change points exist. For cases of degradation with change points, this paper attempts to capture the degradation process with a multi-phase degradation model and to find a method for evaluating the real-time reliability of the product being monitored. Once new degradation information becomes available, the evaluation results are adaptively updated through the Bayesian method. In particular, for a two-stage degradation process of liquid coupling devices (LCDs), a change-point gamma and Wiener process model is developed, after which the issues of real-time reliability evaluation and parameter estimation are addressed in detail. Finally, the proposed method is illustrated by a case study of LCDs, and the corresponding results indicate that trustworthy evaluation results depend on the fitting accuracy in cases of a multi-phase degradation process. Copyright © 2013 John Wiley & Sons, Ltd.
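The Bayesian updating step can be sketched for the Wiener phase alone, using a conjugate normal prior on the drift. The known diffusion parameter, the prior values, and the single-phase setting are illustrative assumptions; the change-point gamma phase of the model is not reproduced:

```python
import numpy as np

# Wiener degradation process X(t) = mu*t + sigma*B(t), so increments satisfy
# Delta X_i ~ N(mu * dt, sigma^2 * dt); conjugate prior mu ~ N(m0, v0).
sigma = 0.5              # assumed known diffusion parameter
m0, v0 = 1.0, 4.0        # vague normal prior on the drift mu

rng = np.random.default_rng(2)
dt = 0.1
incr = rng.normal(2.0 * dt, sigma * np.sqrt(dt), size=500)  # true drift 2.0

# Conjugate normal update of the drift once new increments arrive:
v_post = 1.0 / (1.0 / v0 + incr.size * dt / sigma**2)
m_post = v_post * (m0 / v0 + incr.sum() / sigma**2)
```

Each new batch of monitoring data can be folded in by reusing the posterior as the next prior, which is what makes the evaluation "real-time".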

9.
10.
Functional organic materials with enhanced two-photon absorption lead to new technologies in the fields of chemistry, biology, and photonics. In this article we review experimental and theoretical methodologies allowing detailed investigation and analysis of the two-photon absorption properties of organic chromophores. This includes femtosecond two-photon excited fluorescence experimental setups and quantum-chemical methodologies based on time-dependent density functional theory. We thoroughly analyze physical phenomena and trends leading to large two-photon absorption responses of a few series of model chromophores, focusing on the effects of symmetric and asymmetric donor/acceptor substitution and branching.

11.
In this article, we study the performance of the dual reciprocity multi-domains approach (DRM-MD) when the shape functions of the boundary elements, for both the approximation of the geometry and the surface variables of the governing equations, are quadratic functions. A series of tests is carried out to study the consistency of the proposed boundary integral technique. For this purpose, a limiting process of subdivision of the domain is performed, reporting a comparison of the computed solutions for every refining scheme. Furthermore, the DRM-MD is solved in its dual reciprocity approximation using two different radial basis interpolation functions: the conical function r plus a constant, i.e. (1 + r), and the augmented thin plate spline. Special attention is paid to the contrast between numerical results yielded by the DRM-MD approach using linear and quadratic boundary elements, with the aim of fully understanding its convergence behaviour. Copyright © 2006 John Wiley & Sons, Ltd.
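A one-dimensional sketch of interpolation with the conical basis 1 + r (the collocation points and test function are illustrative; the paper applies such bases inside the dual reciprocity approximation, not as a stand-alone interpolant):

```python
import numpy as np

def rbf_fit(centers, values, basis):
    """Solve sum_j w_j * phi(|c_i - c_j|) = f(c_i) at the collocation points."""
    r = np.abs(centers[:, None] - centers[None, :])
    return np.linalg.solve(basis(r), values)

def conical(r):
    # The "conical function r plus a constant", i.e. (1 + r), mentioned above.
    return 1.0 + r

centers = np.linspace(0.0, 1.0, 9)
f_vals = np.sin(np.pi * centers)
w = rbf_fit(centers, f_vals, conical)

def evaluate(x):
    x = np.atleast_1d(x)
    return conical(np.abs(x[:, None] - centers[None, :])) @ w
```

The augmented thin plate spline would replace `conical` with r² log r plus low-order polynomial terms and corresponding side conditions.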

12.
Dynamic processes exhibit a time delay between disturbances and the resulting process response. One therefore has to acknowledge process dynamics, such as transition times, when planning and analyzing experiments on dynamic processes. In this article, we explore, discuss, and compare different methods for estimating location effects in two-level factorial experiments where the responses are time series. In particular, we outline the use of intervention-noise modeling to estimate the effects and compare it with using the average of the response observations in each run as a single response. The comparisons are made through simulated experiments using a dynamic continuous process model. The results show that the effect estimates from the different analysis methods are similar. Using the average of the response in each run, with the transition time removed, is found to be a competitive, robust, and straightforward method, whereas intervention-noise models are more comprehensive, render slightly fewer spurious effects, find more of the active effects in unreplicated experiments, and make it possible to model effect dynamics. Copyright © 2012 John Wiley & Sons, Ltd.
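The run-average method with the transition time removed can be sketched as follows; the first-order settling dynamics, noise level, and true effect size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def run_response(level, n=200, transition=50):
    """Hypothetical dynamic run: first-order settling toward the new steady
    state plus noise; `level` is the coded factor setting (-1 or +1)."""
    target = 10.0 + 2.5 * level           # true location effect is 5.0
    t = np.arange(n)
    y = target + (10.0 - target) * np.exp(-t / 15.0)   # settle from old level
    return y + rng.normal(0.0, 0.2, n), transition

means = []
for level in (-1.0, +1.0):
    y, trans = run_response(level)
    means.append(y[trans:].mean())        # drop the transition time
effect = means[1] - means[0]              # estimated location effect
```

Keeping the transition samples would bias both run averages toward the old operating level and shrink the estimated effect.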

13.
This paper introduces multivariate input-output models to predict the errors and basis dimensions of local parametric Proper Orthogonal Decomposition reduced-order models. We refer to these mappings as multivariate predictions of local reduced-order model characteristics (MP-LROM) models. We use Gaussian processes and artificial neural networks to construct approximations of these multivariate mappings. Numerical results with a viscous Burgers model illustrate the performance and potential of the machine-learning-based regression MP-LROM models to approximate the characteristics of parametric local reduced-order models. The predicted reduced-order model errors are compared against predictions from the multifidelity correction and reduced-order model error surrogate methods, whereas the predicted reduced-order dimensions are tested against the standard method based on the spectrum of the snapshot matrix. Since the MP-LROM models incorporate more features and elements to construct the probabilistic mappings, they achieve more accurate results. However, for high-dimensional parametric spaces, the MP-LROM models might suffer from the curse of dimensionality. Scalability challenges of the MP-LROM models and feasible ways of addressing them are also discussed in this study.
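The regression step of such an error mapping can be sketched with kernel ridge regression as a stand-in for the Gaussian-process and neural-network surrogates; the parameter-to-log-error relationship below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
mu = np.linspace(0.1, 1.0, 30)                  # viscosity-like parameter
log_err = -2.0 - 3.0 * mu + 0.5 * mu**2         # hypothetical log-error trend
y = log_err + rng.normal(0.0, 0.02, mu.size)    # "measured" ROM errors

def kern(a, b, ell=0.2):
    # Squared-exponential kernel, the common default in GP regression.
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

K = kern(mu, mu) + 1e-4 * np.eye(mu.size)       # kernel matrix + regularisation
alpha = np.linalg.solve(K, y)

def predict(x):
    return kern(np.atleast_1d(x), mu) @ alpha   # predicted log-error
```

A full GP would additionally return a predictive variance, which is what makes the mapping probabilistic in the paper's sense.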

14.
The numerical modelling of interacting acoustic media by boundary element method–finite element method (BEM–FEM) coupling procedures is discussed here, taking into account time-domain approaches. In this study, the global model is divided into different sub-domains and each sub-domain is analysed independently (considering BEM or FEM discretizations); the interaction between the different sub-domains of the global model is accomplished by interface procedures. Numerical formulations based on FEM explicit and implicit time-marching schemes are discussed, resulting in direct and optimized iterative BEM–FEM coupling techniques. A multi-level time-step algorithm is considered in order to improve the flexibility, accuracy and stability (especially when conditionally stable time-marching procedures are employed) of the coupled analysis. At the end of the paper, numerical examples are presented, illustrating the potentialities and robustness of the proposed methodologies. Copyright © 2008 John Wiley & Sons, Ltd.

15.
Using traditional control charts to monitor autocorrelated processes is inadvisable, because it leads to misleading detections. One method of constructing control charts for autocorrelated processes is the model-based approach: an adequate time series model is fitted to the process, and the residuals are used as the monitoring statistics. For this purpose, it is important to pick a suitable model that can adequately be used across different designs of control charts under a specific time series model. This study does so for three popular types of charts, namely the Shewhart, exponentially weighted moving average, and cumulative sum charts. The models covered in this study include AR(1), MA(1), and ARMA(1,1) as potential models for the process of interest. We focus on two performance aspects, namely efficiency and robustness. The average run length is used as the performance measure for different in-control and out-of-control states of the autocorrelated processes under varying levels of autocorrelation. An application example based on a real data set is also included to highlight the importance of the study proposals. Copyright © 2016 John Wiley & Sons, Ltd.
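A Monte Carlo estimate of the in-control average run length (ARL) for a residual-based Shewhart chart on an AR(1) process can be sketched as follows (a known autoregressive parameter is assumed; with iid N(0,1) residuals the in-control ARL at 3-sigma limits should be near 1/0.0027 ≈ 370):

```python
import numpy as np

def in_control_arl(phi, k=3.0, reps=1500, seed=5):
    """Monte Carlo in-control ARL of a Shewhart chart applied to the
    residuals e_t = x_t - phi * x_{t-1} of an AR(1) process (phi known)."""
    rng = np.random.default_rng(seed)
    rl = np.empty(reps)
    for i in range(reps):
        x_prev, t = 0.0, 0
        while True:
            t += 1
            x = phi * x_prev + rng.standard_normal()  # AR(1) step
            if abs(x - phi * x_prev) > k:             # chart the residual
                rl[i] = t
                break
            x_prev = x
    return rl.mean()

arl = in_control_arl(phi=0.7)
```

The out-of-control ARL for a mean shift would be obtained the same way after adding the shift to the process equation; the residuals then carry a transient, non-constant shift pattern, which is what degrades detection for charts on residuals.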

16.
As manufacturing transitions to real-time sensing, it becomes more important to handle multiple, high-dimensional (non-stationary) time series that generate thousands of measurements for each batch. Predictive models are often challenged by such high-dimensional data, and it is important to reduce the dimensionality for better performance. With thousands of measurements, even wavelet coefficients do not reduce the dimensionality sufficiently. We propose a two-stage method that uses energy statistics from a discrete wavelet transform to identify process variables and appropriate resolutions of wavelet coefficients in an initial (screening) model. Variable importance scores from a modern random forest classifier are exploited in this stage. Coefficients that correspond to the identified variables and resolutions are then selected for a second-stage predictive model. The approach is shown to provide good performance, along with interpretable results, in an example where multiple time series are used to indicate the need for preventive maintenance. In general, the two-stage approach can handle high dimensionality and still provide interpretable features linked to the relevant process variables and wavelet resolutions that can be used for further analysis. Copyright © 2011 John Wiley & Sons, Ltd.
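The energy statistics of the screening stage can be sketched with a plain Haar transform; the signals below are synthetic, and a direct energy comparison stands in for the random-forest variable-importance ranking used in the paper:

```python
import numpy as np

def haar_energies(x, levels=3):
    """Energy of the Haar detail coefficients at each resolution level:
    a simple stand-in for discrete-wavelet-transform energy statistics.
    Signal length must be divisible by 2**levels."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # approximation coefficients
        energies.append(float(np.sum(d**2)))
    return energies

rng = np.random.default_rng(6)
t = np.arange(256)
smooth = np.sin(2 * np.pi * t / 64) + 0.05 * rng.standard_normal(256)
noisy = 0.5 * rng.standard_normal(256)
# A smooth process variable concentrates its energy at coarse levels, a
# noise-dominated one at fine levels; screening ranks variables on this basis.
```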

17.
Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes with scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computationally expensive computer codes, which require a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The “mean model” allows estimation of the sensitivity indices of each scalar model input, while the “dispersion model” allows derivation of the total sensitivity index of the functional model inputs. The proposed approach is compared with some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates nuclear fuel irradiation.

18.
Micromotor-mediated synthesis of thread-like hydrogel microstructures in an aqueous environment is presented. The study utilizes a catalytic micromotor assembly (owing to the presence of a Pt layer) with an on-board chemical reservoir (i.e., a polymerization mixture) for thread-like radical polymerization via autonomously propelled bots (i.e., TRAP bots). The synergistic coupling of the catalytically active Pt layer with radical initiators (H2O2 and FeCl3) and the PEGDA monomers preloaded into the TRAP bot results in the polymerization of monomeric units into elongated thread-like hydrogel polymers coupled with self-propulsion. Interestingly, polymer generation via TRAP bots can also be triggered in the absence of hydrogen peroxide for cellular/biomedical applications. The resulting polymeric hydrogel microstructures are able to entrap living cells (NIH 3T3 fibroblast cells) and are easily separable via centrifugation or magnetic separation (owing to the presence of a Ni layer). The cellular biocompatibility of the TRAP bots is established via a LIVE/DEAD assay and an MTS cell proliferation assay (7-day observation). This is the first study demonstrating real-time in situ hydrogel polymerization via an artificial microswimmer capable of enmeshing biotic/abiotic microobjects in its reaction environment, and it lays a strong foundation for advanced applications in cell/tissue engineering, drug delivery, and cleaner technologies.

19.
A contact method with friction for the multi-dimensional Lagrangian step in multi-material arbitrary Lagrangian–Eulerian (ALE) formulations is presented. In our previous research, the extended finite element method (X-FEM) was used to create independent fields (i.e. velocity, strain rate, force, mass, etc.) for each material in the problem to model contact without friction. The research presented here includes the extension to friction and improvements to the accuracy and robustness of our previous work. The accelerations of the multi-material nodes are obtained by coupling the material force and mass fields as a function of the prescribed contact; similarly, the velocities of the multi-material nodes are recalculated using conservation of momentum when the prescribed contact requires it. The coupling procedures impose the same nodal velocity on the coupled materials in the direction normal to their interface during the time step update. As a result, overlap of the materials is prevented and unwanted separation does not occur. Three different types of contact are treated: perfectly bonded, frictionless slip, and slip with friction. Example impact problems are solved and the numerical solutions are presented. Copyright © 2008 John Wiley & Sons, Ltd.
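The momentum-conserving velocity coupling can be sketched for a single pair of multi-material nodal fields in the perfectly bonded case (the function name and 2D setting are illustrative):

```python
import numpy as np

def bonded_normal_update(m1, v1, m2, v2, normal):
    """Impose a common, mass-weighted velocity on two material fields in the
    direction normal to their interface; tangential components are kept.
    Conservation of momentum along the normal holds by construction."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    v_common = (m1 * np.dot(v1, n) + m2 * np.dot(v2, n)) / (m1 + m2)
    v1_new = v1 + (v_common - np.dot(v1, n)) * n
    v2_new = v2 + (v_common - np.dot(v2, n)) * n
    return v1_new, v2_new

# Head-on example: equal normal velocity afterwards, no overlap or separation.
v1n, v2n = bonded_normal_update(2.0, np.array([1.0, 0.0]),
                                1.0, np.array([-1.0, 0.0]),
                                normal=[1.0, 0.0])
```

Frictionless slip would skip the tangential coupling entirely, while slip with friction would limit the tangential velocity change by a Coulomb-type bound.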

20.
A computational framework for scale-bridging in multi-scale simulations is presented. The framework enables the seamless combination of at-scale models into highly dynamic hierarchies to build a multi-scale model. Its centerpiece is formulated as a standalone module capable of fully asynchronous operation. We assess its feasibility and performance for a two-scale model applied to two challenging test problems from impact physics. We find that the computational cost associated with using the framework may, as expected, become substantial. However, the framework has the ability to effortlessly combine at-scale models to render complex multi-scale models. The main source of the computational inefficiency of the framework is poor load balancing of the lower-scale model evaluations. We demonstrate that the load balancing can be efficiently addressed by recourse to conventional load-balancing strategies. Copyright © 2016 John Wiley & Sons, Ltd.
