Related Articles
20 related articles found (search time: 687 ms).
1.
In recent years, Bayesian model updating techniques based on measured data have been applied to system identification of structures and to structural health monitoring. A fully probabilistic Bayesian model updating approach provides a robust and rigorous framework for these applications because of its ability to characterize modeling uncertainties associated with the underlying structural system and its exclusive foundation on the probability axioms. The plausibility of each structural model within a set of possible models, given the measured data, is quantified by the joint posterior probability density function of the model parameters. This Bayesian approach requires the evaluation of multidimensional integrals, which usually cannot be done analytically. Recently, some Markov chain Monte Carlo simulation methods have been developed to solve the Bayesian model updating problem, but in general their efficiency is adversely affected by the dimension of the model parameter space. In this paper, the Hybrid Monte Carlo method (also known as the Hamiltonian Markov chain method) is investigated, and we show how it can be used to solve higher-dimensional Bayesian model updating problems. Practical issues affecting the feasibility of the Hybrid Monte Carlo method for such problems are addressed, and improvements are proposed to make it more effective and efficient for solving such model updating problems. New formulae for assessing Markov chain convergence are derived. The effectiveness of the proposed approach for Bayesian updating of structural dynamic models with many uncertain parameters is illustrated with a simulated-data example involving a ten-story building that has 31 model parameters to be updated.
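The sampler discussed above can be illustrated with a minimal Hybrid (Hamiltonian) Monte Carlo sketch. The two-parameter standard-Gaussian target, step size, and trajectory length below are toy assumptions for illustration only, not the paper's structural model or its proposed improvements.

```python
import numpy as np

def neg_log_post(theta):
    # Toy target: standard bivariate Gaussian, -log p(theta) up to a constant.
    return 0.5 * theta @ theta

def grad_neg_log_post(theta):
    return theta

def hmc_step(theta, rng, step=0.15, n_leapfrog=20):
    p = rng.standard_normal(theta.shape)           # sample auxiliary momentum
    h0 = neg_log_post(theta) + 0.5 * p @ p         # initial Hamiltonian
    q, m = theta.copy(), p.copy()
    m -= 0.5 * step * grad_neg_log_post(q)         # leapfrog: half kick
    for _ in range(n_leapfrog - 1):
        q += step * m                              # drift
        m -= step * grad_neg_log_post(q)           # full kick
    q += step * m                                  # final drift
    m -= 0.5 * step * grad_neg_log_post(q)         # final half kick
    h1 = neg_log_post(q) + 0.5 * m @ m
    if rng.random() < np.exp(min(0.0, h0 - h1)):   # Metropolis accept/reject
        return q
    return theta

rng = np.random.default_rng(0)
theta = np.zeros(2)
samples = []
for _ in range(2000):
    theta = hmc_step(theta, rng)
    samples.append(theta)
samples = np.array(samples)
print(samples.mean(axis=0))   # should land near [0, 0]
```

The gradient-guided trajectories are what make the method attractive in higher dimensions: proposals are distant yet accepted with high probability, unlike a random walk.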

2.
A Bayesian probabilistic methodology for structural health monitoring is presented. The method uses a sequence of identified modal parameter data sets to compute the probability that continually updated model stiffness parameters are less than a specified fraction of the corresponding initial model stiffness parameters. In this approach, a high likelihood of reduction in model stiffness at a location is taken as a proxy for damage at the corresponding structural location. The concept extends the idea of using changes in structural model parameters, identified from modal parameter data sets when the structure is initially in an undamaged state and later in a possibly damaged state, as indicators of damage. The extension is needed because effects such as variation in the identified modal parameters in the absence of damage, as well as unavoidable model error, lead to uncertainties in the updated model parameters that in practice obscure health assessment. The method is illustrated by simulating on-line monitoring, wherein specified modal parameters are identified on a regular basis and the probability of damage for each substructure is continually updated.
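The damage-probability indicator described above can be sketched in a few lines: given posterior samples of a substructure stiffness parameter in the baseline and current states, estimate the probability that the current stiffness is below a fraction f of the baseline. The Gaussian posteriors, sample counts, and f = 0.95 here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
theta_init = rng.normal(1.00, 0.03, 10000)   # posterior samples, baseline stiffness
theta_now = rng.normal(0.90, 0.03, 10000)    # posterior samples, current stiffness
f = 0.95                                     # specified stiffness fraction

# Monte Carlo estimate of P(theta_now < f * theta_init)
p_damage = np.mean(theta_now < f * theta_init)
print(p_damage)
```

A value near 1 would flag the substructure as likely damaged; the uncertainty in both posteriors is what keeps this probabilistic rather than a simple threshold test.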

3.
PURPOSE: We explore the use of "bootstrapping" methods to obtain a measure of the reliability of predictions made in part from fits of individual drug level data with a pharmacokinetic (PK) model, and to help clarify parameter identifiability for such models. METHODS: Simulation studies use four sets (A-D) of drug concentration data obtained following a single oral dose. Each set is fit with a two-compartment PK model, and the "bootstrap" is employed to examine the potential predictive variation in estimates of parameter sets. This yields an empirical distribution of plausible steady-state (SS) drug concentration predictions that can be used to form a confidence interval for a prediction. RESULTS: A distinct, narrow confidence region in parameter space is identified for subjects A and B. The bootstrapped sets have a relatively large coefficient of variation (CV) (35-90% for A), yet the corresponding SS drug levels are tightly clustered (CVs of only 2-9%). The results for C and D are dramatically different: the CVs for both the parameters and the predicted drug levels are larger by a factor of 5 or more. The results reveal that the original data for C and D, but not for A and B, can be represented by at least two different PK model manifestations, yet only one provides reliable predictions. CONCLUSIONS: The insights gained can facilitate decisions about parameter identifiability. In particular, the results for C and D have important implications for the degree of implicit overparameterization that may exist in the PK model. In cases where the data support only a single model manifestation, the "bootstrap" method provides the information needed to form a confidence interval for a prediction.
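The bootstrap idea in the abstract can be sketched as follows: refit a PK model to resampled data and collect the spread of a derived prediction. For brevity this sketch uses a one-compartment (mono-exponential) model, synthetic data, and a residual bootstrap; the study itself fits a two-compartment model to measured data from subjects A-D.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 12.0])            # hours post-dose (invented)
c_obs = 10.0 * np.exp(-0.3 * t) * np.exp(rng.normal(0, 0.05, t.size))  # noisy levels

def fit_monoexp(tt, cc):
    # Log-linear least squares: ln C = ln C0 - k t.
    slope, intercept = np.polyfit(tt, np.log(cc), 1)
    return np.exp(intercept), -slope                     # C0, k

c0_hat, k_hat = fit_monoexp(t, c_obs)
log_fit = np.log(c0_hat) - k_hat * t
resid = np.log(c_obs) - log_fit                          # log-scale residuals

boot_preds = []
for _ in range(1000):
    c_boot = np.exp(log_fit + rng.choice(resid, t.size)) # resample residuals, rebuild data
    c0, k = fit_monoexp(t, c_boot)
    boot_preds.append(c0 * np.exp(-k * 24.0))            # predicted level at 24 h
lo, hi = np.percentile(boot_preds, [2.5, 97.5])          # percentile confidence interval
print(f"95% bootstrap CI for C(24 h): [{lo:.4f}, {hi:.4f}]")
```

The empirical distribution of `boot_preds` plays the role of the abstract's distribution of plausible predictions: a tight interval despite scattered parameter estimates is the A/B pattern, while a wide interval signals the identifiability trouble seen for C and D.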

4.
A Bayesian framework incorporating Markov chain Monte Carlo (MCMC) for updating the parameters of a sediment entrainment model is presented. Three subjects were pursued in this study. First, sensitivity analyses were performed via univariate MCMC. The results reveal that the posteriors resulting from two- and three-chain MCMC were not significantly different, and two-chain MCMC converged faster than three chains. The proposal scale factor significantly affects the rate of convergence but not the posteriors. The sampler outputs resulting from informed priors converged faster than those resulting from uninformed priors. The correlation coefficient of the Gram–Charlier (GC) probability density function (PDF) is a physical constraint imposed on the MCMC, in which a higher correlation slows the rate of convergence. The results also indicate that parameter uncertainty is reduced as the number of input data increases. Second, multivariate MCMC was carried out to simultaneously update the velocity coefficient C and the statistical moments of the GC PDF. For fully rough flows, the distribution of C was significantly modified via multivariate MCMC; however, for transitional regimes the posterior values of C resulting from univariate and multivariate MCMC were not significantly different. For both rough and transitional regimes, the differences between the prior and posterior distributions of the statistical moments were limited. Third, the practical effect of the updated parameters on the prediction of entrainment probabilities was demonstrated. With all parameters updated, the sediment entrainment model computed the entrainment probabilities more accurately and realistically. The present work offers an alternative approach to estimating hydraulic parameters that are not easily observed.

5.
Quantitatively directed exploration (QDE) employs a first-order Taylor series expansion to combine sensitivity of a 3D finite-element model (FEM) and uncertainty in geologic data to calculate the variance in project performance, which is employed to direct exploration. This approach is made practical by calculating model sensitivity with direct differentiation of the engineering analysis code, thus producing sensitivity with a single model run rather than multiple runs required by parameter perturbation. Uncertainty in subsurface data is computed through two different extrapolation methods for comparison: kriging and conditional probability (Bayesian updating). Although either of these methods can be employed in QDE, conditional probability is required to quantifiably terminate exploration. The QDE framework is applicable to any subsurface analysis that employs a 3D FEM. A case study illustrates the QDE approach, where settlement is the performance criterion, and layer interface elevations are the uncertain geologic data. Additional boring locations identified by QDE were placed where a combination of model sensitivity and subsurface uncertainty was the greatest, thus directing exploration toward the building footprint and away from existing sampled points.

6.
New techniques for both finite-element model updating and damage localization are presented using multiresponse nondestructive test (NDT) data. A new protocol for combining multiple parameter estimation algorithms for model updating is presented along with an illustrative example. This approach allows for the simultaneous use of both static and modal NDT data to perform model updating at the element level. A new damage index based on multiresponse NDT data is presented for damage localization of structures. This index is based on static and modal strain energy changes in a structure as a result of damage. This method depicts changes in physical properties of each structural element compared to its initial state using NDT data. Deficient or potentially damaged structural elements are then selected as the unknown parameters to be updated by parameter estimation. Error function normalization, error function stacking, and multiresponse parameter estimation methods are proposed for using multiple data types for simultaneous stiffness and mass parameter estimation. Also, multiple sets of measurements with various sizes and missing data points can be utilized. This paper uses a laboratory grid model of a bridge deck built at the University of Cincinnati Infrastructure Institute and the corresponding NDT data for validation of the above damage localization and model updating methods. Multiresponse parameter estimation has been utilized to update the stiffness of bearing pads, and both the stiffness and mass of the connections, using static and dynamic NDT data. The static and modal responses of the updated grid model presented a closer match with the NDT data than the responses from the initial model.

7.
Classification experiments were designed to compare the predictions of a linear decision bound model with those of an exemplar-similarity model incorporating an explicit selective attention mechanism. Linear boundaries could account for the data only in tasks involving separable dimension stimuli and where the boundary separating the categories was orthogonal to the psychological dimensions. Linear boundaries provided poor fits to the classification data in situations involving integral dimensions or when the boundary needed to be oriented in oblique directions in the space. The results were consistent with the selective-attention assumptions embodied in the exemplar model. It was argued that similar assumptions about selective attention need to be incorporated within decision bound models.

8.
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model. A special case of the mixture latent Markov model, the so-called mover-stayer model, is used in this study. Unconditional and conditional models are estimated for the manifest Markov model and the latent Markov model, where the conditional models include a measure of poverty status. Issues of model specification, estimation, and testing using the Mplus software environment are briefly discussed, and the Mplus input syntax is provided. The author applies these four methods to a single example of stage-sequential development in reading competency in the early school years, using data from the Early Childhood Longitudinal Study–Kindergarten Cohort.
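The simplest of the four methods above, the manifest Markov model, amounts to estimating a stage-transition matrix directly from observed stage sequences. The three-stage labels and toy sequences in this sketch are invented; the article itself fits the models in Mplus to ECLS-K reading-competency data.

```python
import numpy as np

# Each row: one child's developmental stage (0, 1, or 2) at four waves.
sequences = [
    [0, 0, 1, 2],
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [1, 1, 2, 2],
]
n_stages = 3
counts = np.zeros((n_stages, n_stages))
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1                 # tally observed stage-to-stage moves

# Maximum-likelihood transition matrix: row-normalize the counts.
transition = counts / counts.sum(axis=1, keepdims=True)
print(transition.round(2))
```

The latent variants in the article add a measurement model on top of exactly this transition structure, so that the stages are inferred rather than directly observed.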

9.
In the present work, a model is developed to predict the rheological behavior of an Al alloy (A356) in the semisolid state, where the alloy is sheared between two parallel plates during continuous cooling. The flow field is represented by the momentum conservation equation, in which the non-Newtonian behavior of the semisolid alloy is incorporated through the Herschel–Bulkley model. In the slurry, the agglomeration and de-agglomeration of the suspended particles under shear are represented using a time-dependent structural parameter. The temperature field during cooling is predicted from the transient energy conservation equation, and hence the solid fraction and the yield stress of the semisolid alloy are continuously updated. Treating the apparent viscosity of the semisolid alloy as a function of the structural parameter, shear stress, and shear rate, the governing equations are solved analytically. Finally, the work predicts the variation of the apparent viscosity of the semisolid A356 alloy with solid fraction. The prediction is first validated against available experimental data, and thereafter the work predicts the effect of process parameters such as shear rate and cooling rate on the apparent viscosity.
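The Herschel–Bulkley law named above relates shear stress to shear rate as tau = tau_y + K * gamma_dot**n, giving an apparent viscosity mu_app = tau / gamma_dot. A hedged numeric illustration follows; the yield stress, consistency K, and flow index n are invented values, not fitted A356 parameters.

```python
def apparent_viscosity(gamma_dot, tau_y=50.0, K=20.0, n=0.8):
    """Apparent viscosity (Pa*s) of a Herschel-Bulkley fluid at shear rate gamma_dot (1/s)."""
    tau = tau_y + K * gamma_dot ** n      # Herschel-Bulkley shear stress (Pa)
    return tau / gamma_dot

# With n < 1 the fluid is shear thinning: apparent viscosity falls as shear rate rises.
for g in (1.0, 10.0, 100.0):
    print(g, apparent_viscosity(g))
```

In the paper, tau_y and the consistency evolve with solid fraction and the structural parameter during cooling, which is what produces the predicted viscosity-versus-solid-fraction curves.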

10.
Because a structure’s modal parameters (natural frequencies and mode shapes) are affected by structural damage, finite-element model updating techniques are often applied to locate and quantify structural damage. However, the dynamic behavior of a structure can be observed only in a narrow knowledge space, which usually causes nonuniqueness and ill-posedness in the damage detection problem formulation. Thus, advanced optimization techniques are a necessary tool for solving such a complex inverse problem. Furthermore, a preselection process of the most significant damage parameters is helpful to improve the efficiency of the damage detection procedure. A new approach, which combines a parameter subset selection process with the application of damage functions, is proposed herein to accomplish this task. Starting with a simple 1D beam, this paper first demonstrates several essential concepts related to the proposed model updating approach. A more advanced example considering a 2D model is then presented. To determine the capabilities of this approach for more complex structures, a trust-region-based optimization method is adopted to solve the corresponding nonlinear minimization problem. The objective is to provide an improved, robust solution to this challenging problem.

11.
This paper presents a newly developed simulation-based approach for Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach. The idea behind TMCMC is to avoid the problem of sampling from difficult target probability density functions (PDFs) by instead sampling from a series of intermediate PDFs that converge to the target PDF and are easier to sample from. The TMCMC approach is motivated by the adaptive Metropolis–Hastings method developed by Beck and Au in 2002 and is based on Markov chain Monte Carlo. It is shown that TMCMC is able to draw samples from some difficult PDFs (e.g., multimodal PDFs, sharply peaked PDFs, and PDFs with flat manifolds). The TMCMC approach can also estimate the evidence of the chosen probabilistic model class conditioned on the measured data, a key component for Bayesian model class selection and model averaging. Three examples are used to demonstrate the effectiveness of the TMCMC approach in Bayesian model updating, model class selection, and model averaging.
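The tempering idea behind TMCMC can be sketched as follows: samples are moved from the prior to the posterior through intermediate PDFs proportional to prior times likelihood raised to a power beta_j, with 0 = beta_0 < ... < beta_m = 1, using reweighting, resampling, and a Metropolis move at each stage. The fixed beta schedule and one-dimensional Gaussian example below are simplifications; the actual algorithm chooses the beta increments adaptively and differs in other details.

```python
import numpy as np

rng = np.random.default_rng(2)
prior_sd = 5.0                                # broad Gaussian prior, N(0, 5^2)

def log_like(x):
    return -0.5 * (x - 3.0) ** 2 / 0.25       # likelihood peaked at 3, sd 0.5

n = 2000
x = rng.normal(0.0, prior_sd, n)              # start with samples from the prior
betas = [0.0, 0.01, 0.1, 0.3, 1.0]            # fixed tempering schedule (simplification)
for b0, b1 in zip(betas, betas[1:]):
    w = np.exp((b1 - b0) * log_like(x))       # plausibility weights to the next level
    x = rng.choice(x, size=n, p=w / w.sum())  # resample by weight
    prop = x + rng.normal(0, 0.5, n)          # one Metropolis move at level b1
    log_a = b1 * (log_like(prop) - log_like(x)) \
        + (x**2 - prop**2) / (2 * prior_sd**2)
    accept = np.log(rng.random(n)) < log_a
    x = np.where(accept, prop, x)
print(x.mean(), x.std())                      # should approach the posterior moments
```

The products of the per-stage mean weights also yield the model evidence estimate mentioned in the abstract, which is what makes the method usable for model class selection.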

12.
This paper addresses the trajectory tracking control of a nonholonomic wheeled mobile manipulator with parameter uncertainties and disturbances. The proposed algorithm adopts a robust adaptive control strategy in which parametric uncertainties are compensated by adaptive update techniques and the disturbances are suppressed. A kinematic controller is first designed to make the robot follow desired end-effector and platform trajectories in task-space coordinates simultaneously. Then, an adaptive control scheme is proposed that ensures the trajectories are accurately tracked even in the presence of external disturbances and uncertainties. The system stability and the convergence of the tracking errors to zero are rigorously proven using Lyapunov theory. Simulation results are given to illustrate the effectiveness of the proposed robust adaptive control law in comparison with a sliding mode controller.

13.
The problem of identification of the modal parameters of a structural model using complete input and incomplete response time histories is addressed. It is assumed that there exist both input error (due to input measurement noise) and output error (due to output measurement noise and modeling error). These errors are modeled by independent white noise processes, and contribute towards uncertainty in the identification of the modal parameters of the model. To explicitly treat these uncertainties, a Bayesian framework is adopted and a Bayesian time-domain methodology for modal updating based on an approximate conditional probability expansion is presented. The methodology allows one to obtain not only the optimal (most probable) values of the updated modal parameters but also their uncertainties, calculated from their joint probability distribution. Calculation of the uncertainties of the identified modal parameters is very important if one plans to proceed with the updating of a theoretical finite-element model based on these modal estimates. The proposed approach requires only one set of excitation and corresponding response data. It is found that the updated probability density function (PDF) can be well approximated by a Gaussian distribution centered at the optimal parameters at which the posterior PDF is maximized. Numerical examples using noisy simulated data are presented to illustrate the proposed method.

14.
Models of soil water transport often calculate conductivity K from the water retention curve (WRC). The residual water content (θr) has been defined as the θ at which K = 0. When nonisothermal, coupled vapor and liquid water transport is considered, the assumption θr > 0 fails because vapor transport often reduces θ to near zero. The author’s objective was to test a model that used unsaturated K(θ) with the θ dependence typical of θr > 0, while a WRC with θr = 0 was used elsewhere in the model. The system was a closed column of steady-state, unsaturated, nonisothermal fine quartz sand with temperature (T) ranging from 5 to 40°C. Soil parameters were adjusted to simulate replicated experimental data from one initial θ condition. The model predicted θ and T within the range of the experimental data and reproduced the sharp drying front. It also satisfactorily modeled experiments with several different initial θ. Model heat flux predictions averaged 11% more than measured values. Experiments performed with two soil column lengths were not substantially different.

15.
A process-based erosion model is used to study parameterization problems of sediment entrainment equations in overland flow areas. One of the equations for entrainment by flow is developed based on a theory of excess stream power, while the other two relate to excess hydraulic shear. The investigation is conducted in two steps. The first step examines parameter optimization for simulated data sets where the parameter values are known. In the second step, parameter optimization for the most robust equation is examined using experimental data from rainfall simulator plots. Results demonstrate that although the model is capable of estimating total sediment yields with relatively small errors in parameter estimates, the converse is true when the optimization is performed for sediment concentrations. Although sediment yields calculated from simulated sediment concentrations match well with observed data, the parameter estimates generally underestimate sediment concentrations on the rising limb of the sediment graphs, and they overestimate them on the falling limb. This difficulty might be related to structural problems in the model, and unique solutions for parameter estimates cannot be obtained.

16.
A reliability-based structural control design approach is presented that optimizes a control system explicitly to minimize the probability of structural failure. Failure is interpreted as the system’s state trajectory exiting a safe region within a given time duration. This safe region is bounded by hyperplanes in the system state space, each of them corresponding to an important response quantity. An efficient approximation is discussed for the analytical evaluation of this probability, and for its optimization through feedback control. This analytical approximation facilitates theoretical discussions regarding the characteristics of reliability-optimal controllers. Versions of the controller design are described for the case using a nominal model of the system, as well as for the case with uncertain model parameters. For the latter case, knowledge about the relative plausibility of the different possible values of the uncertain parameters is quantified through the use of probability distributions on the uncertain parameter space. The influence of the excitation time duration on feedback control design is discussed and a probabilistic treatment of this time duration is suggested. The relationship to H2 (i.e., minimum variance) controller synthesis is also examined.

17.
A Bayesian inference methodology using a Markov chain Monte Carlo (MCMC) sampling procedure is presented for estimating the parameters of computational structural models. This methodology combines prior information, measured data, and forward models to produce a posterior distribution for the system parameters of structural models that is most consistent with all available data. The MCMC procedure is based upon a Metropolis-Hastings algorithm that is shown to function effectively with noisy data, incomplete data sets, and mismatched computational nodes/measurement points. A series of numerical test cases based upon a cantilever beam is presented. The results demonstrate that the algorithm is able to estimate model parameters utilizing experimental data for the nodal displacements resulting from specified forces.
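A stripped-down version of the Metropolis-Hastings procedure described above can be sketched by inferring a single stiffness parameter from noisy displacement data. The linear forward model d = F/k and all numerical values are toy assumptions standing in for the paper's cantilever-beam finite-element model.

```python
import numpy as np

rng = np.random.default_rng(3)
k_true, force, noise_sd = 2.0, 10.0, 0.1
# Synthetic "measured" nodal displacements under a known force.
data = force / k_true + rng.normal(0, noise_sd, 20)

def log_post(k):
    if k <= 0:
        return -np.inf                            # positivity prior on stiffness
    # Gaussian likelihood of the displacements given the forward model d = F / k.
    return -0.5 * np.sum((data - force / k) ** 2) / noise_sd**2

k, chain = 1.0, []
lp = log_post(k)
for _ in range(5000):
    prop = k + rng.normal(0, 0.05)                # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        k, lp = prop, lp_prop
    chain.append(k)
posterior = np.array(chain[1000:])                # discard burn-in
print(posterior.mean())                           # should land near k_true = 2.0
```

The chain's spread after burn-in is the parameter uncertainty; with mismatched nodes or missing channels, only `log_post` changes, which is the flexibility the abstract highlights.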

18.
A 130-residue fragment (D1–D4) taken from a fibronectin-binding protein of Staphylococcus aureus, which contains four fibronectin-binding repeats and is unfolded but biologically active at neutral pH, has been studied extensively by NMR spectroscopy. Using heteronuclear multidimensional techniques, the conformational properties of D1–D4 have been defined at both a global and a local level. Diffusion studies give an average effective radius of 26.2 ± 0.1 Å, approximately 75% larger than that expected for a globular protein of this size. Analysis of chemical shift, ³J(HNα) coupling constant, and NOE data shows that the experimental parameters agree well overall with values measured in short model peptides and with predictions from a statistical model for a random coil. Sequences where specific features give deviations from these random-coil predictions have, however, been identified; these arise from clustering of hydrophobic side chains and electrostatic interactions between charged groups. ¹⁵N relaxation studies demonstrate that local fluctuations of the chain are the dominant motional process giving rise to relaxation of the ¹⁵N nuclei, with a persistence length of approximately 7-10 residues for the segmental motion. The consequences of the structural and dynamical properties of this unfolded protein for its biological role of binding to fibronectin have been considered. The regions of the sequence involved in binding have a high propensity for populating extended conformations, a feature that would allow a number of both charged and hydrophobic groups to be presented to fibronectin for highly specific binding.

19.
This paper presents a three-dimensional (3D) and two-dimensional (2D) numerical analysis of a case study of a combined vacuum and surcharge preloading project for a storage yard at Tianjin Port, China. At this site, a vacuum pressure of 80 kPa and a fill surcharge of 50 kPa were applied on top of the 20-m-thick soft soil layer through prefabricated vertical drains (PVD) to achieve the desired settlements and to avoid embankment instability. In 3D analysis, the actual shape of PVDs and their installation pattern with the in situ soil parameters were simulated. In contrast, the validity of 2D plane strain analysis using equivalent permeability and transformed unit cell geometry was examined. In both cases, the vacuum pressure along the drain length was assumed to be constant as substantiated by the field observations. The finite-element code, ABAQUS, using the modified Cam-clay model was used in the numerical analysis. The predictions of settlement, pore-water pressure, and lateral displacement were compared with the available field data, and an acceptable agreement was achieved for both 2D and 3D numerical analyses. It is found that both 3D and equivalent 2D analyses give similar consolidation responses at the vertical cross section where the lateral strain along the longitudinal axis is zero. The influence of vacuum may extend more than 10 m from the embankment toe, where the lateral movement should be monitored carefully during the consolidation period to avoid any damage to adjacent structures.

20.
This paper investigates the mechanisms for the history-dependent probability of eighth-nerve discharge, which is modeled as the probability that the excitatory postsynaptic potential (EPSP) process crosses the afferent membrane threshold, with the discharge history dependence due to the dependence of the postsynaptic threshold voltage on the time since the previous action potential. The model parameters are the Poisson intensity α_t of vesicle release, the duration ε and probability density P_V(v) of single-vesicle EPSPs, and the threshold voltage curve θ(τ) for spiking. It is proven that the infinitesimal conditional probabilities of discharge exhibit two distinct behaviors. The first is associated with the time τ = T_D, exactly the time the neuron is released from absolute refractoriness, before which there is no intensity [θ(τ) = ∞ for τ < T_D]. At this time the neuron has a nonzero probability of discharge, P(T_D) = lim_{δ→0} Pr(N_{t,t+δ} = 1 | t − w_{N_t} = T_D). The second regime corresponds to the time since the previous spike exceeding the dead time, τ > T_D, during which the intensity exists, λ_t(τ) = lim_{δ→0} (1/δ) Pr(N_{t,t+δ} = 1 | t − w_{N_t} = τ > T_D). The nonzero probability of discharge immediately upon release from the absolute refractory period predicts the nonmonotonic hazard intensity seen in high-spontaneous-rate neurons [R. P. Gaumond, Ph.D. thesis, Washington University, St. Louis (1980)] and in neurons at high driven rates. It is shown that for the lowest range of vesicle release intensities, where the vesicle-release-rate/membrane-integration-time product α_t·ε is small, the nonzero probability of discharge at a point is approximately zero, and the discharge intensity is dominated by a term linear in the vesicle release intensity:

λ_t(τ) ≈ α_t · exp(−∫_{t−ε}^{t} α_σ dσ) · ∫_{θ(τ)}^{∞} P_V(v) dv.

This is precisely the Siebert–Gaumond intensity product form with a monotonic recovered discharge probability. At high vesicle release rates, such as for driven-rate responses, the nonzero probability of discharging at a point becomes significant, and the intensity of discharge grows nonlinearly with α_t, implying that the product model does not hold. The model is demonstrated via the analysis of auditory nerve fibers from the cat.
