Similar Literature
20 similar documents found.
1.
Non-probabilistic convex models require only the variation bounds of the parameters rather than their exact probability distributions; thus, such models can be applied to uncertainty analysis of complex structures when experimental information is lacking. The interval and the ellipsoidal models are the two most commonly used modeling methods in the field of non-probabilistic convex modeling. However, the former can only deal with independent variables, while the latter can only deal with dependent variables. This paper presents a more general non-probabilistic convex model, the multidimensional parallelepiped model. This model can include independent and dependent uncertain variables in a unified framework and can effectively deal with complex ‘multi-source uncertainty’ problems in which dependent and independent variables coexist. For any two parameters, the concepts of the correlation angle and the correlation coefficient are defined. From the marginal intervals of all the parameters and their correlation coefficients, a multidimensional parallelepiped can easily be built as the uncertainty domain for the parameters. Through the introduction of affine coordinates, the parallelepiped model in the original parameter space is converted to an interval model in the affine space, which greatly facilitates subsequent structural uncertainty analysis. The parallelepiped model is applied to structural uncertainty propagation analysis, and the response interval of the structure is obtained in the case of uncertain initial parameters. Finally, the method described in this paper is applied to several numerical examples. Copyright © 2015 John Wiley & Sons, Ltd.
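To illustrate the general idea, the following is a minimal numpy sketch that maps a unit box in affine (interval) coordinates into a two-dimensional parallelepiped built from assumed marginal intervals and a correlation coefficient, and then bounds a toy response over the sampled domain. The shear-matrix construction and all numerical values are hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

# Marginal intervals for two uncertain parameters (hypothetical values).
lower = np.array([9.0, 1.8])
upper = np.array([11.0, 2.2])
mid = 0.5 * (lower + upper)      # interval midpoints
rad = 0.5 * (upper - lower)      # interval radii

# Assumed correlation coefficient between the two parameters.
rho = 0.4

# One simple way to build a parallelepiped "shape" matrix from the radii and
# the correlation coefficient: shear the box by the correlation.
# (Illustrative only; the paper defines its own correlation angle/matrix.)
T = np.array([[1.0, rho],
              [rho, 1.0]])
W = np.diag(rad) @ T

# Affine (interval) coordinates: every delta lies in the unit box [-1, 1]^2.
deltas = np.random.uniform(-1.0, 1.0, size=(1000, 2))

# Map the unit box into the parallelepiped in the original parameter space.
samples = mid + deltas @ W.T

# Uncertainty-propagation sketch: bound a toy response g(x) over the domain.
def g(x):
    return x[:, 0] * x[:, 1]

resp = g(samples)
print("approximate response interval:", resp.min(), resp.max())
```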

2.
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discuss how to formulate informative priors for the independent variables and evaluate the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches for developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). The deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and that it is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. The effects of the different types of informative priors on the model estimations and goodness-of-fit have been compared and summarized. Finally, based on the results, recommendations for future research topics and study applications have been made.
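To make the modeling setup concrete, here is a minimal PyMC sketch of a Poisson-gamma (negative binomial) safety performance function with informative normal priors on the regression coefficients and a gamma prior on the inverse dispersion parameter. The data, covariates, and prior means/standard deviations are placeholders standing in for historical data or expert judgment, not values from the paper.

```python
import numpy as np
import pymc as pm

# Hypothetical segment-level data: AADT, segment length, observed crashes.
rng = np.random.default_rng(0)
n = 100
log_aadt = rng.normal(9.0, 0.5, n)
log_len = rng.normal(0.0, 0.3, n)
crashes = rng.poisson(np.exp(-6.0 + 0.8 * log_aadt + 1.0 * log_len))

X = np.column_stack([np.ones(n), log_aadt, log_len])

# Informative priors: assumed to come from historical data or expert
# experience (placeholder values, not from the paper).
prior_mu = np.array([-6.0, 0.8, 1.0])
prior_sd = np.array([1.0, 0.2, 0.3])

with pm.Model() as poisson_gamma_spf:
    beta = pm.Normal("beta", mu=prior_mu, sigma=prior_sd, shape=3)
    # Inverse dispersion parameter with its own (weakly) informative prior.
    phi = pm.Gamma("phi", alpha=1.0, beta=0.1)
    mu = pm.math.exp(pm.math.dot(X, beta))
    # Poisson-gamma mixture == negative binomial likelihood.
    pm.NegativeBinomial("y", mu=mu, alpha=phi, observed=crashes)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```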

3.
This study presents multi-level analyses for single- and multi-vehicle crashes on a mountainous freeway. Data from a 15-mile mountainous freeway section on I-70 were investigated. Both aggregate and disaggregate models for the two crash conditions were developed. Five years of crash data were used in the aggregate investigation, while the disaggregate models utilized one year of crash data along with real-time traffic and weather data. For the aggregate analyses, safety performance functions were developed for the purpose of revealing the contributing factors for each crash type. Two methodologies, a Bayesian bivariate Poisson-lognormal model and a Bayesian hierarchical Poisson model with correlated random effects, were estimated to simultaneously analyze the two crash conditions with consideration of possible correlations. In addition to the factors related to geometric characteristics, two exposure parameters (annual average daily traffic and segment length) were included. Two different sets of significant explanatory and exposure variables were identified for the single-vehicle (SV) and multi-vehicle (MV) crashes. It was found that the Bayesian bivariate Poisson-lognormal model is superior to the Bayesian hierarchical Poisson model, with a substantially lower DIC and more significant variables. In addition to the aggregate analyses, microscopic real-time crash risk evaluation models were developed for the two crash conditions. Multi-level Bayesian logistic regression models were estimated with random parameters accounting for seasonal variations, crash-unit-level diversity and segment-level random effects capturing unobserved heterogeneity caused by the geometric characteristics. The model results indicate that the effects of the selected variables on crash occurrence vary across seasons and crash units, and that geometric characteristic variables contribute to the segment variations: the more unobserved heterogeneity is accounted for, the better the classification ability. Potential applications of the modeling results from both analysis approaches are discussed.

4.
Modeling a response in terms of the factors that affect it is often required in quality applications. While the normal scenario is commonly assumed in such modeling efforts, leading to the application of linear regression analysis, there are cases when the assumptions underlying this scenario are not valid and alternative approaches need to be pursued, like the normalization of the data or generalized linear modeling. Recently, a new response modeling methodology (RMM) has been introduced, which seems to be a natural generalization of various current scientific and engineering mainstream models, where a monotone convex (concave) relationship between the response and the affecting factor (or a linear combination of factors) may be assumed. The purpose of this paper is to provide the quality practitioner with a survey of these models and demonstrate how they can be derived as special cases of the new RMM. A major implication of this survey is that RMM can be considered a valid approach for quality engineering modeling and, thus, may be conveniently applied where theory-based models are not available or the goodness-of-fit of current empirically-derived models is unsatisfactory. A numerical example demonstrates the application of the new RMM to software reliability-growth modeling. The behavior of the new model when the systematic variation vanishes (there is only random variation) is also briefly explored. Copyright © 2003 John Wiley & Sons, Ltd.

5.
In traffic safety studies, crash frequency modeling of total crashes is the cornerstone before proceeding to more detailed safety evaluation. The relationship between crash occurrence and factors such as traffic flow and roadway geometric characteristics has been extensively explored for a better understanding of crash mechanisms. In this study, a multi-level Bayesian framework has been developed in an effort to identify the crash contributing factors on an urban expressway in the Central Florida area. Two types of traffic data from the Automatic Vehicle Identification system, the processed data capped at the speed limit and the unprocessed data retaining the original speed, were incorporated in the analysis along with roadway geometric information. The model framework was proposed to account for the hierarchical data structure and the heterogeneity among the traffic and roadway geometric data. Multi-level and random parameters models were constructed and compared with the Negative Binomial model under the Bayesian inference framework. Results showed that the unprocessed traffic data were superior. Both the multi-level models and the random parameters models outperformed the Negative Binomial model, and the models with random parameters achieved the best model fit. The contributing factors identified imply that, on the urban expressway, lower speed and higher speed variation could significantly increase the crash likelihood. Other significant geometric factors included auxiliary lanes and horizontal curvature.
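For orientation, the sketch below fits the baseline Negative Binomial crash-frequency model that the multi-level and random-parameters specifications extend. The segment-level covariates and data are synthetic placeholders, and the Bayesian hierarchical structure of the paper is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical expressway-segment data: AADT, mean speed, speed variation,
# auxiliary-lane indicator, horizontal curvature (all placeholder values).
rng = np.random.default_rng(1)
n = 200
aadt = rng.uniform(20_000, 120_000, n)
speed = rng.uniform(40, 70, n)
speed_sd = rng.uniform(2, 12, n)
aux_lane = rng.integers(0, 2, n)
curvature = rng.uniform(0, 0.05, n)

X = sm.add_constant(np.column_stack(
    [np.log(aadt), speed, speed_sd, aux_lane, curvature]))
y = rng.poisson(np.exp(-8 + 0.9 * np.log(aadt) - 0.02 * speed + 0.05 * speed_sd))

# Baseline Negative Binomial crash-frequency model; the paper's multi-level
# and random-parameters Bayesian models extend this by letting coefficients
# vary across segments.
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb.summary())
```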

6.
We provide a comprehensive overview of latent Markov (LM) models for the analysis of longitudinal categorical data. We illustrate the general version of the LM model which includes individual covariates, and several constrained versions. Constraints make the model more parsimonious and allow us to consider and test hypotheses of interest. These constraints may be put on the conditional distribution of the response variables given the latent process (measurement model) or on the distribution of the latent process (latent model). We also illustrate in detail maximum likelihood estimation through the Expectation–Maximization algorithm, which may be efficiently implemented by recursions taken from the hidden Markov literature. We outline methods for obtaining standard errors for the parameter estimates. We also illustrate methods for selecting the number of states and for path prediction. Finally, we mention issues related to Bayesian inference of LM models. Possibilities for further developments are given among the concluding remarks.
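The hidden Markov recursions mentioned above can be illustrated with a short numpy sketch of the scaled forward recursion, which computes the log-likelihood of one subject's categorical response sequence. The initial probabilities, transition matrix and measurement probabilities below are illustrative, not estimates from the paper.

```python
import numpy as np

k = 2                                   # latent states
pi = np.array([0.6, 0.4])               # initial state probabilities
Pi = np.array([[0.9, 0.1],              # transition matrix (latent model)
               [0.2, 0.8]])
Phi = np.array([[0.7, 0.2, 0.1],        # conditional response probabilities
                [0.1, 0.3, 0.6]])       # (measurement model), 3 categories

y = [0, 0, 2, 1, 2]                     # one subject's observed sequence

# Scaled forward recursion: alpha_t(u) proportional to p(y_1..y_t, U_t = u).
alpha = pi * Phi[:, y[0]]
c = alpha.sum()
loglik = np.log(c)
alpha /= c
for t in range(1, len(y)):
    alpha = (alpha @ Pi) * Phi[:, y[t]]
    c = alpha.sum()          # rescale to avoid underflow,
    loglik += np.log(c)      # accumulating the log-likelihood
    alpha /= c

print("log-likelihood of the sequence:", loglik)
```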

7.
The question of whether crash injury severity should be modeled using an ordinal response model or a non-ordered (multinomial) response model is persistent in traffic safety engineering. This paper proposes the use of the partial proportional odds (PPO) model as a statistical modeling technique that both bridges the gap between ordered and non-ordered response modeling and avoids violating the key assumptions about the behavior of crash severity inherent in these two alternatives. The partial proportional odds model is a type of logistic regression that allows certain individual predictor variables to ignore the proportional odds assumption, which normally forces predictor variables to affect each level of the response variable with the same magnitude, while other predictor variables retain this proportional odds assumption. This research examines the effectiveness of the PPO technique in predicting vehicular crash severities on Connecticut state roads using data from 1995 to 2009. The PPO model is compared to ordinal and multinomial response models on the basis of adequacy of model fit, significance of covariates, and out-of-sample prediction accuracy. The results of this study show that the PPO model has adequate fit and performs best overall in terms of covariate significance and holdout prediction accuracy. Combined with the ability to accurately represent the theoretical process of crash injury severity prediction, this makes the PPO technique a favorable approach for crash injury severity modeling: it adequately models and predicts the ordinal nature of the crash severity process while addressing the non-proportional contributions of some covariates.
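To show the mechanics, the following numpy sketch computes category probabilities under one common parameterization of a partial proportional odds model, where some covariates share a single coefficient vector across cumulative splits and others get split-specific coefficients. The parameterization, covariate names and numerical values are hypothetical, not those of the Connecticut models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ppo_category_probs(x_po, x_npo, alpha, beta, gamma):
    """Category probabilities under a partial proportional odds model.

    x_po  : covariates that keep the proportional odds assumption
    x_npo : covariates allowed to violate it
    alpha : thresholds, one per cumulative split (J-1 of them)
    beta  : common coefficients for x_po
    gamma : split-specific coefficients for x_npo, shape (J-1, len(x_npo))
    Illustrative parameterization; signs and links vary across software.
    """
    # Cumulative probabilities P(Y <= j) for j = 1..J-1; these should be
    # checked for monotonicity, which the PPO model does not guarantee.
    cum = sigmoid(alpha - x_po @ beta - gamma @ x_npo)
    cum = np.concatenate([cum, [1.0]])
    # Differencing the cumulative probabilities gives category probabilities.
    return np.diff(np.concatenate([[0.0], cum]))

# Hypothetical crash-severity example with 3 levels (PDO, injury, fatal).
alpha = np.array([1.0, 3.0])
beta = np.array([-0.5, 0.8])            # e.g. seat-belt use, speed (proportional)
gamma = np.array([[0.2], [1.1]])        # e.g. truck involvement (non-proportional)
p = ppo_category_probs(np.array([1.0, 0.6]), np.array([1.0]), alpha, beta, gamma)
print(p, p.sum())
```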

8.
Reference Bayesian methods for recapture models with heterogeneity
In the context of capture–recapture experiments, heterogeneous capture probabilities are often perceived as one of the most challenging features to be incorporated in statistical models. In this paper we propose, within a Bayesian framework, a new modeling strategy for inference on the unknown population size in the presence of heterogeneity of subject characteristics. Our approach is attractive in that the parameters are easily interpretable. Moreover, no parametric distributional assumptions are imposed on the latent distribution of individual heterogeneous propensities to be captured. Bayesian inference based on the marginal likelihood bypasses some common identifiability issues, and a formal default prior distribution can be derived. Alternative default prior choices are considered and compared. The performance of our formal default approach is favorably evaluated with two real data sets and with a small simulation study.

9.
Severe crashes cause serious social and economic losses, so reducing crash injury severity has become one of the key objectives in the management of high-speed facilities (freeways and expressways). Traditional crash injury severity analysis utilized data mainly from crash reports concerning the crash occurrence information, drivers' characteristics and roadway geometric related variables. In this study, real-time traffic and weather data were introduced to analyze crash injury severity. The space mean speeds captured by the Automatic Vehicle Identification (AVI) system on the two roadways were used as explanatory variables, and data from a mountainous freeway (I-70 in Colorado) and an urban expressway (State Road 408 in Orlando) were used to check the consistency of the analysis results. Binary probit (BP) models were estimated to classify the non-severe (property damage only) crashes and severe (injury and fatality) crashes. First, the Bayesian BP models' results were compared to the results from maximum likelihood estimation BP models, and it was concluded that Bayesian inference was superior, yielding more significant variables. Then different levels of hierarchical Bayesian BP models were developed, with random effects accounting for unobserved heterogeneity at the segment level and the individual crash level, respectively. Modeling results from both studied locations demonstrate that large variations of speed prior to the crash occurrence would increase the likelihood of severe crash occurrence. Moreover, when unobserved heterogeneity is considered in the Bayesian BP models, the model goodness-of-fit improves substantially. Finally, possible future applications of the model results and the hierarchical Bayesian probit models are discussed.

10.
Freeway crash occurrences are highly influenced by geometric characteristics, traffic status, weather conditions and drivers' behavior. For a mountainous freeway that suffers from adverse weather conditions, it is critical to incorporate real-time weather information and traffic data in the crash frequency study. In this paper, a Bayesian inference method was employed to model one year's crash data on I-70 in the state of Colorado. Real-time weather and traffic variables, along with geometric characteristic variables, were evaluated in the models. Two scenarios were considered in this study, one season-based and one crash-type-based. For the methodology, a Poisson model and two random effect models were employed and compared under a Bayesian inference framework, with the deviance information criterion (DIC) utilized as the comparison measure. The correlated random effect models outperformed the others. The results indicate that the weather condition variables, especially precipitation, play a key role in the crash occurrence models. The conclusions imply that different active traffic management strategies should be designed for different seasons, and that single-vehicle crashes have a different crash mechanism than multi-vehicle crashes.

11.
Phase I applications of statistical profile monitoring have recently been extended to the case where the response variable is binary. The current research is motivated by the need to provide a unified framework for Phase I control in statistical profile monitoring that can handle a large class of response variables, such as continuous, count, or categorical responses. The unified framework is essentially based on applying the change point model to the class of generalized linear models. The proposed Phase I control chart is assessed and compared with existing charts under binomial and Poisson profiles. Some diagnostic procedures are also discussed. A real data set obtained from a test lab concerning the dispersion of carbon black filler in a rubber mix is used to demonstrate how the proposed chart can be used in practical applications. Some future research directions are also discussed. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Experiments in materials science investigating cubic crystalline structures often collect data that are in truth equivalence classes of crystallographically symmetric orientations. These are intended to represent how the lattice structures of particles are oriented relative to a reference coordinate system. Motivated by a materials science application, we formulate parametric probability models for “unlabeled orientation data.” This amounts to developing models on equivalence classes of three-dimensional rotations. We use a flexible existing model class for random rotations (called uniform-axis-random-spin models) to induce probability distributions on the equivalence classes of rotations. We develop one-sample Bayesian inference for the parameters in these models, and compare this methodology to some likelihood-based approaches. We also contrast the new parametric analysis of unlabeled orientation data with other analyses that proceed as if the data have been preprocessed into honest orientation data. Supplementary materials for this article are available online.

13.
Traditional control charts for monitoring attribute data usually neglect the order among the attribute levels, such as good, marginal and bad, of a categorical factor. Such order may be reflected by an underlying continuous variable, which determines the level of the categorical factor by classifying it according to thresholds on the latent continuous scale. This paper exploits this ordinal information and proposes a control chart for detecting location shifts in the latent variable based merely on the attribute-level counts, regardless of the continuous values of the latent variable. The proposed ordinal chart is very simple to construct and uses the same setting as conventional categorical charts. Numerical simulations demonstrate the superiority of this simple ordinal categorical chart.
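The sketch below illustrates the general idea with a simple score-type statistic: ordinal category counts are mapped to latent-scale scores (conditional means of a standard normal between assumed thresholds) and standardized, so that a location shift in the latent variable shows up in the plotted statistic. This is a generic illustration under assumed thresholds, not the specific chart proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

# In-control thresholds splitting the latent scale into good/marginal/bad
# (illustrative values only).
thresholds = np.array([-0.5, 1.0])
p0 = np.diff(norm.cdf(np.concatenate([[-np.inf], thresholds, [np.inf]])))

# Latent-scale scores for each ordinal category: conditional means of a
# standard normal variable within each threshold band.
scores = np.array([
    -norm.pdf(-0.5) / norm.cdf(-0.5),                          # E[Z | Z < -0.5]
    (norm.pdf(-0.5) - norm.pdf(1.0)) / (norm.cdf(1.0) - norm.cdf(-0.5)),
    norm.pdf(1.0) / (1 - norm.cdf(1.0)),                       # E[Z | Z > 1.0]
])

def chart_statistic(counts):
    """Standardized score statistic for a location shift in the latent variable."""
    n = counts.sum()
    mean0 = p0 @ scores
    var0 = p0 @ (scores - mean0) ** 2
    return (counts @ scores / n - mean0) / np.sqrt(var0 / n)

# A subgroup of 50 items classified as good/marginal/bad.
print(chart_statistic(np.array([28, 17, 5])))
```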

14.
In this paper a novel methodology for the prediction of the occurrence of road accidents is presented. The methodology utilizes a combination of three statistical methods: (1) gamma-updating of the occurrence rates of injury accidents and injured road users; (2) hierarchical multivariate Poisson-lognormal regression analysis, taking into account correlations amongst multiple dependent model response variables and effects of discrete accident count data, e.g. over-dispersion; and (3) Bayesian inference algorithms, which are applied by means of data mining techniques supported by Bayesian probabilistic networks in order to represent non-linearity between risk-indicating and model response variables, as well as the different types of uncertainties that might be present in the development of the specific models.

15.
Measuring risk when data are available only on an ordinal scale is not an easy task, and the most common approach to risk modeling is quantitative. In this paper, we propose the Criticality Index: a risk index suitable for studies where data are collected on ordinal scales, defined on the relative frequencies of the considered ordinal variables. Exact and asymptotic distributions of the index estimator are derived, and its statistical properties are studied. Moreover, confidence intervals based on asymptotic normality are defined. The proposed index may be used as an initial view of the level of risk, for comparisons among environments, to indicate how risk changes over time, and to identify interventions in control systems. An application in a quality control framework to data on the severity, detection, and occurrence of product defects of a multinational manufacturer is also presented.

16.
Uesaka, Hiroyuki. Behaviormetrika, 1986, 13(19): 87-101.

An extension of the bivariate probit model is presented for bivariate polychotomous ordered categorical responses. A linear model is considered on the assumption that the observed categorical variables are manifestations of latent continuous variables having a bivariate normal distribution. Methods of inference and analytical procedures are given on the basis of maximum likelihood. Various applications are discussed and illustrated with examples. The model can be applied to the analysis of association, paired comparisons, matched-pair experiments, testing homogeneity of marginal distributions and symmetry of a square table, factorial analysis with bivariate ordered categorical responses, and so on.
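As a small illustration of the latent-variable formulation, the sketch below computes the likelihood contribution of one observation in a bivariate ordered probit model: each observed pair of ordinal categories corresponds to a rectangle on the latent bivariate-normal scale. The thresholds, latent correlation and linear predictors are hypothetical, not estimates from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def cell_probability(j1, j2, eta1, eta2, cuts1, cuts2, rho):
    """P(Y1 = j1, Y2 = j2) for latent (Z1, Z2) ~ BVN with correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    # Use large finite numbers in place of +/- infinity on the latent scale.
    c1 = np.concatenate([[-8.0], cuts1, [8.0]])
    c2 = np.concatenate([[-8.0], cuts2, [8.0]])
    lo = np.array([c1[j1] - eta1, c2[j2] - eta2])
    hi = np.array([c1[j1 + 1] - eta1, c2[j2 + 1] - eta2])
    F = lambda x, y: multivariate_normal.cdf([x, y], mean=[0.0, 0.0], cov=cov)
    # Rectangle probability via inclusion-exclusion on the bivariate CDF.
    return F(hi[0], hi[1]) - F(lo[0], hi[1]) - F(hi[0], lo[1]) + F(lo[0], lo[1])

cuts = np.array([-0.5, 0.7])            # three ordered categories per margin
p = cell_probability(1, 2, eta1=0.3, eta2=-0.2, cuts1=cuts, cuts2=cuts, rho=0.5)
print("probability of observing categories (1, 2):", p)
```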

17.
The widely adopted techniques for regional crash modeling include the negative binomial (NB) model and the Bayesian negative binomial model with a conditional autoregressive prior (CAR). The outputs from both models consist of a set of fixed global parameter estimates. However, the impacts of predictor variables on crash counts might not be stationary over space. This study quantitatively investigates this spatial heterogeneity in regional safety modeling using two advanced approaches: the random parameter negative binomial model (RPNB) and the semi-parametric geographically weighted Poisson regression model (S-GWPR).

18.
张乔微, 李艳婷. 《工业工程》, 2020, 23(3): 145-153.
To address the monitoring of mixed-type data containing both ordinal and nominal variables, a multidimensional mixed-type data control chart based on the LOF algorithm (mixed-type data local outlier factor control chart, MLOF) is proposed. When monitoring changes in process variables, the chart accounts for the rank ordering of the ordinal variables and the information entropy of the nominal variables, and measures the abnormality of an observation by its data density. A simulation case based on a credit-card application data set and a real example based on the German credit-card data set are used to compare the MLOF chart with existing mixed-type data control charts in outlier detection. The simulation covers 30 monitoring scenarios; in 57% of them the MLOF chart shows the best overall performance. The real example further verifies that the MLOF chart is better suited to monitoring processes with large volumes of mixed-type data and complex cluster structures.
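The sketch below shows the basic LOF-on-mixed-data idea using a simple Gower-style distance (ordinal ranks treated as scaled numeric values, nominal levels by mismatch) and scikit-learn's LocalOutlierFactor with a precomputed distance matrix. The MLOF chart in the paper additionally uses the information entropy of the nominal variables, which is not reproduced here; all data are synthetic placeholders.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
n = 200
ordinal = rng.integers(0, 5, size=(n, 2))       # e.g. rating levels 0..4
nominal = rng.integers(0, 3, size=(n, 2))       # e.g. unordered categories

# Pairwise mixed-type distance matrix (Gower-style mix).
ord_scaled = ordinal / 4.0                      # ranks mapped to [0, 1]
d_ord = np.abs(ord_scaled[:, None, :] - ord_scaled[None, :, :]).sum(axis=2)
d_nom = (nominal[:, None, :] != nominal[None, :, :]).sum(axis=2)
D = (d_ord + d_nom) / (ordinal.shape[1] + nominal.shape[1])

lof = LocalOutlierFactor(n_neighbors=20, metric="precomputed")
labels = lof.fit_predict(D)                     # -1 flags potential outliers
scores = -lof.negative_outlier_factor_          # larger => more anomalous

# A chart-style signal could compare each point's LOF score with a control
# limit estimated from in-control data, e.g. an empirical quantile.
limit = np.quantile(scores, 0.99)
print("signals:", np.where(scores > limit)[0])
```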

19.
The univariate skew-normal distribution was introduced by Azzalini in 1985 as a natural extension of the classical normal density to accommodate asymmetry. He extensively studied the properties of this distribution and, in conjunction with coauthors, extended this class to include the multivariate analog of the skew-normal. Arnold et al. (1993) introduced a more general skew-normal distribution as the marginal distribution of a truncated bivariate normal distribution in which X was retained only if Y satisfied certain constraints. Using this approach, more general univariate and multivariate skewed distributions have been developed. A survey of such models is provided together with discussion of related inference questions.
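For reference, Azzalini's univariate skew-normal density is f(x) = 2 φ(x) Φ(αx), where φ and Φ are the standard normal pdf and cdf and α controls the asymmetry. The short sketch below implements this density and cross-checks it against scipy's skewnorm implementation.

```python
import numpy as np
from scipy.stats import norm, skewnorm

def skew_normal_pdf(x, alpha):
    # Azzalini (1985) density: 2 * phi(x) * Phi(alpha * x).
    return 2.0 * norm.pdf(x) * norm.cdf(alpha * x)

x = np.linspace(-4, 4, 9)
alpha = 3.0
# Cross-check against scipy's implementation of the same density.
print(np.allclose(skew_normal_pdf(x, alpha), skewnorm.pdf(x, alpha)))  # True
```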

20.
In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes Support Vector Machine (SVM) models to address complex, large and multi-dimensional spatial data in crash prediction. A Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency when handling high-dimensional spatial data. To demonstrate the proposed approaches and to compare them with the Bayesian spatial model with a conditional autoregressive prior (i.e., CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fitting and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best predictive capability is associated with the model that considers proximity based on centroid distance, uses the RBF kernel, and sets aside 10% of the whole dataset as testing data, which further exhibits the SVM models' capacity for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, the SVM models exhibit better goodness-of-fit than the CAR models when the whole dataset is used as the sample. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of the explanatory variables on the mean predicted probabilities of crash occurrence. The results conform to the coefficient estimates of the CAR models, which supports the employment of the SVM model as an alternative in regional safety modeling.
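To convey the basic setup, here is a minimal scikit-learn sketch of an RBF-kernel SVM for zone-level crash frequency, with zone centroid coordinates included as a crude stand-in for spatial proximity. The features, data, and the 10% hold-out split are synthetic placeholders; the paper's centroid-distance proximity formulation, CFS feature selection, and the Hillsborough data are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic zone-level data (placeholder values).
rng = np.random.default_rng(0)
n = 300
centroid_xy = rng.uniform(0, 50, size=(n, 2))        # zone centroids (km)
vmt = rng.uniform(1e4, 5e5, n)                       # vehicle miles traveled
pop_density = rng.uniform(100, 5000, n)
crashes = rng.poisson(0.0002 * vmt + 0.002 * pop_density)

X = np.column_stack([centroid_xy, np.log(vmt), pop_density])
X_tr, X_te, y_tr, y_te = train_test_split(X, crashes, test_size=0.10,
                                          random_state=0)

# RBF-kernel SVM regression on standardized features.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```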
