Similar Documents
20 similar documents retrieved.
1.
When a two-level multilevel model (MLM) is used for repeated growth data, the individuals constitute level 2 and the successive measurements constitute level 1, which is nested within the individuals that make up level 2. The heterogeneity among individuals is represented by either the random-intercept or random-coefficient (slope) model. The variance components at level 1 involve serial effects and measurement errors under constant variance or heteroscedasticity. This study hypothesizes that missing serial effects and/or heteroscedasticity may bias the results obtained from two-level models. To illustrate this effect, we conducted two simulation studies, where the simulated data were based on the characteristics of an empirical mouse tumour data set. The results suggest that for repeated growth data with constant variance (measurement error) and misspecified serial effects (ρ > 0.3), the proportion of level-2 variation (intra-class correlation coefficient) increases with ρ, and the two-level random-coefficient model is the minimum-AIC (or AICc) model when compared with the fixed model, the heteroscedasticity model, and the random-intercept model. In addition, when the serial effect (ρ > 0.1) and heteroscedasticity are both misspecified, the two-level random-coefficient model is the minimum-AIC (or AICc) model when compared with the fixed model and the random-intercept model. This study demonstrates that missing serial effects and/or heteroscedasticity may indicate heterogeneity among individuals in repeated growth data (mixed or two-level MLM). This issue is critical in biomedical research.
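The proportion of level-2 variation mentioned above (the intra-class correlation coefficient) can be sketched numerically. The following is a minimal simulation under a random-intercept model with constant level-1 variance; all parameter values are invented for illustration and are not taken from the mouse tumour study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_id, n_obs = 100, 8            # level-2 individuals x level-1 repeated measures
sigma_u, sigma_e = 2.0, 1.0     # between-individual SD, measurement-error SD (assumed)

u = rng.normal(0.0, sigma_u, size=(n_id, 1))      # random intercepts (level 2)
e = rng.normal(0.0, sigma_e, size=(n_id, n_obs))  # level-1 errors (constant variance)
t = np.arange(n_obs)
y = 1.0 + 0.5 * t + u + e                         # shared growth trend + random intercept

# Remove the shared time trend, then use balanced one-way ANOVA moments:
r = y - y.mean(axis=0, keepdims=True)
row_means = r.mean(axis=1)
msb = n_obs * np.sum((row_means - r.mean()) ** 2) / (n_id - 1)      # between-individual
msw = np.sum((r - row_means[:, None]) ** 2) / (n_id * (n_obs - 1))  # within-individual
sigma_u2_hat = (msb - msw) / n_obs
icc = sigma_u2_hat / (sigma_u2_hat + msw)         # true value here is 4 / 5 = 0.8
```

Raising the between-individual variance relative to the level-1 variance pushes the ICC toward one, which is the sense in which level-2 heterogeneity dominates.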

2.
Incomplete growth curve data often result from missing or mistimed observations in a repeated measures design. Virtually all methods of analysis rely on the dispersion matrix estimates. A Monte Carlo simulation was used to compare three methods of estimation of dispersion matrices for incomplete growth curve data. The three methods were: 1) maximum likelihood estimation with a smoothing algorithm, which finds the closest positive semidefinite estimate of the pairwise estimated dispersion matrix; 2) a mixed effects model using the EM (expectation-maximization) algorithm; and 3) a mixed effects model with the scoring algorithm. The simulation included 5 dispersion structures, 20 or 40 subjects with 4 or 8 observations per subject, and 10% or 30% missing data. In all the simulations, the smoothing algorithm was the poorest estimator of the dispersion matrix. In most cases, there were no significant differences between the scoring and EM algorithms. The EM algorithm tended to be better than the scoring algorithm when the variances of the random effects were close to zero, especially for the simulations with 4 observations per subject and two random effects.
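The "smoothing" step of method 1 replaces a pairwise-deleted dispersion estimate, which need not be positive semidefinite, with its closest positive semidefinite matrix. A common way to sketch this is eigenvalue clipping; the indefinite matrix below is a made-up example, not the paper's data:

```python
import numpy as np

def nearest_psd(S):
    """Closest symmetric positive semidefinite matrix in Frobenius norm,
    obtained by clipping negative eigenvalues at zero (an eigenvalue-
    truncation sketch of the smoothing idea, not the authors' exact code)."""
    S = (S + S.T) / 2.0
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

# A made-up pairwise-deleted correlation estimate that is indefinite:
S = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.9],
              [0.7, -0.9, 1.0]])
S_psd = nearest_psd(S)
```

Pairwise deletion estimates each covariance from a different subset of subjects, which is exactly why the assembled matrix can fail to be positive semidefinite.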

3.
This article introduces the development, types, and content of latent growth models (LGMs). Through analysis, induction, and synthesis, it compares latent growth models with the traditional t-test, analysis of variance, and regression analysis, and discusses the design requirements of LGM studies, illustrated with a salary growth model for professional baseball players in Taiwan. Using a literature survey, it reviews how LGMs have been applied in sport, gives examples of their use in the field, and argues that latent growth models will receive growing attention and develop rapidly in sports science.

4.
In the analysis of survival data, when nonproportional hazards are encountered, the Cox model is often extended to allow for a time-dependent effect by accommodating a varying coefficient. This extension, however, cannot resolve the nonproportionality caused by heterogeneity. In contrast, the heteroscedastic hazards regression (HHR) model is capable of modeling heterogeneity and thus can be applied when dealing with nonproportional hazards. In this paper, we study the application of the HHR model possibly equipped with varying coefficients. An LRR (logarithm of relative risk) plot is suggested when investigating the need to impose varying coefficients. Constancy and degeneration in the plot are used as diagnostic criteria. For the HHR model, a ‘piecewise effect’ (PE) analysis and an ‘average effect’ (AE) analysis are introduced. For the PE setting, we propose a score-type test for covariate-specific varying coefficients. The Stanford Heart Transplant data are analyzed for illustration. In the case of degeneration being destroyed by a polynomial covariate, piecewise constancy and/or monotonicity of the LRRs is considered as an alternative criterion based on the PE analysis. Finally, under the framework of the varying-coefficient HHR model, the meanings of the PE and AE analyses, along with their dynamic interpretation, are discussed.

5.
Computer modeling is having a profound effect on scientific research, replacing direct physical experimentation by computer simulation of complex models. In this research, the computer output, X(t), is assumed to be a multivariate, three-dimensional (time) Ornstein-Uhlenbeck (O–U) process with parametric covariance function. It is shown that the ML estimates of the parameters are strongly consistent and asymptotically normal when the observations are taken from a complete lattice, not necessarily equally spaced.
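As a one-dimensional, equally spaced simplification of the setting above (the paper treats a three-dimensional lattice), ML estimation for an O–U process can be sketched via its exact AR(1) discretization; all parameter values are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, mu, sigma, dt, n = 1.5, 0.0, 0.5, 0.1, 20000   # assumed O-U parameters
phi = np.exp(-theta * dt)                              # exact AR(1) coefficient
sd = sigma * np.sqrt((1.0 - phi ** 2) / (2.0 * theta)) # exact transition SD
x = np.empty(n)
x[0] = mu
for i in range(1, n):
    x[i] = mu + phi * (x[i - 1] - mu) + sd * rng.normal()

# For a Gaussian AR(1), conditional ML reduces to least squares:
slope, intercept = np.polyfit(x[:-1], x[1:], 1)        # slope estimates phi
theta_hat = -np.log(slope) / dt                        # recovered mean-reversion rate
```

The consistency claimed in the abstract is visible here: as n grows, `theta_hat` concentrates around the true mean-reversion rate.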

6.
Using simulation techniques, the null distribution properties of seven hypothesis testing procedures and a comparison of their powers are investigated for incomplete-data small-sample growth curve situations. The testing procedures are a combination of two growth curve models (the Potthoff and Roy model for complete data and Kleinbaum's extension to incomplete data) and three estimation techniques (two involving means of existing observations and the other using the EM algorithm), plus an analysis of a subset of complete data. All of the seven tests use the Kleinbaum Wald statistic, but different tests use different information. The hypotheses of identical and parallel growth curves are tested under the assumptions of multivariate normality and a linear polynomial mean growth curve for each of two groups. Good approximate null distributions are found for all procedures, and one procedure is identified as empirically most powerful for the situations investigated.

7.
Motivated by time series of atmospheric concentrations of certain pollutants the authors develop bent‐cable regression for autocorrelated errors. Bent‐cable regression extends the popular piecewise linear (broken‐stick) model, allowing for a smooth change region of any non‐negative width. Here the authors consider autoregressive noise added to a bent‐cable mean structure, with unknown regression and time series parameters. They develop asymptotic theory for conditional least‐squares estimation in a triangular array framework, wherein each segment of the bent cable contains an increasing number of observations while the autoregressive order remains constant as the sample size grows. They explore the theory in a simulation study, develop implementation details, apply the methodology to the motivating pollutant dataset, and provide a scientific interpretation of the bent‐cable change point not discussed previously. The Canadian Journal of Statistics 38: 386–407; 2010 © 2010 Statistical Society of Canada
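The bent-cable mean structure can be written down directly: an incoming line, a quadratic bend of half-width gamma centred at the change point, and an outgoing line, with gamma = 0 recovering the broken stick. The coefficients below are arbitrary illustrative values, not the pollutant-data estimates:

```python
import numpy as np

def bent_cable(t, b0, b1, b2, tau, gamma):
    """Bent-cable mean: linear before tau - gamma, quadratic bend on
    [tau - gamma, tau + gamma], linear after. gamma = 0 gives the broken stick."""
    t = np.asarray(t, dtype=float)
    q = np.where(t > tau + gamma, t - tau, 0.0)
    if gamma > 0:
        inside = np.abs(t - tau) <= gamma
        q = np.where(inside, (t - tau + gamma) ** 2 / (4.0 * gamma), q)
    return b0 + b1 * t + b2 * q

# Arbitrary illustrative parameters:
t = np.linspace(0.0, 10.0, 101)
y_smooth = bent_cable(t, 1.0, 0.5, -1.0, 5.0, 2.0)   # smooth change of width 4
y_stick = bent_cable(t, 1.0, 0.5, -1.0, 5.0, 0.0)    # broken-stick limit
```

The quadratic piece matches the two lines in both value and slope at the edges of the change region, which is the smoothness the bent cable adds over the broken stick.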

8.
The Modulated Power Law process has been recently proposed as a suitable model for describing the failure pattern of repairable systems when both renewal-type behaviour and time trend are present. Unfortunately, the maximum likelihood method provides neither accurate confidence intervals on the model parameters for small or moderate sample sizes nor predictive intervals on future observations.

This paper proposes a Bayes approach, based on both non-informative and vague priors, as an alternative to the classical method. Point and interval estimation of the parameters, as well as point and interval prediction of future failure times, are given. Monte Carlo simulation studies show that the Bayes estimation and prediction possess good statistical properties in a frequentist context and, thus, are a valid alternative to the maximum likelihood approach.

Numerical examples illustrate the estimation and prediction procedures.

9.
In this paper, we use a particular piecewise deterministic Markov process (PDMP) to model the evolution of a degradation mechanism that may arise in various structural components, namely, the fatigue crack growth. We first derive some probability results on the stochastic dynamics with the help of Markov renewal theory: a closed-form solution for the transition function of the PDMP is given. Then, we investigate some methods to estimate the parameters of the dynamical system, involving Bogolyubov's averaging principle and maximum likelihood estimation for the infinitesimal generator of the underlying jump Markov process. Numerical applications on a real crack data set are given.

10.
The hazard function plays an important role in reliability or survival studies since it describes the instantaneous risk of failure of items at a time point, given that they have not failed before. In some real life applications, abrupt changes in the hazard function are observed due to overhauls, major operations or specific maintenance activities. In such situations it is of interest to detect the location where such a change occurs and estimate the size of the change. In this paper we consider the problem of estimating a single change point in a piecewise constant hazard function when the observed variables are subject to random censoring. We suggest an estimation procedure that is based on certain structural properties and on least squares ideas. A simulation study is carried out to compare the performance of this estimator with two estimators available in the literature: an estimator based on a functional of the Nelson-Aalen estimator and a maximum likelihood estimator. The proposed least squares estimator turns out to be less biased than the other two estimators, but has a larger variance. We illustrate the estimation method on some real data sets.
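The change-point setup can be sketched with the maximum likelihood comparator mentioned above (not the authors' least-squares procedure): for each candidate change point the two hazard levels are profiled out as events over exposure, and the candidate maximizing the profile likelihood is selected. Censoring is omitted and all parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, tau_true, n = 0.2, 1.0, 3.0, 5000     # assumed hazards and change point
u = -np.log(rng.uniform(size=n))                  # unit-exponential cumulative hazards
t = np.where(u < lam1 * tau_true, u / lam1,
             tau_true + (u - lam1 * tau_true) / lam2)   # inverse cumulative hazard

def profile_loglik(tau, t):
    expo1 = np.minimum(t, tau).sum()              # exposure time before tau
    expo2 = np.maximum(t - tau, 0.0).sum()        # exposure time after tau
    d1 = int((t <= tau).sum())                    # events before tau
    d2 = len(t) - d1                              # events after tau
    if d1 == 0 or d2 == 0:
        return -np.inf
    l1, l2 = d1 / expo1, d2 / expo2               # profiled-out hazard levels
    return d1 * np.log(l1) - l1 * expo1 + d2 * np.log(l2) - l2 * expo2

grid = np.linspace(0.5, 8.0, 151)
tau_hat = grid[np.argmax([profile_loglik(g, t) for g in grid])]
```

Because a change point is estimated at a faster-than-root-n rate, even a coarse grid localizes it sharply in samples of this size.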

11.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3259-3277
Real data may exhibit larger (or smaller) variability than assumed in an exponential-family model, the basis of generalized linear models and additive models. To analyze such data, smooth estimation of the mean and the dispersion function has been introduced in extended generalized additive models using P-spline techniques. This methodology is further explored here by allowing some of the covariates to be modeled parametrically and some nonparametrically. The main contribution of this article is a simulation study investigating the finite-sample performance of the P-spline estimation technique in these extended models, including comparisons with a standard generalized additive modeling approach, as well as with a hierarchical modeling approach.

12.
In this paper, we study the maximum likelihood estimation of a model with mixed binary responses and censored observations. The model is very general and includes the Tobit model and the binary choice model as special cases. We show that, by using additional binary choice observations, our method is more efficient than the traditional Tobit model. Two iterative procedures are proposed to compute the maximum likelihood estimator (MLE) for the model, based on the EM algorithm (Dempster et al., 1977) and the Newton-Raphson method. The uniqueness of the MLE is proved. The simulation results show that the inconsistency and inefficiency can be significant when the Tobit method is applied to the present mixed model. The experimental results also suggest that the EM algorithm is much faster than the Newton-Raphson method for the present mixed model. The method also allows one to combine two data sets, the smaller data set with more detailed observations and the larger data set with less detailed binary choice observations, in order to improve the efficiency of estimation. This may entail substantial savings when one conducts surveys.

13.
Multivariate Logit models are convenient for describing multivariate correlated binary choices, as they provide closed-form likelihood functions. However, the computation time required for calculating choice probabilities increases exponentially with the number of choices, which makes maximum likelihood-based estimation infeasible when many choices are considered. To solve this, we propose three novel estimation methods: (i) stratified importance sampling, (ii) composite conditional likelihood (CCL), and (iii) the generalized method of moments, which yield consistent estimates and retain small-sample bias similar to maximum likelihood. Our simulation study shows that computation times for CCL are much smaller and that its efficiency loss is small.

14.
Regression analyses are commonly performed with doubly limited continuous dependent variables; for instance, when modeling the behavior of rates, proportions, and income concentration indices. Several models are available in the literature for use with such variables, one of them being the unit gamma regression model. In all such models, parameter estimation is typically performed using the maximum likelihood method, and inferences on the model's parameters are usually based on the likelihood ratio test. Such a test can, however, deliver quite imprecise inferences when the sample size is small. In this paper, we propose two modified likelihood ratio test statistics for use with unit gamma regressions that deliver much more accurate inferences when the number of data points is small. Numerical (i.e. simulation) evidence is presented for both fixed-dispersion and varying-dispersion models, and also for tests that involve nonnested models. We also present and discuss two empirical applications.

15.
We consider a stochastic process driven by diffusions and jumps. Given a discrete record of observations, we devise a technique for identifying the times when jumps larger than a suitably defined threshold occurred. This allows us to determine a consistent non‐parametric estimator of the integrated volatility when the infinite activity jump component is Lévy. Jump size estimation and central limit results are proved in the case of finite activity jumps. Some simulations illustrate the applicability of the methodology in finite samples and its superiority over multipower variations, especially when it is not possible to use high-frequency data.
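The thresholding idea can be sketched as follows: increments larger than a threshold shrinking with the sampling interval are attributed to jumps and discarded, and the sum of squared surviving increments estimates the integrated volatility. All constants below (threshold rate, jump sizes, seed) are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000
dt = 1.0 / n                                       # unit time horizon
sigma = 0.3                                        # assumed constant volatility
dx = sigma * np.sqrt(dt) * rng.normal(size=n)      # diffusion increments
jump_idx = rng.choice(n, size=5, replace=False)    # finite-activity jump times
dx[jump_idx] += rng.normal(1.0, 0.2, size=5)       # assumed jump sizes

thr = 4.0 * sigma * dt ** 0.49                     # threshold of order dt^0.49
keep = np.abs(dx) <= thr
iv_hat = np.sum(dx[keep] ** 2)                     # truncated realized variance
rv = np.sum(dx ** 2)                               # plain realized variance (jump-biased)
n_jumps_detected = int((~keep).sum())              # flagged jump times
```

The plain realized variance absorbs the squared jumps, while the truncated sum stays close to the true integrated volatility sigma^2.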

16.
Varying-coefficient models are useful extensions of classical linear models. They arise from multivariate nonparametric regression, nonlinear time series modeling and forecasting, longitudinal data analysis, and others. This article proposes penalized spline estimation for varying-coefficient models. Assuming a fixed but potentially large number of knots, the penalized spline estimators are shown to achieve strong consistency and asymptotic normality. A systematic optimization algorithm for the selection of multiple smoothing parameters is developed. One advantage of penalized spline estimation is that it can accommodate varying degrees of smoothness among coefficient functions, because multiple smoothing parameters are used. Some simulation studies are presented to illustrate the proposed methods.
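A penalized-spline fit of a varying-coefficient model can be sketched with a truncated-linear basis and a ridge penalty on the knot coefficients; this is a simplification for brevity (P-splines proper combine B-splines with a difference penalty), and all constants are assumed:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = np.sort(rng.uniform(0.0, 1.0, n))
beta_true = np.sin(2.0 * np.pi * x)                # coefficient function beta(x)
z = rng.normal(size=n)                             # covariate with varying effect
y = beta_true * z + 0.3 * rng.normal(size=n)       # y = beta(x) * z + noise

knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0.0) for k in knots])
X = B * z[:, None]                                 # varying-coefficient design
lam = 0.1                                          # smoothing parameter (assumed)
P = np.diag([0.0, 0.0] + [1.0] * len(knots))       # penalize only the knot terms
coef = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
beta_hat = B @ coef                                # estimated coefficient function
```

With several coefficient functions, one smoothing parameter per function allows each to have its own degree of smoothness, which is the flexibility the abstract highlights.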

17.
In this paper, we construct a new mixture of geometric INAR(1) processes for modeling over-dispersed count time series data, in particular data consisting of a large number of zeros and ones. For some real data sets, the existing INAR(1) processes do not fit well; e.g., the geometric INAR(1) process overestimates the number of zeros and underestimates the number of ones, whereas the Poisson INAR(1) process underestimates the zeros and overestimates the ones. Furthermore, for heavy tails, the PINAR(1) process performs poorly in the tail part. The existing zero-inflated Poisson INAR(1) and compound Poisson INAR(1) processes have the same kinds of limitations. In order to remove this problem of under-fitting at one point and over-fitting at other points, we add some extra probability at one in the geometric INAR(1) process and build a new mixture of geometric INAR(1) process. Surprisingly, for some real data sets, it removes the problem of under- and over-fitting across all the observations to a significant extent. We then study the stationarity and ergodicity of the proposed process. Different methods of parameter estimation, namely the Yule-Walker and quasi-maximum likelihood procedures, are discussed and illustrated using simulation experiments. Furthermore, we discuss future prediction along with several forecasting accuracy measures. Two real data sets are analyzed to illustrate the effective use of the proposed model.
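The construction above can be sketched as an INAR(1) recursion with binomial thinning whose innovations mix a point mass at one with a geometric count; the mixture weight and all other parameter values are assumed, and this is an illustrative simulation rather than the authors' exact specification:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, p, w = 0.4, 0.35, 0.15   # thinning prob., geometric param., extra mass at one

def innovation():
    # with probability w return 1, otherwise a geometric count on {0, 1, 2, ...}
    return 1 if rng.uniform() < w else int(rng.geometric(p)) - 1

n = 20000
x = np.empty(n, dtype=int)
x[0] = innovation()
for t in range(1, n):
    x[t] = rng.binomial(x[t - 1], alpha) + innovation()  # alpha o X_{t-1} + eps_t

# Yule-Walker: the lag-1 autocorrelation estimates the thinning parameter.
alpha_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
ones_share = float((x == 1).mean())
```

Increasing the weight `w` inflates the marginal frequency of ones, which is the mechanism for fixing the under-fit at one described in the abstract.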

18.
A two-step generalized least-squares (GLS) estimator proposed by Zellner for seemingly unrelated regression (SUR) models is implementable when the estimated covariance matrix of the errors in the SUR system is non-singular. Violating the premise of non-singularity is a common problem among many applications in economics, business, and management. We present methods of resolving this problem and propose an efficient procedure. The simulation study shows that the estimator of Haff performs better for small samples, whereas the estimator of Ullah and Racine performs better for larger samples. Furthermore, the Ullah-Racine estimate is simple to calculate and easy to use. The empirical analysis involves the study of the diffusion processes of videocassette recorders across different geographic regions in the US, which exhibits a singular covariance matrix. The empirical results show that the procedures efficiently deal with the problem and provide plausible estimation results.
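Zellner's two-step procedure can be sketched for a two-equation system: OLS residuals give an error-covariance estimate, which then weights a stacked GLS fit. The singularity fallback shown is a generic ridge-type shrinkage stand-in, not Haff's or Ullah-Racine's exact estimator, and all data are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = np.column_stack([np.ones(n), rng.normal(size=n)])
E = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=n)
y1 = X1 @ np.array([1.0, 2.0]) + E[:, 0]            # assumed true coefficients
y2 = X2 @ np.array([-1.0, 0.5]) + E[:, 1]

# Step 1: equation-by-equation OLS residuals -> error-covariance estimate.
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = R.T @ R / n
if np.linalg.cond(S) > 1e8:                         # near-singular: shrink (stand-in)
    S = 0.9 * S + 0.1 * (np.trace(S) / 2.0) * np.eye(2)

# Step 2: feasible GLS on the stacked system with weight inv(S) kron I_n.
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(S), np.eye(n))
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

The gain over OLS comes from the cross-equation error correlation (0.8 here); a singular S would make the inverse in step 2 undefined, which is where the shrinkage estimators come in.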

19.
Many tree algorithms have been developed for regression problems. Although they are regarded as good algorithms, most of them suffer from loss of prediction accuracy when there are many irrelevant variables and the number of predictors exceeds the number of observations. We propose the multistep regression tree with adaptive variable selection to handle this problem. The variable selection step and the fitting step comprise the multistep method.

The multistep generalized unbiased interaction detection and estimation (GUIDE) with adaptive forward selection (fg) algorithm, as a variable selection tool, performs better than some of the well-known variable selection algorithms such as efficacy adaptive regression tube hunting (EARTH), FSR (false selection rate), LSCV (least squares cross-validation), and LASSO (least absolute shrinkage and selection operator) for the regression problem. The results based on simulation study show that fg outperforms other algorithms in terms of selection result and computation time. It generally selects the important variables correctly with relatively few irrelevant variables, which gives good prediction accuracy with less computation time.

20.
In this note, we consider estimating the bivariate survival function when both survival times are subject to random left truncation and one of the survival times is subject to random right censoring. Motivated by Satten and Datta [2001. The Kaplan–Meier estimator as an inverse-probability-of-censoring weighted average. Amer. Statist. 55, 207–210], we propose an inverse-probability-weighted (IPW) estimator. It involves simultaneous estimation of the bivariate survival function of the truncation variables and that of the censoring variable and the truncation variable of the uncensored components. We prove that (i) when there is no censoring, the IPW estimator reduces to the NPMLE of van der Laan [1996a. Nonparametric estimation of the bivariate survival function with truncated data. J. Multivariate Anal. 58, 107–131] and (ii) when there is random left truncation and right censoring on only one of the components and the other component is always observed, the IPW estimator reduces to the estimator of Gijbels and Gürler [1998. Covariance function of a bivariate distribution function estimator for left truncated and right censored data. Statist. Sin. 1219–1232]. Based on Theorem 3.1 of van der Laan [1996a, 1996b. Efficient estimation of the bivariate censoring model and repairing NPMLE. Ann. Statist. 24, 596–627], we prove that the IPW estimator is consistent under certain conditions. Finally, we examine the finite sample performance of the IPW estimator in some simulation studies. For the special case that censoring time is independent of truncation time, a simulation study is conducted to compare the performance of the IPW estimator against that of the estimator proposed by van der Laan [1996a, 1996b]. For the special case (i), a simulation study is conducted to compare the performance of the IPW estimator against that of the estimator proposed by Huang et al. (2001. Nonparametric estimation of marginal distributions under bivariate truncation with application to testing for age-of-onset application. Statist. Sin. 11, 1047–1068).
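The Satten-Datta representation that motivates the estimator can be sketched in the simple univariate right-censored case (the paper's bivariate truncated setting is not reproduced here, and all distributions are assumed): each uncensored observation is weighted by the inverse of the censoring-survival probability just before its event time, and the weighted average recovers the distribution function.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
t_event = rng.exponential(1.0, n)     # true survival times, F(t) = 1 - exp(-t)
t_cens = rng.exponential(2.0, n)      # assumed censoring distribution
obs = np.minimum(t_event, t_cens)
delta = (t_event <= t_cens).astype(float)

def km_survival(times, events, at):
    """Kaplan-Meier survival curve evaluated at the points `at`."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n_risk = np.arange(len(times), 0, -1)
    factors = np.where(events == 1, 1.0 - 1.0 / n_risk, 1.0)
    surv = np.cumprod(factors)
    idx = np.searchsorted(times, at, side="right") - 1
    return np.where(idx >= 0, surv[np.clip(idx, 0, None)], 1.0)

# Censoring survival K(t-) from the KM with the event/censoring roles swapped.
K_minus = km_survival(obs, 1.0 - delta, obs - 1e-12)
F_ipcw = np.array([(delta * (obs <= t) / K_minus).sum() / n for t in (0.5, 1.0)])
```

In this univariate case the IPCW average coincides with one minus the usual Kaplan-Meier survival estimate, which is exactly the equivalence Satten and Datta establish.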
