Similar Articles (20 results)
1.
A new analytic statistical technique for predictive event modeling in ongoing multicenter clinical trials with waiting time to response is developed. It allows for the predictive mean and predictive bounds for the number of events to be constructed over time, accounting for the newly recruited patients and patients already at risk in the trial, and for different recruitment scenarios. For modeling patient recruitment, an advanced Poisson-gamma model is used, which accounts for the variation in recruitment over time, the variation in recruitment rates between different centers and the opening or closing of some centers in the future. A few models for event appearance allowing for 'recurrence', 'death' and 'lost-to-follow-up' events and using finite Markov chains in continuous time are considered. To predict the number of future events over time for an ongoing trial at some interim time, the parameters of the recruitment and event models are estimated using current data and then the predictive recruitment rates in each center are adjusted using individual data and Bayesian re-estimation. For a typical scenario (continue to recruit during some time interval, then stop recruitment and wait until a particular number of events happens), the closed-form expressions for the predictive mean and predictive bounds of the number of events at any future time point are derived under the assumptions of Markovian behavior of the event progression. The technique is efficiently applied to modeling different scenarios for some ongoing oncology trials. Case studies are considered.
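The Poisson-gamma recruitment model described in this abstract can be sketched with a short Monte Carlo simulation. This is an illustration only, with hypothetical parameter values, not the paper's closed-form predictive expressions: each centre's recruitment rate is drawn from a gamma distribution, and the centre then recruits a Poisson number of patients over the horizon.

```python
import numpy as np

def predict_recruitment(n_centers, alpha, beta, horizon, n_sims=10_000, seed=42):
    """Monte Carlo predictive mean and 95% bounds for the number of patients
    recruited by `horizon` under a Poisson-gamma model: each centre's rate
    lambda_c ~ Gamma(alpha, scale=1/beta), count_c ~ Poisson(lambda_c * horizon).
    All parameter values here are hypothetical, for illustration only."""
    rng = np.random.default_rng(seed)
    rates = rng.gamma(alpha, 1.0 / beta, size=(n_sims, n_centers))
    totals = rng.poisson(rates * horizon).sum(axis=1)
    lo, hi = np.percentile(totals, [2.5, 97.5])
    return totals.mean(), (lo, hi)

mean, (lo, hi) = predict_recruitment(n_centers=20, alpha=2.0, beta=1.0, horizon=6.0)
```

The predictive mean here is close to `n_centers * (alpha/beta) * horizon`; the paper obtains such quantities analytically rather than by simulation.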

2.
Communications in Statistics - Theory and Methods, 2012, 41(13-14): 2321-2341
For the case where at least two sets have an odd number of variables, we do not have the exact distribution of the generalized Wilks Lambda statistic in a manageable form, adequate for manipulation. In this article, we develop a family of very accurate near-exact distributions for this statistic for the case where two or three sets have an odd number of variables. We first express the exact characteristic function of the logarithm of the statistic in the form of the characteristic function of an infinite mixture of Generalized Integer Gamma distributions. Then, based on truncations of this exact characteristic function, we obtain a family of near-exact distributions, which, by construction, match the first two exact moments. These near-exact distributions display asymptotic behaviour as the number of variables involved increases. The corresponding cumulative distribution functions are obtained in a concise and manageable form, relatively easy to implement computationally, allowing for the computation of virtually exact quantiles. We undertake a comparative study for small sample sizes, using two proximity measures based on the Berry-Esseen bounds, to assess the performance of the near-exact distributions for different numbers of sets of variables and different numbers of variables in each set.

3.
Despite tremendous effort on different designs with cross-sectional data, little research has been conducted on sample size calculation and power analysis under repeated measures designs. Besides the time-averaged difference, the change in mean response over time (CIMROT) is a primary quantity of interest in repeated measures analysis. We generalized sample size calculation and power analysis equations for CIMROT to allow unequal sample sizes between groups for both continuous and binary measures, evaluated the performance of the proposed methods through simulation, and compared our approach to a two-stage model formulation. We also created a software procedure to implement the proposed methods.
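For orientation, the textbook two-group sample size formula with an unequal allocation ratio can be written in a few lines. This is the generic formula for a time-averaged mean difference with known common variance, not the paper's CIMROT equations; the parameter names (`delta`, `sigma`, `kappa`) are chosen here for illustration.

```python
import math
from statistics import NormalDist

def two_group_sample_size(delta, sigma, kappa=1.0, alpha=0.05, power=0.8):
    """Sample sizes (n1, n2) to detect a mean difference `delta` with common
    standard deviation `sigma`, allocation ratio n2/n1 = kappa, two-sided
    significance level `alpha` and target `power` (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    n1 = (1 + 1 / kappa) * (sigma * (z_alpha + z_beta) / delta) ** 2
    n1 = math.ceil(n1)
    return n1, math.ceil(kappa * n1)

n1, n2 = two_group_sample_size(delta=0.5, sigma=1.0)  # classic 63 per group
```

Setting `kappa` different from 1 gives the unequal-allocation case the abstract refers to.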

4.
In survey research, it is assumed that the response reported by an individual is correct. However, owing to prestige bias and concerns of self-respect, respondents' reported data often produce estimates that deviate substantially from the true values. This introduces measurement error (ME) into the sample estimates. In this article, the estimation of the population mean in the presence of measurement error using information on a single auxiliary variable is studied. A generalized estimator of the population mean is proposed. The class of estimators is obtained by using some conventional and non-conventional measures. A simulation and numerical study is also conducted to assess the performance of the estimators in the presence and absence of measurement error.
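The building block behind such classes of estimators is the classical ratio estimator, which uses the known population mean of the auxiliary variable. A minimal sketch, with toy numbers rather than the paper's generalized class or its measurement-error adjustments:

```python
def ratio_estimate(y, x, x_pop_mean):
    """Classical ratio estimator of the population mean of y:
    ybar * (X-bar / x-bar), using a single auxiliary variable x
    whose population mean `x_pop_mean` is known."""
    ybar = sum(y) / len(y)
    xbar = sum(x) / len(x)
    return ybar * x_pop_mean / xbar

# Toy sample where y is roughly proportional to x.
estimate = ratio_estimate(y=[2, 4, 6], x=[1, 2, 3], x_pop_mean=2.5)
```

The estimator gains precision when y and x are strongly positively correlated, which is also the setting where measurement error in either variable matters most.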

5.
L-moments of residual life
The mean, variance and coefficient of variation of residual life, which are based on the usual moments of the residual life distribution, are extensively used in reliability analysis. It has been established in various theoretical and empirical studies that the L-moments have some advantages over the usual moments in many situations. Accordingly, in the present paper we study the properties of L-moments of residual life in the context of modeling lifetime data, characterizing life distributions and other applications. The role of certain quantile functions and quantile-based concepts in reliability analysis is also investigated.
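The first two sample L-moments are easy to compute directly from order statistics: the first is the mean and the second is half the Gini mean difference. A minimal sketch of how they would be applied to residual life (the conditional distribution of X − t given X > t); the function names are ours, not the paper's:

```python
def sample_l_moments(x):
    """First two sample L-moments: l1 is the sample mean, l2 is half the
    Gini mean difference, computed from the order statistics."""
    xs = sorted(x)
    n = len(xs)
    l1 = sum(xs) / n
    # Standard unbiased estimator: l2 = sum_i (2i - n - 1) x_(i) / (n(n-1)).
    l2 = sum((2 * (i + 1) - n - 1) * v for i, v in enumerate(xs)) / (n * (n - 1))
    return l1, l2

def residual_l_moments(x, t):
    """L-moments of residual life, i.e. of the values X - t among X > t."""
    return sample_l_moments([v - t for v in x if v > t])

l1, l2 = sample_l_moments(range(1, 11))       # l1 = 5.5
r1, r2 = residual_l_moments(range(1, 11), 4)  # residual mean life r1 = 3.5
```

The ratio l2/l1 (the L-coefficient of variation) is the L-moment analogue of the coefficient of variation of residual life mentioned in the abstract.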

6.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3259-3277
Real data may exhibit larger (or smaller) variability than assumed in an exponential-family model, the basis of generalized linear models and additive models. To analyze such data, smooth estimation of the mean and the dispersion function has been introduced in extended generalized additive models using P-spline techniques. This methodology is further explored here by allowing some of the covariates to be modeled parametrically and some nonparametrically. The main contribution of this article is a simulation study investigating the finite-sample performance of the P-spline estimation technique in these extended models, including comparisons with a standard generalized additive modeling approach, as well as with a hierarchical modeling approach.

7.
For any continuous baseline G distribution, Cordeiro and Castro pioneered the Kumaraswamy-G family of distributions with two extra positive parameters, which generalizes both Lehmann types I and II classes. We study some mathematical properties of the Kumaraswamy-normal (KwN) distribution including ordinary and incomplete moments, mean deviations, quantile and generating functions, probability weighted moments, and two entropy measures. We propose a new linear regression model based on the KwN distribution, which extends the normal linear regression model. We obtain the maximum likelihood estimates of the model parameters and provide some diagnostic measures such as global influence, local influence, and residuals. We illustrate the potentiality of the introduced models by means of two applications to real datasets.

8.
Inequalities involving some sample means and order statistics are established. An upper bound of the absolute difference between the sample mean and median is also derived. Interesting inequalities among the sample mean and the median are obtained for cases when all the observations have the same sign. Some other algebraic inequalities are derived by taking expected values of the sample results and then applying them to some continuous distributions. It is also proved that the mean of a non-negative continuous random variable is at least as large as p times the 100(1 − p)th percentile.
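The last inequality is a Markov-type bound and can be checked numerically: for nonnegative X, E[X] ≥ q·P(X ≥ q), and taking q to be the 100(1 − p)th percentile gives E[X] ≥ p·q. A quick verification on the exponential distribution (our choice of example, not the paper's):

```python
import math

def bound_holds(mean, quantile, ps):
    """Check E[X] >= p * q_{1-p} for each p, where `quantile` is the
    quantile function of a nonnegative random variable with mean `mean`."""
    return all(mean >= p * quantile(1 - p) for p in ps)

# Exponential(1): mean 1, quantile function q(u) = -ln(1 - u).
ok = bound_holds(1.0, lambda u: -math.log(1 - u), [i / 100 for i in range(1, 100)])
```

For the exponential the right-hand side is p·(−ln p), whose maximum 1/e is comfortably below the mean 1, so the bound holds for every p.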

9.
The problem of computing the variance of a sample of N data points {x_i} may be difficult for certain data sets, particularly when N is large and the variance is small. We present a survey of possible algorithms and their round-off error bounds, including some new analysis for computations with shifted data. Experimental results confirm these bounds and illustrate the dangers of some algorithms. Specific recommendations are made as to which algorithm should be used in various contexts.
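The danger the abstract alludes to is catastrophic cancellation in the one-pass "sum of squares" formula. A small sketch contrasting it with Welford's stable update and with the shifted-data variant (three standard algorithms; the specific test data are ours):

```python
def var_textbook(xs):
    """One-pass sum-of-squares formula (ss - s^2/n)/(n-1).
    Prone to catastrophic cancellation when the mean is large
    relative to the standard deviation."""
    n = len(xs)
    s = ss = 0.0
    for x in xs:
        s += x
        ss += x * x
    return (ss - s * s / n) / (n - 1)

def var_shifted(xs):
    """Same formula applied to data shifted by the first observation:
    cheap and numerically far better conditioned."""
    shift = xs[0]
    return var_textbook([x - shift for x in xs])

def var_welford(xs):
    """Welford's numerically stable one-pass recurrence."""
    mean = m2 = 0.0
    for k, x in enumerate(xs, 1):
        d = x - mean
        mean += d / k
        m2 += d * (x - mean)
    return m2 / (len(xs) - 1)

data = [1e9, 1e9 + 1.0, 1e9 + 2.0]  # true variance is exactly 1.0
```

On `data`, the textbook formula loses essentially all significant digits, while the shifted and Welford versions recover the variance exactly.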

10.
We describe a method of calculating sharp lower and upper bounds on the expectations of arbitrary, properly centered L-statistics expressed in the Gini mean difference units of the original i.i.d. observations. Precise values of bounds are derived for the single-order statistics, their differences, and some examples of L-estimators. We also present the families of discrete distributions which attain the bounds, possibly in the limit.

11.
We establish the upper nonpositive and all the lower bounds on the expectations of generalized order statistics based on a given distribution function with finite mean and central absolute moment of a fixed order. We also describe the distributions for which the bounds are attained. The methods of deriving the lower nonpositive (upper nonnegative) and lower nonnegative (upper nonpositive) bounds are totally different. The first one, the greatest convex minorant method, combines the Moriguti and well-known Hölder inequalities, while the latter is based on the maximization of some norm over a properly chosen convex set. The paper completes the results of Cramer et al. [Evaluations of expected generalized order statistics in various scale units. Appl Math. 2002;29:285–295].

12.
A method based on pseudo-observations has been proposed for direct regression modeling of functionals of interest with right-censored data, including the survival function, the restricted mean and the cumulative incidence function in competing risks. The models, once the pseudo-observations have been computed, can be fitted using standard generalized estimating equation software. Regression models can however yield problematic results if the number of covariates is large in relation to the number of events observed. Guidelines of events per variable are often used in practice. These rules of thumb for the number of events per variable have primarily been established based on simulation studies for the logistic regression model and Cox regression model. In this paper we conduct a simulation study to examine the small sample behavior of the pseudo-observation method to estimate risk differences and relative risks for right-censored data. We investigate how coverage probabilities and relative bias of the pseudo-observation estimator interact with sample size, number of variables and average number of events per variable.
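The pseudo-observation construction itself is a leave-one-out jackknife applied to the Kaplan-Meier estimator. A self-contained sketch for the survival function at a fixed time t (a minimal Kaplan-Meier implementation written for this illustration, not production survival-analysis code):

```python
def km_survival(times, events, t):
    """Kaplan-Meier estimate of S(t) from right-censored data;
    events[i] = 1 for an observed event, 0 for censoring.
    At ties, events are processed before censorings."""
    pairs = sorted(zip(times, events), key=lambda p: (p[0], -p[1]))
    n_at_risk = len(pairs)
    s = 1.0
    for time, event in pairs:
        if time > t:
            break
        if event:
            s *= 1.0 - 1.0 / n_at_risk
        n_at_risk -= 1
    return s

def pseudo_observations(times, events, t):
    """Jackknife pseudo-observations theta_i = n*S(t) - (n-1)*S_{-i}(t)."""
    n = len(times)
    full = km_survival(times, events, t)
    return [n * full
            - (n - 1) * km_survival(times[:i] + times[i + 1:],
                                    events[:i] + events[i + 1:], t)
            for i in range(n)]

pseudo = pseudo_observations([1, 2, 3, 4, 5], [1, 1, 1, 1, 1], t=2.5)
```

With no censoring, the pseudo-observations reduce to the indicators 1{T_i > t}, which is why they can be fed into standard GEE software as if they were complete-data responses.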

13.
The generalized variance is an important statistical indicator which appears in a number of statistical topics. It is a successful measure of multivariate data concentration. In this article, we establish, in closed form, the bias of the maximum likelihood estimator of the generalized variance for the multinomial family. We also derive, with a complete proof, the uniformly minimum variance unbiased (UMVU) estimator for the generalized variance of this family. These results rely on explicit calculations, the completeness of the exponential family and the Lehmann–Scheffé theorem.

14.
Following the work of Chen and Bhattacharyya [Exact confidence bounds for an exponential parameter under hybrid censoring. Comm Statist Theory Methods. 1988;17:1857–1870], several results have been developed regarding the exact likelihood inference of exponential parameters based on different forms of censored samples. In this paper, the conditional maximum likelihood estimators (MLEs) of two exponential mean parameters are derived under joint generalized Type-I hybrid censoring on the two samples. The moment generating functions (MGFs) and the exact densities of the conditional MLEs are obtained, using which exact confidence intervals are then developed for the model parameters. We also derive the means, variances, and mean squared errors of these estimates. An efficient computational method is developed based on the joint MGF. Finally, an example is presented to illustrate the methods of inference developed here.

15.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty arises in many scientific investigations and has been studied extensively; many selection procedures have been derived for different selection goals. However, most of these selection procedures, being frequentist in nature, do not indicate how to incorporate the information in a particular sample to give a data-dependent measure of correct selection achieved for that sample. They often assign the same decision and probability of correct selection to two different sample values, one of which may seem intuitively much more conclusive than the other. The methodology of conditional inference offers an approach which achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on the subset in which the sample falls. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both the fixed-size and random-size subset selection rules. Under the distributional assumption of monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but are also used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulation.

16.
Situations frequently arise in practice in which mean residual life (mrl) functions must be ordered. For example, in a clinical trial, let e(1), e(2) and e(3) be the mrl functions for the disease groups under the standard and experimental treatments, and for the disease-free group, respectively. The well-documented mrl functions e(1) and e(3) can be used to generate a better estimate for e(2) under the mrl restriction e(1) ≤ e(2) ≤ e(3). In this paper we propose nonparametric estimators of the mean residual life function when both upper and lower bounds are given. Small- and large-sample properties of the estimators are explored. A simulation study shows that the proposed estimators have uniformly smaller mean squared error compared to the unrestricted empirical mrl functions. The proposed estimators are illustrated using a real data set from a cancer clinical trial study.
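The unrestricted estimator the abstract compares against is the empirical mrl function, the average remaining lifetime among subjects still at risk at t. The sketch below pairs it with a naive clipping projection onto known bounds; this clipping is our simplification for illustration, not the paper's restricted estimator:

```python
def empirical_mrl(x, t):
    """Empirical mean residual life: average of X - t over observations X > t."""
    tail = [v - t for v in x if v > t]
    return sum(tail) / len(tail) if tail else 0.0

def clipped_mrl(x, t, lower, upper):
    """Naive restricted estimate: project the empirical mrl at t onto
    [lower, upper], e.g. lower = e(1)(t) and upper = e(3)(t).
    (A crude stand-in for the paper's restricted estimator.)"""
    return min(max(empirical_mrl(x, t), lower), upper)

value = empirical_mrl([1, 2, 3, 4, 5], t=2.5)       # mean of {0.5, 1.5, 2.5}
restricted = clipped_mrl([1, 2, 3, 4, 5], 2.5, 2.0, 3.0)
```

Pointwise clipping does not in general preserve other properties of an mrl function, which is part of why a more careful restricted estimator is needed.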

17.
The use of robust measures helps to increase the precision of estimators, especially for estimation under extremely skewed distributions. In this article, a generalized ratio estimator is proposed by using some robust measures with a single auxiliary variable under the adaptive cluster sampling (ACS) design. We have incorporated the tri-mean (TM), mid-range (MR) and Hodges-Lehmann (HL) measures of the auxiliary variable as robust measures together with some conventional measures. Expressions for the bias and mean square error (MSE) of the proposed generalized ratio estimator are derived. Two types of numerical study have been conducted, using an artificial clustered population and a real data application, to examine the performance of the proposed estimator over the usual mean per unit estimator under simple random sampling (SRS). The results of the simulation study show that the proposed estimators provide better estimation results on both real and artificial populations than the competing estimators.
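The three robust measures named in the abstract have simple sample versions. A sketch of the standard definitions (quartile conventions vary between texts; this uses Python's default exclusive method):

```python
import statistics as st
from itertools import combinations_with_replacement

def tri_mean(x):
    """Tukey's tri-mean: (Q1 + 2*median + Q3) / 4."""
    q1, q2, q3 = st.quantiles(x, n=4)
    return (q1 + 2 * q2 + q3) / 4

def mid_range(x):
    """Mid-range: average of the sample minimum and maximum."""
    return (min(x) + max(x)) / 2

def hodges_lehmann(x):
    """Hodges-Lehmann estimator: median of all pairwise averages
    (x_i + x_j)/2 with i <= j (Walsh averages)."""
    return st.median((a + b) / 2 for a, b in combinations_with_replacement(x, 2))

x = list(range(1, 10))  # symmetric toy data: all three measures agree here
```

On symmetric data all three coincide with the mean; on skewed auxiliary variables they diverge, which is what the proposed ratio estimators exploit.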

18.
We use bias-reduced estimators of high quantiles of heavy-tailed distributions, to introduce a new estimator for the mean in the case of infinite second moment. The asymptotic normality of the proposed estimator is established and checked in a simulation study, by four of the most popular goodness-of-fit tests. The accuracy of the resulting confidence intervals is evaluated as well. We also investigate the finite sample behavior and compare our estimator with some versions of Peng's estimator of the mean (namely those based on Hill, t-Hill and Huisman et al. extreme value index estimators). Moreover, we discuss the robustness of the tail index estimators used in this paper. Finally, our estimation procedure is applied to the well-known Danish fire insurance claims data set, to provide confidence bounds for the means of weekly and monthly maximum losses over a period of 10 years.
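The Hill estimator mentioned as a building block has a compact form: the average of log-excesses over the (n−k)th order statistic. A sketch of the plain (not bias-reduced) version, checked on a deterministic Pareto grid of our own construction:

```python
import math

def hill_estimator(sample, k):
    """Hill's estimator of the extreme value index gamma, based on the
    top k order statistics: mean of log(X_(n-i+1) / X_(n-k)), i = 1..k."""
    xs = sorted(sample)
    threshold = xs[-k - 1]          # X_(n-k)
    return sum(math.log(v / threshold) for v in xs[-k:]) / k

# Deterministic 'sample' from the Pareto(1) quantile function q(u) = 1/(1-u),
# so the true extreme value index is gamma = 1.
n = 1000
sample = [1.0 / (1.0 - (i - 0.5) / n) for i in range(1, n + 1)]
gamma_hat = hill_estimator(sample, k=100)
```

A value of gamma above 1/2 signals an infinite second moment, the regime in which the paper's mean estimator is designed to operate.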

19.
The potential outcomes approach to causal inference postulates that each individual has a number of possibly latent outcomes, each of which would be observed under a different treatment. For any individual, some of these outcomes will be unobservable or counterfactual. Information about post-treatment characteristics sometimes allows statements about what would have happened if an individual or group with these characteristics had received a different treatment. These are statements about the realized effects of the treatment. Determining the likely effect of an intervention before making a decision involves inference about effects in populations defined only by characteristics observed before decisions about treatment are made. Information on realized effects can tighten bounds on these prospectively defined measures of the intervention effect. We derive formulae for the bounds and their sampling variances and illustrate these points with data from a hypothetical study of the efficacy of screening mammography.

20.
This note provides a new explanation for Tukey's definition of “(inner) fences” for box-and-whiskers plots. The starting point is explicit bounds for the sample mean based only on the box plot. Starting from these bounds, we define a dataset to contain outside values if at least one of these bounds falls outside the box. This leads to a new, yet simple definition of fences. They are symmetric around the box if, and only if, the median is in the middle of the box. In that case, the new definition coincides with Tukey's rule of 1.5 times the interquartile range. To avoid instabilities for small (sub-)samples, we propose to complement the box-and-whiskers plot of the original data with a box-and-whiskers plot of the Walsh means.
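For reference, Tukey's classical inner fences and the Walsh means (pairwise averages) that the note proposes plotting are both one-liners. This sketch shows the standard definitions only, not the note's new fence construction; quartile conventions vary, and Python's default exclusive method is used here:

```python
import statistics as st
from itertools import combinations_with_replacement

def tukey_fences(x):
    """Tukey's inner fences: (Q1 - 1.5*IQR, Q3 + 1.5*IQR)."""
    q1, _, q3 = st.quantiles(x, n=4)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def walsh_means(x):
    """All n(n+1)/2 pairwise averages (x_i + x_j)/2 with i <= j."""
    return [(a + b) / 2 for a, b in combinations_with_replacement(sorted(x), 2)]

data = [1, 2, 3, 4, 5, 6, 7, 8, 100]  # one gross outlier
lo, hi = tukey_fences(data)           # 100 falls outside the upper fence
```

A box plot of the Walsh means of a subsample is less jumpy than one of the raw values, which is the stabilisation for small subsamples the note advocates.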
