Similar Documents
1.
An elementary proof of the mode, median, and mean inequality is given for skewed, unimodal distributions of continuous random variables. A proof of the inequality for the gamma, F, and beta random variables is sketched.

2.
Formulas for the moments of the better known probability distribution functions are available in the literature on the subject. Persons wishing to derive these formulas, however, may find standard methods to be quite laborious. For discrete probability functions, surprisingly compact and elegant derivations may be obtained by using finite difference operators. Examples of this approach are presented.
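As a loose numerical illustration of the finite-difference flavor of such derivations (not the paper's own operators), the tail-sum identity E[X] = Σ_{k≥0} P(X > k) for nonnegative integer random variables follows from a discrete summation by parts; the sketch below checks it on an arbitrarily chosen binomial example.

```python
import numpy as np
from scipy.stats import binom

# Tail-sum identity E[X] = sum_{k>=0} P(X > k): a consequence of
# discrete summation by parts (the finite-difference analog of
# integration by parts). Parameters are illustrative only.
n, p = 10, 0.3
k = np.arange(n + 1)
mean_direct = np.sum(k * binom.pmf(k, n, p))   # sum of x * p(x)
mean_tail = np.sum(binom.sf(k, n, p))          # sum of P(X > k)
print(mean_direct, mean_tail, n * p)           # all approximately 3.0
```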

3.
In the conventional setting, the variance underlying a tolerance interval for a set of measurements has a single component, and the sample size for a quality control process is estimated from that single variance component. Recent applications, however, involve measurements whose variance has several components, so an approximate method is needed to modify the conventional tolerance interval. In this paper, we develop an approach to calculating the sample size for a two-sided tolerance interval when the measurement variance includes several components. An example illustrates the proposed method.

4.
Tolerance limits are limits that contain a certain proportion of the distribution of a characteristic with a given probability. 'They are used to make sure that the production will not be outside of specifications' (Amin & Lee, 1999). Usually, tolerance limits are constructed at the beginning of process monitoring. Because they are calculated only once, these tolerance limits cannot reflect changes in the tolerance level over the lifetime of the process. This research proposes an algorithm for constructing tolerance limits continuously over time for any given distribution. The algorithm makes use of the exponentially weighted moving average (EWMA) technique. The sample size required by this method is observed to decrease over time.
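A minimal sketch of the EWMA idea the abstract refers to, with hypothetical smoothing constant lam and tolerance factor k (the paper's actual algorithm, including how it reduces the required sample size over time, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.2                   # EWMA smoothing constant (illustrative choice)
z_mean, z_var = 0.0, 1.0    # smoothed estimates of process mean and spread

# Hypothetical stream of in-control process measurements
for x in rng.normal(loc=0.0, scale=1.0, size=200):
    z_mean = lam * x + (1 - lam) * z_mean                # EWMA of the mean
    z_var = lam * (x - z_mean) ** 2 + (1 - lam) * z_var  # EWMA of the spread

# Tolerance limits recomputed from the smoothed estimates; in practice
# the factor k would come from a tolerance-factor table, not a constant.
k = 2.0
print(z_mean - k * np.sqrt(z_var), z_mean + k * np.sqrt(z_var))
```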

5.
Sample Size     
Conventionally, sample size calculations are viewed as calculations determining the right number of subjects needed for a study. Such calculations follow the classical paradigm: “for a difference X, I need sample size Y.” We argue that the paradigm “for a sample size Y, I get information Z” is more appropriate for many studies and reflects the information needed by scientists when planning a study. This approach applies to both physiological studies and Phase I and II interventional studies. We provide actual examples from our own consulting work to demonstrate this. We conclude that sample size should be viewed not as a unique right number, but rather as a factor needed to assess the utility of a study.
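A small sketch of the "for a sample size Y, I get information Z" reading, inverting the standard two-sample z-test formula to report the detectable difference for a fixed n (a textbook formula, not the authors' consulting examples):

```python
import numpy as np
from scipy.stats import norm

def detectable_difference(n_per_group, sigma, alpha=0.05, power=0.80):
    """Smallest mean difference a two-sample z-test can detect with the
    given power: the 'information Z' bought by a given sample size Y."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sigma * np.sqrt(2.0 / n_per_group)

# With 25 subjects per group and sd 10, only differences of roughly
# 7.9 units are detectable at 80% power.
print(detectable_difference(n_per_group=25, sigma=10.0))
```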

6.
This article considers the different methods for determining sample sizes for Wald, likelihood ratio, and score tests for logistic regression. We review some recent methods, report the results of a simulation study comparing each of the methods for each of the three types of test, and provide Mathematica code for calculating sample size. We consider a variety of covariate distributions, and find that a calculation method based on a first order expansion of the likelihood ratio test statistic performs consistently well in achieving a target level of power for each of the three types of test.
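The article's first-order expansion method and Mathematica code are not reproduced here; as a rough stand-in, a brute-force simulation of the Wald test's power for a single standard-normal covariate (all parameters illustrative) looks like this:

```python
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

def wald_power(n, beta0=-1.0, beta1=0.5, alpha=0.05, n_sim=500, seed=0):
    """Simulated power of the Wald test for beta1 in a logistic
    regression with one standard-normal covariate."""
    rng = np.random.default_rng(seed)
    crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)
        prob = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, prob)
        fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        if abs(fit.params[1] / fit.bse[1]) > crit:
            rejections += 1
    return rejections / n_sim

# Increase n until the simulated power reaches the target level.
print(wald_power(n=200))
```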

7.
Model selection procedures often depend explicitly on the sample size n of the experiment. One example is the Bayesian information criterion (BIC); another is the use of Zellner–Siow priors in Bayesian model selection. Sample size is well defined for i.i.d. real observations, but not for vector observations or in non-i.i.d. settings; extending criteria such as BIC to such settings therefore requires a definition of effective sample size that applies in those cases as well. A definition of effective sample size that applies to fairly general linear models is proposed and illustrated in a variety of situations. The definition is also used to propose a suitable ‘scale’ for default proper priors in Bayesian model selection.

8.
A method is proposed for calculating the sample size in the case of therapeutic equivalence of two pharmaceuticals, when the decision is based on post-treatment differences and the post-treatment values depend on the pretreatment ones. When the correlation coefficient is large (over 0.7), the sample statistic formed by the mean difference of the post–pre changes in each group is shown to have smaller variance, and hence the corresponding hypothesis test and sample size calculation lead to smaller sample sizes.
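A hedged sketch of why a large pre-post correlation helps, using the standard identity Var(post − pre) = 2σ²(1 − ρ) under equal variances (a textbook computation; the paper's exact test statistic is not reproduced):

```python
import numpy as np
from scipy.stats import norm

def n_per_group(delta, sigma, rho, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing mean post-pre changes between
    two groups; Var(post - pre) = 2*sigma^2*(1 - rho), equal variances."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_change = 2.0 * sigma**2 * (1.0 - rho)
    return int(np.ceil(2.0 * var_change * (z / delta) ** 2))

# Larger correlation -> smaller variance of the change -> smaller n
for rho in (0.3, 0.7, 0.9):
    print(rho, n_per_group(delta=5.0, sigma=10.0, rho=rho))
```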

9.
Formulas that yield minimum sample sizes for standard t tests are presented. Although the results are approximations, they usually yield the exact solution. Because they involve only standard normal quantiles, they could be used in an elementary course.
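The paper's refined formulas are not given in the abstract; the basic normal-quantile approximation they build on, for a one-sample test of a mean shift delta with standard deviation sigma, is simply:

```python
import numpy as np
from scipy.stats import norm

def approx_n(delta, sigma, alpha=0.05, power=0.80):
    """Normal-quantile approximation to the one-sample t-test sample
    size; the exact t-based answer is slightly larger for small n."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil((z * sigma / delta) ** 2))

print(approx_n(delta=0.5, sigma=1.0))   # about 32 per the z approximation
```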

10.
Approximate chi-square tests for hypotheses concerning multinomial probabilities are considered in many textbooks. In this article power calculations and sample size based on power are discussed and illustrated for the three most frequently used tests of this type. Available noncentrality parameters and existing tables permit a relatively easy solution of these kinds of problems.
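For the goodness-of-fit version of such tests, power follows directly from the noncentral chi-square distribution with noncentrality λ = n Σ (p1i − p0i)²/p0i; a minimal sketch with illustrative cell probabilities:

```python
import numpy as np
from scipy.stats import chi2, ncx2

def gof_power(n, p0, p1, alpha=0.05):
    """Power of the chi-square goodness-of-fit test of H0: p = p0
    against the alternative p = p1, via the noncentral chi-square."""
    p0, p1 = np.asarray(p0), np.asarray(p1)
    df = len(p0) - 1
    lam = n * np.sum((p1 - p0) ** 2 / p0)   # noncentrality parameter
    return ncx2.sf(chi2.ppf(1 - alpha, df), df, lam)

print(gof_power(200, [0.25, 0.25, 0.25, 0.25], [0.30, 0.30, 0.20, 0.20]))
```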

11.
12.
In testing product reliability, there is often a critical cutoff level that determines whether a specimen is classified as failed. One consequence is that the number of degradation measurements collected varies from specimen to specimen. The randomness of the sample size should be included in the model, and our study shows that it can be influential in estimating model parameters. Two-stage least squares (LS) and maximum modified likelihood (MML) estimation, both of which assume fixed sample sizes, are commonly used for estimating parameters in the repeated measurements models typically applied to degradation data. However, the LS estimate is not consistent when sample sizes are random. This article derives the likelihood for the random sample size model and suggests using maximum likelihood (ML) for parameter estimation. Our simulation studies show that ML estimates have smaller biases and variances than the LS and MML estimates. All estimation methods improve greatly when the number of specimens increases from 5 to 10. A data set from a semiconductor application illustrates our methods.

13.
A study is presented on the robustness of adapting the sample size of a phase III trial on the basis of existing phase II data, when the phase III effect size is lower than that of phase II. A criterion of clinical relevance for phase II results is applied in order to launch phase III, and data from phase II cannot be included in the statistical analysis. The adaptation consists of adopting the conservative approach to sample size estimation, which takes into account the variability of the phase II data. Several conservative sample size estimation strategies, Bayesian and frequentist, are compared with the calibrated optimal γ conservative strategy (COS), which is the best performer when the phase II and phase III effect sizes are equal. The overall power (OP) of these strategies and the mean square error (MSE) of their sample size estimators are computed under different scenarios, in the presence of the structural bias due to the lower phase III effect size, in order to evaluate the robustness of the strategies. When the structural bias is quite small (i.e., the ratio of the phase III to the phase II effect size is greater than 0.8), and when certain operating conditions for applying sample size estimation hold, COS can still provide acceptable results for planning phase III trials, even though the OP would be higher in the absence of bias.

The main results concern the introduction of a correction for balancing the structural bias, which affects only the sample size estimates and not the launch probabilities. In particular, the correction is based on a postulated value of the structural bias; it is therefore more intuitive and easier to use than corrections based on modifying the Type I and/or Type II errors. Corrected conservative sample size estimation strategies are compared in the presence of a quite small bias. When the postulated correction is right, COS provides good OP and the lowest MSE. Moreover, the OPs of COS are even higher than those observed without bias, thanks to a higher launch probability and similar estimation performance. The structural bias can therefore be exploited to improve sample size estimation performance. When the postulated correction is smaller than necessary, COS is still the best performer and still works well. A larger than necessary correction should be avoided.

14.
15.
Sample size calculation is an important component in designing an experiment or a survey. In a wide variety of fields, including management science, insurance, and the biological and medical sciences, truncated normal distributions are encountered in many applications. However, the sample size required for the left-truncated normal distribution has not been investigated, because the distribution of the sample mean from the left-truncated normal distribution is complex and difficult to obtain. This paper compares an ad hoc approach with two newly proposed methods, based on the central limit theorem and on a high-degree saddlepoint approximation, for calculating the required sample size with prespecified power. As shown by simulations and an example of health insurance cost in China, the ad hoc approach underestimates the sample size required to achieve prespecified power. The method based on the high-degree saddlepoint approximation provides valid sample size and power calculations, and it performs better than the central limit theorem. When the sample size is not too small, the central limit theorem also provides a valid, but relatively simple, tool for approximating the sample size.
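A sketch of the central-limit-theorem route only (the saddlepoint method is beyond a few lines): compute the standard deviation of the left-truncated normal numerically and plug it into the usual normal-quantile formula. The truncation point, mean, and effect size below are illustrative.

```python
import numpy as np
from scipy.stats import norm, truncnorm

def clt_sample_size(a, mu, sigma, delta, alpha=0.05, power=0.80):
    """CLT-based sample size for detecting a mean shift delta when the
    data follow a normal(mu, sigma) distribution left-truncated at a."""
    a_std = (a - mu) / sigma                     # standardized cutoff
    dist = truncnorm(a_std, np.inf, loc=mu, scale=sigma)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil((z * dist.std() / delta) ** 2))

print(clt_sample_size(a=0.0, mu=1.0, sigma=1.0, delta=0.25))
```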

16.
The simplest approximate confidence interval for the binomial parameter p, based on x successes in n trials, is

$I_0 = \hat{p} \pm c\sqrt{\hat{p}(1-\hat{p})/n}, \qquad \hat{p} = x/n,$

where c is a suitable percentile of the normal distribution. Because $I_0$ is so useful in introductory teaching and for back-of-the-envelope calculation, it is desirable to have guidelines for deciding when it provides a good answer. (It is clearly unwise to use $I_0$ when x is too near 0 or n.) This article proposes such guidelines, based on the criterion that $I_0$ should differ from the exact Clopper-Pearson confidence interval by an amount that is small compared to the length of the interval.
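Both intervals are easy to compute, so the comparison underlying the proposed guidelines can be reproduced directly; the Clopper-Pearson limits below use the standard beta-quantile form (x and n values illustrative):

```python
import numpy as np
from scipy.stats import norm, beta

def wald_ci(x, n, conf=0.95):
    """The simple interval I_0 = p_hat +/- c*sqrt(p_hat*(1-p_hat)/n)."""
    c = norm.ppf(1 - (1 - conf) / 2)
    p_hat = x / n
    h = c * np.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - h, p_hat + h

def clopper_pearson(x, n, conf=0.95):
    """Exact Clopper-Pearson interval via beta distribution quantiles."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - a, x + 1, n - x) if x < n else 1.0
    return lo, hi

print(wald_ci(15, 50))           # I_0
print(clopper_pearson(15, 50))   # exact interval for comparison
```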

17.
18.
Rank tests, such as the logrank or Wilcoxon rank sum tests, are widely used to compare the survival distributions of two or more groups in the presence of right censoring. However, there has been little research on sample size calculation methods for rank tests comparing more than two groups. An existing method is based on a crude approximation that tends to underestimate the sample size; that is, the calculated sample size has lower power than projected. In this paper we propose an asymptotically correct method and an approximate method for sample size calculation. The proposed methods are compared with other methods through simulation studies.

19.
Bartlett's test (1937) for equality of variances is based on a χ2 approximation to the distribution of its statistic. The approximation deteriorates either when the sample sizes are small (particularly below 4) or when the number of populations is large. In a simulation investigation, we find a similar trend in the mean differences between the empirical distributions of Bartlett's statistic and their χ2 approximations. Using the mean differences to represent the distributional departures, a simple adjustment of Bartlett's statistic is proposed on the basis of an equal-mean principle. Performance before and after adjustment is extensively investigated under equal and unequal sample sizes, with the number of populations varying from 3 to 100. Compared with the traditional Bartlett's statistic, the adjusted statistic is distributed more closely to the χ2 distribution for homogeneous samples from normal populations. The adjustment gives good control of the Type I error and slightly higher power, and is therefore recommended for small samples and large numbers of populations when the underlying distribution is normal.
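The adjusted statistic itself is not given in the abstract; the regime it targets is easy to reproduce with the unadjusted test, e.g. many populations with very small samples (sizes chosen for illustration):

```python
import numpy as np
from scipy.stats import bartlett, chi2

rng = np.random.default_rng(0)
# Thirty normal populations with only 4 observations each: the regime
# where the chi-square approximation to Bartlett's statistic is weakest.
samples = [rng.normal(size=4) for _ in range(30)]
stat, p_value = bartlett(*samples)
df = len(samples) - 1
print(stat, p_value, chi2.sf(stat, df))   # p_value is the chi2 tail area
```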

20.
In this study, we propose a group sequential procedure that allows the required sample size to be changed at an intermediate stage of a sequential test. In the procedure, we formulate the conditional power used in the decision rules to judge whether a change of sample size is necessary. Furthermore, we present an integral formula for the power of the test and show how to change the required sample size by using it. In simulation studies based on generated normal random numbers, we investigate the characteristics of the sample size change and the pattern of decisions across all stages.
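The paper's integral formula is not reproduced here; a textbook version of the interim conditional power it builds on, for a z-test viewed as a Brownian motion with drift theta observed at information fraction t, is sketched below (all inputs illustrative):

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t, theta, alpha=0.025):
    """Conditional power of a one-sided final z-test given the interim
    statistic z1 at information fraction t; theta is the expected value
    of the final z-statistic under the alternative."""
    z_crit = norm.ppf(1 - alpha)
    b = np.sqrt(t) * z1                 # Brownian-motion value at time t
    return norm.sf((z_crit - b - theta * (1.0 - t)) / np.sqrt(1.0 - t))

# Low conditional power at the interim look would trigger an increase
# in the planned sample size under such a rule.
print(conditional_power(z1=1.2, t=0.5, theta=2.8))
```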
