Similar Documents
20 similar documents found (search time: 46 ms)
1.
A fully sequential procedure is proposed for comparing K ≥ 3 treatments with immediate binary responses. The procedure uses an adaptive urn design to randomize patients to the treatments, and stopping rules are incorporated for eliminating less promising treatments. Simulation is used to assess the performance of the procedure for several adaptive urn designs, in terms of expected numbers of treatment failures and allocation proportions, and the effect on estimation at the end of the trial is also addressed. It is concluded that the drop-the-loser rule is more effective than equal allocation and all of the other designs considered. The practical benefits of the procedure are illustrated using the results of a three-treatment lung cancer study. It is then shown how the sequential elimination procedure may be used in dose-finding studies, and its performance is compared with a recently proposed method. Several possible extensions to the work are briefly indicated.

2.
The use of both sequential designs and adaptive treatment allocation is effective in reducing the number of patients receiving an inferior treatment in a clinical trial. In large samples, when the asymptotic normality of test statistics can be utilized, a standard sequential design can be combined with adaptive allocation. In small samples, the planned error rate constraints may not be satisfied if normality is assumed. We address this problem by constructing sequential stopping rules with specified properties by considering the exact distribution of test statistics under a particular adaptive allocation scheme, the randomized play-the-winner rule. Using this approach, compared to traditional equal allocation trials, trials with adaptive allocation are shown to require a larger total sample size to achieve a given power. More interestingly, the expected number of patients allocated to the inferior treatment may also be larger for the adaptive allocation designs, depending on the true success rates.
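The randomized play-the-winner rule discussed in this abstract can be sketched as a simple urn simulation. This is a minimal illustration of the RPW(u, β) scheme as commonly described, not the paper's exact small-sample procedure; the parameter values and seed are assumptions.

```python
import random

def rpw_trial(p_a, p_b, n, u=1, beta=1, seed=0):
    """Simulate a two-arm trial under the randomized play-the-winner
    rule RPW(u, beta): start with u balls per treatment; a success on
    a treatment adds beta balls of that type, while a failure adds
    beta balls of the other type.
    Returns (treatment failures, proportion allocated to A)."""
    rng = random.Random(seed)
    urn = {"A": u, "B": u}
    failures = 0
    n_a = 0
    for _ in range(n):
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        if arm == "A":
            n_a += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        if success:
            urn[arm] += beta          # reward the winning arm
        else:
            failures += 1
            urn["B" if arm == "A" else "A"] += beta  # boost the other arm
    return failures, n_a / n
```

With a large true treatment difference, the urn skews allocation towards the better arm, which is exactly the setting in which the abstract notes the expected number on the inferior treatment can still behave counter-intuitively in small samples.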

3.
In clinical trials to compare two or more treatments with dichotomous responses, group-sequential designs may reduce the total number of patients involved in the trial and response-adaptive designs may result in fewer patients being assigned to the inferior treatments. In this paper, we combine group-sequential and response-adaptive designs, extending recent work on sample size re-estimation in trials to compare two treatments with normally distributed responses, to analogous binary response trials. We consider the use of two parameters of interest in the group-sequential design, the log odds ratio and the simple difference between the probabilities of success. In terms of the adaptive sampling rules, we study two urn models, the drop-the-loser rule and the randomized Pólya urn rule, and compare their properties with those of two sequential maximum likelihood estimation rules, which minimize the expected number of treatment failures. We investigate two ways in which adaptive urn designs can be used in conjunction with group-sequential designs. The first method updates the urn at each interim analysis and the second method continually updates the urn after each patient response, assuming immediate patient responses. Our simulation results show that the group-sequential design, which uses the drop-the-loser rule, applied fully sequentially, is the most effective method for reducing the expected number of treatment failures and the average sample number, whilst still maintaining the nominal error rates, over a range of success probabilities.
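The drop-the-loser rule compared in this abstract is usually described as an urn containing treatment balls plus an immigration ball: drawing a treatment ball assigns that treatment, and the ball is returned only on success, while the immigration ball adds one ball of every treatment type. A minimal sketch (starting ball counts and seed are illustrative choices):

```python
import random

def drop_the_loser(p, n_patients, start=5, seed=1):
    """Simulate allocation under the drop-the-loser urn rule for K
    treatments with success probabilities p.  A treatment ball is
    removed from the urn when its treatment fails ("dropping the
    loser"); the immigration ball replenishes all treatments."""
    rng = random.Random(seed)
    K = len(p)
    urn = [start] * K       # treatment balls per arm
    immigration = 1         # immigration balls
    counts = [0] * K
    failures = 0
    assigned = 0
    while assigned < n_patients:
        total = sum(urn) + immigration
        draw = rng.random() * total
        if draw < immigration:            # immigration ball drawn
            urn = [b + 1 for b in urn]    # add one ball of each type
            continue
        acc = immigration                 # locate the drawn treatment ball
        for i in range(K):
            acc += urn[i]
            if draw < acc:
                arm = i
                break
        urn[arm] -= 1                     # ball leaves the urn
        counts[arm] += 1
        assigned += 1
        if rng.random() < p[arm]:
            urn[arm] += 1                 # returned on success
        else:
            failures += 1                 # dropped on failure
    return counts, failures
```

Arms with lower failure probabilities lose balls more slowly, so allocation drifts towards the better treatments, which is why the abstract finds this rule effective at reducing expected treatment failures.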

4.
We consider a clinical trial model comparing an experimental treatment with a control treatment when the responses are binary. For fixed significance level and power, we compare the expected number of treatment failures for two designs: the randomized play-the-winner rule and the triangular test. The former is an example of an adaptive design, while the latter is an example of a fully sequential design. We show how to determine the sample size for the randomized play-the-winner rule and how to choose the stopping boundaries for the triangular test so that the two designs have similar power functions. With this choice of design parameters, simulation indicates that the triangular test is generally more effective at reducing the expected number of treatment failures, particularly when there is a large difference between the two probabilities of success. The expected number of treatment failures can be further reduced if the triangular test is applied using the randomized play-the-winner rule to assign each patient to one of the two treatments.

5.
In the present work, we develop a two-stage allocation rule for binary response using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a real data set. We compare this rule with some of the existing rules by computing various performance measures.

6.
The paper assesses biased-coin designs for sequential treatment allocation in clinical trials. The comparisons emphasise the importance of considering randomness as well as treatment balance, quantified here as bias and loss. In the numerical examples, the responses are assumed normally distributed, perhaps after transformation, and balance is required over a set of covariates. The effect of covariate distribution on the properties of five allocation rules is investigated, with an emphasis on methods of comparison, which also apply to other forms of response. The concept of admissibility shows that the widely used minimisation rule is outperformed by Atkinson's rule derived from the theory of optimum experimental design. We present a simplified form of this rule. For this rule, the ability to guess the next treatment allocation decreases with study size; for the other rules, it is constant. Copyright © 2012 John Wiley & Sons, Ltd.

7.
Response adaptive designs are used in phase III clinical trials to skew the allocation pattern towards the better treatments. We use optimum design theory to derive adaptive designs when the responses are normally distributed. The performance of the designs is studied with respect to the loss and the proportion of allocation to different treatments. The adaptive design does not affect inference.

8.
The goals of phase II dose-response studies are to prove that the treatment is effective and to choose the dose for further development. Randomized designs with equal allocation to either a high dose and placebo or to each of several doses and placebo are typically used. However, in trials where response is observed relatively quickly, adaptive designs might offer an advantage over equal allocation. We propose an adaptive design for dose-response trials that concentrates the allocation of subjects in one or more areas of interest, for example, near a minimum clinically important effect level, or near some maximal effect level, and also allows for the possibility to stop the trial early if needed. The proposed adaptive design yields higher power to detect a dose-response relationship, higher power in comparison with placebo, and selects the correct dose more frequently compared with a corresponding randomized design with equal allocation to doses.

9.
In comparing two treatments under a typical sequential clinical trial setting, a 50-50 randomization design generates reliable data for making efficient inferences about the treatment difference for the benefit of patients in the general population. However, if the treatment difference is large and the endpoint of the study is potentially fatal, it does not seem appropriate to sacrifice a large number of study patients who are assigned to the inferior arm. An adaptive design is a data-dependent treatment allocation rule that sequentially uses accumulating information about the treatment difference during the trial to modify the allocation rule for new study patients. In this article, we utilize real trials from AIDS and cancer to illustrate the advantage of using adaptive designs. Specifically, we show that, with adaptive designs, the loss of power for testing the equality of two treatments is negligible. Moreover, the study patients do not have to pay a high price for the benefit of future patients. We also propose multi-stage adaptive rules to relax the administrative burden of implementing the study and to handle continuous response variables, such as the failure time in survival analysis.

10.
The objective of this paper is to develop statistical methodology for testing non-inferiority with censored, exponentially distributed time-to-event endpoints. Motivated by a recent clinical trial in depression, we consider a gold standard design where a test group is compared with an active reference and with a placebo group. The test problem is formulated in terms of a retention-of-effect hypothesis. Thus, the proposed Wald-type test procedure ensures that the effect of the test group is better than a pre-specified proportion Delta of the treatment effect of the reference group compared with the placebo group. A sample size allocation rule to achieve optimal power is presented, which depends only on the pre-specified Delta and the probabilities for the occurrence of censoring. In addition, a pretest is presented for either the reference or the test group to ensure assay sensitivity in the complete test procedure. The actual type I error and the sample size formula of the proposed tests are explored asymptotically by means of a simulation study showing good small sample characteristics. To illustrate the procedure, a randomized, double blind clinical trial in depression is evaluated. An R-package for implementation of the proposed tests and for sample size determination accompanies this paper on the author's web page.

11.
Non-response is a problem for most surveys. In the sample design, non-response is often dealt with by setting a target response rate and inflating the sample size so that the desired number of interviews is reached. The decision to stop data collection is based largely on meeting the target response rate. A recent article by Rao, Glickman, and Glynn (RGG) suggests rules for stopping that are based on the survey data collected for the current set of respondents. Two of their rules compare estimates from fully imputed data where the imputations are based on a subset of early responders to fully imputed data where the imputations are based on the combined set of early and late responders. If these two estimates are different, then late responders are changing the estimate of interest. The present article develops a new rule for when to stop collecting data in a sample survey. The rule attempts to use complete interview data as well as covariates available on non-responders to determine when the probability that collecting additional data will change the survey estimate is sufficiently low to justify stopping data collection. The rule is compared with that of RGG using simulations and then is implemented using data from a real survey. Copyright © 2010 John Wiley & Sons, Ltd.

12.
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.

13.
When several treatment arms are administered along with a control arm in a trial, dropping the non-promising treatments at an early stage helps to save resources and expedite the trial. In such adaptive designs with treatment selection, a common selection rule is to pick the most promising treatment, for example, the treatment with the numerically highest mean response, at the interim stage. However, with only a single treatment selected for final evaluation, this selection rule is often too inflexible. We modified this interim selection rule by introducing a flexible selection margin to judge the acceptable treatment difference. Another treatment can be selected at the interim stage in addition to the empirically best one if the difference in observed treatment effect between them does not exceed this margin. We considered a study starting with two treatment arms and a control arm. We developed hypothesis testing procedures to assess the selected treatment(s) by taking into account the interim selection process. Compared with one-winner selection designs, the modified selection rule makes the design more flexible and practical. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Clinical trials incorporating treatment selection at pre-specified interim analyses make it possible to integrate two clinical studies into a single, confirmatory study. In an adaptive interim analysis, treatment arms are selected based on interim data as well as external information. The specific selection rule does not need to be pre-specified in order to control the multiple type I error rate. We propose an adaptive Dunnett test procedure based on the conditional error rate of the single-stage Dunnett test. The adaptive procedure uniformly improves the classical Dunnett test, which is shown to be strictly conservative if treatments are dropped at interim. The adaptive Dunnett test is compared in a simulation with the classical Dunnett test as well as with adaptive combination tests based on the closure principle. The method is illustrated with a real-data example.

15.
When several treatments are available for evaluation in a clinical trial, different design options are available. We compare multi-arm multi-stage with factorial designs, and in particular, we consider a 2 × 2 factorial design, where groups of patients will take treatments A, B, both, or neither. We investigate the performance and characteristics of both types of designs under different scenarios and compare them using both theory and simulations. For the factorial designs, we construct appropriate test statistics to test the hypothesis of no treatment effect against the control group with overall control of the type I error. We study the effect of the choice of the allocation ratios on the critical value and sample size requirements for a target power. We also study how the possibility of an interaction between the two treatments A and B affects type I and type II errors when testing for significance of each of the treatment effects. We present both simulation results and a case study on an osteoarthritis clinical trial. We discover that in an optimal factorial design in terms of minimising the associated critical value, the corresponding allocation ratios differ substantially from those of a balanced design. We also find evidence of potentially big losses in power in factorial designs for moderate deviations from the study design assumptions and little gain compared with multi-arm multi-stage designs when the assumptions hold. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

16.
In clinical trials with t-distributed test statistics, the required sample size depends on the unknown variance. Taking estimates from previous studies often leads to a misspecification of the true value of the variance. Hence, re-estimation of the variance based on the collected data and re-calculation of the required sample size is attractive. We present a flexible method for extensions of fixed sample or group-sequential trials with t-distributed test statistics. The method can be applied at any time during the course of the trial and does not require a sample size re-calculation rule to be pre-specified. All available information can be used to determine the new sample size. The advantage of our method when compared with other adaptive methods is maintenance of the efficient t-test design when no extensions are actually made. We show that the type I error rate is preserved.
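As a rough illustration of variance-based sample size re-calculation, the per-group sample size of a two-sample comparison can be recomputed from the interim pooled variance using the standard normal-approximation formula n = 2(z_{1-α/2} + z_{1-β})² s² / δ². This is only the textbook approximation, not the authors' exact method, which additionally preserves the t-test design.

```python
from math import ceil
from statistics import NormalDist, stdev

def reestimated_n(interim_a, interim_b, delta, alpha=0.05, power=0.8):
    """Re-calculate the per-group sample size of a two-sample test
    from an interim pooled variance estimate, using the normal
    approximation n = 2 (z_{1-alpha/2} + z_{power})^2 s^2 / delta^2."""
    z = NormalDist().inv_cdf
    na, nb = len(interim_a), len(interim_b)
    sa, sb = stdev(interim_a), stdev(interim_b)
    # pooled variance across the two interim samples
    s2 = ((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * s2 / delta**2
    return ceil(n)

# e.g. reestimated_n([0, 2, 4, 6], [1, 3, 5, 7], delta=2.0) -> 27
```

If the interim variance is larger than the value assumed at the planning stage, the recomputed n exceeds the original target, which is exactly the situation where an extension of the trial becomes attractive.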

17.
In analyzing clinical trials, one important objective is to classify the patients into treatment-favorable and nonfavorable subgroups. Existing parametric methods are not robust, and the commonly used classification rules ignore the fact that the implications of treatment-favorable and nonfavorable subgroups can be different. To address these issues, we propose a semiparametric model, incorporating both our knowledge and uncertainty about the true model. The Wald statistic is used to test for the existence of subgroups, while the Neyman-Pearson rule is used to classify each subject. Asymptotic properties are derived, simulation studies are conducted to evaluate the performance of the method, and the method is then used to analyze real-world trial data.

18.
Minimization is a dynamic randomization technique that has been widely used in clinical trials for achieving a balance of prognostic factors across treatment groups, but most often it has been used in the setting of equal treatment allocations. Although unequal treatment allocation is frequently encountered in clinical trials, an appropriate minimization procedure for such trials has not been published. The purpose of this paper is to present novel strategies for applying minimization methodology to such clinical trials. Two minimization techniques are proposed and compared by probability calculation and simulation studies. In the first method, called naïve minimization, probability assignment is based on a simple modification of the original minimization algorithm, which does not account for unequal allocation ratios. In the second method, called biased-coin minimization (BCM), probability assignment is based on allocation ratios and optimized to achieve an 'unbiased' target allocation ratio. The performance of the two methods is investigated in various trial settings including different numbers of treatments, prognostic factors and sample sizes. The relative merits of the different distance metrics are also explored. On the basis of the results, we conclude that BCM is the preferable method for randomization in clinical trials involving unequal treatment allocations. The choice of different distance metrics slightly affects the performance of the minimization and may be optimized according to the specific features of trials. Copyright © 2009 John Wiley & Sons, Ltd.
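The ratio-adjusted minimization idea can be sketched by scaling hypothetical arm counts by the target allocation ratio before computing imbalance, then assigning the least-imbalanced arm with high probability. This is an illustrative variant only, not the paper's exact BCM algorithm; the range distance metric and the p_best value are assumptions.

```python
import random

def minimize_assign(strata_counts, patient_levels, target_ratio,
                    p_best=0.8, rng=None):
    """One minimization-style assignment with unequal target allocation.
    strata_counts[f][level][arm] holds current counts per prognostic
    factor f, factor level, and treatment arm; patient_levels gives the
    new patient's level on each factor."""
    rng = rng or random.Random(2)
    n_arms = len(target_ratio)
    imbalance = []
    for arm in range(n_arms):
        total = 0.0
        for f, level in enumerate(patient_levels):
            # hypothetical counts if this patient joined `arm`
            c = list(strata_counts[f][level])
            c[arm] += 1
            scaled = [c[a] / target_ratio[a] for a in range(n_arms)]
            total += max(scaled) - min(scaled)  # range as distance metric
        imbalance.append(total)
    best = min(range(n_arms), key=lambda a: imbalance[a])
    others = [a for a in range(n_arms) if a != best]
    if rng.random() < p_best or not others:
        choice = best                  # favour the balancing arm
    else:
        choice = rng.choice(others)    # retain some randomness
    for f, level in enumerate(patient_levels):
        strata_counts[f][level][choice] += 1
    return choice
```

Dividing the counts by the target ratio makes the imbalance measure vanish at the desired unequal allocation (e.g. counts of 2:1 for a (2, 1) ratio), which is the essential difference from naïve minimization.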

19.
We suggest non-parametric tests for showing non-inferiority of a new treatment compared to a standard therapy when data are censored. To this end, the difference and the odds ratio curves of the entire survivor functions over a certain time period are considered. Two asymptotic approaches for solving these testing problems are investigated, which are based on bootstrap approximations. The performance of the test procedures is investigated in a simulation study, and some guidance on which test to use in specific situations is derived. The proposed methods are applied to a trial in which two thrombolytic agents for the treatment of acute myocardial infarction were compared, and to a study on irradiation therapies for advanced non-small-cell lung cancer. Non-inferiority over a large time period of the study can be shown in both cases.

20.
The steps in designing a controlled trial to evaluate a new treatment for inoperable carcinoma of the bronchus are described. The principal outcome measure is survival time from treatment allocation, and the duration of the trial is determined by application of the sequential logrank test. The detailed determination of the stopping rule and the arrangements for analyses are described.
