Similar Documents
A total of 20 similar documents were found (search time: 31 ms)
1.
We investigated the propagation of population pharmacokinetic information across clinical studies by applying Bayesian techniques. The aim was to summarize the population pharmacokinetic estimates of one study in appropriate statistical distributions so that they could serve as Bayesian priors in subsequent population pharmacokinetic analyses. Various sets of simulated and real clinical data were fitted with WinBUGS, with and without informative priors. The posterior estimates from fittings with non-informative priors were used to build parametric informative priors, and the whole procedure was carried out consecutively. The posterior distributions of the fittings with informative priors were compared to those of meta-analysis fittings of the respective combinations of data sets. Good agreement was found for the simulated and experimental datasets when the populations were exchangeable: the posterior distributions from the fittings with the prior were nearly identical to those estimated with meta-analysis. However, when populations were not exchangeable, an alternative parametric form for the prior, the natural conjugate prior, had to be used to obtain consistent results. In conclusion, the results of a population pharmacokinetic analysis may be summarized in Bayesian prior distributions that can be used consecutively with other analyses. The procedure is an alternative to meta-analysis and gives comparable results. It has the advantage of being faster than meta-analysis, which must refit large combined datasets, and it can be performed when the raw data summarized in the prior are no longer available.
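
As a toy illustration of the idea, the sketch below summarizes posterior samples of a single parameter as a normal prior and reuses it in a conjugate update on new data. All numbers, the choice of a normal prior, and the known sampling SD are assumptions for the sketch, not the paper's actual WinBUGS model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of a log-clearance parameter from a first
# population PK analysis (illustrative values, not from the paper).
posterior_samples = rng.normal(loc=1.2, scale=0.15, size=4000)

# Summarize the posterior as a parametric (normal) prior for the next analysis.
mu0, sd0 = posterior_samples.mean(), posterior_samples.std(ddof=1)

# Conjugate normal update with data from a second study, assuming a known
# sampling SD of 0.2 for each observation.
y = rng.normal(loc=1.3, scale=0.2, size=30)
prior_var = sd0**2
like_var = 0.2**2 / y.size                     # variance of the sample mean
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mean = post_var * (mu0 / prior_var + y.mean() / like_var)
```

The combined posterior is pulled toward the new study's mean while its variance shrinks below the prior's, which is the propagation behavior the abstract describes.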

2.
This article reports the results of a meta-analysis based on dose–response studies conducted by a large pharmaceutical company between 1998 and 2009. Data collection targeted efficacy endpoints from all compounds with evidence of clinical efficacy during the time period; safety data were not extracted. The goal of the meta-analysis was to identify consistent quantitative patterns in dose–response across different compounds and diseases. The article presents summaries of the study designs, including the number of studies conducted for each compound, the dosing range, the number of doses evaluated, and the number of patients per dose. The Emax model, ubiquitous in pharmacology research, was fit for each compound. It described the data well, except for a single compound with nonmonotone dose–response. Compound-specific estimates and Bayesian hierarchical modeling showed that dose–response curves for most compounds can be approximated by Emax models with Hill parameters close to 1.0. Summaries of the potency estimates show that pharmacometric predictions of potency made before the first dose-ranging study fell within a (1/10, 10) multiple of the final estimates for 90% of compounds. The results of the meta-analysis, when combined with compound-specific information, provide an empirical basis for designing and analyzing new dose-finding studies using parametric Emax models and Bayesian estimation with empirically derived prior distributions.
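
The Emax model referred to above fits in a few lines; with a Hill parameter of 1 the response at the ED50 sits exactly halfway between baseline and the maximal effect. The parameter values below are purely illustrative:

```python
def emax_response(dose, e0, emax, ed50, hill=1.0):
    """Sigmoid Emax model: baseline e0 plus a saturating drug effect."""
    return e0 + emax * dose**hill / (ed50**hill + dose**hill)

# At dose = ED50 with hill = 1, the effect above baseline is exactly emax / 2.
half = emax_response(50.0, e0=2.0, emax=10.0, ed50=50.0)   # -> 7.0
baseline = emax_response(0.0, e0=2.0, emax=10.0, ed50=50.0)  # -> 2.0
```

The curve rises monotonically toward e0 + emax, which is why a nonmonotone compound (as in the abstract) cannot be captured by this form.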

3.
Because power is primarily determined by the number of events in event-based clinical trials, the timing of an interim or final analysis is often determined by the accrual of events during the course of the study. Thus, it is of interest to predict early and accurately the time of a landmark interim or terminating event. Existing Bayesian methods may be used to predict the date of the landmark event based on current enrollment, event, and loss-to-follow-up data, if treatment arms are known. This work extends these methods to the case where the treatment arms are masked, using a parametric mixture model with a known mixture proportion. Posterior simulation using the mixture model is compared with methods that assume a single population. The comparison shows that with few events the two approaches produce substantially different results, and that the results converge as the prediction time approaches the landmark event. Simulations show that the mixture model with diffuse priors can have better coverage probabilities for the prediction interval than the nonmixture models when a treatment effect is present.
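
A stripped-down simulation conveys the mixture idea: with arm labels masked but the mixture proportion known, event times can be simulated from the two-component model and the landmark (e.g., 100th) event time summarized. The paper does full posterior simulation; this sketch fixes the hazard rates at assumed values instead of drawing them from a posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setup: 300 patients, known 1:1 mix of control/treatment hazards,
# with individual arm labels masked. All rates are assumptions for the sketch.
n_pat, p_ctrl = 300, 0.5
lam_ctrl, lam_trt = 0.10, 0.06            # events per month
n_needed = 100                            # landmark: the 100th event
n_sim = 2000

is_ctrl = rng.random((n_sim, n_pat)) < p_ctrl
times = np.where(is_ctrl,
                 rng.exponential(1 / lam_ctrl, (n_sim, n_pat)),
                 rng.exponential(1 / lam_trt, (n_sim, n_pat)))
landmark_time = np.sort(times, axis=1)[:, n_needed - 1]
lo, hi = np.quantile(landmark_time, [0.025, 0.975])
```

Replacing the mixture with a single averaged hazard changes the predicted interval, which is the discrepancy the abstract reports when events are few.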

4.
Multi-level repeated ordinal data arise when ordinal outcomes are measured repeatedly in subclusters of a cluster or on subunits of an experimental unit. If both the regression coefficients and the correlation parameters are of interest, Bayesian hierarchical models have proved to be a powerful analysis tool, with computation performed by Markov chain Monte Carlo (MCMC) methods. The hierarchical models extend random effects models by placing a (usually flat) prior on the regression coefficients and on the parameters of the random effects distribution. Because MCMC can be implemented with the widely available BUGS or WinBUGS software packages, its computational burden has been alleviated. However, care is essential to use this software effectively for data with such complex structures. For example, we may have to reparameterize the model and standardize the covariates to accelerate convergence of the MCMC, and then carefully monitor the convergence of the Markov chain. This article aims to resolve these issues in the application of WinBUGS through the analysis of a real multi-level ordinal dataset. In addition, we extend the hierarchical model to include a wider class of distributions for the random effects, and we propose the deviance information criterion (DIC) for model selection. We show that the WinBUGS software can readily implement both the extensions and the DIC criterion.
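
The covariate standardization mentioned above is a one-line preprocessing step that can be done before handing data to BUGS/WinBUGS; the sketch below uses hypothetical covariates on very different scales:

```python
import numpy as np

# Covariates on very different scales (e.g., calendar year and body weight)
# slow MCMC mixing; centering and scaling them is a standard remedy.
X = np.array([[2010.0, 55.0],
              [2012.0, 61.0],
              [2015.0, 48.0],
              [2011.0, 70.0]])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)   # each column: mean 0, SD 1
```

Coefficients estimated on the standardized scale are transformed back afterwards; the posterior itself is unchanged, only the chain's mixing improves.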

5.
Some clinical trialists, especially those working in rare or pediatric diseases, have suggested borrowing information from similar, already-completed clinical trials. This article begins with a case study in which relying solely on historical control information would have erroneously led to the conclusion of a significant treatment effect. We then attempt to catalog situations where borrowing historical information may or may not be advisable, using a series of carefully designed simulation studies. We use an MCMC-driven Bayesian hierarchical parametric survival modeling approach to analyze data from a sponsor's colorectal cancer study. We also apply the same models to simulated data, comparing the effective historical sample size, bias, 95% credible interval widths, and empirical coverage probabilities across the simulated cases. We find that even after accounting for variations in study design, baseline characteristics, and standard-of-care improvement, our approach consistently identifies significant (in the Bayesian sense) differences between the historical and concurrent controls under a range of priors on the degree of historical data borrowing. Our simulation studies are far from exhaustive, but they inform the design of future trials. When the historical and current controls are dissimilar, Bayesian methods can still moderate borrowing to a more appropriate level by adjusting for important covariates and adopting sensible priors.
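
One common device for controlling the degree of historical borrowing (not necessarily the authors' hierarchical survival model) is a power prior, which raises the historical likelihood to a weight between 0 and 1. A beta-binomial sketch with made-up counts:

```python
# Power-prior borrowing for a control-arm response rate (beta-binomial).
# All counts and the weight a0 are illustrative, not from the article.
a0 = 0.3                                  # borrowing weight in [0, 1]
hist_events, hist_n = 40, 200             # historical controls (rate 0.20)
cur_events, cur_n = 30, 100               # concurrent controls (rate 0.30)

# Beta(1, 1) baseline prior updated by a0-discounted historical data
# plus the full concurrent data.
alpha = 1 + a0 * hist_events + cur_events
beta = 1 + a0 * (hist_n - hist_events) + (cur_n - cur_events)
post_mean = alpha / (alpha + beta)        # pulled from 0.30 toward 0.20
```

With a0 = 0 no historical information enters; with a0 = 1 the historical controls count fully, which is exactly the situation the case study warns can mislead when the control populations differ.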

6.
Bayesian meta-analysis has been increasingly used to synthesize safety and efficacy information in support of landmark decision-making, owing to its flexibility in incorporating prior information and the availability of computing software. However, when the outcome is binary and events are rare, so that event counts can be zero, conventional meta-analysis methods, including Bayesian ones, may not work well. Several methods have been proposed to tackle this issue, but prior knowledge of the event rate has not been exploited to increase the precision of risk difference estimates. To better estimate risk differences, we propose a new Bayesian method, the Beta prior BInomial model for Risk Differences (B-BIRD), which takes into account prior information on rare events. B-BIRD is illustrated using a real data set of 48 clinical trials of a type 2 diabetes drug. In simulation studies, it performs well in low event rate settings.
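
In the spirit of the approach (though not the exact B-BIRD specification), a Beta prior encoding a low background rate keeps the posterior risk difference well behaved even with a zero event count; the prior parameters and trial counts below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Beta prior reflecting a low background event rate (assumed values).
a0, b0 = 0.5, 20.0
e_trt, n_trt = 1, 150                     # rare events on treatment
e_ctl, n_ctl = 0, 148                     # zero events on control

# Conjugate Beta posteriors for each arm, then the risk difference by sampling.
p_trt = rng.beta(a0 + e_trt, b0 + n_trt - e_trt, size=10_000)
p_ctl = rng.beta(a0 + e_ctl, b0 + n_ctl - e_ctl, size=10_000)
risk_diff = p_trt - p_ctl
estimate = risk_diff.mean()
ci_lo, ci_hi = np.quantile(risk_diff, [0.025, 0.975])
```

A flat Beta(1, 1) prior would pull the zero-count arm toward much higher rates; the informative prior is what buys the precision gain the abstract claims.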

7.
In phase II efficacy trials, it is often desirable to assess patient response sequentially. A Bayesian framework can be applied to develop sequential stopping rules. A single parametric model is often chosen to characterize prior beliefs about a test drug, but it might not adequately capture the variability associated with prior beliefs from multiple experts or multiple historical data sources. We use a class of mixture priors to develop robust Bayesian stopping rules. We present systematic methods to construct mixture priors and compare stopping rules of the mixture designs with those of existing designs.

8.
The Bayesian approach has been suggested as a suitable method in the context of mechanistic pharmacokinetic-pharmacodynamic (PK-PD) modeling, as it allows for efficient use of both data and prior knowledge regarding the drug or disease state. However, to this day, published examples of its application to real PK-PD problems have been scarce.

We present an example of a fully Bayesian re-analysis of a previously published mechanistic model describing the time course of circulating neutrophils in stroke patients and healthy individuals.

While priors could be established for all population parameters in the model, not all variability terms were known with any degree of precision. A sensitivity analysis around the assigned priors was performed by testing three different sets of prior values for the population variance terms for which no data were available in the literature: “informative”, “semi-informative”, and “noninformative”. For all variability terms, inverse gamma distributions were used.

It was possible to fit the model to the data using the “informative” priors. However, when the “semi-informative” and “noninformative” priors were used, convergence could not be achieved because of severe correlations between parameters. In addition, due to the complexity of the model, the process of defining priors and running the Markov chains was very time-consuming.

We conclude that the present analysis represents a first example of the fully transparent application of Bayesian methods to a complex, mechanistic PK-PD problem with real data. The approach is time-consuming, but it enables us to use all available information from data and scientific evidence. It thereby shows potential both for detecting data gaps and for making more reliable predictions of various outcomes and “what if” scenarios.
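
The three levels of prior informativeness on a variance term can be pictured by drawing from inverse gamma distributions of decreasing concentration; the shape/scale values below are illustrative stand-ins, not the study's actual priors:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three inverse-gamma priors on a population variance term. If X ~ Gamma(a,
# rate=b), then 1/X ~ InvGamma(a, b); numpy's gamma takes scale = 1/rate.
priors = {"informative": (10.0, 1.0),
          "semi-informative": (2.0, 0.5),
          "noninformative": (0.01, 0.01)}
draws = {name: 1.0 / rng.gamma(a, 1.0 / b, size=5000)
         for name, (a, b) in priors.items()}

# 10%-90% spread as a rough measure of how constrained each prior is.
spread = {name: np.quantile(d, 0.9) - np.quantile(d, 0.1)
          for name, d in draws.items()}
```

The near-flat prior admits variance values spanning many orders of magnitude, which is consistent with the convergence failures reported when parameters are strongly correlated.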

9.
Meta-analysis has been widely applied to rare adverse event data because it is very difficult to reliably detect the effect of a treatment on such events in an individual clinical study. However, standard meta-analysis methods are known to be often biased, especially when the background incidence rate is very low. A recent work by Bhaumik et al. proposed new moment-based approaches under a natural random effects model to improve estimation and testing of the treatment effect and the between-study heterogeneity parameter. For rare binary events, their methods have been shown to outperform commonly used meta-analysis methods. However, their comparison does not include any Bayesian methods, although Bayesian approaches are a natural and attractive choice under the random-effects model. In this article, we study a Bayesian hierarchical approach to estimation and testing in meta-analysis of rare binary events using the random effects model of Bhaumik et al. We develop Bayesian estimators of the treatment effect and the heterogeneity parameter, as well as hypothesis testing methods based on Bayesian model selection procedures. We compare them with the existing methods through simulation, and a data example illustrates the Bayesian approach.

10.
A meta-analysis of dose–response studies is reported for small-molecule drugs approved by the U.S. Food and Drug Administration (FDA) between January 2009 and May 2014. Summaries of the study designs are presented, including the number of studies conducted for each drug, the dosing range, and the number of doses evaluated. Most drugs were studied over a ≤ 4-fold dosing range. Most of the study designs and their analyses focused on a small number of pairwise comparisons of dose groups to placebo. For the meta-analysis, efficacy endpoints were evaluated at a single landmark time; safety endpoints were not collected. The commonly used Emax model was fit for each drug. Because of the limited number of doses and the limited dosing ranges, maximum likelihood estimation applied to the drugs separately performed poorly. Bayesian hierarchical models were successfully fit, producing Emax curves that represented the data well. The distributions of the Emax model parameters were consistent with previously reported distributions estimated from a sponsor-specific meta-analysis of dose response. Assessment of model fit, which focused on potential nonmonotone loss of efficacy at the highest doses, supported the use of the Emax curves. The meta-analysis provides an additional empirical basis for Bayesian prior distributions for model parameters. Supplementary materials for this article are available online.

11.
12.
In drug–drug interaction (DDI) research, a two-drug interaction is usually predicted from the individual drugs' pharmacokinetics (PK). Although subject-specific drug concentration data from clinical PK studies of inhibitor or inducer and substrate PK are not usually published, sample mean plasma drug concentrations and their standard deviations are routinely reported. There is therefore a great need for meta-analysis and DDI prediction using such summarized PK data. In this study, an innovative DDI prediction method based on a three-level hierarchical Bayesian meta-analysis model is developed. The three levels model the sample means and variances, the between-study variances, and the prior distributions. Through a ketoconazole–midazolam example and simulations, we demonstrate that our meta-analysis model can not only estimate PK parameters with small bias but also recover their between-study and between-subject variances well. More importantly, the posterior distributions of the PK parameters and their variance components allow us to predict DDI at both the population-average and study-specific levels. We are also able to predict the between-subject and between-study variance of the DDI. Such statistical predictions have not previously been investigated in DDI research. Our simulation studies show that the meta-analysis approach has small bias in PK parameter estimates and DDI predictions. A sensitivity analysis was conducted to investigate the influence of interaction PK parameters, such as the inhibition constant Ki, on the DDI prediction.
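
As a much simplified, non-Bayesian analogue of pooling reported summary statistics across studies, study-level means can be combined with precision weights that include an assumed between-study variance; all numbers below are hypothetical:

```python
import numpy as np

# Reported study-level mean plasma concentrations and their standard errors
# (hypothetical values; the paper fits a full three-level Bayesian model).
means = np.array([12.1, 10.4, 13.0])
ses = np.array([1.0, 1.2, 0.9])
tau2 = 1.5                                # assumed between-study variance

# Random-effects precision weighting of the summary-level means.
weights = 1.0 / (ses**2 + tau2)
pooled = np.sum(weights * means) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
```

The hierarchical Bayesian model additionally estimates tau2 and the within-study variances from the reported SDs instead of fixing them, which is what enables the study-specific and between-study DDI predictions described above.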

13.
Abstract

The paradigm shift towards precision medicine has reignited interest in determining whether there are differential treatment effects in subgroups of trial participants. Intrinsic to this problem is that any assessment of a differential treatment effect is predicated on estimating the treatment response accurately, while balancing the risk of overlooking an important subgroup against the potential for a decision based on a false discovery. While shrinkage models have been widely used to improve the accuracy of subgroup parameter estimates by leveraging the relationships between them, they can still yield excessively conservative or anti-conservative results. This may be due to the use of a normal prior, which over-shrinks the means of outlying subgroups towards the population mean and allows data from such subgroups to be excessively influential in estimating both the overall mean response and the subgroup mean responses; it may also reflect model misspecification from unaccounted-for variation or clustering. To address this issue, we investigate the use of nonparametric Bayes, particularly Dirichlet process priors, to create a flexible shrinkage model. This model represents uncertainty in the prior distribution for the overall response while accommodating heterogeneity among individual subgroups. We simulated data to compare estimates when there is no differential subgroup effect and when there is one. In both scenarios, the flexible shrinkage model does not force estimates to shrink excessively when similarity of treatment effects is not supported, yet it retains the attractiveness of improved precision, reflected in narrower credible intervals. We also applied the method to a dataset based on trials of an antimicrobial therapy conducted for several related indications.
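
The Dirichlet process prior underlying the flexible shrinkage model can be sketched with a truncated stick-breaking construction; the base measure, concentration parameter, and truncation level below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# One truncated stick-breaking draw from a Dirichlet process prior with a
# standard-normal base measure; alpha and K are illustrative choices.
alpha, K = 1.0, 50
v = rng.beta(1.0, alpha, size=K)
weights = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
atoms = rng.normal(0.0, 1.0, size=K)
# The draw is the discrete distribution placing `weights` on `atoms`; unlike
# a single normal prior, it can place subgroup means in separate clusters
# rather than shrinking every subgroup toward one common center.
```

Outlying subgroups then tend to be assigned their own cluster, which is why the DP prior avoids the over-shrinkage the normal prior can impose.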

14.
Reference-based imputation (RBI) methods have been proposed as sensitivity analyses for longitudinal clinical trials with missing data. RBI methods multiply impute the missing data in the treatment group using an imputation model built from data in the reference (control) group. RBI yields a conservative treatment effect estimate compared with the estimate obtained from multiple imputation (MI) under missing at random (MAR). However, an RBI analysis based on the regular MI approach can be overly conservative, because it not only discounts the treatment effect estimate but also imposes a penalty on the variance estimate. In this article, we investigate the statistical properties of RBI methods and propose both frequentist and Bayesian approaches for deriving accurate variance estimates for the RBI analysis. Results from simulation studies and applications to longitudinal clinical trial datasets are presented.
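
The conservatism of reference-based imputation is easy to see in a single-imputation caricature (the real methods impute multiply and combine via Rubin's rules); all distributional parameters below are assumed:

```python
import numpy as np

rng = np.random.default_rng(7)

# Jump-to-reference-style sketch: treatment-arm dropouts are imputed from the
# control (reference) distribution. True means and SD are assumptions.
mu_ctrl, mu_trt, sd = 0.0, -1.0, 1.0
completers = rng.normal(mu_trt, sd, size=80)   # observed treatment outcomes
imputed = rng.normal(mu_ctrl, sd, size=20)     # 20 dropouts, reference-based

# Treatment effect vs. the (known) control mean; with 20% of outcomes drawn
# from the reference distribution, the estimate is attenuated toward zero
# relative to the full-data effect of -1.0.
effect = np.concatenate([completers, imputed]).mean() - mu_ctrl
```

The attenuation of the point estimate is the intended discount; the article's concern is that naive MI variance formulas add a second, unintended penalty on top of it.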

15.
Abstract

This article deals with comparing a test therapy with a control therapy using meta-analyses of data from randomized controlled trials with a time-to-event endpoint. Such analyses can often benefit from prior information about the distribution of control group outcomes. One possible source of this information is published aggregate data about the control groups of historical trials in the medical literature. We review methods for making posterior inference about exponentially distributed event times more robust to prior-data conflict by discounting the prior information according to the extent of the observed conflict. We use simulations to compare analyses without prior information with the meta-analytic combined, meta-analytic predictive, and robust meta-analytic predictive approaches, as well as with Bayesian model averaging using shrinkage priors. Bayesian model averaging via shrinkage priors with well-chosen hyperpriors performed best in terms of credible interval coverage and mean squared error across scenarios. For the robust meta-analytic predictive approach, there was little benefit in increasing the weight of the informative mixture components beyond 0.2–0.5. This held even when little prior-data conflict was expected, except with very sparse data or substantial between-trial heterogeneity in control group hazard rates. Supplementary materials for this article are available online.
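
For exponentially distributed event times, the robust-mixture mechanics can be sketched with gamma components: each component is conjugately updated and then reweighted by how well it predicted the data, so conflict automatically shifts weight to the vague component. All prior parameters, the mixture weight, and the trial data below are assumptions:

```python
import numpy as np
from math import lgamma, log

# Robust mixture prior on an exponential event hazard:
#   w * Gamma(a1, b1)  [informative, historical]  +
#   (1 - w) * Gamma(a0, b0)  [vague];  all values are illustrative.
w, (a1, b1), (a0, b0) = 0.3, (20.0, 200.0), (1.0, 1.0)
events, exposure = 15, 120.0              # current-trial data

def log_marginal(a, b):
    """Log marginal likelihood of the data under a Gamma(a, b) prior
    (constants common to both components omitted)."""
    return (a * log(b) + lgamma(a + events) - lgamma(a)
            - (a + events) * log(b + exposure))

logw = np.array([log(w) + log_marginal(a1, b1),
                 log(1.0 - w) + log_marginal(a0, b0)])
logw -= logw.max()
post_w = np.exp(logw) / np.exp(logw).sum()

# Posterior mean hazard: mixture of the two conjugate component posteriors.
post_mean = (post_w[0] * (a1 + events) / (b1 + exposure)
             + post_w[1] * (a0 + events) / (b0 + exposure))
```

Because the reweighting is automatic, raising the prior weight of the informative component much beyond 0.2-0.5 changes the posterior little, consistent with the simulation finding above.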

16.
For psychiatric diseases, established mechanistic models are lacking, and alternative empirical mathematical structures are usually explored by trial and error. To address this problem, one of the most promising approaches is an automated, model-free technique that extracts the model structure directly from the statistical properties of the data. In this paper, a linear-in-parameters modelling approach is developed based on principal component analysis (PCA). The model complexity, i.e. the number of components entering the PCA-based model, is selected by either cross-validation or Mallows' Cp criterion. The new approach is validated on both simulated and clinical data from a Phase II depression trial. Simulated datasets are generated from three parametric models: Weibull, Inverse Bateman, and Weibull-and-Linear. On the simulated datasets, the PCA approach compares very favourably with some of the popular parametric models used for analyzing data collected during psychiatric trials, and the proposed method also performs well on the experimental data. This approach can be useful whenever a mechanistic modelling procedure cannot be pursued, and it could also support subsequent semi-mechanistic model building.
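
The core PCA step can be sketched on simulated longitudinal scores; here the number of components is chosen with a simple explained-variance cutoff as a stand-in for the cross-validation or Cp selection used in the paper, and all simulation parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated depression scores: a linear improvement over 9 visits plus a
# subject-level random intercept and residual noise (assumed parameters).
t = np.linspace(0.0, 8.0, 9)
n_subj = 60
scores = (20.0 - 2.0 * t
          + rng.normal(0.0, 2.0, size=(n_subj, 1))        # random intercept
          + rng.normal(0.0, 1.5, size=(n_subj, t.size)))  # residual noise

centered = scores - scores.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
# Number of principal components explaining 90% of the variance; in the paper
# this selection is done by cross-validation or Mallows' Cp instead.
k = int(np.searchsorted(np.cumsum(explained), 0.9)) + 1
```

The retained components define the basis functions of the linear-in-parameters model, so no mechanistic structure needs to be postulated.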

17.
We provide a set of formulas that allow separately performed analyses of population pharmacokinetic (PK) studies to be combined without any further computational effort. More specifically, given the point estimates and uncertainties of two population PK analyses, the formulas provide the point estimates and uncertainties of the combined analysis, including the population mean values, the between-subject variability, and the residual variability. To derive the formulas, we considered distributional assumptions applicable to the conjugate priors of the Bayesian problem of “unknown mean and variance.” To demonstrate the approach, the formulas were applied to an example involving the results of fitting two real experimental datasets. The formulas offer an easy-to-use method of combining different analyses that is particularly applicable to combining literature information.
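
The simplest special case of such a combination is precision-weighting two point estimates of the same population parameter; the numbers are illustrative, and the paper's full formulas also combine the between-subject and residual variance components, which this sketch omits:

```python
import numpy as np

# Point estimates and standard errors of a population clearance from two
# separately analyzed PK studies (hypothetical values).
cl = np.array([5.2, 4.8])
se = np.array([0.4, 0.3])

w = 1.0 / se**2                            # precision weights
cl_combined = np.sum(w * cl) / np.sum(w)
cl_se_combined = np.sqrt(1.0 / np.sum(w))
```

No refitting of either dataset is needed, which is the practical appeal when only literature summaries are available.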

18.
Assessing safety in drug development naturally incorporates an accumulation of knowledge as we progress from one study to another in a clinical development program. Ideally, it is the early clinical trial data that give us the greatest opportunity to leverage relevant historical information in the design and analysis of later-phase clinical trials. While Bayesian methods would appear to provide an ideal framework for assessing safety in this context, concerns regarding the formulation and prespecification of a prior have limited their use in practice. More specifically, when information from previous studies is used to form a prior, an implicit assumption of exchangeability is made. However, the possibility of nonexchangeability, which could lead to conflict between the prior and the data, cannot be ruled out. Motivated by these challenges, in this article we outline a number of strategies based on simple Bayesian methods for assessing safety concerns related to a prespecified adverse event. Three approaches to forming a prior distribution are examined: (i) a single informative conjugate prior; (ii) a meta-analytic-predictive (MAP) prior, which comprises a mixture of conjugate priors; and (iii) a robust mixture prior, combining either the single conjugate prior or the MAP prior with a noninformative prior. In the last case, when prior-data conflict arises between the historical informative prior and the data collected in the concurrent study, the noninformative component serves as an automatic corrective feature. These methods are illustrated with a motivating example involving the development of a new drug/delivery device for the treatment of agitation in schizophrenia. In addition, a simulation study examines the performance of each approach to prior specification.
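
For a binary adverse event, the robust mixture prior's "automatic corrective feature" is just Bayesian reweighting of the mixture components by their marginal likelihoods; the prior parameters, mixture weight, and adverse-event counts below are illustrative, not from the article:

```python
import numpy as np
from math import lgamma, log

def log_beta_fn(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

# Robust mixture prior on an adverse-event rate:
#   w * Beta(4, 36)  [informative, historical]  +  (1 - w) * Beta(1, 1).
# All counts and prior parameters are assumptions for the sketch.
w, (a1, b1), (a0, b0) = 0.8, (4.0, 36.0), (1.0, 1.0)
x, n = 9, 50                               # AE count in the current study

def log_marginal(a, b):
    """Log marginal likelihood of x events in n under a Beta(a, b) prior."""
    return log_beta_fn(a + x, b + n - x) - log_beta_fn(a, b)

logw = np.array([log(w) + log_marginal(a1, b1),
                 log(1.0 - w) + log_marginal(a0, b0)])
logw -= logw.max()
post_w = np.exp(logw) / np.exp(logw).sum()

# Posterior mean AE rate: mixture of the two conjugate component posteriors.
post_mean = (post_w[0] * (a1 + x) / (a1 + b1 + n)
             + post_w[1] * (a0 + x) / (a0 + b0 + n))
```

When the observed rate conflicts sharply with the historical component, its marginal likelihood collapses and the flat component takes over, discounting the historical prior without any manual intervention.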

19.
The estimation of 90% parametric confidence intervals (CIs) of mean AUC and Cmax ratios in bioequivalence (BE) tests is based on the assumption that formulation effects in log-transformed data are normally distributed. To compare the parametric CIs with those obtained from nonparametric methods, we performed repeated estimation on bootstrap-resampled datasets. The AUC and Cmax values from 3 archived datasets were used. BE tests on 1,000 resampled datasets from each archived dataset were performed using SAS (Enterprise Guide Ver. 3). Bootstrap nonparametric 90% CIs of the formulation effects were then compared with the parametric 90% CIs of the original datasets. The 90% CIs of formulation effects estimated from the 3 archived datasets differed slightly from the nonparametric 90% CIs obtained from BE tests on the resampled datasets. Histograms and density curves of the formulation effects obtained from the resampled datasets were similar to those of a normal distribution. However, in 2 of the 3 resampled log(AUC) datasets, the estimates of the formulation effects did not follow a Gaussian distribution. Bias-corrected and accelerated (BCa) CIs, one type of nonparametric CI of the formulation effects, shifted outside the parametric 90% CIs of the archived datasets in these 2 non-normally distributed resampled log(AUC) datasets. Currently, the 80–125% rule based on parametric 90% CIs is widely accepted under the assumption of normally distributed formulation effects in log-transformed data. However, nonparametric CIs may be a better choice when data do not follow this assumption.
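
A minimal version of the bootstrap comparison uses percentile CIs (the article also considers BCa intervals, which this sketch does not implement); the subject-level data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(6)

# Within-subject log(AUC) differences (test minus reference) for a
# hypothetical 24-subject crossover study; true ratio exp(0.05) ~ 1.05.
d = rng.normal(0.05, 0.2, size=24)

# Percentile bootstrap: resample subjects with replacement, recompute the
# mean difference, and take the 5th/95th percentiles on the log scale.
boot_means = np.array([rng.choice(d, size=d.size, replace=True).mean()
                       for _ in range(2000)])
ratio_ci = np.exp(np.quantile(boot_means, [0.05, 0.95]))   # 90% CI of ratio
bioequivalent = (ratio_ci[0] >= 0.80) and (ratio_ci[1] <= 1.25)
```

Because the percentile interval makes no normality assumption on the formulation effects, it remains usable in exactly the non-Gaussian cases where the parametric 90% CI is suspect.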

