20 similar references found (search time: 22 ms)
1.
We extend the methodology for design evaluation and optimization in nonlinear mixed effects models, with an illustration of the decrease of human immunodeficiency virus viral load after antiretroviral treatment initiation described by a bi-exponential model. We first show the relevance of the predicted standard errors (SEs) given by the computation of the population Fisher information matrix using the R function PFIM, in comparison to those computed with the stochastic approximation expectation-maximization algorithm implemented in the Monolix software. We then highlight the usefulness of the Fedorov-Wynn (FW) algorithm for design optimization compared to the Simplex algorithm. From the predicted SEs of PFIM, we compute the predicted power of the Wald test to detect a treatment effect, as well as the number of subjects needed to achieve a given power. Using the FW algorithm, we investigate the influence of the design on the power and show that, for optimized designs with the same total number of samples, the power increases when the number of subjects increases and the number of samples per subject decreases. A simulation study is also performed with the nlme function of R to confirm this result and show the relevance of the predicted powers compared to those observed by simulation.
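As a rough illustration of the power computation described in this abstract (not the authors' PFIM code), the predicted power of the Wald test follows directly from the predicted SE, and the number of subjects for a target power follows from the usual 1/sqrt(N) scaling of the SE. All numbers below are hypothetical.

```r
# Sketch: predicted power of a two-sided Wald test for a treatment effect
# beta, given an SE predicted from the population Fisher information matrix,
# plus the number of subjects needed for a target power. Assumes the SE
# scales as 1/sqrt(N) for a fixed per-subject design.
wald_power <- function(beta, se, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2)
  pnorm(-z + abs(beta) / se) + pnorm(-z - abs(beta) / se)
}

subjects_needed <- function(beta, se_ref, n_ref, target = 0.9, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2) + qnorm(target)
  ceiling(n_ref * (z * se_ref / abs(beta))^2)
}

wald_power(beta = -0.3, se = 0.12)                       # power at this design
subjects_needed(beta = -0.3, se_ref = 0.12, n_ref = 50)  # N for 90% power
```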
2.
Population pharmacokinetic (PK) and pharmacodynamic (PD) studies evaluate drug concentration profiles and pharmacological effects over time when standard drug dosage regimens are assigned. They constitute a scientific basis for determining the optimal dosage of a new drug. Population PK/PD analyses can be performed on relatively few measures per patient, enabling the study of a sizable sample of patients who take the drug over a possibly long period of time. We describe the problem of bias in PK/PD estimators in the presence of partial compliance with assigned treatment, as occurs in practice. We propose to address this by recording accurate data on a number of previous dose timings and using timing-explicit hierarchical non-linear models for analysis. In practice, we rely on electronic measures of an ambulatory patient's drug dosing history. Especially for non-linear PD estimation, we found that not only can bias be reduced, but higher precision can also be obtained from the same number of data points when irregular drug intake times occur in well-controlled studies. We apply methods proposed by Mentré et al. to investigate the information matrix for hierarchical non-linear models. This confirms that a substantial gain in precision can be expected from irregular drug intakes. Intuitively, this is explained by the fact that regular takers experience a relatively small range of concentrations, which makes it hard to estimate any deviation from linearity in the effect model. We conclude that estimators of PK/PD parameters can benefit greatly from information that enters through greater variation in the drug exposure process.
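The timing-explicit idea is easy to picture with a standard one-compartment model with first-order absorption, superposing the contributions of the recorded dose times. The sketch below is illustrative only; parameter values and dosing histories are hypothetical.

```r
# Illustrative sketch of a timing-explicit concentration model: a
# one-compartment model with first-order absorption, superposing the
# contributions of all recorded dose times. Parameters are hypothetical.
conc_profile <- function(t, dose_times, dose = 100, ka = 1.0, ke = 0.1, V = 20) {
  sapply(t, function(tt) {
    dt <- tt - dose_times[dose_times <= tt]      # time since each past dose
    sum(dose * ka / (V * (ka - ke)) * (exp(-ke * dt) - exp(-ka * dt)))
  })
}

t_grid    <- seq(0, 120, by = 0.5)
regular   <- seq(0, 96, by = 24)                 # perfect q24h compliance
irregular <- c(0, 20, 49, 70, 96)                # electronically recorded times

range(conc_profile(t_grid, regular))             # narrow concentration range
range(conc_profile(t_grid, irregular))           # wider range of exposures
```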
3.
Kush Kapur, Runa Bhaumik, X. Charlene Tang, Kwan Hur, Domenic J. Reda, Dulal K. Bhaumik. Statistics in Medicine, 2014, 33(22):3781-3800
In this article, we develop appropriate statistical methods for determining the required sample size when comparing the efficacy of an intervention to a control with repeated binary response outcomes. Our proposed methodology incorporates the complexity of the hierarchical nature of the underlying designs and provides solutions when varying attrition rates are present over time. We explore how the between-subject variability and the attrition rates jointly influence the sample size formula. Our procedure also shows how efficient estimation methods play a crucial role in power analysis. A practical guideline is provided for when information regarding individual variance components is unavailable. The validity of our methods is established by extensive simulation studies. Results are illustrated with the help of two randomized clinical trials in the areas of contraception and insomnia. Copyright © 2014 John Wiley & Sons, Ltd.
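For intuition, a generic sample-size sketch along these lines (not the authors' derivation) combines the classical two-proportion formula with a design effect for exchangeable within-subject correlation, discounting the number of visits by a common retention rate.

```r
# Hedged sketch (not the authors' formula): per-arm sample size for comparing
# proportions with m repeated binary measures, exchangeable within-subject
# correlation rho, and a common per-visit retention rate.
n_per_arm <- function(p1, p2, m, rho, retention = 1, alpha = 0.05, power = 0.9) {
  m_eff <- m * retention                         # average completed visits
  deff  <- (1 + (m_eff - 1) * rho) / m_eff       # exchangeable design effect
  pbar  <- (p1 + p2) / 2
  z     <- qnorm(1 - alpha / 2) + qnorm(power)
  ceiling(z^2 * 2 * pbar * (1 - pbar) * deff / (p1 - p2)^2)
}

n_per_arm(p1 = 0.30, p2 = 0.20, m = 4, rho = 0.3, retention = 0.85)
```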
4.
Individuals infected with the human immunodeficiency virus type 1 (HIV-1) who initiate antiretroviral therapy typically experience a marked decline in concentrations of HIV-1 RNA in plasma. Often, however, viral rebound occurs within the first year of treatment and this rebound may be associated with resistance to antiretroviral therapy. For this reason, it is important to study the patterns of virological response of HIV-1 RNA to treatment. In particular, there is interest in the relationship between the lowest level of plasma HIV-1 RNA attained after initiation of therapy (nadir value) and the time until rebound. To investigate this question, we implement a simple and flexible non-linear mixed effects model for the trajectory of the HIV-1 RNA until rebound. This model is also consistent with biological insights into the effects of treatment. We also show how the problem of censoring of HIV-1 RNA values at the lower limit of assay quantification can be addressed using a multiple imputation scheme. The algorithm is simple to implement and is based on accessible software. Our application makes use of data from clinical trial 315 conducted by the AIDS Clinical Trials Group (ACTG 315). We find a strong relationship between HIV-1 RNA nadir and time to rebound, with potentially important consequences for the management of HIV-infected individuals.
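The key step of such a multiple imputation scheme can be sketched in a few lines: each value censored at the quantification limit is replaced by a draw from the fitted normal distribution truncated above at the limit, via the inverse-CDF method. The mean and SD below are placeholders for values that would come from the current model fit.

```r
# One imputation step for left-censored log10 HIV-1 RNA: draw from the fitted
# normal truncated above at the quantification limit (inverse-CDF method).
# mu and sigma stand in for values taken from the current model fit.
impute_censored <- function(n, mu, sigma, limit) {
  u <- runif(n, 0, pnorm((limit - mu) / sigma))  # uniform over the censored tail
  mu + sigma * qnorm(u)                          # invert the normal CDF
}

set.seed(1)
impute_censored(n = 5, mu = 2.8, sigma = 0.6, limit = 2.6)  # all draws < limit
```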
5.
We propose a methodology for evaluation of agreement between two methods of measuring a continuous variable whose variability changes with magnitude. This problem routinely arises in method comparison studies that are common in health-related disciplines. Assuming replicated measurements, we first model the data using a heteroscedastic mixed-effects model, wherein a suitably defined true measurement serves as the variance covariate. Fitting this model poses some computational difficulties as the likelihood function is not available in a closed form. We deal with this issue by suggesting four estimation methods to obtain approximate maximum likelihood estimates. Two of these methods are based on numerical approximation of the likelihood, and the other two are based on approximation of the model. Next, we extend the existing agreement evaluation methodology designed for homoscedastic data to work under the proposed heteroscedastic model. This methodology can be used with any scalar measure of agreement. Simulations show that the suggested inference procedures generally work well for moderately large samples. They are illustrated by analyzing a data set of cholesterol measurements. Copyright © 2013 John Wiley & Sons, Ltd.
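One accessible approximation to such a heteroscedastic mixed-effects model in R is nlme's varPower variance function with the fitted values as the variance covariate; the dataset and variable names below are hypothetical stand-ins.

```r
# Heteroscedastic mixed-effects sketch in nlme: error SD modeled as a power
# of the fitted value (the variance covariate). `cholesterol`, `measurement`,
# `method`, and `subject` are hypothetical names.
library(nlme)

fit <- lme(measurement ~ method,                    # fixed method effect
           random = ~ 1 | subject,                  # subject-level true value
           weights = varPower(form = ~ fitted(.)),  # SD ~ |fitted|^delta
           data = cholesterol)

fit$modelStruct$varStruct                           # estimated power delta
```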
6.
Longitudinal studies often gather joint information on time to some event (survival analysis, time to dropout) and serial outcome measures (repeated measures, growth curves). Depending on the purpose of the study, one may wish to estimate and compare serial trends over time while accounting for possibly non-ignorable dropout, or one may wish to investigate associations between the event time of interest and various longitudinal trends. In this paper, we consider a class of random-effects models known as shared parameter models that are particularly useful for jointly analysing such data, namely repeated measurements and event time data. Specific attention is given to the longitudinal setting where the primary goal is to estimate and compare serial trends over time while adjusting for possible informative censoring due to patient dropout. Parametric and semi-parametric survival models for event times, together with generalized linear or non-linear mixed-effects models for repeated measurements, are proposed for jointly modelling serial outcome measures and event times. Estimation is based on a generalized non-linear mixed-effects model that may be easily implemented using existing software. This approach allows for flexible modelling of both the distribution of event times and the relationship of the longitudinal response variable to the event time of interest. The model and methods are illustrated using data from a multi-centre study of the effects of diet and blood pressure control on progression of renal disease, the Modification of Diet in Renal Disease (MDRD) Study.
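A small simulation makes the shared-parameter structure concrete: one subject-level random effect enters both the longitudinal mean and the dropout hazard, so naive per-visit summaries are biased. All values below are illustrative.

```r
# Simulation sketch of the shared-parameter structure: the same random effect
# b_i drives the longitudinal level and the dropout hazard, so dropout is
# non-ignorable. All parameter values are illustrative.
set.seed(42)
n <- 200
visits <- 0:5
b <- rnorm(n)                                    # shared random intercept
dropout <- rexp(n, rate = 0.1 * exp(0.8 * b))    # high-b subjects drop out sooner
dat <- do.call(rbind, lapply(seq_len(n), function(i) {
  t_obs <- visits[visits < dropout[i]]           # visits completed before dropout
  data.frame(id = i, time = t_obs,
             y = 10 + b[i] - 0.5 * t_obs + rnorm(length(t_obs), sd = 0.5))
}))

# Observed means at later visits fall below the population mean 10 - 0.5 * t,
# because subjects with large b_i (high y) have left the study early.
round(tapply(dat$y, dat$time, mean), 2)
```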
7.
We present a regression model for the joint analysis of longitudinal multiple source Gaussian data. Longitudinal multiple source data arise when repeated measurements are taken from two or more sources, and each source provides a measure of the same underlying variable and on the same scale. This type of data generally produces a relatively large number of observations per subject; thus estimation of an unstructured covariance matrix often may not be possible. We consider two methods by which parsimonious models for the covariance can be obtained for longitudinal multiple source data. The methods are illustrated with an example of multiple informant data arising from a longitudinal interventional trial in psychiatry.
8.
X. Xie, X. Xue, S.J. Gange, H.D. Strickler, M.Y. Kim, the WH Study Group. Statistics in Medicine, 2012, 31(21):2275-2289
We develop statistical approaches for estimating and drawing inference on the correlation between two biomarkers that are repeatedly assessed over time and subject to left-censoring because values fall below minimum detection levels. We propose a linear mixed-effects model and estimate the parameters with the Monte Carlo expectation maximization (MCEM) method. Inferences regarding the model parameters and the correlation between the biomarkers are performed by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCEM method with existing methods, including maximum likelihood estimation, multiple imputation, and two widely used ad hoc approaches: replacing the censored values with the detection limit or with half of the detection limit. The results show that the performance of the MCEM method with respect to relative bias and coverage probability for the 95% confidence interval is superior to the two substitution approaches and exceeds that of multiple imputation at medium to high levels of censoring, and the standard error estimates from the MCEM method are close to ideal. The maximum likelihood estimation method can estimate the parameters accurately; however, a non-positive-definite information matrix can occur so that the variances are not estimable. These five methods are illustrated with data from a longitudinal human immunodeficiency virus study to estimate and draw inference on the correlation between HIV RNA levels measured in plasma and in cervical secretions at multiple time points. Copyright © 2012 John Wiley & Sons, Ltd.
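The distortion caused by the two ad hoc substitutions is easy to reproduce in a toy bivariate simulation (illustrative values only):

```r
# Toy bivariate simulation: substituting the detection limit (DL) or DL/2 for
# left-censored values does not recover the true correlation.
library(MASS)
set.seed(7)
true_rho <- 0.6
xy <- mvrnorm(2000, mu = c(5, 5),
              Sigma = matrix(c(1, true_rho, true_rho, 1), 2, 2))
dl  <- qnorm(0.3, mean = 5, sd = 1)              # ~30% censoring per biomarker
sub <- function(v, fill) ifelse(v < dl, fill, v)

cor(sub(xy[, 1], dl),     sub(xy[, 2], dl))      # DL substitution
cor(sub(xy[, 1], dl / 2), sub(xy[, 2], dl / 2))  # DL/2 substitution
true_rho                                         # target value
```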
9.
Modelling HIV dynamics has played an important role in understanding the pathogenesis of HIV infection over the past several years. Non-linear parametric models, derived from the mechanisms of HIV infection and drug action, have been used to fit short-term clinical data from AIDS clinical trials. However, the parametric models may not be adequate to fit long-term HIV dynamic data. To preserve the meaningful interpretation of the short-term HIV dynamic models as well as to characterize the long-term dynamics, we introduce a class of semi-parametric non-linear mixed-effects (NLME) models. The models are non-linear in population characteristics (fixed effects) and individual variations (random effects), both of which are modelled semi-parametrically. A basis-based approach is proposed to fit the models, which transforms a general semi-parametric NLME model into a set of standard parametric NLME models, indexed by the bases used. The bases we employ are natural cubic splines for easy implementation. The resulting standard NLME models are low-dimensional and easy to solve. Statistical inferences, including tests of parametric against semi-parametric mixed-effects models, are investigated. Innovative bootstrap procedures are developed for simulating the empirical distributions of the test statistics. Small-scale simulation and bootstrap studies show that our bootstrap procedures work well. The proposed approach and procedures are applied to long-term HIV dynamic data from an AIDS clinical study.
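The basis-based idea, reduced to its simplest form: once the unknown smooth function is represented by a low-dimensional natural cubic spline basis, a standard parametric mixed-effects fit applies. Dataset and variable names below are hypothetical.

```r
# Basis-based sketch: the nonparametric component is represented by a natural
# cubic spline basis, reducing the fit to a standard parametric mixed-effects
# model. `hiv`, `log10rna`, `day`, and `patient` are hypothetical names.
library(splines)
library(nlme)

fit <- lme(log10rna ~ ns(day, df = 4),           # spline stand-in for f(day)
           random = ~ 1 | patient,               # subject-level random intercept
           data = hiv)
summary(fit)
```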
10.
In this article, we implement a practical computational method for various semiparametric mixed effects models, estimating nonlinear functions by penalized splines. We approximate the integral of the penalized likelihood with respect to the random effects using adaptive Gaussian quadrature, which can be conveniently implemented in the SAS procedure NLMIXED. We carry out the selection of smoothing parameters through approximate generalized cross-validation scores. Our method has two advantages: (1) the estimation is more accurate than the currently available quasi-likelihood method for sparse data, for example, binary data; and (2) it can be used to fit more sophisticated models. We show the performance of our approach in simulation studies with longitudinal outcomes from three settings: binary data, normal data after Box-Cox transformation, and count data with log-Gamma random effects. We also develop an estimation method for a longitudinal two-part nonparametric random effects model and apply it to analyze repeated measures of semicontinuous daily drinking records in a randomized controlled trial of topiramate. Copyright © 2012 John Wiley & Sons, Ltd.
11.
Non-linear mixed-effects models are powerful tools for modelling HIV viral dynamics. In AIDS clinical trials, the viral load measurements for each subject are often sparse. In such cases, linearization procedures are usually used for inference. Under such linearization procedures, however, standard covariate selection methods based on the approximate likelihood, such as the likelihood ratio test, may not be reliable. In order to identify significant host factors for HIV dynamics, in this paper we consider two alternative approaches for covariate selection: one based on individual non-linear least squares estimates and the other based on individual empirical Bayes estimates. Our simulation study shows that, if the within-individual data are sparse and the between-individual variation is large, the two alternative covariate selection methods are more reliable than the likelihood ratio test, and the more powerful method based on individual empirical Bayes estimates is especially preferable. We also consider missing data in covariates; commonly used missing data methods may lead to misleading results, and we recommend a multiple imputation method to handle missing covariates. A real data set from an AIDS clinical trial is analysed using the various covariate selection and missing data methods.
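The empirical Bayes route can be sketched as a two-stage procedure: fit the mixed model without covariates, extract each subject's empirical Bayes estimates, and regress them on candidate host factors. Dataset and variable names below are hypothetical.

```r
# Two-stage empirical Bayes screening sketch. `hiv` (long format) and
# `covariates` (one row per patient) are hypothetical data sets.
library(nlme)

base_fit <- lme(log10rna ~ day, random = ~ day | patient, data = hiv)
eb <- ranef(base_fit)                            # EB estimates: (Intercept), day
eb$patient <- rownames(eb)

screen <- merge(eb, covariates, by = "patient")
summary(lm(day ~ cd4_baseline + age, data = screen))  # screen slopes vs host factors
```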
12.
Studies of HIV dynamics in AIDS research are very important in understanding the pathogenesis of HIV-1 infection and in assessing the effectiveness of antiviral therapies. Nonlinear mixed-effects (NLME) models have been used for modeling between-subject and within-subject variations in viral load measurements. Normality of both the within-subject random errors and the random effects is a routine assumption for NLME models, but it may be unrealistic, obscuring important features of between-subject and within-subject variations, particularly if the data exhibit skewness. In this paper, we develop a Bayesian approach to NLME models and relax the normality assumption by allowing both the model random errors and the random effects to have a multivariate skew-normal distribution. The proposed model provides flexibility in capturing a broad range of non-normal behavior and includes normality as a special case. We use a real data set from an AIDS study to illustrate the proposed approach by comparing various candidate models. We find that the model with skew-normality provides a better fit to the observed data, and the corresponding parameter estimates differ significantly from those based on the normal model when skewness is present in the data. These findings suggest that assuming a skew-normal distribution is very important for achieving robust and reliable results, in particular when the data exhibit skewness. Copyright © 2010 John Wiley & Sons, Ltd.
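For reference, the univariate skew-normal density that such models generalize to the multivariate case can be written out in base R; alpha = 0 recovers the normal density.

```r
# Univariate skew-normal density in base R: location xi, scale omega,
# skewness alpha; alpha = 0 gives back the normal density.
dskewnorm <- function(x, xi = 0, omega = 1, alpha = 0) {
  z <- (x - xi) / omega
  (2 / omega) * dnorm(z) * pnorm(alpha * z)
}

curve(dskewnorm(x), -4, 4)                         # symmetric (normal)
curve(dskewnorm(x, alpha = 4), -4, 4, add = TRUE)  # right-skewed
```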
13.
Problems common to many longitudinal HIV/AIDS, cancer, vaccine, and environmental exposure studies are the presence of a lower limit of quantification for a skewed outcome and time-varying covariates measured with error. Relatively little work has been published that deals with these features of longitudinal data simultaneously. In particular, left-censored data falling below a limit of detection may sometimes occur in a proportion larger than expected under the usually assumed log-normal distribution. In such cases, alternative models that can account for a high proportion of censored data should be considered. In this article, we present an extension of the Tobit model that incorporates a mixture of true undetectable observations and values from a skew-normal distribution for an outcome subject to left censoring and skewness, together with covariates measured with substantial error. To quantify the covariate process, we offer a flexible nonparametric mixed-effects model within the Tobit framework. A Bayesian modeling approach is used to assess the simultaneous impact of left censoring, skewness, and measurement error in covariates on inference. The proposed methods are illustrated using real data from an AIDS clinical study. Copyright © 2013 John Wiley & Sons, Ltd.
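The Tobit building block at the core of this extension is a short likelihood: censored observations contribute the probability mass below the detection limit, observed values contribute their density. The sketch below is the plain Tobit only, without the paper's mixture, skewness, or measurement-error components.

```r
# Plain Tobit log-likelihood for a left-censored outcome: censored points
# contribute P(Y < limit), observed points contribute their density.
tobit_loglik <- function(y, censored, mu, sigma, limit) {
  sum(ifelse(censored,
             pnorm((limit - mu) / sigma, log.p = TRUE),
             dnorm(y, mu, sigma, log = TRUE)))
}

# Toy maximization over (mu, log sigma); limit = log10 of a 400-copy assay
y    <- c(2.1, 2.9, 3.4, 2.6, 3.0)
cens <- c(FALSE, FALSE, FALSE, TRUE, TRUE)
fit  <- optim(c(3, 0), function(p)
  -tobit_loglik(y, cens, p[1], exp(p[2]), limit = 2.6))
fit$par                                          # estimated mu and log sigma
```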
14.
J.M. Madden, L.D. Browne, X. Li, P.M. Kearney, A.P. Fitzgerald. Statistics in Medicine, 2018, 37(10):1682-1695
Blood pressure (BP) fluctuates throughout the day. The pattern it follows represents one of the most important circadian rhythms in the human body. For example, morning BP surge has been suggested as a potential risk factor for cardiovascular events occurring in the morning, but accurate quantification of this phenomenon remains a challenge. Here, we outline a novel method to quantify morning surge. We demonstrate how the most commonly used method to model 24-hour BP, the single cosinor approach, can be extended to a multiple-component cosinor random-effects model. We outline how this model can be used to obtain a measure of morning BP surge by taking derivatives of the model fit. The model is compared with a functional principal component analysis that determines the main components of variability in the data. Data from the Mitchelstown Study, a population-based study of Irish adults (n = 2047), were used, of whom a subsample (1207) underwent 24-hour ambulatory blood pressure monitoring. We demonstrate that our 2-component model provided a significant improvement in fit compared with a single-component model and a fit similar to that of a more complex model captured by B-splines using functional principal component analysis. The estimate of the average maximum slope was 2.857 mmHg/30 min (bootstrap estimates; 95% CI: 2.855-2.858 mmHg/30 min). Simulation results allowed us to quantify the between-individual SD in maximum slopes, which was 1.02 mmHg/30 min. By obtaining derivatives we have demonstrated a novel approach to quantifying morning BP surge and its variation between individuals. This is the first demonstration of a cosinor approach to obtain a measure of morning surge.
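The derivative step is straightforward once the two-component cosinor mean function is written down; the coefficients below are illustrative, not the paper's estimates.

```r
# Two-component cosinor mean (24 h + 12 h harmonics) and its analytic
# derivative; the maximum morning slope is read off the derivative.
cosinor2 <- function(t, M, A1, phi1, A2, phi2) {
  M + A1 * cos(2 * pi * t / 24 + phi1) + A2 * cos(4 * pi * t / 24 + phi2)
}
cosinor2_slope <- function(t, A1, phi1, A2, phi2) {          # mmHg per hour
  -A1 * (2 * pi / 24) * sin(2 * pi * t / 24 + phi1) -
    A2 * (4 * pi / 24) * sin(4 * pi * t / 24 + phi2)
}

t <- seq(0, 24, by = 0.01)
slope <- cosinor2_slope(t, A1 = 10, phi1 = 2.8, A2 = 4, phi2 = 1.2)
max(slope) / 2                                   # maximum slope in mmHg/30 min
```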
15.
C.H. Mallinckrodt, M.J. Detke, C.J. Kaiser, J.G. Watkin, G. Molenberghs, R.J. Carroll. Statistics in Medicine, 2006, 25(14):2384-2397
BACKGROUND: It has been recommended that onset of antidepressant action be assessed using survival analyses with assessments taken at least twice per week. However, such an assessment schedule is problematic to implement. The present study assessed the feasibility of comparing onset of action between treatments using a categorical repeated measures approach with a traditional assessment schedule. METHOD: Four scenarios representative of antidepressant clinical trials were created by varying mean improvements over time. Two assessment schedules were compared within the simulated 8-week studies: (i) 'frequent' assessment: 16 postbaseline visits (twice weekly for 8 weeks); (ii) 'traditional' assessment: 5 postbaseline visits (Weeks 1, 2, 4, 6, and 8). Onset was defined as a 20 per cent improvement from baseline that had to be sustained at all subsequent assessments. Differences between treatments were analysed with a survival analysis (the Kaplan-Meier [KM] product limit method) and a categorical mixed-effects model repeated measures analysis (MMRM-CAT). RESULTS: More frequent assessments resulted in small reductions in empirical standard errors compared with traditional assessments for both analytic methods. More frequent assessments altered estimates of treatment group differences in KM such that power increased when the difference between treatments was increasing over time, but power decreased when the treatment difference decreased over time. More frequent assessments had a minimal effect on estimates of treatment group differences in MMRM-CAT. The MMRM-CAT analysis of data from a traditional assessment schedule provided adequate control of type I error and had power comparable to or greater than that of KM analyses of data from either a frequent or a traditional assessment schedule. CONCLUSION: In the scenarios tested in this study it was reasonable to assess treatment group differences in onset of action with MMRM-CAT and a traditional assessment schedule. Additional research is needed to assess whether these findings hold in data with drop-out and across definitions of onset.
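The onset definition and the KM comparison can be coded compactly; `scores` below is a hypothetical wide matrix of symptom scores (rows = patients, first column = baseline, lower = better) and `trt` a hypothetical treatment indicator.

```r
# Onset = first visit with >= 20% improvement from baseline, sustained at all
# later visits; groups compared with a log-rank test on the KM curves.
library(survival)

onset_visit <- function(post, baseline) {
  ok <- post <= 0.8 * baseline                   # >= 20% improvement
  sustained <- rev(cumprod(rev(ok))) == 1        # TRUE from some visit onward
  if (any(sustained)) which(sustained)[1] else NA
}

onset <- apply(scores, 1, function(r) onset_visit(r[-1], baseline = r[1]))
time  <- ifelse(is.na(onset), ncol(scores) - 1, onset)  # censor at last visit
event <- as.integer(!is.na(onset))
survdiff(Surv(time, event) ~ trt)                # log-rank comparison of onset
```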
16.
Zhiwu Yan. Statistics in Medicine, 2013, 32(6):956-963
We investigate the impact of baseline covariates on the efficiency of statistical analyses of crossover designs. For practical reasons, we consider two different baseline methods: study baselines and period-dependent baselines. For each baseline method, we establish analytical upper bounds on the relative efficiency of a large class of crossover designs, the totally balanced designs, under a model with the baseline covariates as compared with the model without them. We present numerical details based on these bounds for assorted scenarios and discuss the implications of these results. Copyright © 2012 John Wiley & Sons, Ltd.
17.
Rolando De la Cruz, Claudio Fuentes, Cristian Meza, Dae-Jin Lee, Ana Arribas-Gil. Statistics in Medicine, 2017, 36(13):2120-2134
We propose a semiparametric nonlinear mixed-effects model (SNMM) using penalized splines to classify longitudinal data and improve the prediction of a binary outcome. The work is motivated by a study in which different hormone levels were measured during the early stages of pregnancy, and the challenge is using this information to predict normal versus abnormal pregnancy outcomes. The aim of this paper is to compare models and estimation strategies on the basis of alternative formulations of SNMMs depending on the characteristics of the data set under consideration. For our motivating example, we address the classification problem using a particular case of the SNMM in which the parameter space has a finite dimensional component (fixed effects and variance components) and an infinite dimensional component (unknown function) that need to be estimated. The nonparametric component of the model is estimated using penalized splines. For the parametric component, we compare the advantages of using random effects versus direct modeling of the correlation structure of the errors. Numerical studies show that our approach improves over other existing methods for the analysis of this type of data. Furthermore, the results obtained using our method support the idea that explicit modeling of the serial correlation of the error term improves the prediction accuracy with respect to a model with random effects, but independent errors. Copyright © 2017 John Wiley & Sons, Ltd.
18.
It is common practice to analyze complex longitudinal data using nonlinear mixed-effects (NLME) models under a normality assumption. NLME models with normal distributions provide the most popular framework for modeling continuous longitudinal outcomes, assuming individuals come from a homogeneous population and relying on random effects to accommodate inter-individual variation. However, two issues may stand out: (i) the normality assumption for model errors may cause a lack of robustness and subsequently lead to invalid inference and unreasonable estimates, particularly if the data exhibit skewness; and (ii) a homogeneous population assumption may unrealistically obscure important features of between-subject and within-subject variations, which may result in unreliable modeling results. There have been relatively few studies concerning longitudinal data with both heterogeneity and skewness features. In the last two decades, skew distributions have proven beneficial in dealing with asymmetric data in various applications. In this article, our objective is to address the simultaneous impact of both features arising from longitudinal data by developing a flexible finite mixture of NLME models with skew distributions under a Bayesian framework that allows estimation of both model parameters and class membership probabilities. Simulation studies are conducted to assess the performance of the proposed models and methods, and a real example from an AIDS clinical trial illustrates the methodology by modeling viral dynamics to compare potential models with different distribution specifications; the analysis results are reported. Copyright © 2014 John Wiley & Sons, Ltd.
19.
Clinical trials requiring the collection of pharmacokinetic information often specify blood samples to be taken at fixed times. This may be feasible when trial participants are in a controlled environment, such as in early phase clinical trials; however, it becomes problematic in trials where patients are in an out-patient clinic setting, such as in late phase drug development. In such a situation it is common to take blood samples when it is convenient for all involved, which may result in data that are uninformative. This paper proposes an approach to pharmacokinetic study design that allows greater flexibility as to when blood samples can be taken while still yielding data that allow satisfactory parameter estimation. The sampling window approach proposed in this paper is based on determining time intervals around the D-optimum pharmacokinetic sampling times. These intervals are determined by requiring the sampling window design to achieve a specified level of efficiency compared to the fixed-times D-optimum design. Several approaches are suggested for dealing with this design problem.
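The efficiency comparison underlying sampling windows can be sketched by building the individual Fisher information for a one-compartment oral model from numerical sensitivities and comparing determinants; the "optimal" times and parameter values below are illustrative stand-ins, not a computed D-optimum.

```r
# D-efficiency sketch for a sampling-window design: Fisher information for a
# one-compartment oral model (additive error) from numerical sensitivities,
# compared between reference times and times drawn inside windows.
dose <- 100
eta <- function(t, p)
  dose * p["ka"] / (p["V"] * (p["ka"] - p["ke"])) *
    (exp(-p["ke"] * t) - exp(-p["ka"] * t))

fim <- function(times, p, h = 1e-5) {
  S <- sapply(names(p), function(nm) {           # sensitivities d eta / d p
    p1 <- p; p1[nm] <- p1[nm] + h
    (eta(times, p1) - eta(times, p)) / h
  })
  t(S) %*% S                                     # information up to 1/sigma^2
}

p <- c(ka = 1.0, ke = 0.1, V = 20)
opt_times <- c(0.5, 2, 12, 24)                   # stand-in "optimal" design
win_times <- opt_times + c(0.25, -0.5, 1, -2)    # one draw from the windows

(det(fim(win_times, p)) / det(fim(opt_times, p)))^(1 / length(p))  # D-efficiency
```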
20.
In pharmacokinetics, compartment models are often used to describe the time course of blood concentration after the administration of a drug. In this article, we propose an optimal design criterion for precise estimation of parameters included in the compartment model and illustrate the non-sequential design of sampling times of blood drug concentration data in individual pharmacokinetics. The proposed optimal design criterion minimizes the determinant of the mean-squared error matrix of the parameter estimator that is quadratically approximated by the curvature array. Therefore, the proposed criterion considers the intrinsic and parameter-effects nonlinearity underlying the compartment model, and so is applicable in a pharmacokinetic experiment where the sample size of the blood drug concentration data is quite small.