Similar Documents
20 similar documents found (search time: 640 ms)
1.
Methods for analyzing clustered survival data are gaining popularity in biomedical research. Naive attempts to fit marginal models to such data may lead to biased estimators and misleading inference when the size of a cluster is statistically correlated with some cluster-specific latent factors or with one or more cluster-level covariates. A simple adjustment to correct for potentially informative cluster size is achieved through inverse cluster size reweighting. We give a methodology that incorporates this technique in fitting an accelerated failure time marginal model to clustered survival data. Furthermore, right censoring is handled by inverse probability of censoring reweighting through the use of a flexible model for the censoring hazard. The resulting methodology is examined through a thorough simulation study. An illustrative example using a real dataset examines the effects of age at enrollment and smoking on tooth survival.
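A minimal sketch of the inverse-cluster-size weighting idea (not the authors' implementation): each observation in a cluster of size n_i receives weight 1/n_i, so every cluster contributes equally to the marginal fit. For illustration this uses a simple weighted least squares on log event times and ignores censoring; all data and names are hypothetical.

```python
import numpy as np

def icw_aft_fit(log_times, X, cluster_ids):
    """Weighted least squares for log T = intercept + X*beta + error,
    each observation weighted by 1 / (its cluster size).
    Sketch of inverse-cluster-size reweighting; censoring is not handled."""
    cluster_ids = np.asarray(cluster_ids)
    _, inverse, counts = np.unique(cluster_ids, return_inverse=True, return_counts=True)
    w = 1.0 / counts[inverse]                              # weight 1/n_i per observation
    Xd = np.column_stack([np.ones(len(log_times)), X])     # add intercept
    Xw = Xd * w[:, None]                                   # apply weights
    return np.linalg.solve(Xw.T @ Xd, Xw.T @ log_times)

# toy usage with hypothetical clustered data
rng = np.random.default_rng(0)
cluster_ids = np.repeat(np.arange(30), rng.integers(1, 6, size=30))
n = len(cluster_ids)
X = rng.normal(size=n)
log_times = 0.5 + 1.2 * X + rng.normal(scale=0.3, size=n)
print(icw_aft_fit(log_times, X, cluster_ids))
```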

2.
Doubly censored data refers to time-to-event data for which both the originating and failure times are censored. In studies involving AIDS incubation time or survival after dementia onset, for example, data are frequently doubly censored because the date of the originating event is interval-censored and the date of the failure event is usually right-censored. The primary interest is in the distribution of elapsed times between the originating and failure events and its relationship to exposures and risk factors. The estimating equation approach [Sun et al. (1999). Regression analysis of doubly censored failure time data with applications to AIDS studies. Biometrics 55, 909-914] and its extensions assume the same distribution of originating event times for all subjects. This paper demonstrates the importance of utilizing additional covariates to impute originating event times: more accurate estimation of originating event times may lead to less biased parameter estimates for the elapsed time. The Bayesian MCMC method is shown to be a suitable approach for analyzing doubly censored data and allows a rich class of survival models. The performance of the proposed estimation method is compared to that of other conventional methods through simulations. Two examples, an AIDS cohort study and a population-based dementia study, are used for illustration. Sample code is shown in Appendix A and Appendix B.

3.
The Cox model with frailties has been popular for regression analysis of clustered event time data under right censoring. However, due to the lack of reliable computation algorithms, the frailty Cox model has rarely been applied to clustered current status data, where the clustered event times are subject to a special type of interval censoring such that we only observe, for each event time, whether it exceeds an examination (censoring) time or not. Motivated by a cataract dataset from a cross-sectional study, in which bivariate current status data were observed for the occurrence of cataracts in the right and left eyes of each study subject, we develop an efficient and stable computation algorithm for nonparametric maximum likelihood estimation of gamma-frailty Cox models with clustered current status data. The proposed algorithm is based on a set of self-consistency equations and the contraction principle. A convenient profile-likelihood approach is proposed for variance estimation. Simulation studies and a real data analysis demonstrate the good performance of our proposal.

4.
Mixture cure models (MCMs) have been widely used to analyze survival data with a cure fraction. MCMs postulate that a fraction of the patients are cured of the disease and that the failure time for the uncured patients follows a proper survival distribution, referred to as the latency distribution. MCMs have been extended to bivariate survival data by modeling the marginal distributions. In this paper, the marginal MCM is extended to multivariate survival data. The new model is applicable to survival data with varying cluster sizes and interval censoring. The proposed model allows covariates to be incorporated into both the cure fraction and the latency distribution for the uncured patients. The primary interest is in estimating the marginal parameters in the mean structure, with the correlation structure treated as a nuisance. The marginal parameters are estimated consistently by treating the observations within a cluster as independent. The variances of the parameters are estimated by the one-step jackknife method. The proposed method does not depend on the specification of the correlation structure. Simulation studies show that the new method works well when the marginal model is correct. The performance of the MCM is also examined when the clustered survival times share a common random effect. The MCM is applied to data from a smoking cessation study.

5.
This note is motivated by the recent works of Xie et al. (2009) and Xiang et al. (2007). Herein, we simplify the score statistic presented by Xie et al. (2009) for testing overdispersion in the zero-inflated generalized Poisson (ZIGP) mixed model, and discuss an extension for testing overdispersion in zero-inflated Poisson (ZIP) mixed models. Examples highlight the application of the extended results. An extensive simulation study of testing overdispersion in the Poisson mixed model indicates that the proposed score statistics maintain the nominal level reasonably well. In practice, the appropriate model is chosen based on the approximate mean-variance relationship in the data, and a formal score test based on the asymptotic standard normal distribution can be employed to test for overdispersion. A case study illustrates the data analysis procedures.
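For intuition only, here is a sketch of a classical Dean-type score statistic for overdispersion in an ordinary Poisson regression, the simpler analogue of the ZIP/ZIGP mixed-model tests above; it is not the statistic of Xie et al., and the simulated data are hypothetical. The statistic is compared to a standard normal, as described in the abstract.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def overdispersion_score_test(y, x):
    """Dean-type score test for overdispersion after a Poisson GLM fit.
    Returns the statistic (asymptotically N(0,1) under the Poisson null)
    and a one-sided p-value. Illustrative analogue of the ZIP/ZIGP tests."""
    X = sm.add_constant(x)
    mu = sm.GLM(y, X, family=sm.families.Poisson()).fit().mu
    stat = np.sum((y - mu) ** 2 - y) / np.sqrt(2 * np.sum(mu ** 2))
    return stat, 1 - norm.cdf(stat)

# hypothetical overdispersed counts (negative binomial with mean exp(0.3 + 0.5 x))
rng = np.random.default_rng(1)
x = rng.normal(size=500)
lam = np.exp(0.3 + 0.5 * x)
y = rng.negative_binomial(n=2, p=2 / (2 + lam))   # variance = lam + lam**2 / 2
print(overdispersion_score_test(y, x))
```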

6.
This study provides a supplemental report on the performance of confidence intervals for directly standardized rates obtained by the approximate bootstrap confidence (ABC) method. The ABC method was not considered in the Ng et al. (2008) paper, which compared different methods of interval construction. A graphical comparison of the coverage probability and of the ratio of the right to left non-coverage probabilities, as a function of the variance of the weights used at each simulation point, is given for the ABC method as well as for three of the procedures recommended by Ng et al. The expected confidence interval lengths are also reported. The ABC intervals are shown to be good competitors to the other three confidence intervals.
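For context, the quantity being interval-estimated is a directly standardized rate: a weighted average of stratum-specific rates with fixed standard-population weights. The sketch below computes the rate and a plain percentile bootstrap interval, resampling stratum event counts as Poisson; the ABC corrections themselves are not implemented, and all inputs are hypothetical.

```python
import numpy as np

def direct_standardized_rate(events, person_years, std_weights):
    """Directly standardized rate: sum_k w_k * (d_k / n_k), weights normalized to 1."""
    w = np.asarray(std_weights, dtype=float)
    w = w / w.sum()
    return np.sum(w * np.asarray(events) / np.asarray(person_years))

def percentile_bootstrap_ci(events, person_years, std_weights,
                            n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI, resampling stratum counts as Poisson.
    (Simpler than the ABC method compared in the paper.)"""
    rng = np.random.default_rng(seed)
    events = np.asarray(events)
    boot = np.empty(n_boot)
    for b in range(n_boot):
        d_star = rng.poisson(events)          # parametric resample of counts
        boot[b] = direct_standardized_rate(d_star, person_years, std_weights)
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# hypothetical age-stratified data
events       = [4, 12, 30, 55]            # events per age band
person_years = [12000, 15000, 9000, 4000]
std_weights  = [0.4, 0.3, 0.2, 0.1]       # standard population weights
print(direct_standardized_rate(events, person_years, std_weights))
print(percentile_bootstrap_ci(events, person_years, std_weights))
```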

7.
In clinical trials, information about certain time points may be of interest for making decisions about treatment effectiveness. Therefore, rather than comparing entire survival curves, researchers may wish to focus the comparison on fixed time points with potential clinical utility. For two independent samples of right-censored data, Klein et al. (2007) compared survival probabilities at a fixed time point by studying a number of tests based on transformations of the Kaplan-Meier estimators of the survival function. To compare the survival probabilities at a fixed time point for paired right-censored data or clustered right-censored data, however, their approach requires modification. In this paper, we extend the statistics to accommodate possible within-pair and within-cluster correlation. We use simulation studies to present comparative results. Finally, we illustrate the implementation of these methods using two real data sets.
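For the independent two-sample case that Klein et al. (2007) start from, the fixed-time-point comparison can be sketched as a Kaplan-Meier estimate with Greenwood variance in each group followed by a z-test on the log(-log)-transformed survival probabilities. The within-pair/within-cluster correction developed in the paper is not included here; the data are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def km_at(t0, time, event):
    """Kaplan-Meier estimate S(t0) and its Greenwood variance."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    s, gw = 1.0, 0.0
    for t in np.unique(time[(event == 1) & (time <= t0)]):
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1 - d / n_risk
        gw += d / (n_risk * (n_risk - d))
    return s, s ** 2 * gw

def fixed_time_test(t0, time1, event1, time2, event2):
    """Two-sample test of S1(t0) = S2(t0) on the log(-log) scale."""
    s1, v1 = km_at(t0, time1, event1)
    s2, v2 = km_at(t0, time2, event2)
    phi = lambda s: np.log(-np.log(s))
    var_phi = lambda s, v: v / (s * np.log(s)) ** 2     # delta method
    z = (phi(s1) - phi(s2)) / np.sqrt(var_phi(s1, v1) + var_phi(s2, v2))
    return z, 2 * (1 - norm.cdf(abs(z)))

# toy usage (times and event indicators are hypothetical)
t1 = [2, 3, 3, 5, 8, 9]; e1 = [1, 1, 0, 1, 0, 1]
t2 = [1, 2, 4, 4, 6, 7]; e2 = [1, 1, 1, 0, 1, 1]
print(fixed_time_test(5.0, t1, e1, t2, e2))
```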

8.
Sylvester double sums, first introduced by Sylvester (see [Sylvester, 1840] and [Sylvester, 1853]), are symmetric expressions in the roots of two polynomials, while subresultants are defined through the coefficients of those polynomials (see Apery and Jouanolou (2006) and Basu et al. (2003) for references on subresultants). As pointed out by Sylvester, the two notions are very closely related: Sylvester double sums and subresultants are equal up to a multiplicative non-zero constant in the ground field. Two proofs are already known: that of Lascoux and Pragacz (2003), using Schur functions, and that of d'Andrea et al. (2007), using manipulations of matrices. The purpose of this paper is to give a new, simple proof using similar inductive properties of double sums and subresultants.

9.
Wireless sensor networks (WSNs) are energy-constrained, which limits their ability to achieve a prolonged network lifetime. To optimize the energy consumption of sensor nodes, clustering is one of the most effective techniques for reducing energy use in WSNs: the collected data are forwarded toward the sink through cluster head (CH) nodes, which saves energy. WSNs face a crucial fault-tolerance issue, however, because overall data communication collapses when a cluster head fails. Various fault-tolerant clustering methods are available for WSNs, but they do not select backup nodes properly: the closeness of the backup nodes to the remaining nodes is not considered, and accessing the backup nodes may increase network overhead. This paper presents a fault-tolerant cluster-based routing method that provides fault tolerance for relay selection together with a data aggregation method for clustered WSNs. The proposed method uses a backup mechanism and Particle Swarm Optimization (PSO) to achieve this. CHs are chosen based on distance from the sink, residual energy, and link delay, and the network is partitioned into clusters. Backup CHs are selected by estimating the centrality of the nodes. To reduce aggregation overhead at the CHs, aggregator (AG) nodes are deployed in every cluster as part of intra-cluster communication; they act as a bridge between the member nodes and the CHs, aggregating information from the member nodes and delivering it to the CHs. PSO with a modified fitness function is used to identify the best relays between AG and member nodes. The proposed mechanism is compared with existing techniques such as EM-LEACH (Al-Sodairi and Ouni, 2018), QEBSR (Rathee et al., 2019), QOS-IHC (Singh and Singh, 2019), and ML-SEEP (Robinson et al., 2019). Simulation results show that the proposed mechanism reduces overhead by 55% and improves energy consumption and throughput by 40% and 60%, respectively.
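As a generic illustration of the kind of PSO loop relied on above, the sketch below runs a standard global-best PSO over a box and applies it to a toy relay-placement cost combining distance to the sink, distance to cluster members, and a stand-in term for energy/delay. The fitness, coordinates, and weights are hypothetical and are not the paper's modified fitness function or network model.

```python
import numpy as np

def pso_minimize(fitness, dim, bounds, n_particles=30, iters=100, seed=0,
                 w=0.7, c1=1.5, c2=1.5):
    """Standard global-best particle swarm optimization over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# hypothetical relay-placement cost: near the sink, near the cluster members,
# plus a toy surrogate for the energy/delay terms of a real fitness function.
sink = np.array([50.0, 50.0])
members = np.array([[10.0, 20.0], [15.0, 25.0], [12.0, 30.0]])
def relay_cost(p):
    return (0.5 * np.linalg.norm(p - sink)
            + 0.4 * np.mean(np.linalg.norm(members - p, axis=1))
            + 0.1 * np.linalg.norm(p))
print(pso_minimize(relay_cost, dim=2, bounds=(0.0, 100.0)))
```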

10.
An emerging trend in DNA computing is the algorithmic analysis of new molecular biology technologies and, more generally, of more effective tools to tackle computational biology problems. An algorithmic understanding of the interaction between DNA molecules has become the focus of research that was initially aimed at solving mathematical problems by processing data within biomolecules. In this paper a novel mechanism of DNA recombination is discussed, which turned out to be a good implementation key for developing new procedures for DNA manipulation (Franco et al., DNA extraction by cross pairing PCR, 2005; Franco et al., DNA recombination by XPCR, 2006; Manca and Franco, Math Biosci 211:282–298, 2008). It is called XPCR because it is a variant of the polymerase chain reaction (PCR), which revolutionized molecular biology as a technique for cyclic amplification of DNA segments. A few DNA algorithms are proposed that have been experimentally validated in different contexts, such as mutagenesis (Franco, Biomolecular computing—combinatorial algorithms and laboratory experiments, 2006), multiple concatenation, gene-driven DNA extraction (Franco et al., DNA extraction by cross pairing PCR, 2005), and generation of DNA libraries (Franco et al., DNA recombination by XPCR, 2006), and some related ongoing work is outlined.

11.
The Union–Find data structure for maintaining disjoint sets is one of the best known and most widespread data structures, in particular the version with constant-time Union and efficient Find. Recently, the question of how to handle deletions from the structure in an efficient manner has been taken up, first by Kaplan et al. (2002) [2] and subsequently by Alstrup et al. (2005) [1]. The latter work shows that it is possible to implement deletions in constant time without adversely affecting the asymptotic complexity of the other operations, even when this complexity is calculated as a function of the current size of the set. In this note we present a conceptual and technical simplification of the algorithm, which has the same theoretical efficiency and is probably more attractive in practice.
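For reference, a textbook union-find with union by rank and path compression, giving near-constant amortized operations; the deletion machinery of Kaplan et al. and Alstrup et al., which is the subject of the note, is not shown here.

```python
class UnionFind:
    """Disjoint sets with union by rank and path compression.
    Baseline structure only; the deletion support discussed above is omitted."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                               # attach smaller rank under larger
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2)
print(uf.find(2) == uf.find(0))   # True
```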

12.
Energy usage has been an important concern in recent research on online scheduling. In this paper, we study the tradeoff between flow time and energy (Albers and Fujiwara in ACM Trans. Algorithms 3(4), 2007; Bansal et al. in Proceedings of ACM-SIAM Symposium on Discrete Algorithms, pp. 805–813, 2007b; Bansal et al. in Proceedings of International Colloquium on Automata, Languages and Programming, pp. 409–420, 2008; Lam et al. in Proceedings of European Symposium on Algorithms, pp. 647–659, 2008b) in the multi-processor setting. Our main result is an enhanced analysis of a simple non-migratory online algorithm called CRR (classified round robin) on m ≥ 2 processors, showing that its flow time plus energy is within an O(1) factor of that of the optimal non-migratory offline algorithm when the maximum allowable speed is slightly relaxed. The result still holds even if the comparison is made against the optimal migratory offline algorithm. This improves the previous analysis showing that CRR is O(log P)-competitive, where P is the ratio of the maximum job size to the minimum job size.

13.
This paper proposes a new method and algorithm for predicting multivariate responses in a regression setting. Research into the classification of high dimension low sample size (HDLSS) data, in particular microarray data, has made considerable advances, but regression prediction for high-dimensional data with continuous responses has received less attention. Recently Bair et al. (2006) proposed an efficient prediction method based on supervised principal component regression (PCR). Motivated by the fact that using a larger number of principal components results in better regression performance, this paper extends the method of Bair et al. in several ways: a comprehensive variable ranking is combined with selection of the best number of components for PCR, and the new method is further extended to regression with multivariate responses. The new method is particularly suited to addressing HDLSS problems. Applications to simulated and real data demonstrate the performance of the new method. Comparisons with the findings of Bair et al. (2006) show that, for high-dimensional data in particular, the new ranking results in a smaller number of predictors and smaller errors.
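A compact sketch of the supervised PCR pipeline this line of work builds on: rank predictors by the strength of their association with the responses, keep the top-ranked ones, extract principal components, and regress the (possibly multivariate) response on those components. The correlation-based ranking and fixed component count below are simplistic placeholders for the paper's comprehensive ranking and selection procedure; the data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def supervised_pcr(X, Y, n_keep=50, n_components=5):
    """Supervised principal component regression sketch for multivariate Y:
    1. rank predictors by maximum |correlation| with any response,
    2. keep the n_keep top-ranked predictors,
    3. regress Y on the first n_components principal components of them."""
    Xc = (X - X.mean(0)) / X.std(0)
    Yc = (Y - Y.mean(0)) / Y.std(0)
    score = np.abs(Xc.T @ Yc / len(X)).max(axis=1)     # max correlation over responses
    keep = np.argsort(score)[::-1][:n_keep]
    pca = PCA(n_components=n_components).fit(Xc[:, keep])
    model = LinearRegression().fit(pca.transform(Xc[:, keep]), Y)
    return keep, pca, model

# hypothetical HDLSS data: 100 samples, 2000 predictors, 3 responses
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2000))
B = np.zeros((2000, 3)); B[:10] = rng.normal(size=(10, 3))
Y = X @ B + rng.normal(scale=0.5, size=(100, 3))
keep, pca, model = supervised_pcr(X, Y)
Z = pca.transform(((X - X.mean(0)) / X.std(0))[:, keep])
print(model.score(Z, Y))     # in-sample R^2, for illustration only
```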

14.
Code OK1 is a fast and precise three-dimensional computer program designed for simulations of heavy ion beam (HIB) irradiation of a directly driven spherical fuel pellet in heavy ion fusion (HIF). OK1 computes the three-dimensional energy deposition profile on a spherical fuel pellet and evaluates the HIB irradiation non-uniformity, which are valuable for optimizing the beam parameters and the fuel pellet structure, as well as for further HIF experiment design. The code is open and complete, and can be easily modified or adapted for users' purposes in this field.

Program summary

Title of program: OK1
Catalogue identifier: ADST
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC (Pentium 4, ∼1 GHz or more recommended)
Operating system: Windows or UNIX
Program language used: C++
Memory required to execute with typical data: 911 MB
No. of bits in a word: 32
No. of processors used: 1 CPU
Has the code been vectorized or parallelized: No
No. of bytes in distributed program, including test data: 16 557
Distribution format: tar gzip file
Keywords: Heavy ion beam, inertial confinement fusion, energy deposition, fuel pellet
Nature of physical problem: Nuclear fusion energy may have attractive features as one of our human energy resources. In this paper we focus on heavy ion inertial confinement fusion (HIF). Due to the favorable energy deposition behavior of heavy ions in matter [J.J. Barnard et al., UCRL-LR-108095, 1991; C. Deutsch et al., J. Plasma Fusion Res. 77 (2001) 33; T. Someya et al., Fusion Sci. Tech. (2003), submitted], a heavy ion beam (HIB) is expected to be one of the energy driver candidates for operating a future inertial confinement fusion power plant. For successful fuel ignition and fusion energy release, a stringent requirement is imposed on the HIB irradiation non-uniformity, which should be less than a few percent [T. Someya et al., Fusion Sci. Tech. (2003), submitted; M.H. Emery et al., Phys. Rev. Lett. 48 (1982) 253; S. Kawata et al., J. Phys. Soc. Jpn. 53 (1984) 3416]. In order to meet this requirement we need to evaluate the non-uniformity of a realistic HIB irradiation and energy deposition pattern. The HIB irradiation and non-uniformity evaluations are sophisticated and difficult to calculate analytically. Based on our code one can numerically obtain a three-dimensional profile of the energy deposition and evaluate the HIB irradiation non-uniformity onto a spherical target for a specific HIB parameter set in HIF.
Method of solution: The OK1 code is based on the stopping power of ions in matter [J.J. Barnard et al., UCRL-LR-108095, 1991; C. Deutsch et al., J. Plasma Fusion Res. 77 (2001) 33; T. Someya et al., Fusion Sci. Tech. (2003), submitted; M.H. Emery et al., Phys. Rev. Lett. 48 (1982) 253; S. Kawata et al., J. Phys. Soc. Jpn. 53 (1984) 3416; T. Mehlhorn, SAND80-0038, 1980; H.H. Andersen, J.F. Ziegler, Pergamon Press, 1977, p. 3]. The code simulates a multi-beam irradiation, obtains the 3D energy deposition profile of the fuel pellet and evaluates the deposition non-uniformity.
Restrictions on the complexity of the problem: None
Typical running time: The execution time depends on the number of beams in the simulated irradiation and their characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is less than 2 s on a PC with a Pentium 4, 2.2 GHz CPU (e.g., in Test 2, when the number of beams is 600, the running time is about 18 minutes).
Unusual features of the program: None

15.
Various methods and techniques have been proposed in the past for improving the performance of queries on structured and unstructured data. This paper proposes a parallel B-Tree index in the MapReduce framework for improving the efficiency of random reads over existing approaches. The benefit of using the MapReduce framework is that it encapsulates the complexity of implementing parallelism and fault tolerance and presents it to users in a friendly way. The proposed index reduces the number of data accesses for range queries and thus improves efficiency. The B-Tree index on MapReduce is implemented as a chained-MapReduce process that reduces intermediate data access time between successive map and reduce functions, further improving efficiency. Finally, five performance metrics are used to validate the proposed index for range search queries in MapReduce: the effect of varying cluster size and of range query coverage on execution time, the number of map tasks, and the size of input/output (I/O) data. The effect of varying the Hadoop Distributed File System (HDFS) block size, together with an analysis of heap memory size and of the intermediate data generated during the map and reduce functions, also shows the superiority of the proposed index. Experimental results show that the parallel B-Tree index in a chained-MapReduce environment performs better than Hadoop's default non-indexed processing and a B-Tree-like global index (Zhao et al., 2012) in MapReduce.
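The efficiency argument, skipping whole blocks of data whose key range cannot intersect the query, can be sketched outside MapReduce with a per-partition min/max index. This toy stand-in is not the paper's parallel B-Tree over HDFS blocks; all data are hypothetical.

```python
def build_partition_index(partitions):
    """One (min_key, max_key) entry per partition of sorted (key, value) records."""
    return [(part[0][0], part[-1][0]) for part in partitions]

def range_query(partitions, index, lo, hi):
    """Scan only the partitions whose key range overlaps [lo, hi]."""
    hits, scanned = [], 0
    for part, (kmin, kmax) in zip(partitions, index):
        if kmax < lo or kmin > hi:
            continue                      # pruned: no data access needed
        scanned += 1
        hits.extend(v for k, v in part if lo <= k <= hi)
    return hits, scanned

# hypothetical data split into 4 sorted partitions of 25 keys each
partitions = [[(k, f"rec{k}") for k in range(start, start + 25)]
              for start in (0, 25, 50, 75)]
index = build_partition_index(partitions)
values, scanned = range_query(partitions, index, 30, 40)
print(len(values), "records found, partitions scanned:", scanned)   # 11 records, 1 partition
```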

16.
The assumption of equal variance in the normal regression model is not always appropriate. Cook and Weisberg (1983) provide a score test to detect heteroscedasticity, while Patterson and Thompson (1971) propose residual maximum likelihood (REML) estimation to estimate variance components in the context of an unbalanced incomplete-block design. REML is often preferred to maximum likelihood estimation as a method of estimating covariance parameters in a linear model. However, outliers may affect the estimate of the variance function. This paper incorporates maximum trimmed likelihood estimation ([Hadi and Luceño, 1997] and [Vandev and Neykov, 1998]) into REML to obtain robust estimation when modelling variance heterogeneity. Both the forward search algorithm of Atkinson (1994) and the fast algorithm of Neykov et al. (2007) are employed to find the resulting estimator. Simulation and real data examples illustrate the performance of the proposed approach.

17.
Consider clustered matched-pair studies for non-inferiority, where clusters are independent but units within a cluster are correlated. An inexpensive new procedure and the expensive standard one are applied to each unit, and the outcomes are binary responses. Appropriate statistics for testing non-inferiority of a new procedure have been developed recently by several investigators. In this paper, we investigate the power and sample size requirements of the clustered matched-pair study for non-inferiority. The power of a test is related primarily to the number of clusters; the effect of cluster size on power is secondary. The efficiency of a clustered matched-pair design is inversely related to the intra-class correlation coefficient within a cluster. We present an explicit formula for obtaining the number of clusters for a given cluster size, and the cluster size for a given number of clusters, for a specified power. We also provide alternative sample size calculations when the available information regarding parameters is limited. The formulas can be useful in designing a clustered matched-pair study for non-inferiority. An example of determining the sample size to establish non-inferiority in a clustered matched-pair study is provided.
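The dependence on the number of clusters, cluster size, and intra-class correlation can be illustrated with the usual design-effect heuristic: the number of independent pairs required is inflated by 1 + (m - 1)ρ and then spread over clusters of m pairs each. This is a generic back-of-the-envelope sketch, not the paper's explicit formula (which the abstract does not reproduce), and the inputs are hypothetical.

```python
import math

def clusters_needed(n_pairs_independent, cluster_size, icc):
    """Number of clusters so that the effective number of matched pairs matches
    what an independent-pairs design would need.
    Generic design-effect heuristic: DE = 1 + (m - 1) * icc."""
    design_effect = 1 + (cluster_size - 1) * icc
    total_pairs = n_pairs_independent * design_effect
    return math.ceil(total_pairs / cluster_size)

# e.g. 200 independent pairs needed, clusters of 10 pairs, ICC = 0.05
print(clusters_needed(200, 10, 0.05))   # 29 clusters
```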

18.
In this paper, we explore the distributed detection of sparse signals in energy-limited clustered sensor networks (CSNs). For this problem, a centralized detector based on the locally most powerful test (LMPT) methodology that uses the analog data transmitted by all the sensor nodes in the CSN can easily be realized following prior work. However, for the centralized LMPT detector, the energy consumption caused by data transmission is excessively high, which makes its implementation in CSNs with a limited energy supply impractical. To address this issue, we propose a new detector that combines the advantages of censoring and LMPT strategies, in which both the cluster head (CLH) nodes and the ordinary (ORD) nodes send only data deemed informative enough, and the fusion center (FC) fuses the received data based on the LMPT methodology. The detection performance of the proposed detector, characterized by Fisher information, is analyzed in the asymptotic regime. We also analytically derive the relationship between the detection performance of the proposed censoring-based LMPT (cens-LMPT) detector and the communication rates, both of which are controlled by the censoring thresholds. We present an illustrative example by considering the detection problem with 2-CSNs, i.e., CSNs in which each cluster contains two nodes, and provide the corresponding theoretical analysis and simulation results.

19.
Software systems assembled from a large number of autonomous components have become an interesting target for formal verification due to the issue of correct interplay in component interaction. State/event LTL (Chaki et al. (2004, 2005) [1] and [2]) incorporates both states and events to express important properties of component-based software systems. The main contribution of this paper is a partial order reduction technique for verification of state/event LTL properties. The core of the partial order reduction is a novel notion of stuttering equivalence which we call state/event stuttering equivalence. The positive attribute of the equivalence is that it can be resolved with existing methods for partial order reduction. State/event LTL properties are, in general, not preserved under state/event stuttering equivalence. To this end we define a new logic, called weak state/event LTL, which is invariant under the new equivalence. To give some evidence of the method's efficiency, we present some of the results obtained by employing the partial order reduction technique within our tool for verification of component-based systems modelled using the formalism of component-interaction automata (Brim et al. (2005) [3]).

20.
In this paper, we deal with the notion of an automaton over a changing alphabet, which generalizes the concept of a Mealy-type automaton. We modify the methods based on the idea of a dual automaton and its action, used by B. Steinberg et al. (2011) and M. Vorobets and Ya. Vorobets (2007, 2010) [16], [17] and [18], and adapt them to automata over a changing alphabet. We show that this modification provides some naturally defined automaton representations of a free nonabelian group by a 2-state automaton over a changing alphabet.
