Similar Literature
20 similar documents found (search time: 31 ms)
1.
A statistical profile is a relationship between a quality characteristic (a response) and one or more explanatory variables that characterizes the quality of a process or product. Profile monitoring, i.e., checking the stability of profiles over time, has been studied extensively for normal response variables, but little attention has been paid to profiles with non-normal responses described by generalized linear models (GLMs), even though many potential applications involve responses that are modeled by logistic profiles, including binary, nominal, and ordinal models. Moreover, most existing control charts in this field are built on statistical approaches; machine learning techniques have rarely been addressed in the related literature. Hence, to implement on-line process monitoring of logistic profiles, a novel artificial neural network (ANN) used as a control chart, with a heuristic training procedure, is proposed in this paper. The performance of the proposed approach is investigated and compared through simulation studies for binary and polytomous models based on the average run length (ARL) criterion. Simulation results reveal good performance of the proposed approach. To further enhance its detection ability, a run rule, a supplementary tool for making a control chart more sensitive, is combined with the final statistic. Furthermore, a diagnostic method based on machine learning schemes is employed to identify the shifted parameters in the profile. Results indicate superior performance of the proposed approaches in most of the simulations. Finally, an example illustrates the implementation of the proposed charting scheme.

2.
3.
Longitudinal data refer to the situation where repeated observations are available for each sampled object. Clustered data, where observations are nested in a hierarchical structure within objects (without time necessarily being involved), represent a similar type of situation. Methodologies that take this structure into account allow for the possibilities of systematic differences between objects that are not related to attributes and autocorrelation within objects across time periods. A standard methodology in the statistics literature for this type of data is the mixed effects model, where these differences between objects are represented by so-called “random effects” that are estimated from the data (population-level relationships are termed “fixed effects,” together resulting in a mixed effects model). This paper presents a methodology that combines the structure of mixed effects models for longitudinal and clustered data with the flexibility of tree-based estimation methods. We apply the resulting estimation method, called the RE-EM tree, to pricing in online transactions, showing that the RE-EM tree is less sensitive to parametric assumptions and provides improved predictive power compared to linear models with random effects and regression trees without random effects. We also apply it to a smaller data set examining accident fatalities, and show that the RE-EM tree strongly outperforms a tree without random effects while performing comparably to a linear model with random effects. We also perform extensive simulation experiments to show that the estimator improves predictive performance relative to regression trees without random effects and is comparable or superior to using linear models with random effects in more general situations.
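The core RE-EM idea, alternating between a regression tree for the fixed part and shrinkage estimates of object-level random intercepts, can be sketched as follows. This is a simplified illustration with random intercepts only and assumed known variance components, not the authors' exact algorithm; all function names and constants below are illustrative.

```python
# Simplified RE-EM-style estimation: alternate between a regression tree for the
# fixed effects and shrinkage (BLUP-like) estimates of per-object random intercepts.
# Sketch under assumed known variance components, not the paper's exact algorithm.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def re_em_tree(X, y, groups, sigma2_b=1.0, sigma2_e=1.0, n_iter=20):
    groups = np.asarray(groups)
    ids = np.unique(groups)
    b = {g: 0.0 for g in ids}                 # random intercept per object
    tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=20)
    for _ in range(n_iter):
        # 1) remove current random effects and fit the tree to the "fixed" part
        y_fixed = y - np.array([b[g] for g in groups])
        tree.fit(X, y_fixed)
        resid = y - tree.predict(X)           # residuals w.r.t. the fixed part
        # 2) shrink each object's mean residual toward zero to update its intercept
        for g in ids:
            r_g = resid[groups == g]
            shrink = sigma2_b / (sigma2_b + sigma2_e / len(r_g))
            b[g] = shrink * r_g.mean()
    return tree, b

# toy usage: 50 objects, 10 observations each, one covariate
rng = np.random.default_rng(0)
g = np.repeat(np.arange(50), 10)
X = rng.normal(size=(500, 1))
y = np.where(X[:, 0] > 0, 2.0, -1.0) + rng.normal(size=50)[g] + rng.normal(scale=0.5, size=500)
tree, b = re_em_tree(X, y, g)
```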

4.
To improve the performance of control charts, the conditional decision procedure (CDP) incorporates a number of previous observations into the chart’s decision rule. Charts with this runs rule are expected to be more sensitive to shifts in the process parameter. To signal an out-of-control condition more quickly, some charts use a headstart feature; they are referred to as charts with fast initial response (FIR). The CDP chart can also be used with FIR. In this article we analyze and compare the performance of geometric CDP charts with and without FIR. To do so, we model the CDP charts with a Markov chain and derive closed-form ARL expressions. We find the conditional decision procedure useful when the fraction p of nonconforming units deteriorates; however, the CDP chart is not very effective for signaling decreases in p.
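The closed-form ARL computation from a Markov-chain model of a chart is standard: with Q the transition matrix over the in-control (transient) states and p0 the starting distribution, ARL = p0' (I - Q)^{-1} 1. A minimal generic sketch follows; the 3-state matrix is a placeholder, not the CDP geometric chart's actual transition structure.

```python
# Generic Markov-chain ARL: for a chart whose in-control states form the transient
# set with transition matrix Q, the expected number of steps to signal is
# ARL = p0' (I - Q)^{-1} 1.  The matrix below is a placeholder example only.
import numpy as np

def arl_from_markov_chain(Q, p0):
    Q = np.asarray(Q, dtype=float)
    p0 = np.asarray(p0, dtype=float)
    n = Q.shape[0]
    fundamental = np.linalg.inv(np.eye(n) - Q)   # expected visits to each transient state
    return p0 @ fundamental @ np.ones(n)

# placeholder 3-state example (rows need not sum to 1; the deficit is the signal probability)
Q = [[0.90, 0.05, 0.00],
     [0.10, 0.80, 0.05],
     [0.00, 0.10, 0.70]]
p0 = [1.0, 0.0, 0.0]                              # start in state 0 (e.g., no headstart)
print(arl_from_markov_chain(Q, p0))
```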

5.
This study aims to develop a new control chart model suitable for monitoring the process quality of multistage manufacturing systems. Considering both the auto-correlated process outputs and the correlation between neighboring stages in a multistage manufacturing system, we first propose a new multiple linear regression model to describe their relationship. Then, multistage residual EWMA and CUSUM control charts are used to monitor the overall process quality of multistage systems. Moreover, an overall run length (ORL) concept is adopted to compare the detection performance of various multistage residual control charts. Finally, a numerical example with oxide thickness measurements from a three-stage silicon wafer manufacturing process demonstrates the usefulness of the proposed multistage residual control charts in Phase II monitoring. A computerized algorithm can also be written based on the proposed scheme for the multistage residual EWMA/CUSUM control charts, and it may be further developed into an expert and intelligent system. The results of this study can provide a better alternative for detecting process changes and serve as a useful guideline for quality practitioners when monitoring and controlling the process quality of multistage systems with auto-correlated data.
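As a rough illustration of the residual-chart idea (not the paper's multistage regression model), one can take the regression residuals for a stage and run standard EWMA and CUSUM recursions on them; the chart constants below (lam, L, k, h) are conventional illustrative choices, not the paper's design parameters.

```python
# Residual EWMA and CUSUM recursions applied to regression residuals.
# Generic chart constants; the paper's multistage model is not reproduced here.
import numpy as np

def ewma_chart(resid, lam=0.2, L=3.0):
    sigma = np.std(resid, ddof=1)
    z, flags = 0.0, []
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))   # asymptotic control limit
    for e in resid:
        z = lam * e + (1.0 - lam) * z
        flags.append(abs(z) > limit)
    return np.array(flags)

def cusum_chart(resid, k=0.5, h=5.0):
    sigma = np.std(resid, ddof=1)
    cp = cm = 0.0
    flags = []
    for e in resid:
        cp = max(0.0, cp + e / sigma - k)            # upper CUSUM
        cm = max(0.0, cm - e / sigma - k)            # lower CUSUM
        flags.append(cp > h or cm > h)
    return np.array(flags)
```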

6.
In many quality control applications, the quality of a process or product is characterized and summarized by a relationship (profile) between a response variable and one or more explanatory variables. Such profiles can be modeled using linear or nonlinear regression models. In this paper we use artificial neural networks to detect and classify shifts in linear profiles. Three monitoring methods based on artificial neural networks are developed to monitor linear profiles, and their efficacy is assessed using the average run length criterion.
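As a rough illustration of the general idea (not the paper's three specific methods), one can summarize each simulated linear profile by its least-squares estimates and train a neural network to flag shifted profiles; all constants below (in-control parameters, shift size, network size) are illustrative.

```python
# Toy ANN-based monitor for simple linear profiles: simulate in-control and
# intercept-shifted profiles, summarize each by its least-squares estimates,
# and train a neural network to flag shifts.  Constants are illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 10)                            # fixed design points
A0, B0, sigma = 3.0, 2.0, 1.0                        # assumed in-control intercept/slope/sd

def simulate_profiles(n, intercept_shift=0.0):
    Y = A0 + intercept_shift + B0 * x + rng.normal(scale=sigma, size=(n, x.size))
    feats = []
    for y in Y:
        b, a = np.polyfit(x, y, 1)                   # slope, intercept estimates
        mse = np.mean((y - (a + b * x)) ** 2)
        feats.append([a, b, mse])
    return np.array(feats)

X_train = np.vstack([simulate_profiles(2000), simulate_profiles(2000, intercept_shift=1.0)])
y_train = np.r_[np.zeros(2000), np.ones(2000)]
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500).fit(X_train, y_train)
```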

7.
Statistical process monitoring with independent component analysis   (cited by 6; self-citations: 0; citations by others: 6)
In this paper we propose a new statistical method for process monitoring that uses independent component analysis (ICA). ICA is a recently developed method whose goal is to decompose observed data into linear combinations of statistically independent components [1,2]. Such a representation has been shown to capture the essential structure of the data in many applications, including signal separation and feature extraction. The basic idea of our approach is to use ICA to extract the essential independent components that drive a process and to combine them with process monitoring techniques. I^2, I_e^2, and SPE charts are proposed as on-line monitoring charts, and contribution plots of these statistical quantities are also considered for fault identification. The proposed monitoring method was applied to fault detection and identification in both a simple multivariate process and the simulation benchmark of the biological wastewater treatment process, which is characterized by a variety of fault sources with non-Gaussian characteristics. The simulation results clearly show the power and advantages of ICA monitoring in comparison to PCA monitoring.
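A generic sketch of ICA-based monitoring is given below: fit ICA on normal-operating data, then compute an I^2-type statistic from the independent components and SPE from the reconstruction error for new observations. The statistic definitions and control limits here (empirical percentiles, placeholder data) are simplified stand-ins for those in the paper.

```python
# Generic ICA-based monitoring sketch: fit ICA on normal-operating data, then
# compute an I^2-type statistic (sum of squared independent components) and SPE
# (squared reconstruction error) for new observations.  Limits are empirical
# percentiles; the paper's exact statistic definitions may differ.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 6))                  # placeholder normal-operating data

ica = FastICA(n_components=3, random_state=0)
S_train = ica.fit_transform(X_train)                 # independent components

def monitoring_stats(X_new):
    S = ica.transform(X_new)
    X_hat = ica.inverse_transform(S)
    i2 = np.sum(S**2, axis=1)                        # I^2-type statistic
    spe = np.sum((X_new - X_hat) ** 2, axis=1)       # squared prediction error
    return i2, spe

i2_lim = np.percentile(monitoring_stats(X_train)[0], 99)
spe_lim = np.percentile(monitoring_stats(X_train)[1], 99)
```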

8.
Non-central chi-square charts are more effective than joint X-bar and R charts in detecting small mean shifts or variance changes of a performance variable. However, the cost of monitoring a primary quality characteristic, such as the weight of each bag in a cement filling process, may be high; it is more economical to monitor a surrogate variable, for example, the milliamp reading of the load cell. When the performance variable and the surrogate variable are correlated, this article proposes a two-stage charting design that monitors either the performance variable or its surrogate variable in an alternating fashion rather than monitoring the performance variable alone. The proposed method simplifies process monitoring when users are only concerned with whether a process is in control or not. The application of the proposed method and its advantages over existing methods are presented through an example. Numerical results show that the proposed chart is insensitive to the correlation between the performance variable and the surrogate variable, even when the historical information on the correlation coefficient is not very accurate.

9.
A new method is developed to estimate the minimum variance bounds and the achievable variance bounds for assessing a batch control system when iterative learning control is applied. Unlike continuous processes, the performance assessment of batch processes requires particular attention to both disturbance changes and setpoint changes. Because of the intrinsically dynamic operations and the non-linear behavior of batch processes, the conventional approach to controller assessment cannot be directly applied. In this paper, a linear time-variant system for batch processes is used to derive the performance bounds from routine operating batch data. The bounds on the controlled output variance at each time point, computed from the deterministic setpoint and the stochastic disturbance, can help create simple monitoring charts. They are used to track progress easily in each batch run, to monitor the occurrence of observable upsets, and to improve the current performance accordingly. The applications are discussed through simulation cases to demonstrate the advantages of the proposed strategies.

10.
In many cancer studies and clinical research, repeated observations of response variables are taken over time on each individual in one or more treatment groups. In such cases the repeated observations of each vector response are likely to be correlated, and the autocorrelation structure of the repeated data plays a significant role in the estimation of regression parameters. A random intercept model for count data is developed using exact maximum-likelihood estimation via numerical integration. A simulation study is performed to compare the proposed methodology with the traditional generalized linear mixed model (GLMM) approach and with the GLMM when the penalized quasi-likelihood method is used for estimation. The methodology is illustrated by analyzing data sets containing longitudinal measures of the number of tumors in a carcinogenesis experiment studying the influence of lipids on the development of cancer.
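The "exact maximum likelihood via numerical integration" step can be illustrated with Gauss-Hermite quadrature for a Poisson random-intercept model. This is a generic sketch, not the authors' implementation; the number of quadrature nodes is an arbitrary choice, and in practice the function would be passed to a numerical optimizer to estimate beta and sigma_b.

```python
# Marginal log-likelihood of a Poisson random-intercept model, integrating the
# random intercept out with Gauss-Hermite quadrature.  Generic sketch only.
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import gammaln

def marginal_loglik(beta, sigma_b, y, X, subject, n_nodes=20):
    nodes, weights = hermgauss(n_nodes)              # for integrals w.r.t. exp(-t^2)
    b_vals = np.sqrt(2.0) * sigma_b * nodes          # change of variable b = sqrt(2)*sigma*t
    total = 0.0
    for i in np.unique(subject):
        yi, Xi = y[subject == i], X[subject == i]
        eta = Xi @ beta                              # fixed-effect linear predictor
        # Poisson log-likelihood of subject i at each quadrature node
        ll = np.array([np.sum(yi * (eta + b) - np.exp(eta + b) - gammaln(yi + 1))
                       for b in b_vals])
        total += np.log(np.sum(weights * np.exp(ll)) / np.sqrt(np.pi))
    return total
```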

11.
Control charts have been widely used for monitoring the functional relationship between a response variable and one or more explanatory variables (called a profile) in various industrial applications. In this article, we propose an easy-to-implement framework for monitoring nonparametric profiles in both Phase I and Phase II of a control chart scheme. The proposed framework includes the following steps: (i) data cleaning; (ii) fitting B-spline models; (iii) resampling for dependent data using a block bootstrap method; (iv) constructing a confidence band based on bootstrap curve depths; and (v) monitoring profiles online based on curve matching. Notably, the proposed method does not require any structural assumptions on the data and can appropriately accommodate the dependence structure of the within-profile observations. We illustrate and evaluate the proposed framework using a real data set.
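Steps (ii)-(iv) of such a framework might look roughly as follows: a least-squares B-spline fit per profile and a pointwise band built from a moving-block bootstrap of the residuals. The knot placement, block length, and percentile band are illustrative simplifications; the paper's curve-depth band construction is not reproduced here.

```python
# Sketch of B-spline fitting plus a moving-block bootstrap band for one profile.
# Assumes x is sorted and reasonably dense; all tuning constants are illustrative.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def bspline_bootstrap_band(x, y, n_knots=8, block_len=10, n_boot=500, level=0.95):
    # least-squares cubic B-spline with interior knots at quantiles of x
    knots = np.quantile(x, np.linspace(0.1, 0.9, n_knots))
    spl = LSQUnivariateSpline(x, y, knots, k=3)
    fit = spl(x)
    resid = y - fit
    n = len(y)
    starts = np.arange(n - block_len + 1)
    curves = np.empty((n_boot, n))
    for b in range(n_boot):
        # moving-block bootstrap of residuals to respect within-profile dependence
        idx = np.concatenate([np.arange(st, st + block_len)
                              for st in np.random.choice(starts, n // block_len + 1)])[:n]
        boot_spl = LSQUnivariateSpline(x, fit + resid[idx], knots, k=3)
        curves[b] = boot_spl(x)
    lo = np.quantile(curves, (1 - level) / 2, axis=0)
    hi = np.quantile(curves, 1 - (1 - level) / 2, axis=0)
    return fit, lo, hi
```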

12.
In many cancer studies and clinical research, repeated observations of response variables are taken over time on each individual in one or more treatment groups. In such cases the repeated observations of each vector response are likely to be correlated, and the autocorrelation structure of the repeated data plays a significant role in the estimation of regression parameters. A random intercept model for count data is developed using exact maximum-likelihood estimation via numerical integration. A simulation study is performed to compare the proposed methodology with the traditional generalized linear mixed model (GLMM) approach and with the GLMM when the penalized quasi-likelihood method is used for estimation. The methodology is illustrated by analyzing data sets containing longitudinal measures of the number of tumors in a carcinogenesis experiment studying the influence of lipids on the development of cancer.

13.
Sensor networks, communication and financial networks, and web and social networks are becoming increasingly important in our day-to-day life. They contain entities that may interact with one another. These interactions are often characterized by a form of autocorrelation, where the value of an attribute at a given entity depends on the values at the entities it interacts with. In this situation, the collective inference paradigm offers a unique opportunity to improve the performance of predictive models on network data, as interacting instances are labeled simultaneously by dealing with autocorrelation. Several recent works have shown that collective inference is a powerful paradigm, but it has mainly been developed for fully labeled training networks. In contrast, while it may be cheap to acquire the network topology, it may be costly to acquire node labels for training. In this paper, we examine how to explicitly consider autocorrelation when performing regression inference within network data. In particular, we study transductive collective regression in the common situation of a sparsely labeled network. We present an algorithm, called CORENA (COllective REgression in Network dAta), to assign a numeric label to each instance in the network. In particular, we iteratively augment the representation of each instance with instances sharing correlated representations across the network. In this way, the proposed learning model is able to capture autocorrelations of labels over a group of related instances and feed back the more reliable labels predicted by the transduction into the labeled network. Empirical studies demonstrate that the proposed approach can boost regression performance in several spatial and social tasks.
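A generic sketch of the collective-regression idea, not CORENA itself: augment each node's features with the neighborhood average of the current label predictions and refit, keeping the known labels fixed. All names and constants below are illustrative; y is assumed to hold the known labels at the labeled positions (other entries are ignored).

```python
# Generic iterative collective regression on a sparsely labeled network.
# Not the CORENA algorithm; a simplified illustration of the paradigm only.
import numpy as np
from sklearn.linear_model import Ridge

def collective_regression(X, y, labeled_mask, A, n_iter=10):
    A = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)    # row-normalized adjacency
    y_hat = np.where(labeled_mask, y, y[labeled_mask].mean())  # init unlabeled nodes
    model = Ridge(alpha=1.0)
    for _ in range(n_iter):
        X_aug = np.column_stack([X, A @ y_hat])              # neighbor-averaged labels as a feature
        model.fit(X_aug[labeled_mask], y[labeled_mask])
        y_hat = model.predict(X_aug)
        y_hat[labeled_mask] = y[labeled_mask]                # keep known labels fixed
    return y_hat
```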

14.
Normality is usually assumed in profile monitoring. However, there are many cases in practice where normality does not hold, and conventional monitoring techniques may not perform well in such cases. In this study, we propose a robust strategy for Phase I monitoring of quality profile data in the presence of non-normality. This strategy consists of three components: modeling of profiles, independent component analysis (ICA) to transform the multivariate coefficient estimates from profile modeling into independent univariate data, and univariate nonparametric control charts to detect location/scale shifts in the data. Two methods for multiple change point detection are also studied. The properties of the proposed method are examined in a numerical study, and it is applied to optical profiles from low-E glass manufacturing in a case study.

15.
In this paper, a novel data projection method, local and global principal component analysis (LGPCA), is proposed for process monitoring. LGPCA is a linear dimensionality reduction technique that preserves both local and global information in the observation data. Besides preserving the global variance information of Euclidean space, as principal component analysis (PCA) does, LGPCA captures a linear embedding that preserves local structure to find meaningful low-dimensional information hidden in high-dimensional process data. LGPCA-based T^2 (D) and squared prediction error (Q) statistic control charts are developed for on-line process monitoring. The validity and effectiveness of the LGPCA-based monitoring method are illustrated through simulated processes and the Tennessee Eastman process (TEP). The experimental results demonstrate that the proposed method effectively captures meaningful information hidden in the observations and shows superior process monitoring performance compared with regular monitoring methods.
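The two monitoring statistics can be sketched generically: a Hotelling-type T^2 (D) statistic in the retained subspace and a Q (SPE) statistic in the residual subspace. Plain PCA provides the projection in this sketch; the LGPCA loading matrix would be substituted for it.

```python
# Projection-based monitoring statistics: T^2 (D) in the retained subspace and
# Q (SPE) in the residual subspace.  Plain PCA is used for the projection here;
# an LGPCA loading matrix would take its place.
import numpy as np

def fit_pca_monitor(X, n_comp):
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_comp].T                                 # loading matrix (d x n_comp)
    lam = (s[:n_comp] ** 2) / (len(X) - 1)            # retained component variances
    return mu, P, lam

def t2_and_q(Xnew, mu, P, lam):
    Xc = Xnew - mu
    scores = Xc @ P
    t2 = np.sum(scores**2 / lam, axis=1)              # T^2 (D) statistic
    resid = Xc - scores @ P.T
    q = np.sum(resid**2, axis=1)                      # Q / SPE statistic
    return t2, q
```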

16.
In some quality control applications, the quality of a product or process can be characterized by a relationship between two or more variables, typically referred to as a profile. Moreover, in some situations there are several correlated quality characteristics that can be modeled as a set of linear functions of one explanatory variable; we refer to this as a multivariate simple linear profile structure. In this paper, we propose three control chart schemes for Phase II monitoring of multivariate simple linear profiles. The statistical performance of the proposed methods is evaluated in terms of the average run length criterion and reveals that the control chart schemes are effective in detecting shifts in the process parameters. In addition, the applicability of the proposed methods is illustrated using a real calibration application.

17.
Techniques for statistical process control (SPC), such as using a control chart, have recently garnered considerable attention in the software industry. These techniques are applied to manage a project quantitatively and meet established quality and process-performance objectives. Although many studies have demonstrated the benefits of using a control chart to monitor software development processes (SDPs), some controversy exists regarding the suitability of employing conventional control charts to monitor SDPs. One major problem is that conventional control charts require a large amount of data from a homogeneous source of variation when constructing valid control limits. However, a large dataset is typically unavailable for SDPs. Aggregating data from projects with similar attributes to acquire the required number of observations may lead to wide control limits due to the mixing of multiple common causes when applying a conventional control chart. To overcome these problems, this study utilizes a Q chart for short-run manufacturing processes as an alternative technique for monitoring SDPs. The Q chart, which has early detection capability, real-time charting, and fixed control limits, allows software practitioners to monitor process performance using a small amount of data in early SDP stages. To assess the performance of the Q chart for monitoring SDPs, three examples are utilized to demonstrate Q chart effectiveness. Some recommendations for practical use of Q charts for SDPs are provided.
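For reference, the standard Quesenberry Q statistic for individual observations with unknown mean and variance transforms each new point into an approximately standard-normal quantity using only the preceding observations, so fixed limits apply from early in a short run. The sketch below follows that textbook form; the paper's SDP-specific adaptations are not reproduced.

```python
# Quesenberry-type Q statistics for individual observations with unknown mean and
# variance: each new point is converted to an approximately standard-normal value
# using only the preceding data, so fixed +/-3 limits apply in a short run.
import numpy as np
from scipy.stats import norm, t as t_dist

def q_statistics(x):
    x = np.asarray(x, dtype=float)
    q = []
    for r in range(3, len(x) + 1):                    # Q defined from the 3rd point on
        prev = x[:r - 1]
        w = np.sqrt((r - 1) / r) * (x[r - 1] - prev.mean()) / prev.std(ddof=1)
        q.append(norm.ppf(t_dist.cdf(w, df=r - 2)))
    return np.array(q)                                # plot against +/-3 control limits
```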

18.
MVI (Maximum Value Interpolated) is a newly proposed method for NDVI profile maximization, intended as an alternative to MVC (Maximum Value Composite). The MVI method records not only the maximum NDVI value within a period (e.g., a month), but also the day on which that value was recorded. Simple linear interpolation between these points then yields a representative NDVI value for each time period. A profile simulation method shows that the new method allows a significant error reduction compared with the MVC method.
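A minimal sketch of the described interpolation step, with placeholder NDVI values and dates:

```python
# MVI idea as described: keep each period's NDVI maximum together with the day it
# occurred, then linearly interpolate between those (day, value) points to obtain
# a representative value for any target day.  Data below are placeholders.
import numpy as np

max_day  = np.array([ 12,  44,  70, 105, 130, 168])     # day-of-year of each monthly maximum
max_ndvi = np.array([0.31, 0.35, 0.48, 0.62, 0.71, 0.69])
target_days = np.array([15, 46, 74, 100, 135, 160])     # e.g., mid-period reference days

mvi = np.interp(target_days, max_day, max_ndvi)         # interpolated MVI profile values
```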

19.
This paper investigates the application of linear regression models and modeling techniques to predicting freight generation at the national level within the U.S. Specifically, the paper seeks to improve the performance and fit of linear regression models of freight generation. We provide insight into different variable transformation techniques, evaluate the use of spatial regression variables, and apply a spatial regression modeling methodology to correct for spatial autocorrelation. We conclude that the spatial regression model is the preferred specification for freight generation at the national level. The proliferation of Geographic Information Systems (GIS) within planning agencies affords more widespread use of spatial regression, and our results indicate this technique would improve models that have traditionally been limited by insufficient data.
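A minimal sketch of adding a spatial-lag regressor to a linear model with a row-standardized weight matrix is shown below; the data, weights, and plain-OLS estimation are placeholders rather than the paper's specification (spatial lag models are properly estimated by maximum likelihood or 2SLS).

```python
# Generic spatial-lag construction: row-standardize a spatial weight matrix W and
# add W @ y (neighbors' weighted average outcome) to a regression design matrix.
# Data and weights are placeholders, not the freight-generation dataset.
import numpy as np

rng = np.random.default_rng(3)
n = 100
W = (rng.random((n, n)) < 0.05).astype(float)         # placeholder adjacency
np.fill_diagonal(W, 0.0)
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1.0) # row-standardize

X = rng.normal(size=(n, 2))                           # e.g., log employment, log population
y = 1.0 + X @ np.array([0.8, 0.4]) + rng.normal(scale=0.3, size=n)

lag = W @ y                                           # spatial lag of the response
design = np.column_stack([np.ones(n), X, lag])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)     # naive OLS with a lag term; illustrative
                                                      # only, not a consistent spatial estimator
```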

20.
Control chart based on likelihood ratio for monitoring linear profiles   (cited by 4; self-citations: 0; citations by others: 4)
A control chart based on the likelihood ratio is proposed for monitoring linear profiles. The new chart, which integrates the EWMA procedure, can detect shifts in the intercept, the slope, or the standard deviation, individually or simultaneously, with a single chart, which distinguishes it from other control charts for linear profiles in the literature. Monte Carlo simulation results show that our approach performs well across a wide range of possible shifts. We show that the new method has competitive performance relative to other methods in the literature in terms of ARL, and another feature of the new chart is that it can be easily designed. The application of the proposed method is illustrated by a real data example from an optical imaging system.
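A generic sketch of the idea, not the paper's exact statistic or design: compute a per-profile likelihood-ratio statistic against the known in-control intercept, slope, and standard deviation, then smooth it with an EWMA; the smoothing constant is illustrative and the control limit is left unspecified.

```python
# Per-profile likelihood-ratio statistic for a simple linear profile against known
# in-control values (A0, B0, sigma0), followed by an EWMA of that statistic.
# Generic sketch; the paper's exact statistic and chart design are not reproduced.
import numpy as np

def profile_lrt(x, y, A0, B0, sigma0):
    n = len(y)
    b, a = np.polyfit(x, y, 1)                        # MLEs of slope and intercept
    sigma2_hat = np.mean((y - (a + b * x)) ** 2)      # MLE of the variance
    ss0 = np.sum((y - (A0 + B0 * x)) ** 2)
    return n * np.log(sigma0**2 / sigma2_hat) + ss0 / sigma0**2 - n

def ewma_of_lrt(profiles, x, A0, B0, sigma0, lam=0.1):
    z, out = 0.0, []
    for y in profiles:                                # one row per sampled profile
        stat = profile_lrt(x, y, A0, B0, sigma0)
        z = lam * stat + (1.0 - lam) * z
        out.append(z)
    return np.array(out)                              # signal when z exceeds its control limit
```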
