Related Articles
1.
This research proposes an on-line diagnosis system based on denoising and clustering techniques to identify spatial defect patterns in semiconductor manufacturing. Even with today's highly automated, precisely monitored facilities housed in near dust-free clean rooms and operated by well-trained process engineers, spatial signatures on the wafer still cannot be avoided. Typical defect patterns on the wafer, including edge rings, linear scratches, zone-type and mixed-type patterns, usually carry important information that helps quality engineers remove the root causes of failures. In this paper, a spatial filter is used both to judge whether the input data contain any systematic cluster and to extract it from the noisy input. Then an integrated clustering scheme combining fuzzy C-means (FCM) with hierarchical linkage is adopted to separate the various types of defect patterns. Furthermore, a decision tree based on two cluster features (convexity and eigenvalue ratio) is applied to each separated pattern to provide decision support for quality engineers. Experimental results show that defect patterns in both a real dataset and a synthetic dataset were successfully extracted and classified. More importantly, the proposed method has the potential to be applied in other industries, such as liquid crystal display (LCD) and plasma display panel (PDP) manufacturing.
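A minimal sketch (ours, not the authors') of the two cluster features the decision tree above relies on, assuming an extracted cluster is available as (x, y) defect coordinates; the bounding-box convexity proxy and all function names are assumptions for illustration:

```python
# Hedged sketch: the two cluster features named in the abstract
# (convexity and eigenvalue ratio) computed for one extracted defect
# cluster given as (x, y) die coordinates. Function names are ours.
import numpy as np
from scipy.spatial import ConvexHull

def eigenvalue_ratio(points):
    """Ratio of largest to smallest eigenvalue of the 2x2 covariance
    matrix; large values suggest elongated (scratch-like) clusters."""
    cov = np.cov(points.T)
    w = np.sort(np.linalg.eigvalsh(cov))
    return w[1] / max(w[0], 1e-12)

def convexity(points):
    """Crude convexity proxy in [0, 1]: convex-hull area over
    bounding-box area (not necessarily the paper's definition)."""
    hull = ConvexHull(points)                 # 2-D: .volume is the area
    bbox = np.ptp(points, axis=0).prod()
    return hull.volume / max(bbox, 1e-12)

pts = np.random.randn(200, 2) * [5.0, 0.3]    # elongated "scratch" cloud
print(eigenvalue_ratio(pts), convexity(pts))
```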

2.
Defects on semiconductor wafers tend to cluster and the spatial defect patterns of these defect clusters contain valuable information about potential problems in the manufacturing processes. This study proposes a model-based clustering algorithm for automatic spatial defect recognition on semiconductor wafers. A mixture model is proposed to model the distributions of defects on wafer surfaces. The proposed algorithm can find the number of defect clusters and identify the pattern of each cluster automatically. It is capable of detecting defect clusters with linear patterns, curvilinear patterns and ellipsoidal patterns. Promising results have been obtained from simulation studies.
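As an illustration of model-based clustering with an automatically chosen cluster count, here is a hedged stand-in using a Gaussian mixture scored by BIC (scikit-learn); the paper's mixture also covers curvilinear components, which this sketch does not:

```python
# Hedged sketch: pick the number of defect clusters with a Gaussian
# mixture and BIC, as a simplified stand-in for the paper's richer
# mixture model. scikit-learn assumed available.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
defects = np.vstack([rng.normal([10, 10], 1.0, (80, 2)),
                     rng.normal([30, 25], [4.0, 0.4], (120, 2))])

models = [GaussianMixture(k, random_state=0).fit(defects)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(defects))
print("clusters found:", best.n_components)
labels = best.predict(defects)          # per-defect cluster assignment
```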

3.
A variational approach to linear elasticity problems is considered. A family of variational principles is proposed based on the linear theory of elasticity and the method of integrodifferential relations. The idea of this approach is that the constitutive relation is specified by an integral equality instead of the local Hooke's law, and the modified boundary value problem is reduced to the minimization of a nonnegative functional over all admissible displacements and equilibrium stresses. Conditions under which the formulated variational problems decompose into two separate problems, one for displacements and one for stresses, are found, and the relation between this approach and the minimum principles for potential and complementary energies is shown. Effective local and integral criteria of solution quality are proposed. A numerical algorithm based on piecewise polynomial approximations of the displacement and stress fields over an arbitrary domain triangulation is worked out to obtain numerical solutions and estimate their convergence rates. Numerical results for 2D linear elasticity problems with cracks are presented and discussed.

4.
Classification of defect chip patterns is one of the most important tasks in the semiconductor manufacturing process. During the final stage of the process, just before release, engineers must manually classify and summarise defect-chip information from a number of wafers to aid in diagnosing the root causes of failures. Traditionally, several learning algorithms have been developed to classify defect patterns on wafer maps, but most of them focus on a single wafer bin map and a fixed set of features. The objective of this study is to propose a novel approach to classifying defect patterns on multiple wafer maps based on uncertain features. To classify distinct defect patterns described by uncertain features on multiple wafer maps, we propose a generalised uncertain decision tree model that considers correlations between uncertain features. In addition, we propose an approach to extract uncertain features of multiple wafer maps from the critical fail bit test (FBT) map, defect shape, and location, based on a spatial autocorrelation method. Experiments were conducted using real-life DRAM wafers provided by the semiconductor industry. Results show that the proposed approach outperforms existing methods reported in the literature.
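A hedged sketch of one common spatial-autocorrelation feature, global Moran's I with rook adjacency on a binary fail-bit map; the paper's exact feature construction may differ:

```python
# Hedged sketch: global Moran's I on a binary fail-bit map, one way to
# quantify whether failing bits are spatially clustered (I > 0) or
# scattered at random (I near 0). Rook adjacency, unit weights.
import numpy as np

def morans_i(grid):
    z = grid - grid.mean()
    num, wsum = 0.0, 0.0
    rows, cols = grid.shape
    for dr, dc in ((0, 1), (1, 0)):            # each neighbour pair once
        a = z[:rows - dr, :cols - dc]
        b = z[dr:, dc:]
        num += 2.0 * (a * b).sum()             # symmetric weights w_ij
        wsum += 2.0 * a.size
    return (grid.size / wsum) * num / (z ** 2).sum()

wafer = np.zeros((20, 20)); wafer[5:9, 5:9] = 1   # one clustered blob
print(morans_i(wafer))                             # > 0 => clustered
```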

5.
Recently, several methods of time integration of the equations of motion have been proposed, many of which result in square mass, damping and stiffness matrices. The space–time finite element method is an extension of the FEM, familiar to most engineers, over the time domain. A special approach enables the use of triangular, tetrahedral and hyper-tetrahedral elements in time and space. By a special division of the space–time layer, a triangular matrix of coefficients in the system of equations can be obtained, and a simple algorithm enables the storage of non-zero coefficients only. The dynamic solution requires little memory compared with other methods and ensures a considerable reduction in arithmetic operations. The method presented is also efficient in solving both linear and non-linear problems. Matrices for a beam and a plane stress/strain element are derived. Example problems solved by the method described have proved the effectiveness of triangular and tetrahedral space–time elements in vibration analysis.

6.
The lens module is a critical part of the camera module: its quality significantly influences the auto-focus and image-stabilisation functions. A new approach using sequential tests is proposed to select alternative suppliers whose parts are as qualified as the current supplier's, based on linear profile data. Having several qualified alternative suppliers reduces dependency on a single supplier, improves bargaining power, and reduces capacity risk. The lens displacement, which has a linear relationship with the driving current, is the quality characteristic used to evaluate lens module suppliers. To select qualified alternative suppliers, the proposed sequential approach tests the profile difference between the current supplier and the investigated suppliers. Simulation results show that the sequential approach has higher power than the simultaneous confidence bands method in differentiating profiles. Finally, the proposed approach is applied to select qualified alternative lens module suppliers for a camera module manufacturer. Procuring the lens module from the selected suppliers maintains production quality and flexibility for the manufacturer in practice.
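A toy sequential scheme in the same spirit (not the paper's statistic): after each batch, an F-test compares a common regression line against per-supplier lines, and testing stops on rejection or after a fixed number of batches. All names and thresholds are assumptions:

```python
# Hedged sketch: stagewise F-test that two displacement-vs-current
# profiles share one line; reject => disqualify the alternative
# supplier, survive all batches => qualify. Illustrative only.
import numpy as np
from scipy import stats

def equal_lines_pvalue(x, y, group):
    """F-test: per-group intercept/slope model vs one common line."""
    X_full = np.column_stack([np.ones_like(x), x, group, group * x])
    X_red = X_full[:, :2]
    rss = lambda X: np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    df1, df2 = 2, len(x) - 4
    F = ((rss(X_red) - rss(X_full)) / df1) / (rss(X_full) / df2)
    return stats.f.sf(F, df1, df2)

x, y, g = np.empty(0), np.empty(0), np.empty(0)
for batch in range(10):
    xb = np.tile(np.linspace(0, 1, 5), 2)            # current levels
    gb = np.repeat([0.0, 1.0], 5)                    # supplier flag
    yb = 2 + 3 * xb + 0.05 * np.random.randn(10)     # near-identical lines
    x, y, g = np.append(x, xb), np.append(y, yb), np.append(g, gb)
    p = equal_lines_pvalue(x, y, g)
    if p < 0.01:
        print("disqualify at batch", batch); break
else:
    print("qualified after", batch + 1, "batches, p =", round(p, 3))
```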

7.
Change point estimation is a useful concept that helps quality engineers search effectively for assignable causes and improve the quality of the process or product. In this paper, a maximum likelihood approach is developed to estimate the change point in the mean of multivariate linear profiles in Phase II. After the change point, parameters are estimated through filtering and smoothing approaches in a dynamic linear model. Unlike existing estimators, which assume the change type is known in advance, the proposed change point estimator can be applied without any prior knowledge of the change type; sporadic change points can be identified as well. Simulation results show the effectiveness of the proposed estimators in estimating step, drift and monotonic changes, as well as sporadic changes, for small to large shifts. In addition, the effect of different values of the Multivariate Exponentially Weighted Moving Average (MEWMA) control chart smoothing coefficient on the performance of the proposed estimator is investigated, showing that the smoothing estimator has more uniform performance. Copyright © 2015 John Wiley & Sons, Ltd.
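For reference, a minimal sketch of the MEWMA statistic whose smoothing coefficient is studied above; the change-point MLE itself (dynamic-linear-model filtering and smoothing) is beyond this snippet:

```python
# Hedged sketch: the standard MEWMA T^2 sequence for multivariate
# observations with in-control mean zero and known covariance Sigma.
import numpy as np

def mewma(X, Sigma, lam=0.2):
    p = X.shape[1]
    Sz = (lam / (2 - lam)) * Sigma        # asymptotic covariance of z
    Sz_inv = np.linalg.inv(Sz)
    z, t2 = np.zeros(p), []
    for x in X:
        z = lam * x + (1 - lam) * z       # EWMA recursion
        t2.append(z @ Sz_inv @ z)
    return np.array(t2)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3)); X[40:] += 0.8   # step shift at t = 41
print(mewma(X, np.eye(3)).round(1))
```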

8.
Robust design is an important method for improving product quality, manufacturability, and reliability at low cost. Taguchi's introduction of this method in 1980 to several major American industries resulted in significant quality improvement in product and manufacturing process design. While the robust design objective of making product performance insensitive to hard-to-control noise was recognized to be very important, many of the statistical methods proposed by Taguchi, such as the use of signal-to-noise ratios, orthogonal arrays, linear graphs, and accumulation analysis, have room for improvement. To popularize the use of robust design among engineers, it is essential to develop more effective, statistically efficient, and user-friendly techniques and tools. This paper first summarizes the statistical methods for planning and analyzing robust design experiments originally proposed by Taguchi, then reviews newly developed statistical methods and identifies areas and problems where more research is needed. For planning experiments, we review a new experiment format, the combined array format, which can reduce the experiment size and allow greater flexibility for estimating effects that may be more important for physical reasons. We also discuss design strategies, alternative graphical tools and tables, and computer algorithms to help engineers plan more efficient experiments. For analyzing experiments, we review a new modeling approach, the response model approach, which yields additional information about how control factor settings dampen the effects of individual noise factors; this helps engineers better understand the physical mechanism of the product or process. We also discuss alternative variability measures for Taguchi's signal-to-noise ratios and develop methods for empirically determining the appropriate measure to use.
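The three textbook Taguchi signal-to-noise ratios discussed above, as a short reference implementation (standard formulas, not tied to this paper's data):

```python
# The three classical Taguchi S/N ratios, computed per experimental run
# (one row of the orthogonal array, several replicates y).
import numpy as np

def sn_larger_is_better(y):    # maximise the response
    return -10 * np.log10(np.mean(1.0 / np.asarray(y, float) ** 2))

def sn_smaller_is_better(y):   # minimise the response
    return -10 * np.log10(np.mean(np.asarray(y, float) ** 2))

def sn_nominal_is_best(y):     # hit a target with low variability
    y = np.asarray(y, float)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

run = [19.8, 20.3, 20.1, 19.9]   # replicates of one array row
print(sn_nominal_is_best(run))
```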

9.
In our opinion, many complex numerical models in materials science can be reduced without losing their physical sense. Owing to solution bifurcation and strain localization in continuum damage problems, damage predictions are very sensitive to any model modification. Most robust numerical algorithms aim to forecast one approximate solution of the continuous model even though multiple solutions exist; model perturbations can be added to the finite element model to guide the simulation toward one of them. Performing a model reduction of a finite element damage model is a kind of model perturbation, and if no quality control is performed the prediction of the reduced-order model (ROM) can differ markedly from the prediction of the full finite element model. This can happen with the snapshot Proper Orthogonal Decomposition (POD) model reduction method. Therefore, if the purpose of the reduced approximation is to estimate the solution that the finite element simulation would give, adaptive reduced-order modeling is required when reducing finite element damage models. We propose an adaptive reduced-order modeling method that makes it possible to estimate the effect of loading modifications. The Rousselier continuum damage model is considered. The differences between the finite element prediction and that provided by the adapted ROM remain stable even as various loading perturbations are introduced. The adaptive algorithm is based on the APHR (A Priori Hyper Reduction) method, an incremental scheme that uses a ROM to forecast an initial guess solution to the finite element equations. If, at the end of a time increment, this initial prediction is not accurate enough, a finite element correction is added to the ROM prediction. The proposed algorithm can be viewed as a two-step Newton–Raphson algorithm: during the first step the prediction belongs to the functional space related to the ROM, and during the second step the correction belongs to the classical FE functional space. Moreover, the corrections of the ROM predictions make it possible to expand the ROM basis, which can therefore be improved at each increment of the simulation. The efficiency of the adaptive algorithm is checked by comparing the number of global linear solves involved in the proposed scheme with the number involved in the classical incremental Newton–Raphson scheme. The quality of the proposed approximation is compared with that provided by the classical snapshot POD method.
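A minimal sketch of the snapshot POD step that the APHR method adaptively corrects; here a fixed basis is built once from solution snapshots by SVD, with an energy-based truncation tolerance as an assumed criterion:

```python
# Hedged sketch: snapshot POD via SVD. Columns of `snapshots` are FE
# solution vectors; keep enough modes to capture 1 - tol of the energy.
import numpy as np

def pod_basis(snapshots, tol=1e-4):
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r]

S = np.random.rand(1000, 30)        # 30 snapshots of a 1000-dof model
V = pod_basis(S)
a = V.T @ S[:, 0]                   # reduced coordinates of snapshot 0
print(V.shape, np.linalg.norm(S[:, 0] - V @ a))   # reconstruction error
```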

10.
The integrated circuits (ICs) on wafers are highly vulnerable to defects generated during the semiconductor manufacturing process, and the spatial patterns of locally clustered defects are likely to contain information related to the defect-generating mechanism. For the purpose of yield management, we propose a multi-step adaptive resonance theory (ART1) algorithm to accurately recognise defect patterns scattered over a wafer. The proposed algorithm consists of a new similarity measure, based on the p-norm ratio and a run-length encoding technique, and a pre-processing procedure comprising a variable-resolution array and a zooming strategy. The performance of the algorithm is evaluated on statistical models of four types of simulated defect patterns, each of which typically occurs during IC fabrication: random patterns from a spatially homogeneous Poisson process, ellipsoid patterns from a multivariate normal, curvilinear patterns from a principal curve, and ring patterns from a spherical shell. Computational testing shows that the proposed algorithm provides high accuracy and robustness in detecting IC defects, regardless of the type of defect pattern residing on the wafer.
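A hedged sketch of the run-length encoding used to compress binary wafer maps, plus a crude norm-ratio similarity as a stand-in for the paper's p-norm-ratio measure (whose exact definition is not reproduced here):

```python
# Hedged sketch: run-length encoding of a binary wafer-map row, and a
# simple norm-ratio similarity between two binary maps. The paper's
# actual p-norm-ratio measure may be defined differently.
import numpy as np

def run_length_encode(row):
    """[0,0,1,1,1,0] -> [(0,2), (1,3), (0,1)]"""
    runs, count = [], 1
    for prev, cur in zip(row, row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append((prev, count)); count = 1
    runs.append((row[-1], count))
    return runs

def norm_ratio_similarity(a, b, p=1):
    """Overlap p-norm over union p-norm, in [0, 1]; our stand-in."""
    a, b = np.asarray(a).ravel(), np.asarray(b).ravel()
    inter = np.linalg.norm(np.minimum(a, b), p)
    union = np.linalg.norm(np.maximum(a, b), p)
    return inter / max(union, 1e-12)

print(run_length_encode([0, 0, 1, 1, 1, 0]))
```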

11.
Knowing the time of changes in the mean and variance of a process is crucial for engineers to identify the special cause quickly and correctly. Because assignable causes may give rise to changes in mean and variance at the same time, monitoring both simultaneously is required. In this paper, a mixture likelihood approach is proposed to detect shifts in mean and variance simultaneously in a normal process. We first recast the change point model as a mixture model and then employ the expectation–maximization algorithm to estimate the times of shifts in mean and variance simultaneously. The proposed method, called EMCP (expectation and maximization change point), can be used in both Phase I and Phase II applications without knowledge of the in-control process parameters. Moreover, EMCP can detect the times of multiple shifts and simultaneously produce estimates of the shifts in each individual segment. Numerical data and real datasets are employed to compare EMCP with the direct maximum likelihood method that does not use mixture models. The experimental results show the superiority and effectiveness of the proposed EMCP. Its advantage in detecting the time of small shifts is particularly beneficial for engineers seeking to identify assignable causes rapidly and accurately in Phase II applications, in which small shifts occur more often and hence lead to a large average run length. Copyright © 2015 John Wiley & Sons, Ltd.
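For contrast, a minimal sketch of the direct maximum-likelihood baseline that EMCP is compared against: scan all candidate change points and keep the split maximising the two-segment normal log-likelihood (mean and variance both free per segment):

```python
# Hedged sketch: direct ML change-point estimation for a single shift
# in mean and/or variance of a normal sequence (the baseline, not EMCP).
import numpy as np

def normal_loglik(x):
    v = x.var()                        # MLE variance of the segment
    return -0.5 * len(x) * (np.log(2 * np.pi * v) + 1)

def ml_change_point(x, min_seg=5):
    n = len(x)
    taus = list(range(min_seg, n - min_seg))
    ll = [normal_loglik(x[:t]) + normal_loglik(x[t:]) for t in taus]
    return taus[int(np.argmax(ll))]

rng = np.random.default_rng(2)
x = np.r_[rng.normal(0, 1, 100), rng.normal(1.0, 2.0, 60)]
print("estimated change point:", ml_change_point(x))   # true: 100
```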

12.
Z. Xu, International Journal of Production Research, 2013, 51(11): 2091-2117
To take full advantage of product modularity, modular product design and assembly system design/reconfiguration have to be addressed simultaneously. The emerging reconfigurable production and flexible assembly techniques have made such an integrated approach possible. This paper therefore proposes an integrated approach to the product module selection and assembly line design/reconfiguration problems. It further suggests that quality loss functions be used in a generic sense to quantify the non-comparable and possibly conflicting performance criteria involved in the integrated problem. The complexity of the problem precludes the use of commercial software for solving meaningfully sized problems in polynomial time, so a genetic algorithm is developed to provide quick solutions. An example problem is solved to illustrate the application of the proposed approach. Based on 72 randomly generated test problems, an ANOVA is carried out to investigate the effects of the genetic algorithm parameters. The convergence behaviour of the search process is also examined by solving large problems with different numbers of operations and product modules.
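A hedged skeleton of the kind of genetic algorithm described above; the binary encoding and the toy quality-loss objective are stand-ins, not the paper's formulation:

```python
# Hedged sketch: binary-encoded GA with truncation selection, one-point
# crossover and bit-flip mutation, minimising a toy quality-loss.
import numpy as np

rng = np.random.default_rng(8)
n_genes, pop_size, gens, pm = 12, 40, 60, 0.05

def fitness(ch):                       # toy quality-loss to minimise
    return np.sum((ch - 0.5) ** 2) + 0.1 * np.abs(ch.sum() - 6)

pop = rng.integers(0, 2, (pop_size, n_genes)).astype(float)
for _ in range(gens):
    order = np.argsort([fitness(c) for c in pop])
    parents = pop[order[: pop_size // 2]]           # truncation selection
    cut = rng.integers(1, n_genes, pop_size // 2)   # one-point crossover
    kids = np.array([np.r_[a[:c], b[c:]]
                     for a, b, c in zip(parents, parents[::-1], cut)])
    mask = rng.random(kids.shape) < pm              # bit-flip mutation
    kids[mask] = 1 - kids[mask]
    pop = np.vstack([parents, kids])
best = min(pop, key=fitness)
print("best loss:", fitness(best))
```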

13.
A key element of many fading-compensation techniques is a (long-range) prediction tool for the fading channel. A linear approach, usually used to model the time evolution of the fading process, does not perform well in long-range prediction applications. An adaptive fading channel prediction algorithm using a sum-of-sinusoids-based state-space approach is proposed. This algorithm utilises an improved adaptive Kalman estimator comprising an acquisition mode and a tracking algorithm. Furthermore, for the sake of lower computational complexity, an enhanced linear predictor for channel fading is proposed, including a multi-step AR predictor and the respective tracking algorithm. Comparing the two methods in our simulations shows that the proposed Kalman-based algorithm can significantly outperform the linear method for both stationary and nonstationary fading processes, and especially for long-range predictions. The performance, the self-recovering structure, and the reasonable computational complexity make the algorithm appealing for practical applications.
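A minimal sketch, assuming a single-sinusoid state-space model with a known frequency: a Kalman filter tracks the sinusoid-plus-noise signal, and a k-step prediction is obtained by powering the state transition. The paper sums several such blocks and adapts the frequencies, which this sketch does not:

```python
# Hedged sketch: Kalman filtering of one sinusoid via a rotation-matrix
# state model, then long-range (k-step-ahead) prediction.
import numpy as np

f, dt, q, r = 0.01, 1.0, 1e-6, 0.05        # assumed frequency and noises
th = 2 * np.pi * f * dt
F = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
H = np.array([[1.0, 0.0]])

rng = np.random.default_rng(3)
t = np.arange(300)
y = np.cos(2 * np.pi * f * t) + np.sqrt(r) * rng.normal(size=t.size)

x, P = np.zeros(2), np.eye(2)
for yk in y:                               # filtering pass
    x, P = F @ x, F @ P @ F.T + q * np.eye(2)
    S = (H @ P @ H.T)[0, 0] + r
    K = (P @ H.T).ravel() / S              # Kalman gain, shape (2,)
    x = x + K * (yk - (H @ x)[0])
    P = (np.eye(2) - np.outer(K, H[0])) @ P

k = 50                                     # long-range: k steps ahead
pred = (H @ np.linalg.matrix_power(F, k) @ x)[0]
print(pred, np.cos(2 * np.pi * f * (t[-1] + k)))   # prediction vs truth
```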

14.
An approach to computing estimates of the ultrasound pulse spectrum from echo-ultrasound RF sequences measured from biological tissues is proposed. The estimate is computed by a "projection" algorithm based on the Discrete Wavelet Transform (DWT) using averaging over a range of linear shifts. It is shown that a robust, shift-invariant estimate of the ultrasound pulse power spectrum can be obtained by projecting the RF-line log spectrum onto an appropriately chosen subspace of L2(R) (the space of square-integrable functions) spanned by a redundant collection of compactly supported scaling functions. This redundant set is formed from the traditional (in wavelet analysis) orthogonal set of scaling functions together with all its (discrete) linear shifts. A proof is given that the estimate so obtained can be viewed as the average of the orthogonal projections of the RF-line log spectrum computed for all significant linear shifts of it in the frequency domain, which implies that the estimate is shift-invariant. A computationally efficient scheme is presented for calculating the estimate: it is proved that the averaged, shift-invariant estimate can be obtained simply by a convolution with a kernel that can be viewed as the discretized auto-correlation function of the scaling function appropriate to the particular subspace being considered. The computational burden is therefore at most O(n log2 n), where n is the problem size, making the estimate quite suitable for real-time processing. Because the wavelet transform suppresses polynomials of order lower than the number of vanishing moments of the wavelet used, the presented approach can be considered a local polynomial fitting. This locality plays a crucial role in the performance of the algorithm, improving the robustness of the estimation. Moreover, it is shown that the "averaging" nature of the proposed estimation allows the use of (relatively) poorly regular wavelets (i.e., short filters) without affecting the estimation quality, which matters whenever the number of calculations is crucial.
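A hedged sketch of the convolution form of the estimate, building the kernel as the discretized autocorrelation of a db4 scaling function via PyWavelets; the synthetic log spectrum below stands in for an RF-line log spectrum:

```python
# Hedged sketch: smooth a noisy log spectrum by convolving with the
# autocorrelation of a sampled scaling function, mirroring the paper's
# averaged-projection result. db4 and the refinement level are ours.
import numpy as np
import pywt

phi, _, _ = pywt.Wavelet('db4').wavefun(level=4)   # sampled scaling fn
kernel = np.correlate(phi, phi, mode='full')       # autocorrelation
kernel /= kernel.sum()                             # unit-gain smoother

rng = np.random.default_rng(4)
freq = np.linspace(0, 1, 512)
log_spec = -8 * (freq - 0.4) ** 2 + 0.3 * rng.normal(size=freq.size)
smooth = np.convolve(log_spec, kernel, mode='same')
print(freq[np.argmax(smooth)])                     # peak near 0.4
```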

15.
As parallel and distributed computing gradually becomes the computing standard for large scale problems, the domain decomposition (DD) method has received growing attention, since it provides a natural basis for splitting a large problem into many small problems that can be submitted to individual computing nodes and processed in parallel. This approach not only provides a way to solve large scale problems that are not solvable on a single computer with direct sparse solvers, but also gives a flexible solution for large scale problems with localized non-linearities: when some parts of the structure are modified, only the corresponding subdomains and the interface equation connecting all the subdomains need to be recomputed. In this paper, the dual–primal finite element tearing and interconnecting method (FETI-DP) is carefully investigated, and a reduced back-substitution (RBS) algorithm is proposed to accelerate the time-consuming preconditioned conjugate gradient (PCG) iterations involved in the interface problems. Linear–non-linear analysis (LNA) is also adopted for large scale problems with localized non-linearities, based on subdomain linear–non-linear identification criteria. This combined approach, named the FETI-DP-RBS-LNA algorithm, is demonstrated on the mechanical analysis of a welding problem. Serial CPU costs of this algorithm are measured at each solution stage and compared with those of the IBM Watson direct sparse solver and the FETI-DP method. The results demonstrate the effectiveness of the proposed computational approach for simulating welding problems, which are representative of a large class of three-dimensional large scale problems with localized non-linearities. Copyright © 2005 John Wiley & Sons, Ltd.
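For reference, a minimal preconditioned conjugate gradient loop of the kind used on the FETI-DP interface problem; the dense operator and Jacobi preconditioner below are stand-ins for the actual subdomain solves:

```python
# Hedged sketch: textbook PCG on an SPD system, the iteration the RBS
# algorithm accelerates inside FETI-DP.
import numpy as np

def pcg(A_mul, b, M_inv_mul, tol=1e-8, maxit=200):
    x = np.zeros_like(b)
    r = b - A_mul(x)
    z = M_inv_mul(r)
    p, rz = z.copy(), r @ z
    for it in range(maxit):
        Ap = A_mul(p)
        alpha = rz / (p @ Ap)
        x += alpha * p; r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv_mul(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 200
A = np.random.rand(n, n); A = A @ A.T + n * np.eye(n)  # SPD stand-in
b = np.ones(n)
x, its = pcg(lambda v: A @ v, b, lambda v: v / np.diag(A))
print(its, np.linalg.norm(A @ x - b))
```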

16.
Objective: To improve the accuracy of surface defect detection for new-material flooring during quality inspection. Methods: The collected defect images were augmented by flipping, horizontal translation and vertical translation to build a new-material floor defect dataset. Building on the YOLOv5 algorithm, an extra prediction head was added to make the algorithm more sensitive to tiny defects; a Swin Transformer module was applied in the feature-fusion layer of the network to form an attention-based prediction head and improve feature-extraction efficiency; and an SE module was added at the end of the backbone so that the network extracts useful informative features, improving model accuracy. Results: Experiments show that the proposed method can accurately judge floor quality and identify four types of surface defects: white impurities, dark spots, edge damage and bubble glue. The mean average precision over the defect types is 82.30%, 6.58% higher than the YOLOv5 baseline, and the method recognises floor surface defects more accurately and faster than other typical object-detection algorithms. Conclusion: The improved YOLOv5 algorithm classifies and localises floor surface defects more accurately, greatly improving the efficiency of industrial quality inspection.
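A minimal PyTorch sketch of the standard squeeze-and-excitation (SE) block appended to the backbone; the reduction ratio of 16 is an assumption, not taken from the paper:

```python
# Hedged sketch: a standard SE channel-attention block of the kind the
# improved YOLOv5 backbone ends with.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))            # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                      # excite: channel reweighting

feat = torch.randn(1, 64, 40, 40)
print(SEBlock(64)(feat).shape)            # torch.Size([1, 64, 40, 40])
```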

17.
Using bilinear and quadratic forms for frequency estimation
This paper introduces an innovative approach to frequency estimation. The proposed algorithm can be used to estimate both small and off-nominal frequency deviations. The computational effort is dramatically reduced relative to conventional methods. In addition, the method provides high measurement accuracy and is relatively insensitive to harmonic distortion.
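One classical estimator built from a ratio of bilinear and quadratic forms of the samples (illustrative; not necessarily this paper's algorithm): for a noiseless sinusoid, x[n-1] + x[n+1] = 2·cos(ω)·x[n], so ω follows from a least-squares ratio:

```python
# Hedged sketch: frequency from the three-sample cosine identity,
# estimated in least squares as bilinear form / quadratic form.
import numpy as np

def estimate_freq(x, fs):
    num = np.dot(x[1:-1], x[2:] + x[:-2])     # bilinear form
    den = 2 * np.dot(x[1:-1], x[1:-1])        # quadratic form
    w = np.arccos(np.clip(num / den, -1, 1))
    return w * fs / (2 * np.pi)

fs, f0 = 1000.0, 50.3                          # off-nominal frequency
t = np.arange(200) / fs
x = np.sin(2 * np.pi * f0 * t + 0.7)
x += 0.01 * np.random.default_rng(5).normal(size=t.size)
print(estimate_freq(x, fs))                    # close to 50.3 Hz
```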

18.
A kernel-based algorithm is potentially very efficient for predicting key quality variables of nonlinear chemical and biological processes, because it maps the original input space into a high-dimensional feature space in which a nonlinear data structure is most likely to become linear. In this work, kernel partial least squares (PLS) was applied to inferentially predict key process variables in an industrial cokes wastewater treatment plant. The primary motive was to give operators and process engineers a reliable and accurate real-time estimate of key process variables such as chemical oxygen demand, total nitrogen, and cyanide concentrations. This would allow them to arrive at the optimum operational strategy at an early stage and minimize damage to the operating units, as shock loadings of toxic compounds in the influent often cause process instability. The proposed kernel-based algorithm effectively captured the nonlinear relationships among the process variables and showed far better prediction performance for the quality variables than conventional linear PLS and another nonlinear PLS method.
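A hedged sketch of one "direct" kernel PLS variant: double-centre an RBF Gram matrix and feed it to ordinary linear PLS. The paper's formulation may differ in the deflation details, and the data below are synthetic stand-ins for wastewater measurements:

```python
# Hedged sketch: kernel trick + linear PLS. The Gram matrix plays the
# role of the high-dimensional feature representation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(6)
X = rng.uniform(-2, 2, size=(150, 4))           # process measurements
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=150)

K = rbf_kernel(X, X, gamma=0.5)
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                   # double-centre the Gram

pls = PLSRegression(n_components=5).fit(Kc, y)
print("train R^2:", round(pls.score(Kc, y), 3))
```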

19.
Li Jianming, Yang Ting, Wang Huidong, Packaging Engineering, 2020, 41(7): 175-184
Objective: To address the problems of packaging defect detection methods based on hand-crafted feature extraction in current industrial automated production, namely their complexity, high demands on domain expertise, poor generality, and difficulty of application in multi-target and complex-background settings, a real-time packaging defect detection method based on deep learning is studied. Methods: For cases with few training samples, a defect detection method combining the deep-learning Inception-V3 image classification algorithm with the YOLO-V3 object detection algorithm is proposed, and a complete online packaging defect detection system based on computer vision is designed. Results: Experiments show a recognition accuracy of 99.49% with a variance of 0.0000506, against an accuracy of 97.70% and a variance of 0.000251 when using the Inception-V3 algorithm alone. Conclusion: Compared with typical packaging defect detection methods based on hand-crafted feature extraction, the approach avoids a complex feature-extraction process. Compared with applying an image classification algorithm alone, it noticeably improves detection accuracy and stability when the defect region occupies a small fraction of the package, and shows advantages in complex backgrounds and multi-target scenes. The detection system and method can easily be transferred to other, similar online inspection problems.

20.
A procedure for solving stochastic two-stage programming problems has been developed. The approach combines genetic algorithm optimization with point estimate procedures. It has several advantages over traditional methods: it requires only function evaluations, imposes no continuity or gradient requirements, and can handle integer as well as continuous problems. To improve performance, a modification of a standard genetic algorithm is suggested and coded. Point estimation methods are used to evaluate the second-stage expected-value objective function efficiently. Finally, the overall procedure is applied to several linear and nonlinear problems.
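A minimal sketch of a point estimate procedure of the kind used for the second-stage expectation, here Rosenblueth's two-point method for independent symmetric parameters, checked against Monte Carlo on a toy recourse function (all names and the recourse form are assumptions):

```python
# Hedged sketch: approximate E[g(xi)] with 2^m sign points (mean +/- one
# std dev per parameter) instead of Monte Carlo sampling.
import itertools
import numpy as np

def point_estimate_expectation(g, mu, sigma):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    total = 0.0
    for signs in itertools.product((-1.0, 1.0), repeat=len(mu)):
        total += g(mu + np.array(signs) * sigma)
    return total / 2 ** len(mu)

def second_stage(xi):                  # toy recourse cost
    return 10.0 * max(xi[0] + xi[1] - 3.0, 0.0)

est = point_estimate_expectation(second_stage, mu=[1.0, 1.5],
                                 sigma=[0.3, 0.5])
xis = np.random.default_rng(7).normal([1.0, 1.5], [0.3, 0.5], (100000, 2))
mc = (10.0 * np.maximum(xis[:, 0] + xis[:, 1] - 3.0, 0.0)).mean()
print(est, mc)                         # point estimate vs Monte Carlo
```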
