Similar Documents
20 similar documents found (search took 15 ms)
1.
R. Sliž, M.-Y. Chang. Strain, 2009, 45(6): 498–505
Abstract: Photoelasticity has become a modern stress-analysis tool capable of competing with other currently employed tools, including finite element analysis. Improved model production and automated fringe analysis allow investigations of complex models, speeding up analysis, reducing user intervention, and consequently automating the whole process. Before automated fringe analysis can begin, however, the mask of the model must be extracted. The authors discuss the development of a new algorithm that detects the mask of the model by analysing the isochromatic fringe patterns used in photoelasticity. Knowing the model's mask is essential for its analysis and for obtaining a stress map. Unlike available edge-detection algorithms or other techniques used to detect a model's mask, the proposed algorithm was developed to minimise user action, allowing the process to be automated. From an image-processing point of view, there is a major difference between the background region and the model region: grey levels of points inside the background are distributed along a tilted plane with low total variance, whereas points inside the model are distributed along the wave-shaped isochromatic fringes. The variance of each area is measured with respect to a plane approximated over that area from the grey level of each point. Areas with low variance are then selected and extended to the true boundaries, exploiting the fact that edges are characterised by a sharp jump in grey level. The proposed method is validated experimentally on a plate with multiple cutouts in a dark field and on a circular disc under diametral compressive load with frozen stress in a white field.
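For illustration, a minimal numpy sketch of the variance-about-a-plane test described above; the block size, the threshold, and the omitted region-growing step to the true edges are our own simplifications, not the authors' values:

```python
import numpy as np

def low_variance_mask(img, block=16, var_thresh=25.0):
    """Flag blocks whose grey levels stay close to a fitted tilted plane.

    Background regions vary slowly (low residual variance about a plane);
    model regions carry wave-like isochromatic fringes (high variance).
    Block size and threshold are illustrative, not from the paper.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    yy, xx = np.mgrid[0:block, 0:block]
    # Design matrix for a least-squares plane fit: g ~ a*x + b*y + c
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(block * block)])
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = img[i:i + block, j:j + block].astype(float).ravel()
            coef, *_ = np.linalg.lstsq(A, patch, rcond=None)
            resid = patch - A @ coef
            if resid.var() < var_thresh:
                mask[i:i + block, j:j + block] = True  # candidate background
    return mask
```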

2.
This paper illustrates a process-monitoring strategy for a multistage manufacturing facility that combines cluster analysis with multiple multi-block partial least squares (MBPLS) models. Traditionally, a single MBPLS model is used to monitor multiple process and quality characteristics. However, modelling all the responses together in a single model may yield a poor fit when (i) the response variables are uncorrelated, or (ii) groups of response variables are highly correlated within each group but show negligible correlation between groups. This paper overcomes the problem by combining cluster analysis with MBPLS through the development of multiple MBPLS models. Each MBPLS model is used to detect out-of-control observations, and a superset of the out-of-control observations is created. Two new fault-diagnostic statistics, for stage-wise and variable-wise contributions, are developed for this superset. The methodology is applied to monitoring a steel-making shop. The case-study results show that the proposed methodology performs better than the traditionally employed single MBPLS model.
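As a rough illustration of the idea, the sketch below clusters correlated response variables and fits one PLS model per cluster; scikit-learn's PLSRegression stands in for a true multi-block MBPLS implementation, and the correlation threshold is arbitrary:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cross_decomposition import PLSRegression

def fit_clustered_pls(X, Y, corr_thresh=0.7, n_components=2):
    """Group correlated responses, then fit one PLS model per group."""
    # Hierarchical clustering on 1 - |correlation| between response variables
    corr = np.corrcoef(Y, rowvar=False)
    dist = 1.0 - np.abs(corr)
    Z = linkage(dist[np.triu_indices_from(dist, k=1)], method='average')
    labels = fcluster(Z, t=1.0 - corr_thresh, criterion='distance')

    models = {}
    for g in np.unique(labels):
        cols = np.where(labels == g)[0]
        models[g] = (cols, PLSRegression(n_components).fit(X, Y[:, cols]))
    return models
```

Each fitted model can then flag its own out-of-control observations (e.g. via prediction residuals), and the union of the flagged observations forms the superset described above.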

3.
New method to characterize a machining system: application in turning
Many studies simulate the machining process using a single-degree-of-freedom spring–mass system to model the tool stiffness, the workpiece stiffness, or the combined tool–workpiece stiffness in 2D models. Others impose the tool action, or use more or less complex models of the forces applied by the tool that take the tool geometry into account. All of these models therefore remain two-dimensional, or at best partially three-dimensional. This paper develops an experimental method to accurately determine the real three-dimensional behaviour of a machining system (machine tool, cutting tool, tool-holder, and the associated force-metrology system, a six-component dynamometer). Within the work-space model of machining, a new experimental procedure is implemented to determine the elastic behaviour of the machining system. An experimental study of the machining system is presented and a static characterization is proposed. The "Workpiece–Tool–Machine" system is decomposed into two distinct blocks: the Tool block and the Workpiece block are studied and characterized separately by stiffness and displacement matrices (three translations and three rotations). Castigliano's theorem allows the total stiffness matrix and the total displacement matrix to be calculated. A stiffness-centre point and a plane of static tool-tip displacement are presented, in agreement with the dynamic model of turning, in particular during self-induced vibration. These results are necessary for a good three-dimensional dynamic characterization of the machining system (to be presented in a subsequent paper).
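A minimal numerical illustration of the block decomposition: with made-up 6×6 stiffness matrices for the Tool and Workpiece blocks, the compliances of the two blocks loaded in series add (consistent with Castigliano's theorem), giving the tool-tip translations and rotations under a cutting wrench:

```python
import numpy as np

# Illustrative 6x6 stiffness matrices (forces/moments -> translations/rotations)
# for the Tool block and the Workpiece block; values are invented for the demo.
K_tool = np.diag([8e7, 6e7, 9e7, 5e5, 4e5, 6e5])        # N/m and N.m/rad
K_work = np.diag([3e7, 2.5e7, 4e7, 2e5, 1.8e5, 2.5e5])

# Compliances of blocks loaded in series add up:
C_total = np.linalg.inv(K_tool) + np.linalg.inv(K_work)
K_total = np.linalg.inv(C_total)  # total stiffness matrix

F = np.array([120.0, 300.0, 80.0, 0.0, 0.0, 0.0])  # cutting wrench [N, N.m]
d = C_total @ F  # three translations + three rotations of the tool tip
print("tool-tip displacement:", d)
```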

4.
An illustration of the operational consistency of the upstream part of a biopharmaceutical process is given. For this purpose, four batch cultivations of Bordetella pertussis were executed under identical conditions. The batches were monitored by means of two fundamentally different process sensors: first, common single-channel measurements such as temperature, pH, dissolved oxygen (DO), and flow rates; and second, multichannel measurements from an NIR (near-infrared) analyzer. Because of the fundamental differences between the two types of measurements, two models were developed to evaluate the operational consistency. The NIR analyzer is a typical representative of the process analyzers described in the PAT (Process Analytical Technology) guidance document issued in 2004 by the American Food and Drug Administration (FDA). Data from both sensors were evaluated with a multivariate data-analysis tool, resulting in two different performance models. This approach, too, is characteristic of the implementation of PAT in the manufacture of biopharmaceuticals.

With both performance models, we were able to explore the operational consistency of the batches. In addition, the performance models were able to detect a deviating batch. Furthermore, the two sensor types were shown to give partly overlapping information, since a deviation in the batch profiles of the logged process variables was accompanied by a deviation in the spectral batch profiles.

The performance models are valuable tools for developing advanced monitoring and control systems for biopharmaceutical processes. Using such models, advanced knowledge-based systems can be developed to detect abnormal situations at an early stage and remove their cause.

The data-processing procedure described in this article is relatively new in the biopharmaceutical industry. The NIR analyzer and the two performance models presented here are clear ingredients for better process understanding and process control, as intended by the FDA's PAT initiative. This initiative is part of the FDA's cGMP (current good manufacturing practice) for the 21st century strategy and aims at introducing innovations into both the manufacturing of biopharmaceuticals and the development of new biopharmaceuticals.

This study shows the feasibility of two typical PAT tools for controlling the manufacture of biopharmaceuticals. To the best of our knowledge, such a feasibility study has not previously been documented in the scientific literature.

5.
Abstract: Correct modelling of constitutive laws is of critical importance for analysing the mechanical behaviour of solids and structures. In soft-tissue mechanics, for example, the nonlinear behaviour commonly displayed by the mechanical properties of such materials makes the use of hyperelastic constitutive models commonplace. Hyperelastic models, however, depend on sets of material parameters that must be obtained experimentally. In this study the authors use a combined computational/experimental scheme to study the nonlinear mechanical behaviour of biological soft tissues under uniaxial tension. The material constants for seven different hyperelastic material models are obtained via inverse methods; the use of Martins's model to fit experimental data is presented here for the first time. The search for an optimal value of each set of material parameters is performed with a Levenberg–Marquardt algorithm. As a control measure, the process is fully applied to silicone-rubber samples subjected to uniaxial tension tests. The accuracy with which the theoretical stress–strain relation fits the experimental one is evaluated for both soft tissues and silicone rubber (both typically nonlinear). This study is also intended to select which material models (or model types) the authors will employ in future work on human soft biological tissues.
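A sketch of the inverse-method fit, assuming an incompressible neo-Hookean model as a stand-in for the seven models of the paper (Martins's model included); scipy's Levenberg–Marquardt solver plays the role of the paper's algorithm, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

def neo_hookean_stress(params, stretch):
    """Uniaxial Cauchy stress of an incompressible neo-Hookean solid."""
    mu, = params
    return mu * (stretch**2 - 1.0 / stretch)

# Synthetic 'experimental' data standing in for a uniaxial tension test
stretch = np.linspace(1.0, 1.5, 20)
sigma_exp = neo_hookean_stress([0.4], stretch) + np.random.normal(0, 0.005, 20)

res = least_squares(
    lambda p: neo_hookean_stress(p, stretch) - sigma_exp,
    x0=[1.0],
    method='lm',  # Levenberg-Marquardt, as in the paper
)
print("fitted shear modulus mu =", res.x[0], "(units assumed)")
```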

6.
Generalized additive models are an effective regression tool, popular in the statistics literature, that provides an automatic extension of traditional linear models to nonlinear systems. We present a distributed algorithm for fitting generalized additive models, based on the alternating direction method of multipliers (ADMM). In our algorithm the component functions of the model are fit independently, in parallel; a simple iteration yields convergence to the optimal generalized additive model. This contrasts with the traditional backfitting approach, in which the component functions are fit sequentially. We illustrate the method on different classes of problems, such as generalized additive, logistic, and piecewise-constant models, with various types of regularization, including those that promote smoothness and sparsity.
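For contrast with the parallel ADMM updates described above, here is a minimal sketch of traditional sequential backfitting, with simple polynomial smoothers standing in for the usual scatterplot smoothers:

```python
import numpy as np

def backfit_additive(X, y, n_iter=20, degree=3):
    """Classic sequential backfitting with per-feature polynomial smoothers.

    Each component function f_j is updated in turn against the partial
    residuals; the ADMM variant fits the components in parallel instead.
    """
    n, p = X.shape
    fitted = np.zeros((n, p))
    intercept = y.mean()
    coefs = [None] * p
    for _ in range(n_iter):
        for j in range(p):
            partial = y - intercept - fitted.sum(axis=1) + fitted[:, j]
            coefs[j] = np.polyfit(X[:, j], partial, degree)
            fitted[:, j] = np.polyval(coefs[j], X[:, j])
            fitted[:, j] -= fitted[:, j].mean()  # keep components centred
    return intercept, coefs

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 200)
b0, coefs = backfit_additive(X, y)
```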

7.
In structural reliability, special attention is devoted to modelling distribution tails. The distributions are required both to fit the upper observations and to provide a picture of the tail above the maximal observation. Goodness-of-fit tests can be constructed to check this tail fit. What, then, can be done with distributions that have a good central fit but a bad extremal fit? We propose a regularization procedure, based on Bayesian tools, that takes the opinion of experts into account. Predictive distributions are proposed as model distributions. We numerically investigate this method on normal, lognormal, exponential, gamma, and Weibull distributions.
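As one concrete instance of using a predictive distribution as the model distribution, consider the conjugate exponential–gamma case, where the posterior predictive tail is available in closed form (a Lomax distribution); the prior values below are illustrative, not the paper's:

```python
import numpy as np

def predictive_tail_exponential(x, data, a0=1.0, b0=1.0):
    """P(X > x) under the posterior predictive of an exponential model.

    With a Gamma(a0, b0) prior on the rate, the predictive distribution is
    Lomax: P(X > x) = (b_n / (b_n + x))**a_n. Prior values encode the
    expert opinion; the ones used here are illustrative.
    """
    a_n = a0 + len(data)
    b_n = b0 + np.sum(data)
    return (b_n / (b_n + x)) ** a_n

data = np.array([0.8, 1.7, 2.2, 3.9, 0.4, 1.1])
print(predictive_tail_exponential(10.0, data))
```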

8.
It is important to examine the nature of the relationships between roadway, environmental, and traffic factors and motor vehicle crashes, with the aim of improving the collective understanding of the causal mechanisms involved in crashes and of better predicting their occurrence. Statistical models of motor vehicle crashes are one path of inquiry often used to gain these initial insights. Recent efforts have focused on the estimation of negative binomial and Poisson regression models (and related variants) because of their relatively good fit to crash data. Analysts, of course, constantly seek methods that offer greater consistency with the data-generating mechanism (motor vehicle crashes in this case), provide better statistical fit, and offer insight into data structure that was previously unavailable. One such opportunity exists with some types of crash data, in particular crash-level data collected across roadway segments, intersections, etc. It is argued in this paper that some crash data possess a hierarchical structure that has not routinely been exploited. The paper describes the application of binomial multilevel models of crash types using 548 motor vehicle crashes collected from 91 two-lane rural intersections in the state of Georgia. Crash prediction models are estimated for angle, rear-end, and sideswipe (both same-direction and opposite-direction) crashes. The contributions of the paper are the recognition of hierarchical data structure and the application of a theoretically appealing and suitable analysis approach for multilevel data, yielding insights into intersection-related crashes by crash type.
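A hedged sketch of a binomial multilevel (random-intercept) model of this kind, assuming statsmodels' variational interface BinomialBayesMixedGLM; the column names and predictors are hypothetical, not the paper's variables:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Illustrative crash-level data: crashes nested within intersections.
rng = np.random.default_rng(0)
n = 548
df = pd.DataFrame({
    "angle": rng.integers(0, 2, n),       # 1 if an angle crash (hypothetical)
    "log_aadt": rng.normal(8.5, 0.6, n),  # log traffic volume (hypothetical)
    "skew": rng.normal(0, 10, n),         # intersection skew angle (hypothetical)
    "site": rng.integers(0, 91, n),       # intersection identifier
})

# Binomial multilevel model: fixed effects plus a random intercept per site
model = BinomialBayesMixedGLM.from_formula(
    "angle ~ log_aadt + skew", {"site": "0 + C(site)"}, df)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```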

9.
This article describes the novel stochastic modeling tool OpenSESAME, which allows a quantitative evaluation of fault-tolerant high-availability systems. The input models are traditional reliability block diagrams (RBDs), which can be enriched with inter-component dependencies such as failure propagation, failures with a common cause, different redundancy types, and non-dedicated repair. OpenSESAME offers a novel set of graphical diagrams for specifying these dependencies. Because of the dependencies, traditional solution methods for RBDs cannot be applied to OpenSESAME models. We therefore present a novel evaluation method based on the automatic generation of several state-based models that are semantically equivalent to the high-level input model: either stochastic Petri nets or textual models based on a stochastic process algebra can be generated. The state-based models are then analyzed using existing solvers for these types of models. Three case studies exemplify the modeling power and usability of OpenSESAME.

10.
Deterministic simulation is a popular tool for numerically solving complex mathematical models in engineering applications. These models often involve numerical parameters that can be calibrated when real-life observations are available. This paper presents a systematic approach to parameter calibration using Response Surface Methodology (RSM). Additional modeling that accounts for correlation in the error structure is suggested, to compensate for the inadequacy of the computer model and to improve prediction at untried points. A Computational Fluid Dynamics (CFD) model for manure-storage ventilation is used for illustration. A simulation study shows that, in comparison with likelihood-based parameter calibration, the proposed method performs better in both the accuracy and the consistency of the calibrated parameter value. The results of a sensitivity analysis lead to a guideline for setting the factorial distance in relation to the initial parameter values. The proposed calibration method extends RSM beyond its conventional use in process-yield improvement and can be applied widely to calibrate other types of models when real-life observations are available. Moreover, the proposed inadequacy modeling is useful for improving the accuracy of simulation output, especially when a computer model is too expensive to run at its finest level of detail. Copyright © 2011 John Wiley and Sons Ltd.
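A minimal sketch of the first step, fitting a full second-order response surface to simulator runs by ordinary least squares; calibration would then proceed by minimizing the discrepancy between this fitted surface and the real-life observations (the correlated-error inadequacy term is omitted here):

```python
import numpy as np

def fit_second_order_rsm(X, y):
    """OLS fit of a full quadratic response surface:
    y ~ b0 + sum b_i x_i + sum b_ii x_i^2 + sum b_ij x_i x_j.
    """
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    A = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

# Toy simulator runs at a small factorial design around an initial parameter
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
y = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.8 * X[:, 0] ** 2
print(fit_second_order_rsm(X, y))
```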

11.
The effects of assigning inaccurate reference lifetimes in lifetime determinations are predicted theoretically using standard equations. This theory leads to a method for removing reference-error effects with common least-squares software. The method cannot, however, be used to deconvolute data collected with isochronal references. Uncorrected data can always be fitted exactly by models containing one more degree of freedom than the true model: monoexponential decays are fit by double-exponential decays or excited-state processes, and unimodal distributed decays often appear as discrete double-exponential decays.
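The extra-degree-of-freedom model family is easy to work with numerically. Below, a double-exponential decay, one degree of freedom beyond monoexponential, is fitted by nonlinear least squares to an illustrative decay (not the article's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Double-exponential decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 50, 400)
rng = np.random.default_rng(0)
signal = (0.7 * np.exp(-t / 4.0) + 0.3 * np.exp(-t / 20.0)
          + rng.normal(0, 0.003, t.size))

popt, pcov = curve_fit(biexp, t, signal, p0=[0.5, 3.0, 0.5, 15.0])
print("recovered amplitudes and lifetimes:", popt)
```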

12.
Optimal design applications are often modeled using categorical variables to express discrete design decisions, such as material types. A disadvantage of categorical variables is the lack of continuous relaxations, which precludes the use of modern integer programming techniques. We show how to express categorical variables with standard integer modeling techniques, and we illustrate this approach on a load-bearing thermal insulation system. The system consists of a number of insulators of different materials, and of intercepts, that together minimize the heat flow from a hot surface to a cold surface. Our new model allows us to employ black-box modeling languages and solvers, and it illustrates the interplay between integer and nonlinear modeling techniques. We present numerical experience that illustrates the advantages of the standard integer model.
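A toy instance of the standard integer encoding, assuming scipy ≥ 1.9 for the milp interface: each layer's categorical material choice becomes a one-hot row of binaries; the heat and load numbers and the linearised objective are illustrative, not the paper's thermal model:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Pick one material per layer (categorical decision) so total 'heat flow'
# is minimal while total 'load capacity' meets a target.
n_layers, n_materials = 3, 4
heat = np.array([[4., 2., 3., 5.]] * n_layers).ravel()  # objective coefficients
load = np.array([[1., 3., 2., 1.]] * n_layers).ravel()

# One-hot encoding: sum over materials of x[l, m] == 1 for each layer l
A_choice = np.kron(np.eye(n_layers), np.ones(n_materials))
constraints = [
    LinearConstraint(A_choice, 1, 1),   # exactly one material per layer
    LinearConstraint(load, 6, np.inf),  # minimum total load capacity
]
res = milp(c=heat, constraints=constraints,
           integrality=np.ones(heat.size), bounds=Bounds(0, 1))
print(res.x.reshape(n_layers, n_materials))
```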

13.
Quality-improvement efforts often make use of mathematical models that describe the relationships between quality characteristics and process factors. Such models typically come from a variety of sources: experiments, theory, on-line data analysis, expertise, and other process documents. These sources of knowledge are often distinct and separate, yielding models with slightly different predictions and with different precision and validity. In this paper we explore alternatives in which the different mathematical models are integrated into a single prediction that takes into account both model validity and model variability. Some guidelines for establishing and quantifying model validity are presented. The approach is demonstrated within the context of predicting surface finish in a machining process.
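One simple way to realize such an integration, not necessarily the paper's specific scheme, is to weight each model's prediction by its validity divided by its variance:

```python
import numpy as np

def combine_predictions(preds, variances, validities):
    """Combine predictions from several models into one.

    Weight of each model ~ validity / variance, normalised; a simple
    instance of integrating models of differing precision and validity.
    """
    preds = np.asarray(preds, dtype=float)
    w = np.asarray(validities) / np.asarray(variances)
    w = w / w.sum()
    combined = np.dot(w, preds)
    combined_var = np.dot(w**2, variances)  # assumes independent model errors
    return combined, combined_var

# Surface-finish predictions (um) from an experiment-based model,
# a theoretical model, and on-line data analysis (illustrative numbers)
print(combine_predictions([1.8, 2.1, 1.9], [0.04, 0.09, 0.02], [0.9, 0.6, 0.8]))
```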

14.
Quality Engineering, 2007, 19(4): 311–325
In modern manufacturing processes, massive amounts of multivariate data are routinely collected through automated in-process sensing. These data often exhibit high correlation, rank deficiency, a low signal-to-noise ratio, and missing values. Conventional univariate and multivariate statistical process control techniques are not suitable in these environments. This article discusses these issues and advocates the use of multivariate statistical process control based on principal component analysis (MSPC-PCA) as an efficient statistical tool for process understanding, monitoring, and diagnosing assignable causes of special events in these contexts. Data from an autobody assembly process are used to illustrate the practical benefits of using MSPC-PCA rather than conventional SPC in manufacturing processes.
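A compact sketch of the two standard MSPC-PCA monitoring statistics, Hotelling's T² on the scores and the squared prediction error (SPE, or Q) on the residuals, built with scikit-learn; control limits and missing-value handling are omitted:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def pca_monitoring_stats(X_train, X_new, n_components=3):
    """Hotelling's T^2 and SPE (Q) statistics for MSPC-PCA monitoring."""
    scaler = StandardScaler().fit(X_train)
    pca = PCA(n_components=n_components).fit(scaler.transform(X_train))

    Z = scaler.transform(X_new)
    scores = pca.transform(Z)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)  # Hotelling T^2
    resid = Z - pca.inverse_transform(scores)
    spe = np.sum(resid**2, axis=1)                            # squared prediction error
    return t2, spe

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 8))  # correlated data
t2, spe = pca_monitoring_stats(X[:80], X[80:])
```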

15.
The process of electrodeposition can be described in terms of a reaction–diffusion partial differential equation (PDE) system that models the dynamics of the morphology profile and the chemical composition. Here we fit such a model to the different patterns present in a range of electrodeposited and electrochemically modified alloys using PDE-constrained optimization. Experiments with simulated data show how the parameter space of the model can be divided into zones corresponding to the different physical patterns by examining the structure of an appropriate cost function. We then use real data to demonstrate how numerical optimization of the cost function allows the model to fit the rich variety of patterns arising in experiments. The computational technique developed provides a potential tool for tuning experimental parameters to produce desired patterns.

16.
Second-order experimental designs are employed when an experimenter wishes to fit a second-order model that accounts for response curvature over the region of interest. Partition designs are utilized when the output quality or performance characteristics of a product depend not only on the factors of the current process but also on factors from preceding processes. Standard experimental design methods are often difficult to apply to several sequential processes. We present an approach to building second-order response models for sequential processes with several design factors and multiple responses. The proposed design expands current experimental designs to incorporate two processes into one partitioned design. Potential advantages include a reduction in the time required to execute the experiment, a decrease in the number of experimental runs, and improved understanding of the process variables and their influence on the responses. Copyright © 2002 John Wiley & Sons, Ltd.

17.
In this study, various probabilistic models were considered to support fatigue-strength design guidance in the ultra-high-cycle regime (beyond 10⁸ cycles), with particular application to Ti-6Al-4V, a titanium alloy common in aerospace applications. The random fatigue limit model of Pascual and Meeker and two proposed simplified models (bilinear and hyperbolic) used maximum likelihood estimation to fit probabilistic stress-life curves to experimental data. The bilinear and hyperbolic models provided a good fit to large-sample experimental data for dual-phase Ti-6Al-4V and were then applied to a small-sample data set for a beta-annealed variant of this alloy, providing an initial probabilistic estimate of beta-annealed Ti-6Al-4V fatigue strength in the gigacycle regime. The bilinear and hyperbolic models are recommended for estimating probabilistic fatigue-strength parameters in support of very-high-cycle design criteria for metals with clearly defined fatigue limits and fairly constant scatter in fatigue strength.
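A sketch of the maximum-likelihood fit of a simplified bilinear stress-life curve with constant lognormal scatter; the continuous parameterization (flat beyond the knee, i.e. a fatigue limit) and the data are illustrative stand-ins for the paper's models:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def bilinear_sn_nll(params, logN, logS):
    """Negative log-likelihood of a bilinear stress-life curve:
    median log-strength falls linearly with log-life up to a knee,
    then stays flat (fatigue limit), with constant scatter sigma.
    """
    b0, b1, knee, sigma = params
    sigma = abs(sigma)  # keep the scale positive for the optimizer
    mu = b0 + b1 * np.minimum(logN, knee)
    return -np.sum(norm.logpdf(logS, loc=mu, scale=sigma))

# Illustrative data: log10 cycles and log10 stress amplitude
logN = np.array([5.0, 5.5, 6.0, 6.5, 7.0, 8.0, 8.5, 9.0])
logS = np.array([2.90, 2.80, 2.70, 2.62, 2.55, 2.50, 2.50, 2.49])
res = minimize(bilinear_sn_nll, x0=[3.5, -0.12, 7.0, 0.05],
               args=(logN, logS), method='Nelder-Mead')
print("MLE parameters (b0, b1, knee, sigma):", res.x)
```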

18.
Thermomechanically processed steels combine excellent mechanical properties with better-than-good weldability. This combination makes them attractive for many types of industrial applications. When welded joints are created, a specified amount of heat is introduced into the welding area and a so-called heat-affected zone (HAZ) is formed. The key issue is to reduce the width of the HAZ, because the properties of the material in the HAZ are worse than those of the base material. In this paper, thermographic measurements of HAZ temperatures are presented as a potential tool for quality assurance of the welding process, in terms of both monitoring and control. The main problem solved was precise temperature measurement under the varying emissivity that occurs during a welding thermal cycle. A model of the emissivity changes was elaborated and successfully applied. Additionally, the material in the HAZ was tested to reveal its properties and to connect changes in those properties with the heating parameters. The results obtained prove that correctly modeled emissivity allows temperature measurement that is a valuable tool for welding-process monitoring.
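As an illustration of why a correct emissivity model matters, here is a textbook total-radiation correction (not the paper's specific emissivity model): the camera's blackbody-equivalent reading is converted to a true surface temperature for several emissivity values:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def true_temperature(T_apparent, eps, T_ambient=293.15):
    """Emissivity correction for a total-radiation measurement.

    The camera reports T_apparent assuming a blackbody; the surface emits
    eps*sigma*T^4 and reflects (1 - eps) of the ambient radiation. This is
    a textbook radiometric balance, not the paper's emissivity model.
    """
    L_meas = SIGMA * T_apparent**4
    L_surf = (L_meas - (1.0 - eps) * SIGMA * T_ambient**4) / eps
    return (L_surf / SIGMA) ** 0.25

# Emissivity varying over the welding thermal cycle (illustrative values)
for eps in [0.9, 0.7, 0.5]:
    print(eps, true_temperature(1100.0, eps))
```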

19.
This study investigated the forecasting of the growth trend in renewable energy consumption in China. Only 22 samples were available, because renewable energy is an emerging technology. Because the historical data on renewable energy were limited in sample size and not normally distributed, forecasting methods designed for large amounts of data were unsuitable. Grey system theory applies to systems with incomplete information, unclear behavioural patterns, and unclear operating mechanisms; it can be used to perform comprehensive analyses, observe developments and changes in systems, and conduct long-term forecasts. Its most prominent feature is that as few as four data points suffice to establish a model, and no stringent assumptions about the distribution of the sample population are required. To address the limitations of previous studies on grey forecasting and to enhance forecasting accuracy, this study adopted the grey model GM(1, 1) and the nonlinear grey Bernoulli model NGBM(1, 1) for theoretical derivation and verification. The two models were then compared with a regression-analysis model in terms of predictive accuracy and goodness of fit. According to the mean absolute error, mean square error, and mean absolute percentage error, NGBM(1, 1) produced the most accurate forecasts, followed by GM(1, 1) and the regression model. The results indicated that the modified NGBM(1, 1) grey forecasting model demonstrated superior predictive ability among the compared models.
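A compact implementation of the standard GM(1, 1) construction (accumulated generating operation, background values, least-squares estimation of the development coefficient and grey input), run on illustrative data rather than the study's consumption series:

```python
import numpy as np

def gm11_forecast(x, n_ahead=3):
    """Grey model GM(1,1): fit on series x and forecast n_ahead steps.

    Needs as few as four observations; follows the standard AGO /
    background-value / least-squares construction.
    """
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                         # accumulated generating operation
    z = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z, np.ones(len(z))])
    (a, b), *_ = np.linalg.lstsq(B, x[1:], rcond=None)

    k = np.arange(len(x) + n_ahead)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    x_hat = np.diff(x1_hat, prepend=x1_hat[0])  # inverse AGO
    x_hat[0] = x[0]
    return x_hat[len(x):]                     # the n_ahead forecasts

print(gm11_forecast([26.5, 28.1, 30.0, 32.3, 34.9], n_ahead=3))
```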

20.
In many engineering optimization problems the number of function evaluations is severely limited by the computational cost of a single high-fidelity numerical simulation, so using a classic optimization algorithm, such as a derivative-based or evolutionary algorithm, directly on the computational model is not suitable. A common approach to addressing this challenge is black-box surrogate modelling. The most popular surrogate-based optimization algorithm is efficient global optimization (EGO), an iterative sampling algorithm that adds one (or several) point(s) per iteration, usually driven by an infill sampling criterion called expected improvement, which represents a trade-off between promising and uncertain areas. Many studies have shown the efficiency of EGO, particularly when the number of input variables is relatively low; its performance on high-dimensional problems remains poor, however, because the Kriging models used are time-consuming to build. To deal with this issue, this article introduces a surrogate-based optimization method suited to high-dimensional problems. The method first uses the 'locating the regional extreme' criterion, which combines minimizing the surrogate model with maximizing the expected improvement criterion. It then replaces the Kriging models with KPLS(+K) models (Kriging combined with the partial least squares method), which are better suited to high-dimensional problems. Finally, the proposed approach is validated by comparison with alternative methods from the literature on analytical functions and on 12-dimensional and 50-dimensional instances of the benchmark automotive problem 'MOPTA08'.
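For reference, the expected-improvement criterion itself is short to compute given the surrogate's predictive mean and standard deviation (minimization convention); the points below are illustrative:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected improvement for minimisation at candidate points where the
    surrogate predicts mean mu and standard deviation sigma.
    """
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Trade-off between promising (low mu) and uncertain (high sigma) points
mu = np.array([0.2, 0.5, 1.0])
sigma = np.array([0.05, 0.4, 1.5])
print(expected_improvement(mu, sigma, f_best=0.3))
```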
