Similar Documents
20 similar documents were retrieved.
1.
Statistical results of all 1621 failure analyses, covering a four-year effort, are presented. The failure analyses were part of an elaborate test and ESS process of a high-reliability system. Of the failure analyses, 97.5% identified the root cause. The classification of the failures into 11 root cause classes, including the test and ESS process itself, is provided. The distribution of failure root causes detected during the various phases of the process is shown. Each test phase was analyzed according to the share of failures it found and the fraction of failures that occurred during that phase. The performances of the various phases are compared by applying a linear formula that summarizes the relative efficacy of each test phase. The term efficacy is used to describe the ability of a test phase to reveal failures without generating new problems. Copyright © 2002 John Wiley & Sons, Ltd.

2.
Increasing prevalence of human–robot systems in a variety of applications raises the question of how to design these systems to best leverage the capabilities of humans and robots. In this paper, we address the relationships between reliability, productivity, and risk to humans for human–robot systems operating in a hostile environment. Objectives for maximizing the effectiveness of a human–robot system are presented, which capture these coupled relationships, and reliability parameters are proposed to characterize unplanned interventions between a human and a robot. The reliability metrics defined here take on an expanded meaning in which the underlying concept of failure in traditional reliability analysis is replaced by the notion of intervention. In the context of human–robot systems, an intervention is not driven only by component failures but includes many other factors that can lead a robotic agent to request, or a human agent to provide, an intervention, as we argue in this paper. The effect of unplanned interventions on the effectiveness of human–robot systems is then investigated analytically using traditional reliability analysis. Finally, we discuss the implications of these analytical trends for the design and evaluation of human–robot systems.

3.
This paper discusses two distinct benefits of combining individual reliability stress tests into a single test for product reliability qualification. First, a combined test makes better use of the test equipment, personnel, and product samples required for the qualification, implying a cost benefit. Secondly, a combined test allows interaction between the stress sources to act on all failure causes. These benefits were realized in a power supply application. Each individual test was designed to address specific causes that are assumed to be totally independent of other failures. A combined test can verify or deny this independence and therefore provides a stronger practical test. Hence, this paper demonstrates that a combined reliability test provides more efficient testing with solid engineering coverage of product performance. In addition, an approach is proposed for combining tests that fixes the acceleration rates between stress tests at the same ratio as under the product's nominal operating environmental conditions.

4.
Today's military operating environments are increasingly operationally diverse and technically challenging. Fielding relevant weapons systems to meet the demands of this environment is increasingly difficult, prompting policy shifts that mandate a focus on systems capable of combating a wide range of threats. The capabilities-based test and evaluation construct is the Department of the Navy's effort to concentrate on integrated system design with the objective of satisfying a particular operational response (capability) under a robust range of operating conditions. One aspect of capabilities-based test and evaluation is the increased employment of advanced mathematical and statistical techniques in the test and evaluation (T&E) process. This case study illustrates the advantages of incorporating these techniques, such as design of experiments and modeling and simulation, within the T&E process. We found through statistical analysis that applying design-of-experiments concepts to the System Under Test throughout three primary phases of T&E quantifiably improved the accomplishment of the selected response variable of interest. Copyright © 2012 John Wiley & Sons, Ltd.

5.
We propose an integrated methodology for the reliability and dynamic performance analysis of fault-tolerant systems. This methodology uses a behavioral model of the system dynamics, similar to the ones used by control engineers to design the control system, but also incorporates artifacts to model the failure behavior of each component. These artifacts include component failure modes (and associated failure rates) and how those failure modes affect the dynamic behavior of the component. The methodology bases the system evaluation on the analysis of the dynamics of the different configurations the system can reach after component failures occur. For each possible system configuration, a performance evaluation of its dynamic behavior is carried out to check whether its properties, e.g., accuracy, overshoot, or settling time, which are called performance metrics, meet the system requirements. Markov chains are used to model the stochastic process associated with the different configurations that a system can adopt when failures occur. This methodology not only provides an integrated framework for evaluating the dynamic performance and reliability of fault-tolerant systems, but also a method for guiding the system design process and further optimization. To illustrate the methodology, we present a case study of a lateral-directional flight control system for a fighter aircraft.
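A minimal sketch of the idea of weighting per-configuration performance checks by configuration probabilities from a Markov model. The configuration set, failure rates, and settling-time requirement below are hypothetical, not the paper's flight-control case study.

```python
import math

# Hypothetical configurations reached after successive actuator failures
# (rates per flight hour, settling times in seconds).
configs = [
    {"name": "nominal",               "settling_time": 1.2},
    {"name": "one actuator failed",   "settling_time": 2.8},
    {"name": "two actuators failed",  "settling_time": float("inf")},  # uncontrollable
]
lam1, lam2 = 1e-3, 2e-3      # transition rates: nominal -> degraded -> failed
REQUIREMENT = 3.0            # settling-time requirement [s]

def state_probabilities(t):
    """Occupancy probabilities of the three configurations at time t
    for the pure-death Markov chain nominal -> degraded -> failed."""
    p0 = math.exp(-lam1 * t)
    p1 = lam1 * (math.exp(-lam1 * t) - math.exp(-lam2 * t)) / (lam2 - lam1)
    return [p0, p1, 1.0 - p0 - p1]

def prob_requirement_met(t):
    """Probability that the configuration occupied at time t still meets
    the settling-time requirement (reliability weighted by performance)."""
    return sum(p for p, c in zip(state_probabilities(t), configs)
               if c["settling_time"] <= REQUIREMENT)

for t in (100.0, 500.0, 1000.0):
    print(f"t = {t:6.0f} h : P(performance requirement met) = {prob_requirement_met(t):.4f}")
```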

6.
Optimisation of the product infant failure rate is the most important and difficult task for continuous improvement in manufacturing; how to model the infant failure rate of a complex electromechanical product promptly and accurately during manufacturing remains a dilemma for manufacturers. Traditional methods of reliability analysis for the produced product usually rely on limited test data or field failures, so the valuable information about quality variations in the manufacturing process has not been fully utilised. In this paper, a multilayered model structured as ‘part-level, component-level, system-level’ is presented to model reliability in the form of the infant failure rate by quantifying holistic quality variations in the manufacturing process for electromechanical products. The mechanism through which the multilayered quality variations affect the infant failure rate is modelled analytically with a positive correlation structure. Furthermore, an integrated failure rate index is derived to model the reliability of an electromechanical product in manufacturing by synthetically incorporating overall quality variations with a Weibull distribution. A case study on a control board suffering from infant failures in batch production is performed. The results show that the proposed approach is effective in assessing the infant failure rate and in diagnosing the effectiveness of quality control in manufacturing.

7.
A reliability model is presented that may serve as a tool for identifying cost-effective configurations and operating philosophies of computer-based process safety systems. The main merit of the model is the explicit relationship in the mathematical formulas between failure causes and the means used to improve system reliability, such as self-test, redundancy, preventive maintenance, and corrective maintenance. A component failure taxonomy has been developed that allows the analyst to treat hardware failures, human failures, and software failures of automatic systems in an integrated manner. Furthermore, the taxonomy distinguishes between failures due to excessive environmental stresses and failures initiated by humans during engineering and operation. Attention has been given to developing a transparent model that provides predictions in good agreement with observed system performance and that can be applied by non-experts in the field of reliability.

8.
In many real systems, component failures, control failures, and human interventions interact with the evolution of the physical process in such a way that a simple reliability analysis, decoupled from the process dynamics, is very difficult or even impossible. In the last ten years, many dynamic reliability approaches have been proposed to properly assess the reliability of such systems characterized by dynamic interactions. The DYLAM methodology, now implemented in its latest version, DYLAM-3, offers a powerful tool for integrating deterministic and failure events. This paper describes the main features of the DYLAM-3 code with reference to the classic fault-tree and event-tree techniques. Some aspects connected to the practical problems underlying dynamic event trees are also discussed. A simple system, already analyzed with other dynamic methods, is used as a reference for the numerical applications. The same system is also studied with a time-dependent fault-tree approach in order to contrast some features of dynamic methods with classical techniques. Examples including stochastic failures (with and without repair), failures on demand, and time-dependent failure rates give an extensive overview of the capabilities of DYLAM-3.

9.
Current reliability-based approaches to structural design are typically element based: they commonly include uncertainties in the structural resistance, applied loads, and geometric parameters, and in some cases in the idealized structural model. Nevertheless, the true measure of safety is the structural system reliability, which must consider multiple failure paths, load sharing, and load redistribution after member failures, and is beyond the domain of element reliability analysis. Identification of system failure is often subjective, and a crisp definition of system failure arises naturally only in a few idealized instances. Equally important, we analyse the multi-girder steel highway bridge as a k-out-of-n active parallel system. System failure is defined as gross inelastic deformation of the bridge deck; the subjectivity in the failure criterion is accounted for by generalizing k as a random variable. Randomness in k arises from the non-unique relation between the number of failed girders and the maximum deflection, and from randomness in the definition of the failure deflection. We show how uncertain failure criteria and structural system analyses can be decoupled. Randomness in the transverse location of trucks is considered, and an elastic, perfectly plastic material response is assumed. The role of the system factor that modifies the element-reliability-based design equation to achieve a target system reliability is also demonstrated.
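A minimal sketch of the decoupling idea: the system failure probability is obtained by conditioning on the uncertain failure criterion k, so the structural-system analysis (the conditional "at least k girders fail" probabilities) is computed separately from the distribution of k. The girder count, girder failure probability, and distribution of k below are hypothetical, and independent identical girder failures are assumed, unlike the load-redistribution analysis in the paper.

```python
from math import comb

n = 5            # number of girders (hypothetical)
p_fail = 0.02    # failure probability of a single girder (hypothetical, i.i.d.)

# Uncertain failure criterion: the system "fails" if at least K girders fail,
# with K random to reflect the subjective definition of gross deformation.
k_dist = {3: 0.2, 4: 0.5, 5: 0.3}   # hypothetical pmf of K

def prob_at_least(m, n, p):
    """P(at least m of n independent components fail)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(m, n + 1))

# Decoupling: condition on K, then average over its distribution.
p_sys = sum(w * prob_at_least(k, n, p_fail) for k, w in k_dist.items())
print(f"system failure probability (uncertain criterion): {p_sys:.3e}")
print(f"fixed-criterion reference (k = 4):                {prob_at_least(4, n, p_fail):.3e}")
```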

10.
This paper summarizes the results of a failure analysis investigation of a fractured main support bridge from an army helicopter. The part, manufactured by “Contractor IT,” failed component fatigue testing while those of the original equipment manufacturer (OEM) passed. Even though the same technical data package was used by both manufacturers and no material discrepancies were found, a great disparity existed in the fatigue test data. This has been a recurring problem within the Army, and the intent of this paper is to provide some insight into the technical reasons why this can occur. Emphasis is placed on the effects of manufacturing processes on fatigue. Other failure analyses are discussed in relation to this topic. Objective: to perform a metallurgical examination comparing components fabricated by “Contractor IT” to those of the OEM, with the intent of explaining the disparity in fatigue life.

11.
Optimization of system reliability in the presence of common cause failures
The redundancy allocation problem is formulated with the objective of maximizing system reliability in the presence of common cause failures. Such failures can be described as events that lead to the simultaneous failure of multiple components due to a common cause. When common cause failures are considered, component failure times are not independent. This new problem formulation offers several distinct benefits compared to traditional formulations of the redundancy allocation problem. For some systems, recognition of common cause failure events is critical so that the overall system reliability estimate and the associated design realistically reflect the true system reliability behavior. Since common cause failure events may vary from one system to another, three different interpretations of the reliability estimation problem are presented. This is the first time that mixing of components, together with the inclusion of common cause failure events, has been addressed in the redundancy allocation problem. Three non-linear optimization models are presented, and solutions to the three problem types are obtained. They support the position that consideration of common cause failures leads to different, and preferred, “optimal” design strategies.
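A minimal sketch of why common cause failures change the preferred level of redundancy, using the standard beta-factor idea (a fraction beta of each component's failure rate acts as a shared common cause). The failure rate, beta, mission time, and component-count range are hypothetical, and this is not the paper's formulation, which develops three non-linear redundancy allocation models with component mixing.

```python
import math

LAMBDA = 1e-4   # total component failure rate per hour (hypothetical)
BETA = 0.05     # fraction of the rate attributed to a common cause (hypothetical)
T = 10_000.0    # mission time in hours (hypothetical)

def subsystem_reliability(n, beta=BETA):
    """Reliability of n redundant components under the beta-factor model:
    the independent part fails per component, the common-cause part fails all at once."""
    lam_ind = (1 - beta) * LAMBDA
    lam_ccf = beta * LAMBDA
    r_ind = 1 - (1 - math.exp(-lam_ind * T)) ** n   # at least one survives independently
    r_ccf = math.exp(-lam_ccf * T)                  # no common-cause event occurs
    return r_ind * r_ccf

for n in range(1, 6):
    print(f"n = {n}: independent-failures model R = {subsystem_reliability(n, beta=0.0):.6f}, "
          f"with common cause failures R = {subsystem_reliability(n):.6f}")
```

With the common-cause term included, adding redundancy shows diminishing returns, which is why the "optimal" allocation can differ from the one obtained under the independence assumption.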

12.
G P SRIVASTAVA, Sadhana, 2013, 38(5): 897-924
This paper presents an overview of state-of-the-art developments in electronics for the nuclear power programme of India. Indigenous activities in instrumentation and control (I&C) in the areas of detector development, nuclear instrumentation, monitoring and control electronics, and special sensors paved the way to self-reliance in the nuclear industry. Notable among the recent I&C systems developed for the 540 MWe reactors are the Liquid Zone Control System (LZCS), the flux mapping system, and the advanced reactor regulating system. In a nuclear plant, apart from ensuring functional requirements, the design of electronics needs to meet high standards of reliability, safety, and security. Therefore, much importance is attached to activities such as design review, testing, operation, maintenance, and qualification of I&C systems. The induction of computer-based I&C systems mandated a rigorous verification process commensurate with the safety class of the system, as specified in the Atomic Energy Regulatory Board (AERB) safety guides. Software reliability is assured by following a strict development life cycle combined with a zero-defect policy and is verified through a verification and validation (V&V) process. The development of new data transmission techniques, with optical fibres as the transmission medium, and of wireless networks in control systems is being pursued. With the new I&C systems, efforts were made to use the same hardware and software platforms for various plant applications, i.e., standardization. Thrust was given to the use of Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs) in order to improve system reliability by reducing component count. It has become imperative to develop modern solutions such as ASICs, HMCs, Systems on Chip (SOCs), and detector-mounted electronics, and towards that end various ASICs and HMCs have been developed in-house to meet these challenges.

13.
The failures of complex systems in reliability tests always arise from different causes. However, it is difficult to evaluate the failure effect of a specific cause in the presence of other causes. Therefore, a generalized reliability analysis model that takes the multiple competing causes into account is highly needed. This paper develops a statistical reliability analysis procedure to investigate the reliability characteristics of multiple failure causes under independent competing risks. We mainly consider the case where the lifetime data follow log-location-scale distributions and may also be right-censored. Maximum likelihood (ML) estimators of the unknown parameters are derived by applying the Newton–Raphson method. Under the large-sample assumption, the normal approximation of the ML estimators is used to construct asymptotic confidence intervals, in which the standard errors are obtained from the variance-covariance matrix using the delta method. In particular, the Akaike information criterion is used to determine the appropriate fitted distribution for each cause of failure. An illustrative numerical experiment on a fuel cell engine (FCE) is presented to demonstrate the feasibility and effectiveness of the proposed model. The results can facilitate continued advancement in reliability prediction and reliability allocation for FCEs, and also provide a theoretical basis for applying these reliability concepts to many other complex systems. Copyright © 2014 John Wiley & Sons, Ltd.
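As a rough illustration of the estimation step (not the paper's FCE analysis), the sketch below fits independent Weibull lifetimes, one per failure cause, to right-censored competing-risks data by maximising the joint log-likelihood. The data, the choice of Weibull margins, and the use of a general-purpose optimiser in place of explicit Newton–Raphson iterations are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical competing-risks data: time, and cause (1, 2) or 0 if right-censored.
times = np.array([120., 340., 560., 610., 700., 850., 900., 1000., 1000., 1000.])
cause = np.array([1,    2,    1,    1,    2,    1,    2,    0,     0,     0])

def neg_log_lik(params):
    """Joint negative log-likelihood for two independent Weibull causes.
    params = log(shape_1), log(scale_1), log(shape_2), log(scale_2)."""
    b1, e1, b2, e2 = np.exp(params)          # positivity via log-parameterisation
    shapes, scales = (b1, b2), (e1, e2)
    ll = 0.0
    for t, c in zip(times, cause):
        # Every unit contributes the survival of both risks ...
        ll += sum(-(t / e) ** b for b, e in zip(shapes, scales))
        # ... and a failing unit contributes the log-hazard of its own cause.
        if c > 0:
            b, e = shapes[c - 1], scales[c - 1]
            ll += np.log(b / e) + (b - 1) * np.log(t / e)
    return -ll

fit = minimize(neg_log_lik, x0=np.log([1.0, 800.0, 1.0, 800.0]), method="Nelder-Mead")
b1, e1, b2, e2 = np.exp(fit.x)
print(f"cause 1: shape = {b1:.2f}, scale = {e1:.0f}; cause 2: shape = {b2:.2f}, scale = {e2:.0f}")
```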

14.
In this paper, we present a continuous-time Markov process-based model for evaluating time-dependent reliability indices of multi-state degraded systems, particularly for automotive subsystems and components subject to minimal repairs and negative repair effects. The minimal repair policy, which restores the system to an “as bad as old” functioning state, i.e., the state just before failure, is widely used for automotive system repair because of its low maintenance cost. The current study differs from others in that negative repair effects, such as unpredictable human error during repair work and negative effects caused by propagated failures, are considered in the model. Negative repair effects may transfer the system to a degraded operational state that is worse than before, owing to an imperfect repair. Additionally, the special condition that a system under repair may be transferred directly to a complete failure state is also considered. Using the continuous-time Markov process approach, we obtain general solutions for the time-dependent probabilities of each system state. Moreover, we also provide expressions for several reliability measures, including availability, unavailability, reliability, mean lifetime, and mean time to first failure. An illustrative numerical example of the reliability assessment of an electric car battery system is provided. Finally, we use the proposed multi-state system model to model a vehicle sub-frame fatigue degradation process. The proposed model can be applied to many practical systems, especially systems designed with a finite service life.
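A minimal sketch of the continuous-time Markov machinery behind such a model: state probabilities at time t follow P(t) = P(0)·exp(Qt) for a generator matrix Q. The four-state chain below (good, degraded, under repair, failed), its rates, and the use of SciPy's matrix exponential are illustrative assumptions, not the paper's battery-system model.

```python
import numpy as np
from scipy.linalg import expm

# States: 0 = as-good-as-old, 1 = degraded, 2 = under minimal repair, 3 = failed.
# Hypothetical rates per 1000 h: degradation, failure, repair completion, a
# "negative repair effect" that leaves the system more degraded than before,
# and a direct transition from repair to complete failure.
Q = np.array([
    [-0.20,  0.20,  0.00,  0.00],   # good -> degraded
    [ 0.00, -0.50,  0.40,  0.10],   # degraded -> repair, degraded -> failed
    [ 0.30,  0.55, -0.90,  0.05],   # repair -> good / worse-than-before / failed
    [ 0.00,  0.00,  0.00,  0.00],   # failed is absorbing in this sketch
])

p0 = np.array([1.0, 0.0, 0.0, 0.0])       # start in the good state

for t in (1.0, 5.0, 10.0):                # time in units of 1000 h
    p = p0 @ expm(Q * t)                  # forward Kolmogorov solution
    availability = p[0] + p[1]            # operational states: good or degraded
    print(f"t = {t:4.1f}: state probabilities = {np.round(p, 4)}, availability = {availability:.4f}")
```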

15.
New repairable systems are generally subjected to development programs in order to improve system reliability before starting mass production. This paper proposes a Bayesian approach to analyze failure data from repairable systems undergoing a Test-Find-Test program. The system failure process in each testing stage is modeled using a Power-Law Process (PLP). Information on the effect of design modifications introduced into the system before starting a new testing stage is used, together with the posterior density of the PLP parameters at the current stage, to formalize the prior density at the beginning of the new stage. Contrary to the usual assumption, in this paper the PLP parameters are assumed to be dependent random variables. The system reliability is measured in terms of the number of failures that will occur in a batch of new units in a given time interval, for example the warranty period. A numerical example is presented to illustrate the proposed procedure.
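For context, a minimal (non-Bayesian) sketch of the Power-Law Process the paper builds on: closed-form maximum-likelihood estimates for a single time-truncated test stage, plus the expected number of failures in a future interval such as a warranty period. The failure times are hypothetical, and the frequentist estimator stands in for the paper's Bayesian treatment with dependent PLP parameters.

```python
import math

# Hypothetical cumulative failure times (hours) from one test stage, time-truncated at T.
failure_times = [35., 110., 260., 480., 700., 960., 1300., 1680.]
T = 2000.0
n = len(failure_times)

# Power-Law Process intensity lambda(t) = a * b * t**(b - 1) (Crow-AMSAA form).
# Closed-form MLEs for a time-truncated test:
b_hat = n / sum(math.log(T / t) for t in failure_times)
a_hat = n / T ** b_hat

def expected_failures(t1, t2):
    """Expected number of failures in (t1, t2] under the fitted PLP."""
    return a_hat * (t2 ** b_hat - t1 ** b_hat)

print(f"shape b = {b_hat:.3f} ({'reliability growth' if b_hat < 1 else 'deterioration'})")
print(f"expected failures in a 500 h warranty interval after T: {expected_failures(T, T + 500.0):.2f}")
```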

16.
Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which in turn can lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach, and a sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before selecting a suitable model. In this paper, a framework for selecting a model to represent the failure process of a component or system is presented, based on a review of available trend tests. The paper considers only single-time-variable models and is directed primarily at analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework discriminates between the use of statistical distributions to represent the time to failure (the “renewal approach”) and the use of stochastic point processes (the “repairable systems approach”) when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
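One widely used trend test that such a framework would include is the Laplace test, a quick check of whether the failure times show a trend (which rules out the renewal assumption) before a time-to-failure distribution is fitted. The failure times below are hypothetical, and the paper's framework reviews several trend tests, not only this one.

```python
import math

# Hypothetical cumulative failure times (operating hours) of one backhoe,
# observation truncated at time T.
failure_times = [210., 480., 900., 1260., 1500., 1720., 1860., 1950.]
T = 2000.0
n = len(failure_times)

# Laplace trend statistic for a time-truncated observation window:
# U is approximately N(0, 1) under the null hypothesis of a homogeneous Poisson process.
U = (sum(failure_times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

print(f"Laplace statistic U = {U:.2f}")
if U > 1.96:
    print("Significant deteriorating trend -> use a repairable-systems (point-process) model.")
elif U < -1.96:
    print("Significant improving trend -> use a repairable-systems (point-process) model.")
else:
    print("No significant trend -> a renewal (time-to-failure distribution) model may be adequate.")
```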

17.
Software reliability growth models based on nonhomogeneous Poisson processes are widely adopted tools for describing the stochastic failure behavior and measuring the reliability growth of software systems. Faults in the systems, which eventually cause the failures, are usually connected with each other in complicated ways. Considering a group of networked faults, we propose a new model to examine the reliability of software systems and assess the model's performance on real-world data sets. Our numerical studies show that the new model, which captures networking effects among faults, fits the failure data well. We also formally study the optimal software release policy using multi-attribute utility theory (MAUT), considering both the reliability attribute and the cost attribute. We find that, if the networking effects among different layers of faults were ignored by the software testing team, the utility-maximizing release time for the software package would be much later. A sensitivity analysis is also provided.
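For reference, a minimal sketch of the classical NHPP software reliability growth baseline, the Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)), fitted to cumulative fault counts by least squares. The data are hypothetical, and this baseline ignores the fault-networking effects that the paper's model adds.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical testing data: week index and cumulative faults detected.
weeks = np.arange(1, 13, dtype=float)
cum_faults = np.array([8., 15., 21., 26., 30., 33., 36., 38., 40., 41., 42., 43.])

def goel_okumoto(t, a, b):
    """Mean value function of the Goel-Okumoto NHPP: expected faults found by time t."""
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(goel_okumoto, weeks, cum_faults, p0=(50.0, 0.1))

remaining = a_hat - goel_okumoto(weeks[-1], a_hat, b_hat)
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print(f"expected residual faults at week {int(weeks[-1])}: {remaining:.1f}")
```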

18.
This paper describes a method for estimating and forecasting reliability from attribute data, using the binomial model, when reliability requirements are very high and test data are limited. Integer data, specifically numbers of failures, are converted into non-integer data. The rationale is that when engineering corrective action for a failure is implemented, the probability of recurrence of that failure is reduced; therefore, such failures should not be carried as full failures in subsequent reliability estimates. The reduced failure value for each failure mode is the upper limit on the probability of failure, based on the number of successes observed after the engineering corrective action has been implemented. Each failure value is less than one and diminishes as test-programme successes continue. These numbers replace the integer numbers of failures in the binomial estimate. This method of reliability estimation was applied to attribute data from the life history of a previously tested system, and a reliability growth equation was fitted. It was then ‘calibrated’ against a current, similar system's ultimate reliability requirements to provide a model for reliability growth over its entire life cycle. By comparing current estimates of reliability with the expected values computed from the model, the forecast was obtained by extrapolation.
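A minimal sketch of one plausible reading of the reduced-failure idea: each corrected failure mode is carried not as a full failure but as an upper confidence bound on its recurrence probability, computed from the successes observed since the corrective action, and these fractional values replace integer failure counts in the binomial reliability estimate. The confidence level, success counts, and trial count are hypothetical.

```python
# Trials so far and the observed failure modes, each with the number of
# consecutive successes accumulated after its corrective action was fielded.
trials = 60
successes_after_fix = {"failure mode A": 25, "failure mode B": 8}
CONFIDENCE = 0.90   # confidence level for the upper bound (hypothetical)

def reduced_failure_value(s, confidence=CONFIDENCE):
    """Upper bound on the recurrence probability of a corrected failure mode,
    given s subsequent successes and no recurrences: solve (1 - p)**s = 1 - C."""
    return 1.0 - (1.0 - confidence) ** (1.0 / s)

fractional_failures = sum(reduced_failure_value(s) for s in successes_after_fix.values())

naive_reliability = 1.0 - len(successes_after_fix) / trials     # carried as full failures
adjusted_reliability = 1.0 - fractional_failures / trials       # carried as reduced values

for mode, s in successes_after_fix.items():
    print(f"{mode}: reduced failure value = {reduced_failure_value(s):.3f}")
print(f"naive estimate    R = {naive_reliability:.3f}")
print(f"adjusted estimate R = {adjusted_reliability:.3f}")
```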

19.
Repairable systems have reliability (failure) and maintainability (restoration) processes that tend to improve or deteriorate over time, depending on the life-cycle phase. External variables (covariates) can explain differences in event rates and thus provide valuable information for engineering analysis and design. In some cases, the processes may be modeled by a parametric non-homogeneous Poisson process (NHPP) with a proportional intensity function incorporating the covariates. However, the true underlying process may not be known, in which case a distribution-free or semi-parametric model may be very useful. The Prentice, Williams and Peterson (PWP) family of proportional intensity models has been proposed for application to repairable systems. This paper reports the results of a study on the robustness of one PWP reliability model over early failure history. The assessment of robustness was based on the semi-parametric PWP model's ability to predict the successive times of occurrence of events when the underlying process is actually parametric (specifically, an NHPP with a log-linear proportional intensity function and one covariate). A parametric method was also used to obtain maximum likelihood estimates of the log-linear parameters, for purposes of validation and as a reference for comparison. The PWP method provided accurate estimates of the time to the next event for NHPP log-linear processes with moderately increasing rates of occurrence of events. Potential engineering applications to repairable systems with increasing rates of event occurrence include reliability (failure) processes in the wear-out phase and maintainability (restoration) processes in the learning phase. A real example of a maintainability (restoration) process (a log-linear NHPP with two explanatory covariates) for the US Army M1A2 Abrams Main Battle Tank demonstrates the engineering relevance of the methods evaluated in this research.
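A minimal sketch of the parametric reference process used in the study: an NHPP with log-linear proportional intensity, lambda(t) = exp(alpha + beta*t + gamma*z) for a covariate z, simulated here by Lewis-Shedler thinning. The parameter values, covariate, and horizon are hypothetical, chosen to give a moderately increasing rate of occurrence of events.

```python
import math
import random

random.seed(7)

# Hypothetical log-linear proportional intensity with one covariate z.
alpha, beta, gamma = -5.0, 0.002, 0.5
z = 1.0                      # covariate value for this system (e.g. harsh environment)
T = 1000.0                   # observation horizon (hours)

def intensity(t):
    return math.exp(alpha + beta * t + gamma * z)

def simulate_nhpp(T):
    """Lewis-Shedler thinning: simulate a homogeneous process at the maximum
    intensity and accept each candidate with probability lambda(t)/lambda_max."""
    lam_max = intensity(T)           # the intensity is increasing, so its maximum is at T
    events, t = [], 0.0
    while True:
        t += random.expovariate(lam_max)
        if t > T:
            return events
        if random.random() < intensity(t) / lam_max:
            events.append(t)

events = simulate_nhpp(T)
expected = (math.exp(alpha + gamma * z) / beta) * (math.exp(beta * T) - 1.0)
print(f"simulated events: {len(events)}, expected E[N(T)] = {expected:.1f}")
print("first five event times:", [round(t, 1) for t in events[:5]])
```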

20.
This paper presents a case study that was initiated by excessive monitor failures occurring during a simulated early-life test. Statistical analysis of the failure data suggested that there should be no long-term resistor failure problem. Large increases in the resistance of some metal film resistors led to many of the monitor failures. Corrosion of the resistive film by residual chlorine from a particular resistor vendor's cleaning process was responsible. The results led to process changes to reduce the contamination, and reliability testing of the ‘new’ product showed that the process changes were successful. A statistically designed experiment indicated that the ‘old’ resistors were not all degraded in the same manner. As a result of this experiment, the failures were attributed to poor process controls.
