Similar Documents
20 similar documents found
1.
One of the most controversial techniques in the field of reliability is reliability prediction based on constant component failure-rate data for estimating system failure rates. This paper investigates a new reliability-estimation method that does not depend on constant failure rates. Many boards were selected from the Loughborough University field-reliability database; their reliability was estimated using failure-intensity-based methods and then compared with the failure intensity actually observed in the field. The predicted failure intensity closely agrees with the observed value over the majority of a system's operating lifetime. The general failure-intensity method lends itself easily to system-reliability prediction. It appears to give an estimate of system reliability throughout the operating lifetime of the equipment and avoids assumptions, such as constant failure rate, that can undermine the validity of the estimate. On present evidence, the predictions track the observed behavior well, given the uncertainties evident in the field. The failure-intensity method should be investigated further to see whether it can estimate system reliability throughout the system's lifetime and hence provide a more realistic picture of how electronic systems behave in the field.
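The abstract does not give the estimator itself. As a minimal sketch of what a failure-intensity (rate-of-occurrence-of-failures) analysis of field data might look like — a generic windowed nonparametric estimate, not the paper's method — the following computes intensity per window from pooled failure timestamps. All data, the window count, and the fleet size are hypothetical.

```python
import numpy as np

def rocof(failure_times, n_units, t_end, n_windows=10):
    """Windowed estimate of failure intensity (failures per unit-hour).

    failure_times : times (hours) of all failures observed across the fleet
    n_units       : number of boards contributing field hours
    t_end         : end of the observation period (hours)
    """
    edges = np.linspace(0.0, t_end, n_windows + 1)
    counts, _ = np.histogram(failure_times, bins=edges)
    width = edges[1] - edges[0]
    return edges[:-1], counts / (n_units * width)

# Hypothetical field data: 200 boards observed for 10,000 h, 60 failures.
rng = np.random.default_rng(1)
times = rng.uniform(0, 10_000, size=60)
t, lam = rocof(times, n_units=200, t_end=10_000)
for start, rate in zip(t, lam):
    print(f"[{start:7.0f} h) intensity = {rate:.2e} /h")
```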

2.
It has been observed that the decimal parts of failure-rate and MTTF values listed in tables tend to follow a logarithmic distribution. A possible explanation for this phenomenon is given. When such tables are generated, they should be examined to see whether the anticipated distribution is present; if it is not, a systematic error may well be present. Such testing is one practical application of the observation. As Newcomb and Benford pointed out, the decimal values in long lists of data quite often follow a logarithmic distribution. The phenomenon may be explained in several ways, depending on the nature of the data. References to their papers and to those of other authors are given. Readers may turn to tables in their own fields of interest; such tables will in all likelihood show the same regularity.
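The logarithmic law referenced here is what is now usually called Benford's law: a leading digit d occurs with probability log10(1 + 1/d). A quick way to apply the authors' suggested check to one's own failure-rate tables is sketched below; the sample data are hypothetical.

```python
import math
from collections import Counter

def leading_digit(x: float) -> int:
    s = f"{abs(x):e}"          # scientific notation, e.g. '3.200000e-06'
    return int(s[0])

# Hypothetical table of failure rates (failures/hour).
rates = [3.2e-6, 1.1e-5, 2.4e-6, 7.9e-7, 1.8e-6, 4.4e-5,
         1.3e-6, 9.1e-7, 2.2e-5, 5.6e-6, 1.7e-7, 3.9e-6]

counts = Counter(leading_digit(r) for r in rates)
n = len(rates)
print("digit  observed  Benford")
for d in range(1, 10):
    expected = math.log10(1 + 1 / d)
    print(f"{d}      {counts.get(d, 0)/n:.3f}     {expected:.3f}")
```

A real check would use a long table and a goodness-of-fit test; with a dozen values the comparison is only indicative.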

3.
A simple and efficient technique is introduced for solving the integer-programming problems that typically arise in system-reliability design. It quickly solves even very large system problems. It consists of a systematic search near the boundary of the constraints and involves functional evaluations only. It can handle system-reliability design problems of any type in which the decision variables are restricted to integer values. Several illustrative examples are given to substantiate these assertions.
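The abstract gives no pseudocode. As a hedged illustration of the kind of integer problem meant here — a plain exhaustive search over redundancy levels under a cost constraint, not the paper's boundary-search algorithm — consider a small series system with parallel redundancy; the component data are hypothetical.

```python
from itertools import product

# Hypothetical 3-subsystem series system: per-unit reliability and cost.
r = [0.90, 0.85, 0.95]
c = [4.0, 5.0, 3.0]
BUDGET = 40.0

def system_reliability(x):
    # Subsystem i has x[i] parallel units; subsystems are in series.
    rel = 1.0
    for ri, xi in zip(r, x):
        rel *= 1.0 - (1.0 - ri) ** xi
    return rel

best, best_x = -1.0, None
for x in product(range(1, 6), repeat=3):       # integer decision variables
    cost = sum(ci * xi for ci, xi in zip(c, x))
    if cost <= BUDGET and system_reliability(x) > best:
        best, best_x = system_reliability(x), x

print(f"best allocation {best_x}, reliability {best:.6f}")
```

The paper's contribution is precisely to avoid this exhaustive enumeration by searching only near the constraint boundary, which is where optimal allocations lie.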

4.
Novel concepts for reliability technology
We begin by examining the diverse connotations of the term quality. The desirable shape traced by the failure rate over the entire life of a good product, which might be called a hockey-stick line rather than a bathtub curve, is introduced. From the hockey-stick line and the definition of reliability we extract two measures, and the terms r-reliability (failure rate) and durability (product life) are explained. A conceptual analysis of failure mechanics shows that reliability technology pertains to the design area. The desirable shape of the failure-rate curve of electronic items, the hockey-stick line, clarifies that the mean time to failure (MTTF), as the inverse of the failure rate, can be regarded as nominal life. We then discuss the BX life, which differs from the MTTF, and the reliability relationships between components and the assembled product. We recommend reshaped definitions of r-reliability and durability. Finally, we clarify the procedure for improving reliability and identifying the failure mode in order to find the right solutions, and we recommend a generalized life-stress failure model.
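To make the MTTF/BX distinction concrete: for a Weibull life distribution with scale η and shape β, the BX life (the time by which X% of units have failed) is B_X = η(−ln(1 − X/100))^(1/β), while MTTF = η·Γ(1 + 1/β). A small sketch with hypothetical parameters:

```python
import math

def bx_life(eta, beta, x_percent):
    """Time by which x_percent of a Weibull population has failed."""
    return eta * (-math.log(1 - x_percent / 100.0)) ** (1.0 / beta)

def weibull_mttf(eta, beta):
    return eta * math.gamma(1.0 + 1.0 / beta)

eta, beta = 10_000.0, 1.5       # hypothetical scale (h) and shape
print(f"MTTF = {weibull_mttf(eta, beta):,.0f} h")
print(f"B10  = {bx_life(eta, beta, 10):,.0f} h")   # 10% of units failed
print(f"B1   = {bx_life(eta, beta, 1):,.0f} h")    # 1% of units failed
```

The B10 and B1 lives are far shorter than the MTTF, which is why the paper treats them as distinct measures.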

5.
This method prioritizes system-reliability prediction activities once a preliminary reliability prediction has been made. System-reliability predictions often use data and models from a variety of sources, each with a different degree of estimation uncertainty. Since time and budgetary constraints limit the extent of the analyses and testing needed to estimate component reliability, limited resources must be allocated intelligently. A reliability-prediction prioritization index (RPPI) is defined to provide a relative ranking of components based on their potential for improving the accuracy of a system-level reliability prediction by decreasing the variance of the system-reliability estimate. If a component has a high RPPI, additional testing or analysis should be considered to decrease the variance of its reliability estimate. The RPPI is based on a decomposition of the variance of the system-reliability or mean-time-to-failure estimate. Using these indices, the effects of individual components within the system can be compared, ranked, and assigned to priority groups. The ranking is based on whether a decrease in the variance of a component-reliability estimate meaningfully decreases the variance of the system-reliability estimate. The procedure is demonstrated with two examples.
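The paper's exact index is not reproduced in the abstract. A common first-order (delta-method) surrogate decomposes the system-reliability variance of a series system as Var(R̂_sys) ≈ Σ_i (∂R_sys/∂R_i)² Var(R̂_i), with ∂R_sys/∂R_i = Π_{j≠i} R_j. The sketch below ranks components by that contribution; all numbers are hypothetical.

```python
import numpy as np

# Hypothetical series system: component reliability estimates and variances.
R   = np.array([0.99, 0.95, 0.90, 0.97])
var = np.array([1e-6, 4e-4, 1e-4, 9e-4])

R_sys = np.prod(R)
sens = R_sys / R                      # dR_sys/dR_i = product of the others
contrib = sens**2 * var               # first-order variance contributions

order = np.argsort(contrib)[::-1]     # highest contribution first
print(f"system reliability estimate: {R_sys:.4f}")
for i in order:
    share = contrib[i] / contrib.sum()
    print(f"component {i}: variance contribution {contrib[i]:.2e} ({share:.0%})")
```

Components at the top of this ranking are the ones where extra testing buys the largest reduction in system-level prediction uncertainty, which is the intuition behind the RPPI.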

6.
For most complex stochastic systems, such as microelectronic systems, the mean time to failure (MTTF) is not available in analytical form, so Monte Carlo simulation (MCS) is used to estimate the MTTF for specific values of the underlying density-function parameter(s). MCS models, although simpler than the real-world system, are still a very complex way of relating input parameter(s) to the MTTF. This study develops a polynomial model to be used as an auxiliary to an MCS model. The estimated metamodel is a Taylor expansion of the MTTF function in the neighborhood of the nominal parameter value(s). Score-function methods estimate the needed sensitivities (gradient, Hessian, etc.) of the MTTF with respect to the input parameter in a single simulation run. The intended use of this metamodel is the target-setting problem in Taguchi's product-design concept: given a desired target MTTF, find the input parameter(s). A stochastic-approximation algorithm of the Robbins-Monro type uses a linear metamodel to estimate the necessary controllable input parameter to within a desired accuracy. The potential effectiveness is demonstrated by simulating a reliability system with a known analytical solution.
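As a hedged illustration of the target-setting idea (not the paper's score-function estimator), the sketch below runs a plain Robbins-Monro iteration on a deliberately simple system — an exponential unit whose MTTF equals the controllable parameter itself — so convergence to the known answer is easy to verify.

```python
import numpy as np

rng = np.random.default_rng(7)
TARGET = 500.0                       # desired MTTF (hours)

def mttf_estimate(theta, n=200):
    """Noisy MC estimate of the MTTF of an exponential unit with mean theta."""
    return rng.exponential(theta, size=n).mean()

theta = 100.0                        # controllable input, deliberately far off
for k in range(1, 201):
    a_k = 1.0 / k                    # Robbins-Monro steps: sum a_k diverges, sum a_k^2 converges
    err = mttf_estimate(theta) - TARGET
    theta -= a_k * err               # drive E[estimate] - TARGET toward zero

print(f"tuned parameter theta = {theta:.1f} (true value {TARGET})")
```

The paper's refinement is to replace each raw simulation response with a linear metamodel fitted from score-function sensitivities, which cuts the number of simulation runs needed per iteration.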

7.
In some environments, components might not fail completely but instead degrade, so the efficiency of the system may decrease; the degraded components can, however, be restored through a proper repair mechanism. In this paper, we present a model for the reliability analysis of k-out-of-n systems in which components can be in one of three states: good, degraded, and catastrophically failed. We also present expressions for the reliability and mean time to failure (MTTF) of k-out-of-n systems. Simple reliability and MTTF expressions for the triple-modular-redundant (TMR) system, together with numerical examples, are also presented.
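For identical, independent two-state components (a simplification of the paper's three-state model), the k-out-of-n reliability is the binomial tail R = Σ_{i=k}^{n} C(n,i)·p^i·(1−p)^{n−i}, and TMR is the 2-out-of-3 special case R = 3p² − 2p³. A minimal sketch:

```python
from math import comb

def k_out_of_n(k: int, n: int, p: float) -> float:
    """Reliability of a k-out-of-n:G system of i.i.d. two-state components."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.95
tmr = k_out_of_n(2, 3, p)                 # triple-modular redundancy
print(f"single unit      : {p:.4f}")
print(f"TMR (2-of-3)     : {tmr:.4f}")
print(f"closed form check: {3*p**2 - 2*p**3:.4f}")
print(f"3-of-5 system    : {k_out_of_n(3, 5, p):.4f}")
```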

8.
Mis-Specification Analysis of Linear Degradation Models
Degradation models are widely used to assess the lifetime of highly reliable products when there exist quality characteristics whose degradation over time can be related to reliability. The performance of a degradation model depends strongly on how appropriately the model describes the product's degradation path. In this paper, motivated by laser data, we propose a general linear degradation path in which the unit-to-unit variation of all test units can be considered simultaneously with the time-dependent structure in the degradation paths. Based on the proposed degradation model, we first derive an implicit expression for the product's lifetime distribution and its corresponding mean time to failure (MTTF). Using the profile-likelihood approach, maximum-likelihood estimates of the parameters, the product's MTTF, and their confidence intervals can be obtained easily. Laser degradation data are used to illustrate the proposed procedure. We also address the effects of model mis-specification on the prediction of the product's MTTF: the effects are not critical when the sample is large, but when the sample size and the termination time are not large enough, a simulation study shows that they are not negligible.
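As a hedged sketch of the basic idea — a simple random-slope linear path, far less general than the paper's model — each unit degrades as y(t) = β0 + β1·t with a random slope, fails when y crosses a threshold D, so its lifetime is T = (D − β0)/β1, and the MTTF can be estimated by simulation. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

D, beta0 = 10.0, 0.0                 # failure threshold and common intercept
mu_slope, sd_slope = 0.02, 0.004     # unit-to-unit slope variation (per hour)

slopes = rng.normal(mu_slope, sd_slope, size=100_000)
slopes = slopes[slopes > 0]          # keep physically meaningful (increasing) paths
lifetimes = (D - beta0) / slopes     # time for each path to reach the threshold

print(f"estimated MTTF : {lifetimes.mean():,.0f} h")
print(f"median lifetime: {np.median(lifetimes):,.0f} h")
print(f"B10 life       : {np.percentile(lifetimes, 10):,.0f} h")
```

Mis-specification in the paper's sense would mean, for example, fitting this random-slope form when the true paths also have a time-dependent error structure; the lifetime distribution implied by the wrong form then biases the MTTF prediction.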

9.
Bit faults induced by single-event upsets in instructions may not cause the system to experience an error. In this paper, the instruction vulnerability factor (IVF) is first defined to quantify the effect of such non-effective upsets on program reliability; a mean-time-to-failure (MTTF) model of program memory is then derived based on the IVF. Further analysis of the MTTF model shows that the MTTF of program memory protected by an error-correcting code (ECC) and scrubbing is not always better than that of unhardened program memory. The constraints that should be met when using ECC and scrubbing in program memory are presented, to the best of the authors' knowledge, for the first time. The proposed models and conclusions are validated by Monte Carlo simulations in MATLAB; the results show that the models have good accuracy, with a margin of error of less than 3% relative to the simulation results. These conclusions may help in selecting the optimal fault-tolerant technique for hardening program memory.
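The paper's IVF-based model is not given in the abstract. The sketch below is an independent, much-simplified Monte Carlo of the ECC-plus-scrubbing mechanism it analyzes: bit upsets arrive as a Poisson process, a single-error-correcting (SEC) code masks one upset per word, scrubbing clears corrected words every τ hours, and the memory fails when any word collects two upsets within one scrub interval. The upset rate is set artificially high so the demo runs quickly; all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

WORDS, BITS = 4096, 32          # memory geometry
LAM_BIT = 1e-6                  # upsets per bit-hour (hypothetical, high for speed)
TAU = 100.0                     # scrub period (hours)

def one_lifetime(max_intervals=1_000_000):
    lam_word = LAM_BIT * BITS * TAU       # mean upsets per word per interval
    for interval in range(max_intervals):
        # Upset count per word within this scrub interval; we approximate
        # two upsets in one word as always hitting distinct bits (fatal to SEC).
        upsets = rng.poisson(lam_word, size=WORDS)
        if (upsets >= 2).any():
            return (interval + 1) * TAU
    return max_intervals * TAU

lifetimes = [one_lifetime() for _ in range(30)]
print(f"estimated MTTF with SEC + scrubbing: {np.mean(lifetimes):,.0f} h")
```

The paper's point is that once the IVF is accounted for (many upsets are non-effective anyway), this protected MTTF does not always beat the unprotected memory's, which is why it derives explicit constraints on when ECC plus scrubbing pays off.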

10.
A variation of the Jelinski-Moranda model is described. The main feature of this new model is that the variable (growing) size of a developing program is accommodated, so that the quality of a program can be estimated by analyzing an initial segment of the written code. Two parameters are estimated from the data, which are: a) the time separations between error detections; b) the number of errors per written instruction; c) the failure rate (or finding rate) of a single error; and d) a time record of the number of instructions under test. This model permits prediction of the MTTF and error content of any software package that is homogeneous with respect to its complexity (error making/finding). It helps determine quality, as measured by error content, early on, and could eliminate the present practice of applying models to the wrong regimes (decreasing-failure-rate models applied to software packages that are growing in size). The growth model is analytically very tractable. The important requirement for application is that the error-making rate must be constant across the entire software program.
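For orientation, the classical Jelinski-Moranda model (the fixed-size model this paper generalizes) assumes N initial faults and a hazard proportional to the faults remaining, λ_i = φ(N − i + 1) before the i-th failure, so inter-failure times are independent exponentials with those rates. A minimal sketch that simulates such data and reports the model's MTTF at each stage (parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(11)

N, phi = 30, 0.002            # initial fault count, per-fault hazard (hypothetical)

# Jelinski-Moranda: before the i-th failure, hazard = phi * (faults remaining).
remaining = N - np.arange(N)              # N, N-1, ..., 1
gaps = rng.exponential(1.0 / (phi * remaining))

print("i    remaining  model MTTF (h)  simulated gap (h)")
for i in range(0, N, 5):
    print(f"{i+1:<4d} {remaining[i]:<10d} {1/(phi*remaining[i]):<15.1f} {gaps[i]:.1f}")
```

The paper's variation replaces the fixed N with a fault population that grows with the code size record d), which is why decreasing-failure-rate behavior need not hold while the program is still being written.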

11.
High tolerance of transient faults and low power consumption are key objectives in the design of critical embedded systems. Systems such as smart cards, PDAs, wearable computers, pacemakers, defibrillators, and other electronic gadgets must be designed not only for fault tolerance but also for ultra-low power consumption, owing to limited battery life. In this paper, a highly accurate method of estimating fault tolerance in terms of mean time to failure (MTTF) is presented. The estimation is based on circuit-level simulations (HSPICE) and uses a double-exponential current-source fault model. Using counters, it is shown that the transient-fault tolerance and power dissipation of low-power circuits are at odds, allowing a power/fault-tolerance tradeoff. Architecture- and circuit-level fault-tolerance and low-power techniques are used to demonstrate and quantify this tradeoff. Estimates show that incorporating these techniques yields either a design with an MTTF of 36 years and a power consumption of 102 μW or a design with an MTTF of 12 years and a power consumption of 20 μW. Depending on the criticality of the system and the power budget, certain techniques may be preferred over others, resulting in either a more fault-tolerant or a lower-power design, at the sacrifice of the alternative objective.
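The double-exponential current source referenced here is the standard way of injecting a single-event transient in SPICE-style simulation: I(t) = (Q/(τ_f − τ_r))·(e^(−t/τ_f) − e^(−t/τ_r)), which delivers total charge Q with rise constant τ_r and fall constant τ_f. A small sketch that evaluates the pulse and numerically checks the collected charge; the parameter values are hypothetical.

```python
import numpy as np

Q = 150e-15                     # deposited charge: 150 fC (hypothetical)
tau_r, tau_f = 50e-12, 200e-12  # rise / fall time constants (s)

def sev_pulse(t):
    """Double-exponential single-event current pulse (amps)."""
    return (Q / (tau_f - tau_r)) * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

t = np.linspace(0, 5e-9, 20_001)
cur = sev_pulse(t)
charge = float(np.sum((cur[1:] + cur[:-1]) * 0.5 * np.diff(t)))  # trapezoid rule

print(f"peak current    : {cur.max()*1e3:.3f} mA at t = {t[np.argmax(cur)]*1e12:.0f} ps")
print(f"collected charge: {charge*1e15:.1f} fC (target {Q*1e15:.0f} fC)")
```

Analytically the pulse integrates to exactly Q, so the printed charge is a sanity check on the waveform before it is handed to a circuit simulator.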

12.
This paper presents a software reliability growth model (SRGM) for N-version programming (NVP) systems, NVP-SRGM, based on the nonhomogeneous Poisson process (NHPP). Although many papers have been devoted to modeling NVP-system reliability, most consider only the stable reliability, i.e., they do not consider the reliability growth in NVP systems due to the continuous removal of faults from the software versions. The model in this paper is the first reliability-growth model for NVP systems that considers the error-introduction rate and the error-removal efficiency. During testing and debugging, when a software fault is found, a debugging effort is devoted to removing it; owing to the high complexity of the software, the fault might not be successfully removed, and new faults might be introduced. By applying a generalized NHPP model to the NVP system, a new NVP-SRGM is established in which multi-version coincident failures are well modeled. A simplified software control logic for a water-reservoir control system illustrates how to apply the new model, and s-confidence bounds are provided for system-reliability estimation. The model can be used to evaluate the reliability and predict the performance of NVP systems, although further applications are needed to fully validate it for quantifying the reliability of fault-tolerant software systems in a general industrial setting. As the first model of its kind in NVP reliability-growth modeling, the proposed NVP-SRGM can be used to overcome the shortcomings of the independent reliability model: it predicts the system reliability more accurately and can help determine when to stop testing, a key question in the testing and debugging phase of the NVP system-development life cycle.
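The abstract does not reproduce the model's equations. As a hedged single-version illustration of the NHPP machinery it builds on, the sketch below uses the classic Goel-Okumoto mean-value function m(t) = a(1 − e^(−bt)), for which the expected remaining faults are a·e^(−bt) and the reliability of a mission of length x after test time t is R(x|t) = exp(−[m(t+x) − m(t)]). Parameter values are hypothetical; the paper's NVP model extends this with fault introduction, imperfect removal, and coincident-failure terms.

```python
import math

a, b = 120.0, 0.05       # expected total faults, detection rate (hypothetical)

def m(t):
    """Goel-Okumoto mean value function: expected faults found by time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t):
    """P(no failure in (t, t+x]) for an NHPP with mean value function m."""
    return math.exp(-(m(t + x) - m(t)))

for t in (0, 20, 50, 100):
    print(f"test time {t:3d}: found {m(t):6.1f}, remaining {a - m(t):6.1f}, "
          f"R(10h mission) = {reliability(10, t):.3f}")
```

Curves like these are what "reliability growth" means operationally: the same mission reliability improves as test time accumulates, and the stop-testing decision is read off the point where further growth no longer justifies the cost.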

13.
This paper presents three newly developed Markov models representing on-surface transit systems. Formulas for transit-system reliability, steady-state availability, mean time to failure (MTTF), and the variance of the time to failure are developed. Selected plots are shown for each model; these plots clearly exhibit the impact of various parameters on transit-system reliability, steady-state availability, and MTTF.
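The paper's specific models are not given in the abstract. As a generic sketch of how steady-state availability falls out of a Markov model, the code below solves πQ = 0 with Σπ = 1 for a hypothetical two-state (up/down) system and checks the closed form A = μ/(λ + μ).

```python
import numpy as np

lam, mu = 0.01, 0.5          # failure and repair rates (per hour, hypothetical)

# Generator matrix; states: 0 = up, 1 = down.
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Solve pi @ Q = 0 together with sum(pi) = 1 (least squares over the stacked system).
A_mat = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A_mat, b, rcond=None)

print(f"steady-state availability: {pi[0]:.6f}")
print(f"closed form mu/(lam+mu)  : {mu/(lam+mu):.6f}")
```

The transit models in the paper have more states (e.g., degraded operation), but the computation pattern is the same: balance equations plus normalization, with availability read off the up-state probabilities.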

14.
A model is developed to determine the variance of system-reliability estimates and to construct confidence intervals for series-parallel systems with arbitrarily repeated components. In these systems, copies of the same component type are used several or many times within the system, but only a single reliability estimate is available for each distinct component type. That single estimate is used everywhere the component appears in the design, so component estimation error is magnified at the system level. The variance and confidence intervals of the system-reliability estimate are derived for the case in which the number of component failures follows a binomial distribution with an unknown, yet estimable, probability of failure. They are obtained by expressing the system reliability as a linear sum of products of higher-order moments of component unreliability; the generating function is used to determine the moments of the component-unreliability estimates. This model is preferable for many system-reliability estimation problems because it does not require independent component and subsystem reliability estimates. It is demonstrated with an example.
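To see why repeated use of one estimate matters, the sketch below uses Monte Carlo (not the paper's generating-function derivation) on a system of two parallel copies of the same component type: the system-unreliability estimate q̂² is biased and has a different variance than it would if the two copies had independent estimates. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

q_true = 0.10            # true component unreliability
n = 50                   # test sample size for the component type
REPS = 200_000

# One shared estimate per component type, reused for both parallel copies.
q_hat = rng.binomial(n, q_true, size=REPS) / n
shared = q_hat**2                         # system unreliability, same q_hat twice

# Contrast: two independent estimates (one per copy), for illustration only.
q1 = rng.binomial(n, q_true, size=REPS) / n
q2 = rng.binomial(n, q_true, size=REPS) / n
indep = q1 * q2

print(f"true system unreliability : {q_true**2:.5f}")
print(f"shared estimate  mean/var : {shared.mean():.5f} / {shared.var():.2e}")
print(f"indep. estimates mean/var : {indep.mean():.5f} / {indep.var():.2e}")
```

The shared-estimate case shows the upward bias E[q̂²] = q² + q(1−q)/n and the inflated variance; the paper's moment-based derivation handles exactly this dependence, for arbitrary series-parallel structures, in closed form.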

15.
The MTTF of a system with constant failure and repair rates and with some form of standby redundancy and switching is an important characteristic of the system. Commonly, calculating the MTTF requires knowledge of the reliability function R(t), which is integrated to yield the MTTF. In many cases, obtaining the reliability function is a non-trivial task, and its analytic integration may be quite tedious. In the following, we describe a simple method of obtaining the MTTF of such systems that avoids the need to know R(t). The method is Markovian in nature and is based on summing the probabilities of all the possible routes (in the space of states) by which the system can get from its initial state at t = 0 to an absorbing (failed) state, with each probability multiplied by the average time required for the system to follow that route. This weighted sum yields the MTTF for any given initial conditions. The method is demonstrated on some useful systems, and analytical formulas for the MTTF are derived. It is further demonstrated how the results of the method may be used to calculate the steady-state MTBF of the system.
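The route-summing method is equivalent to the standard linear-algebra shortcut: if Q_T is the generator restricted to the transient (working) states, the vector of mean times to absorption solves (−Q_T)·m = 1. A sketch for a two-unit cold-standby system with repair (hypothetical rates), whose MTTF has the known closed form (2λ + μ)/λ²:

```python
import numpy as np

lam, mu = 0.001, 0.1     # failure and repair rates (per hour, hypothetical)

# Cold-standby pair with repair. Transient states: 0 = both units good,
# 1 = one good; the failed state is absorbing and is left out of Q_T.
Q_T = np.array([[-lam,         lam],
                [  mu, -(lam + mu)]])

m = np.linalg.solve(-Q_T, np.ones(2))    # mean time to absorption from each state
print(f"MTTF from 'both good'        : {m[0]:,.0f} h")
print(f"closed form (2*lam+mu)/lam^2 : {(2*lam + mu)/lam**2:,.0f} h")
```

Both routes from "both good" to failure — direct and via repair loops — are summed implicitly by the linear solve, which is why no reliability function R(t) is ever needed.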

16.
The Weibull distribution, indexed by scale and shape parameters, is commonly used as a lifetime distribution. In deciding whether a production lot is accepted, one wants the most effective sample size and acceptance criterion for the specified producer and consumer risks (μ0 ≡ acceptable MTTF; μ1 ≡ rejectable MTTF): find the most effective reliability test satisfying both constraints Pr{reject a lot | MTTF = μ0} ≤ α and Pr{accept a lot | MTTF = μ1} ≤ β, where α and β are the specified producer and consumer risks. Most reliability tests for assuring the MTTF of a Weibull distribution assume that the shape parameter is a known constant, so such tests are concerned only with the scale parameter. This paper instead allows the shape parameter of the acceptable distribution to differ from that of the rejectable distribution, with both shape parameters specified as interval estimates. A procedure is proposed for designing the most effective reliability test under the specified producer and consumer risks for assuring the MTTF when the shape parameters need not coincide between the acceptable and rejectable distributions and are specified as ranges. It is assumed that α < 0.5 and β < 0.5, and the proposed design procedure is confirmed to be practical.
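As a hedged sketch of the simpler, known-shape version of this design problem (not the paper's interval-shape procedure): in a time-terminated test of n units for T hours with acceptance when at most c units fail, the per-unit failure probability under MTTF μ is F = 1 − exp(−(T/η)^β) with η = μ/Γ(1 + 1/β), and the smallest (n, c) meeting both risk constraints can be found by direct search. All numeric inputs are hypothetical.

```python
from math import comb, exp, gamma

alpha, beta_risk = 0.05, 0.10        # producer / consumer risks
mu0, mu1 = 2000.0, 800.0             # acceptable / rejectable MTTF (h)
shape = 1.5                          # assumed known Weibull shape
T = 500.0                            # test duration per unit (h)

def fail_prob(mttf):
    eta = mttf / gamma(1.0 + 1.0 / shape)        # scale implied by the MTTF
    return 1.0 - exp(-((T / eta) ** shape))

def binom_cdf(c, n, p):
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

p0, p1 = fail_prob(mu0), fail_prob(mu1)

def find_plan(max_n=200):
    for n in range(2, max_n):                    # smallest n first
        for c in range(n):
            producer = 1.0 - binom_cdf(c, n, p0)  # reject a good lot
            consumer = binom_cdf(c, n, p1)        # accept a bad lot
            if producer <= alpha and consumer <= beta_risk:
                return n, c, producer, consumer
    return None

print("plan (n, c, producer risk, consumer risk):", find_plan())
```

The paper's harder problem is that β itself is only known to lie in a range, and may differ between the μ0 and μ1 hypotheses, so the risks must hold over the worst case of both intervals.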

17.
The paper criticises the underlying assumptions made in much early modeling of computer-software reliability. The following suggestions will improve modeling. 1) Do not apply hardware techniques to software without thinking carefully; software differs from hardware in important respects, and we ignore these at our peril. In particular: 2) do not use MTTF or MTBF for software unless you are certain they exist, and even then remember that: 3) distributions are always more informative than moments or parameters, so try to avoid commitment to a single measure of reliability. Anyway: 4) there are better measures than MTTF; percentiles and failure rates are more intuitively appealing than means. 5) Software reliability means operational reliability; who cares how many bugs are in a program? We should be concerned with their effect on its operation. In fact: 6) bug identification (and elimination) should be separated from reliability measurement, if only to ensure that the measurers do not have a vested interest in getting good results. 7) Use a Bayesian approach, and do not be afraid to be subjective; all our statements will ultimately be about our beliefs in the quality of programs. 8) Do not stop at a reliability analysis; try to model the lifetime utility (or cost) of programs. 9) Now is the time to devote effort to structural models. 10) Structure should be of a kind appropriate to software, e.g., top-down and modular.

18.
Mean time to failure (MTTF) is one of the most frequently used dependability measures in practice. By convention, the MTTF is the expected time for a system to reach any one of its failure states. For some systems, however, the mean time to absorption into a particular subset of the failure states is of interest, so the concept of conditional MTTF can be useful. In this paper, we formalize the definitions of conditional MTTF and cumulative conditional MTTF, together with an efficient computation method, in a finite-state-space Markov model. Analyses of a fault-tolerant disk-array system and a fault-tolerant software structure illustrate the application of the conditional MTTF.
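The paper's exact algorithm is not in the abstract. A standard way to compute a conditional MTTF in a finite CTMC, sketched below under that interpretation, is: with Q the generator over transient states and R the rate matrix into the absorbing states, h = (−Q)⁻¹·R·e_A gives the probability of absorbing in the target subset A, g = (−Q)⁻¹·h gives E[T·1(absorb in A)], and the conditional MTTF from state i is g_i / h_i. The example chain is hypothetical.

```python
import numpy as np

# Hypothetical chain: 3 transient states, 2 absorbing failure states F1, F2.
lam = 0.01
Q = np.array([[-3*lam,  2*lam,    lam],
              [   0.0, -2*lam,    lam],
              [   0.0,    0.0, -2*lam]])            # transient-to-transient rates
R = np.array([[0.0, 0.0],
              [lam, 0.0],
              [lam, lam]])                          # transient-to-absorbing rates

# Rows of [Q | R] must sum to zero for a valid generator.
assert np.allclose(Q.sum(axis=1) + R.sum(axis=1), 0.0)

e_A = np.array([1.0, 0.0])                # target failure subset A = {F1}
h = np.linalg.solve(-Q, R @ e_A)          # P(absorb in A | start state)
g = np.linalg.solve(-Q, h)                # E[T * 1(absorb in A)]

print("state  P(fail via F1)  conditional MTTF (h)")
for i in range(3):
    print(f"{i}      {h[i]:.4f}          {g[i]/h[i]:,.1f}")
```

In the paper's disk-array example, A might be "data-loss failures" as opposed to benign shutdowns; the conditional MTTF then answers how long the system survives given that it eventually fails in the bad way.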

19.
This paper analyzes and compares the reliability and MTTF of four fault-tolerant memory configurations subject to soft errors: (i) the SEC-protected RAM (SEC-RAM), (ii) the SEC-unprotected triplex RAM (TMR-RAM), (iii) the triplex SEC-protected RAM (TMR-SEC RAM), and (iv) the SEC-protected triplex RAM (SEC-TMR RAM). The last two configurations are new, and they differ in the order in which the voting and decoding operations are performed. Depending on the configuration, the memory is modeled by Markov models at either the bit or the word level, taking into account the canceling of soft errors by subsequent soft errors. Exact theoretical expressions for the reliability and MTTF of the SEC-RAM and TMR-RAM are developed, and two alternative recursive algorithms are given to assess the impact of memory scrubbing on the MTTF. The advantage of both proposed configurations is that they tolerate all possible error patterns with three errors and also exhibit remarkable resistance to error patterns with a much larger number of errors. Because the analysis of the SEC-TMR RAM cannot be carried out theoretically, owing to the varying error patterns at the SEC-decoder output when a codeword contains more than one error, a fast error-pattern generation (FEP) algorithm is developed. Simulation results show that numerous multiple-bit error patterns spanning more than two words in the SEC-TMR RAM still produce the correct data word after decoding and voting. A comparison of the multiple-bit error-masking capability of the TMR-SEC and SEC-TMR configurations is also given.
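As a hedged static comparison (independent bit errors with probability p, whole-word voting, no scrubbing dynamics — far simpler than the paper's Markov analysis): a SEC codeword decodes correctly if at most one of its bits is flipped, a TMR word survives a 2-of-3 vote over its copies, and composing the two gives a decode-then-vote estimate. All parameters are hypothetical.

```python
from math import comb

p = 1e-4          # per-bit error probability (hypothetical)
k = 32            # data bits per word
n = 38            # SEC codeword length: 32 data + 6 check bits (Hamming)

def sec_word(p):
    """SEC codeword decodes correctly if at most 1 of its n bits is flipped."""
    return (1 - p)**n + n * p * (1 - p)**(n - 1)

def vote_2_of_3(good):
    """Word-level majority vote over three copies, each correct w.p. `good`."""
    return sum(comb(3, i) * good**i * (1 - good)**(3 - i) for i in (2, 3))

plain = (1 - p)**k
for name, ok in [("plain word", plain),
                 ("SEC word", sec_word(p)),
                 ("TMR word", vote_2_of_3(plain)),
                 ("TMR of SEC words", vote_2_of_3(sec_word(p)))]:
    print(f"{name:17s}: word failure prob = {1 - ok:.3e}")
```

Even this static model reproduces the qualitative ranking the paper studies; what it cannot capture — and what the paper's bit- and word-level Markov models add — is error accumulation over time, error canceling, and the effect of scrubbing.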

20.
This paper discusses the development of an improved failure-rate prediction method that can be used to assess the reliability of complex and new-technology microcircuits, especially memories, microprocessors, and their support devices. The prediction models are similar to those presented in MIL-HDBK-217C, with several modifications to reflect the variation of reliability-sensitive parameters and to discriminate against device design and usage attributes that contribute to known failure mechanisms. A comparison of the failure-rate predictions calculated using MIL-HDBK-217C with the actual failure rates of LSI random-logic and memory devices did not show reasonable correlation. An analysis of the 217C models revealed that the lack of correlation was attributable to the generic consolidation of model parameters, which ultimately reduced the models' sensitivity to several critical reliability factors. The model accuracy was greatly improved, without substantially increasing model complexity, by separating some generic parameters into sets of more detailed parameters. The major model revisions included:
- complexity factors oriented toward major device-function and technology categories;
- temperature factors developed for each device technology, in both hermetic and nonhermetic packages;
- an additive package failure-rate factor based on package type and number of functional pins;
- a voltage-derating stress factor for CMOS devices with a maximum recommended operating supply voltage greater than 12 volts;
- a ROM and PROM programming-technique factor to reflect the influence of the programming mechanism used in these devices.
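For readers unfamiliar with the handbook's structure: later MIL-HDBK-217 revisions express a microcircuit failure rate in the multiplicative-plus-additive form λ_p = (C1·πT + C2·πE)·πQ·πL (failures per 10⁶ hours), where C1 and C2 are die and package complexity factors and the π's are temperature, environment, quality, and learning factors; the revisions listed above (an additive package term, technology-specific temperature factors) fit this pattern. The sketch below evaluates that generic form with illustrative, hypothetical factor values, not values taken from the handbook.

```python
def microcircuit_failure_rate(c1, c2, pi_t, pi_e, pi_q, pi_l):
    """Generic 217-style model: die term plus additive package term,
    scaled by quality and learning factors. Failures per 1e6 hours."""
    return (c1 * pi_t + c2 * pi_e) * pi_q * pi_l

# Illustrative (hypothetical) values for a hermetic memory device.
lam = microcircuit_failure_rate(
    c1=0.08,      # die complexity factor (function/technology category)
    c2=0.02,      # package complexity factor (type, functional pin count)
    pi_t=4.0,     # temperature factor for this technology
    pi_e=2.0,     # environment factor
    pi_q=1.0,     # quality level
    pi_l=1.0,     # learning factor (mature process)
)
print(f"predicted failure rate: {lam:.3f} per 1e6 h (MTTF ~ {1e6/lam:,.0f} h)")
```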
