Similar Literature
20 similar records found
1.
Systems designed for high availability and fault tolerance are often configured as a series combination of redundant subsystems. When a unit of a subsystem fails, the system remains operational while the failed unit is repaired; however, if too many units in a subsystem fail concurrently, the system fails. Under conditions usually met in practical situations, we show that the reliability and availability of such systems can be accurately modeled by representing each redundant subsystem with a constant, ‘effective’ failure rate equal to the inverse of the subsystem mean‐time‐to‐failure (MTTF). The approximation model is surprisingly accurate, with an error on the order of the square of the ratio of mean‐time‐to‐repair to mean‐time‐to‐failure (MTTR/MTTF), and it has wide applicability for commercial, high‐availability and fault‐tolerant computer systems. The effective subsystem failure rates can be used to: (1) evaluate the system and subsystem reliability and availability; (2) estimate the system MTTF; and (3) provide a basis for the iterative analysis of large complex systems. Some observations from renewal theory suggest that the approximate models can be used even when the unit failure rates are not constant and when the redundant units are not homogeneous. Copyright © 2004 John Wiley & Sons, Ltd.
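
As a hedged illustration of the approximation described above (not code from the paper), the sketch below computes the effective failure rate of a duplexed subsystem from the standard two-unit Markov MTTF result and uses it to estimate the MTTF of a series system; all numerical values are assumptions.

```python
# Minimal sketch, assuming constant unit failure rate lam and repair
# rate mu with mu >> lam (the regime in which the approximation holds);
# the duplex MTTF formula is the standard 1-of-2 Markov result.

def duplex_mttf(lam: float, mu: float) -> float:
    """Mean time to failure of a two-unit redundant subsystem
    with a single repair facility."""
    return (3 * lam + mu) / (2 * lam ** 2)

def effective_failure_rate(lam: float, mu: float) -> float:
    """The approximation described above: lambda_eff = 1 / MTTF."""
    return 1.0 / duplex_mttf(lam, mu)

# Illustrative numbers: unit MTTF = 10,000 h, MTTR = 10 h, so
# MTTR/MTTF = 1e-3 and the model error is on the order of 1e-6.
lam, mu = 1.0 / 10_000, 1.0 / 10
lam_eff = effective_failure_rate(lam, mu)

# A series combination of such subsystems then fails at roughly the
# sum of the effective subsystem rates.
n_subsystems = 4
print(f"lambda_eff = {lam_eff:.3e} per hour")
print(f"system MTTF ~ {1.0 / (n_subsystems * lam_eff):,.0f} hours")
```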

2.
Real‐time computer systems deployed in life‐critical control applications must be designed to meet stringent reliability specifications. The minimum acceptable degree of reliability for systems of this type is ‘7 nines’, which is not generally achieved. This paper aims to contribute to achieving that degree of reliability. To this end, it proposes a classification scheme for the fault‐tolerant procedures of redundant computer systems (RCSs), developed on the basis of the number of counteracted fault types. Table I relates the characteristics of the RCSs to the characteristics of the fault‐tolerant procedures. A selection algorithm is proposed, which allows designers to select the optimal type of fault‐tolerant procedure according to the system characteristics and capabilities. The fault‐tolerant procedure selected by this algorithm provides the required degree of reliability for a given RCS. According to the proposed graphical model, only part of the fault‐tolerant procedure is executed, depending on the absence or presence (type and sort) of faults. The proposed methods allow designers to counteract Byzantine and non‐Byzantine fault types during degradation of RCSs from N to 3, and only the non‐Byzantine fault type during degradation from 3 to 1, with an optimal checkpoint time period. Copyright © 2008 John Wiley & Sons, Ltd.
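
The paper's selection algorithm itself is not reproduced here, but the classical redundancy bounds behind such schemes are easy to state: N ≥ 3f + 1 units are required to mask f Byzantine faults, while N ≥ 2f + 1 suffices for f non-Byzantine (fail-silent) faults. The sketch below, with illustrative numbers only, shows why Byzantine faults can be counteracted only while at least four units remain.

```python
# Hedged sketch of the classical fault-masking bounds, not the paper's
# algorithm: N >= 3f + 1 for Byzantine faults, N >= 2f + 1 otherwise.

def max_tolerable_faults(n_units: int) -> dict[str, int]:
    return {
        "byzantine": max((n_units - 1) // 3, 0),
        "non_byzantine": max((n_units - 1) // 2, 0),
    }

# Degradation of an RCS from N = 7 units down to simplex operation.
for n in range(7, 0, -1):
    print(n, max_tolerable_faults(n))

# The output shows that with four or more units at least one Byzantine
# fault can be masked, while from 3 units down only non-Byzantine
# faults can be counteracted -- consistent with the degradation
# behaviour described in the abstract.
```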

3.
In this study, we introduce reliability models for a device with two dependent failure processes: soft failure due to degradation and hard failure due to random shocks, considering a hard failure threshold that declines with the accumulated degradation. Owing to the nature of degradation in complex devices such as microelectromechanical systems, a degraded system is more vulnerable to force and stress during operation. We address two different scenarios for the changing hard failure threshold. In Case 1, the initial hard failure threshold reduces to a lower level as soon as the overall degradation reaches a critical value. In Case 2, the hard failure threshold decreases gradually, with the amount of reduction proportional to the change in degradation. A condition‐based maintenance model derived from a failure limit policy is presented to ensure that a device functions below a certain level of degradation. Finally, numerical examples, along with sensitivity analyses, illustrate the developed reliability and maintenance models. Copyright © 2016 John Wiley & Sons, Ltd.
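
The following Monte Carlo sketch gives a hedged reading of "Case 2" above (hard-failure threshold decreasing in proportion to degradation); every parameter value is an illustrative assumption, not taken from the paper.

```python
# Hedged Monte Carlo sketch of Case 2: a gradual soft-failure process
# plus random shocks against a degradation-dependent hard threshold.
import random

def survives(horizon: int, *, drift=0.10, vol=0.05, shock_rate=0.05,
             shock_mean=3.0, soft_limit=10.0, d0=12.0, gamma=0.5) -> bool:
    x = 0.0                                  # cumulative degradation
    for _ in range(horizon):                 # unit time steps
        x += random.gauss(drift, vol)        # gradual (soft) degradation
        if x >= soft_limit:
            return False                     # soft failure
        if random.random() < shock_rate:     # a random shock arrives
            load = random.expovariate(1.0 / shock_mean)
            threshold = d0 - gamma * x       # Case 2: declining threshold
            if load >= threshold:
                return False                 # hard failure
    return True

n = 20_000
rel = sum(survives(60) for _ in range(n)) / n
print(f"estimated R(60) ~ {rel:.3f}")
```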

4.
The growing demand for safety, reliability, availability and maintainability in modern technological systems has led these systems to become more and more complex. To improve their dependability, many features and subsystems are employed, such as diagnosis systems, control systems, and backup systems. Each of these subsystems has its own dynamics, reliability, and performance, and they interact with each other to provide a dependable and fault‐tolerant system. This makes dependability analysis and assessment very difficult. This paper proposes a method to completely model the diagnosis procedure in fault‐tolerant systems using stochastic activity networks. Combined with Monte Carlo simulation, this allows dependability assessment that explicitly includes the diagnosis parameters and performance. Copyright © 2014 John Wiley & Sons, Ltd.
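
A full stochastic activity network is beyond an abstract, but the role that diagnosis parameters play in such Monte Carlo assessments can be illustrated with a deliberately simplified sketch: a component fails at random, the diagnosis system detects the fault with some coverage probability and latency, and only detected faults trigger a switch to a backup. All names and numbers below are assumptions for illustration.

```python
# Deliberately simplified sketch (not a stochastic activity network):
# Monte Carlo estimate of mission reliability as a function of the
# diagnosis coverage c and detection latency; illustrative values only.
import random

def mission_ok(t_mission=1000.0, mttf=400.0, coverage=0.95,
               latency=5.0, repair_time=20.0, backup_mttf=2000.0) -> bool:
    t = random.expovariate(1.0 / mttf)          # primary fault time
    if t >= t_mission:
        return True                             # no fault during mission
    if random.random() > coverage:
        return False                            # undetected fault -> loss
    # Detected: run on the backup while the primary is repaired.
    t_exposed = latency + repair_time           # time spent on backup
    backup_fails = random.expovariate(1.0 / backup_mttf) < t_exposed
    return not backup_fails

n = 50_000
for c in (0.90, 0.95, 0.99):
    r = sum(mission_ok(coverage=c) for _ in range(n)) / n
    print(f"coverage {c:.2f}: mission reliability ~ {r:.3f}")
```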

5.
This paper first establishes the existence and uniqueness of the intersection time between two neighboring shocks, or between a shock and a characteristic, for the analytical shock‐fitting algorithm proposed to solve the Lighthill–Whitham–Richards (LWR) traffic flow model with a linear speed–density relationship. These properties follow from the monotonicity of density variations along a shock and greatly improve the robustness of the analytical shock‐fitting algorithm. We then discuss the efficient evaluation of the measure of effectiveness (MOE) of the algorithm and develop explicit expressions to calculate the MOE, which is the total travel time incurred by travelers within the space–time region encompassed by the shocks and/or characteristic lines. A numerical example illustrates the effectiveness and efficiency of the proposed method compared with numerical solutions obtained by a fifth‐order weighted essentially non‐oscillatory scheme. Copyright © 2007 John Wiley & Sons, Ltd.
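
For readers unfamiliar with shock fitting in the LWR model, the key relation is the Rankine–Hugoniot condition: a shock between states (k1, q1) and (k2, q2) travels at speed (q2 − q1)/(k2 − k1). The sketch below evaluates this for the linear (Greenshields) speed–density relationship used in the paper; the free-flow speed and jam density are illustrative assumptions.

```python
# Sketch of the shock-speed computation underlying shock fitting in
# the LWR model with a linear (Greenshields) speed-density relation:
#     v(k) = vf * (1 - k / kj),  q(k) = k * v(k).
# The values of vf and kj below are illustrative assumptions.

VF = 100.0   # free-flow speed (km/h), assumed
KJ = 150.0   # jam density (veh/km), assumed

def flow(k: float) -> float:
    return k * VF * (1.0 - k / KJ)

def shock_speed(k1: float, k2: float) -> float:
    """Rankine-Hugoniot: speed of the discontinuity between states."""
    return (flow(k2) - flow(k1)) / (k2 - k1)

# Example: light upstream traffic running into a dense queue.
k_up, k_down = 30.0, 130.0
print(f"shock speed = {shock_speed(k_up, k_down):.1f} km/h")
# With these numbers the shock moves backward (negative speed),
# i.e. the tail of the queue propagates upstream.
```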

6.
The aim of this paper is to investigate real‐time reliability evaluation based on a general Wiener process‐based degradation model. Owing to its mathematical tractability, the Wiener process with a linear drift has been widely used in the literature to characterize the dynamics of a degradation process or its transformation. However, nonlinear degradation processes that cannot be properly linearized exist in practice, and their dynamics cannot be accurately captured by linear models. Here, a general Wiener process‐based degradation model is proposed, which covers a variety of Wiener process‐based models as its limiting cases. A two‐stage method is presented to estimate the unknown parameters. Two real‐time reliability evaluation procedures are presented for different conditions: one is an analytical evaluation procedure, and the other is a simulation‐based evaluation procedure. It is shown that when new degradation information becomes available, the evaluation results can be adaptively updated. Moreover, a graphical method is provided to check the fit of the proposed degradation model. Finally, the validity of the proposed evaluation method is illustrated by a numerical example and two real‐world examples. Copyright © 2013 John Wiley & Sons, Ltd.
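
The paper's general model is not reproduced here, but the linear-drift baseline it generalizes is well known: for X(t) = μt + σB(t) with failure threshold w, the lifetime follows an inverse Gaussian distribution and the reliability has a closed form. The sketch below evaluates that baseline; the parameter values are assumptions.

```python
# Sketch of the classical linear-drift baseline: X(t) = mu*t + sigma*B(t)
# crossing threshold w gives an inverse-Gaussian lifetime with
#   R(t) = Phi((w - mu t)/(sigma sqrt(t)))
#          - exp(2 mu w / sigma^2) * Phi(-(w + mu t)/(sigma sqrt(t))).
# Parameter values below are illustrative assumptions.
from math import exp, sqrt
from statistics import NormalDist

Phi = NormalDist().cdf

def reliability(t: float, mu: float, sigma: float, w: float) -> float:
    """P(first passage of threshold w occurs after time t)."""
    s = sigma * sqrt(t)
    return (Phi((w - mu * t) / s)
            - exp(2 * mu * w / sigma ** 2) * Phi(-(w + mu * t) / s))

mu, sigma, w = 0.2, 0.5, 10.0   # illustrative drift, diffusion, threshold
for t in (20, 40, 60, 80):
    print(f"R({t}) = {reliability(t, mu, sigma, w):.4f}")
```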

7.
In this paper, we propose an offline and online machine health assessment (MHA) methodology composed of feature extraction and selection, segmentation‐based fault severity evaluation, and classification steps. In the offline phase, the feature that best represents degradation is selected by a new filter‐based feature selection approach. The selected feature is then segmented using bottom‐up time series segmentation to discriminate machine health states, i.e., degradation levels. The fault severity of each health state is extracted by a proposed segment evaluation approach based on within‐segment rate‐of‐change (RoC) and coefficient of variation (CV) statistics. Training supervised classifiers normally requires a labeled data set; to overcome this limitation, the health‐state fault‐severity information is used to label (e.g., healthy, minor, medium, and severe) unlabeled raw condition monitoring (CM) data. In the online phase, fault‐severity classification is carried out by a kernel‐based support vector machine (SVM) classifier; the k‐nearest neighbor (KNN) classifier is also used in a comparative analysis of the fault‐severity classification problem. The classifiers are trained in the offline phase and tested in the online phase. Unlike traditional supervised approaches, the proposed method therefore does not require a pre‐labeled data set. The methodology is validated on in‐field point machine sliding‐chair degradation data to illustrate its effectiveness and applicability. The results show that the time‐series‐segmentation‐based failure severity detection and SVM‐based classification are promising.
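
The exact within-segment statistics are defined in the paper; the sketch below gives one plausible reading of the RoC and CV computations, with a synthetic feature trace standing in for real condition monitoring data. Treat both the definitions and the numbers as assumptions.

```python
# Hedged sketch of the within-segment statistics named above; the
# paper's exact definitions may differ. Segmentation itself
# (bottom-up) is assumed to have been done already.
import numpy as np

def segment_stats(segment: np.ndarray) -> tuple[float, float]:
    """Rate of change (slope per sample) and coefficient of variation."""
    roc = (segment[-1] - segment[0]) / (len(segment) - 1)
    cv = segment.std(ddof=1) / segment.mean()
    return roc, cv

# Illustrative feature trace: slow drift, then accelerating degradation.
rng = np.random.default_rng(0)
healthy = 1.0 + 0.001 * np.arange(200) + rng.normal(0, 0.01, 200)
degraded = healthy[-1] + 0.01 * np.arange(100) + rng.normal(0, 0.05, 100)

for name, seg in [("healthy", healthy), ("degraded", degraded)]:
    roc, cv = segment_stats(seg)
    print(f"{name}: RoC = {roc:.4f}, CV = {cv:.4f}")
# Larger RoC/CV values would be mapped to more severe health-state
# labels (e.g., minor, medium, severe) before training the classifier.
```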

8.
Conformal epoxy‐rich coatings are synthesized by plasma‐initiated chain‐growth polymerization of glycidyl methacrylate via a newly developed plasma‐initiated chemical vapor deposition method at atmospheric pressure, providing a functional platform for the immobilization of antibiotic‐degrading enzymes (laccase and β‐lactamase). In addition to enhancing the duration and intensity of enzyme activity, surface immobilization also rigidifies the enzyme structure, allowing the enzymes to endure the mechanical stresses generated by a laminar water flow of 30 km h−1 with no reduction in enzymatic activity. Self‐defensive surface properties against microorganism adhesion, which prevent enzyme alteration and improve degradation performance, are obtained via surface saturation with Tween 20. The developed method is scaled up to high‐specific‐surface high‐density polyethylene biochips commonly used in water treatment, and shows self‐defensive abilities and particularly long‐lasting, efficient degradation properties.

9.
A model‐based scheme is proposed for monitoring multiple gamma‐distributed variables. The procedure is based on the deviance residual, which is a likelihood ratio statistic for detecting a mean shift when the shape parameter is assumed to be unchanged and the input and output variables are related in a certain manner. We discuss the distribution of this statistic and the proposed monitoring scheme. An example involving the advance rate of a drill is used to illustrate the implementation of the deviance residual monitoring scheme. Finally, a simulation study is performed to compare the average run length (ARL) performance of the proposed method to the standard Shewhart control chart for individuals. Copyright © 2003 John Wiley & Sons, Ltd.
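
The deviance residual for a gamma response is standard in the GLM literature; a minimal sketch of how such residuals could be charted is given below. The paper's exact regression structure and control limits are not reproduced, and the data, shape parameter, and ±3 limits are illustrative assumptions.

```python
# Minimal sketch, not the paper's exact scheme: standardized gamma
# deviance residuals
#   r = sign(y - mu) * sqrt(2 * nu * ((y - mu)/mu - log(y/mu))),
# with the shape parameter nu assumed known and unchanged, charted
# Shewhart-style against fixed +-3 limits. The fitted means mu_i
# would come from the regression of output on input variables.
import math

def gamma_deviance_residual(y: float, mu: float, shape: float = 10.0) -> float:
    d = 2.0 * shape * ((y - mu) / mu - math.log(y / mu))
    return math.copysign(math.sqrt(max(d, 0.0)), y - mu)

# Illustrative data: observed responses and their model-fitted means.
observed = [4.1, 3.8, 5.0, 4.4, 11.8, 4.2]
fitted = [4.0, 4.0, 4.5, 4.3, 4.6, 4.1]

for i, (y, mu) in enumerate(zip(observed, fitted), start=1):
    r = gamma_deviance_residual(y, mu)
    flag = "  <-- signal" if abs(r) > 3.0 else ""
    print(f"obs {i}: r = {r:+.2f}{flag}")
```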

10.
In this paper, a self‐compensating pulse code modulation compound flow control valve and its self‐compensating algorithm are introduced. After adopting the self‐compensating method, the compound valve can maintain good control quality when one or more turn‐on/off valves (TOVs) fail, by adjusting the activities of the unfailed TOVs. A stochastic fault model for the compound valve is established, and a Monte Carlo approach is used to calculate its life distribution. The results indicate an increase of about 20–50% in mean controllable life. This can be of great importance when an immediate emergency shutdown is not allowable or is too costly, such as in aircraft control and in the control of continuous processes: the extra life leaves a large enough time margin to plan a more graceful shutdown and maintenance. Copyright © 2006 John Wiley & Sons, Ltd.
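
The paper's valve model is not reproduced here, but the life-extension mechanism it reports can be illustrated abstractly: without compensation the compound valve is lost at the first TOV failure, whereas with compensation it remains controllable until several failures have accumulated. The sketch below compares the two by Monte Carlo; the TOV count, the number of tolerable failures, and the exponential lifetime assumption are all illustrative, and the paper's more detailed controllability model yields the smaller 20–50% gains because not every failure combination is compensable.

```python
# Abstract illustration (not the paper's fault model): with
# self-compensation the valve stays controllable until the (m+1)-th
# TOV failure, so its life is an order statistic of the TOV lifetimes
# rather than the minimum. All parameters below are assumptions.
import random

N_TOV = 8        # number of on/off valves in the compound valve
M_TOLERATED = 3  # failures the compensation algorithm can absorb
MTTF = 1000.0    # per-TOV mean life (exponential, illustrative)

def controllable_life(compensated: bool) -> float:
    lives = sorted(random.expovariate(1.0 / MTTF) for _ in range(N_TOV))
    # Uncompensated: lost at the first failure.
    # Compensated: lost at the (M_TOLERATED + 1)-th failure.
    return lives[M_TOLERATED] if compensated else lives[0]

n = 50_000
base = sum(controllable_life(False) for _ in range(n)) / n
comp = sum(controllable_life(True) for _ in range(n)) / n
print(f"mean life without compensation: {base:7.1f} h")
print(f"mean life with compensation:    {comp:7.1f} h "
      f"(+{100 * (comp / base - 1):.0f}%)")
```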

11.
The theory of network reliability has been applied to many complicated network structures, such as computer and communication networks, piping systems, electricity networks, and traffic networks. The theory is used to evaluate the operational performance of networks that can be modeled by probabilistic graphs. Although evaluating network reliability is an NP‐hard problem, numerous solutions have been proposed. However, most of them are based on sequential computing, which under‐utilizes the benefits of multi‐core processor architectures. This paper addresses this limitation by proposing an efficient strategy for calculating the two‐terminal (terminal‐pair) reliability of a binary‐state network using parallel computing. Existing methods are analyzed; then, an efficient method for calculating terminal‐pair reliability based on logical‐probabilistic calculus is proposed, and a parallel version of the proposed algorithm is developed. This is the first study to implement an algorithm for estimating terminal‐pair reliability in parallel on multi‐core processor architectures. The experimental results show that the proposed algorithm and its parallel version outperform an existing sequential algorithm in terms of execution time. Copyright © 2015 John Wiley & Sons, Ltd.
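
The paper's parallel algorithm is not reproduced here. As a point of reference, the sketch below computes exact terminal-pair reliability of a small binary-state network by the classical factoring (conditioning) recursion, whose exponential worst case is exactly what motivates faster, parallelizable formulations; the bridge-network example is a standard benchmark.

```python
# Reference sketch (classical factoring theorem, not the paper's
# method): R = p_e * R(edge e up) + (1 - p_e) * R(edge e down),
# recursing until every edge is resolved, then checking s-t
# connectivity over the surviving edges.

def connected(n_nodes, working_edges, s, t):
    adj = {v: [] for v in range(n_nodes)}
    for u, v in working_edges:
        adj[u].append(v)
        adj[v].append(u)
    stack, seen = [s], {s}
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False

def terminal_pair_reliability(n_nodes, edges, probs, s, t):
    for i, p in enumerate(probs):
        if 0.0 < p < 1.0:                      # factor on edge i
            up, down = list(probs), list(probs)
            up[i], down[i] = 1.0, 0.0
            return (p * terminal_pair_reliability(n_nodes, edges, up, s, t)
                    + (1 - p) * terminal_pair_reliability(n_nodes, edges, down, s, t))
    working = [e for e, p in zip(edges, probs) if p == 1.0]
    return 1.0 if connected(n_nodes, working, s, t) else 0.0

# Classic 5-edge bridge network, all edges 0.9 reliable: R = 0.97848.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
print(terminal_pair_reliability(4, edges, [0.9] * 5, s=0, t=3))
```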

12.
In this paper, we advance a new interval reliability analysis model for fracture reliability analysis. Based on the non‐probabilistic stress intensity factor interference model, the ratio of the volume of the safe region to the total volume of the region spanned by the variation of the standardized interval variables is suggested as the measure of structural non‐probabilistic reliability. We use this theory to calculate the reliability of a structure under a fracture criterion. The model needs less information about the uncertainties, so it imposes fewer restrictions when analysing an uncertain structure or system. Practical examples are given to demonstrate the simplicity and practicability of the model by comparing the interval reliability analysis model with a probabilistic reliability analysis model.
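
A rough sketch of the safe-volume-ratio measure described above follows: the fraction of the standardized interval box [−1, 1]ⁿ in which the fracture limit state stays safe, estimated here by sampling. The limit-state function g = K_IC − K_I and its coefficients are illustrative assumptions, not the paper's example.

```python
# Rough sketch of the non-probabilistic reliability measure: the
# safe fraction of the standardized interval box [-1, 1]^2.
# Interval coefficients below are illustrative assumptions.
import random

def g(d1: float, d2: float) -> float:
    k_ic = 50.0 + 5.0 * d1   # fracture toughness, interval-valued
    k_i = 40.0 + 6.0 * d2    # stress intensity factor, interval-valued
    return k_ic - k_i        # safe when positive

n = 200_000
safe = sum(g(random.uniform(-1, 1), random.uniform(-1, 1)) > 0
           for _ in range(n))
print(f"non-probabilistic reliability (safe-volume ratio) ~ {safe / n:.4f}")
```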

13.
14.
A method of obtaining a set of closed‐form solutions for reliability acceleration factors under time‐varying temperature/humidity environments is presented. This paper outlines the procedure for generating such equations based on National Oceanic and Atmospheric Administration (NOAA) climatic data. Results from Phoenix, Arizona, are used as a case study. Predictions from these closed‐form solutions are compared with traditional reliability assessments and with exact solutions computed from raw ambient temperature and humidity values. Copyright © 2008 John Wiley & Sons, Ltd.
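
The closed-form solutions themselves depend on the climate-data fits, but a standard choice of acceleration model for combined temperature/humidity stress is Peck's relation. The sketch below shows how an hourly ambient record could be collapsed into a single mean acceleration factor; the activation energy, humidity exponent, reference condition, and sample readings are all illustrative assumptions.

```python
# Hedged sketch: Peck's temperature/humidity model,
#     AF = (RH / RH_ref)^n * exp((Ea / k) * (1/T_ref - 1/T)),
# averaged over an hourly ambient record to get one effective
# acceleration factor relative to a reference condition.
from math import exp

K_BOLTZ = 8.617e-5             # Boltzmann constant (eV/K)
EA = 0.7                       # activation energy (eV), assumed
N_HUM = 2.66                   # humidity exponent, assumed
T_REF, RH_REF = 298.15, 50.0   # reference: 25 C, 50% RH, assumed

def peck_af(t_kelvin: float, rh: float) -> float:
    return ((rh / RH_REF) ** N_HUM
            * exp((EA / K_BOLTZ) * (1.0 / T_REF - 1.0 / t_kelvin)))

# A few hourly (temperature C, %RH) readings standing in for an
# NOAA record such as the Phoenix data used in the paper.
hours = [(32.0, 20.0), (38.0, 12.0), (41.0, 9.0), (27.0, 35.0)]
afs = [peck_af(tc + 273.15, rh) for tc, rh in hours]
print(f"mean acceleration factor vs. reference: {sum(afs) / len(afs):.3f}")
```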

15.
Reduced‐order models that are able to approximate output quantities of interest of high‐fidelity computational models over a wide range of input parameters play an important role in making tractable large‐scale optimal design, optimal control, and inverse problem applications. We consider the problem of determining a reduced model of an initial value problem that spans all important initial conditions, and pose the task of determining appropriate training sets for reduced‐basis construction as a sequence of optimization problems. We show that, under certain assumptions, these optimization problems have an explicit solution in the form of an eigenvalue problem, yielding an efficient model reduction algorithm that scales well to systems with states of high dimension. Furthermore, tight upper bounds are given for the error in the outputs of the reduced models. The reduction methodology is demonstrated for a large‐scale contaminant transport problem. Copyright © 2007 John Wiley & Sons, Ltd.
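
The paper's contribution is the optimal, eigenvalue-based selection of training initial conditions, which is not reproduced here; the surrounding machinery — a reduced basis computed from solution snapshots (POD) with Galerkin projection — can be sketched briefly. The test system and the naive random training set below are illustrative assumptions.

```python
# Sketch of the projection framework the paper builds on (POD basis
# from snapshots + Galerkin projection); the paper's training-set
# selection is NOT shown. The linear test system is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                               # full state dim
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # stable-ish LTI

# Snapshots: trajectories of x' = A x from a few training initial states.
dt, steps = 0.01, 200
snaps = []
for _ in range(5):
    x = rng.standard_normal(n)
    for _ in range(steps):
        x = x + dt * (A @ x)            # forward Euler
        snaps.append(x.copy())
X = np.array(snaps).T                   # n x (5 * steps)

# POD: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
k = 10
V = U[:, :k]                            # reduced basis, n x k
A_r = V.T @ A @ V                       # Galerkin-projected operator

# Test on a new initial condition not in the training set.
x0 = rng.standard_normal(n)
x_full, x_red = x0.copy(), V.T @ x0
for _ in range(steps):
    x_full = x_full + dt * (A @ x_full)
    x_red = x_red + dt * (A_r @ x_red)
err = np.linalg.norm(x_full - V @ x_red) / np.linalg.norm(x_full)
print(f"relative state error of the k={k} reduced model: {err:.2e}")
# A poorly chosen (here: random) training set can give large errors --
# precisely the problem the paper's optimized training sets address.
```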

16.
We introduce a material model for the simulation of polycrystalline materials undergoing solid‐to‐solid phase‐transformations. As a basis, we present a scalar‐valued phase‐transformation model where a Helmholtz free energy function depending on volumetric and deviatoric strain measures is assigned to each phase. The analysis of the related overall Gibbs energy density allows for the calculation of energy barriers. With these quantities at hand, we use a statistical‐physics‐based approach to determine the resulting evolution of volume fractions. Though the model can take into account an arbitrary number of solid phases of the underlying material, we restrict this work to the simulation of phase‐transformations between an austenitic parent phase and martensitic tension and compression phases. The scalar model is embedded into a computational micro‐sphere formulation in view of the simulation of three‐dimensional boundary value problems. The final modelling approach necessary for macroscopic simulations is accomplished by a finite element formulation, where the local material behaviour at each integration point is governed by the response of the micro‐sphere model. Copyright © 2014 John Wiley & Sons, Ltd.
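
As a toy illustration of the statistical-physics step described above (emphatically not the paper's formulation), transformation probabilities between two phases can be driven by Boltzmann factors of the energy barriers and the volume fractions evolved accordingly; the barrier values, rate constant, and two-phase restriction are all assumptions.

```python
# Toy sketch of statistical-physics-driven volume-fraction evolution
# between an austenite (A) and a martensite (M) phase; barrier values
# and the scale KT below are assumptions, not the paper's quantities.
from math import exp

KT = 1.0                   # thermal energy scale (normalized)
RATE = 0.1                 # attempt frequency per step (illustrative)
dE_AM, dE_MA = 1.0, 2.5    # energy barriers A->M and M->A

xi_A, xi_M = 1.0, 0.0      # volume fractions, sum to 1
for step in range(50):
    p_am = RATE * exp(-dE_AM / KT)   # Boltzmann transition probabilities
    p_ma = RATE * exp(-dE_MA / KT)
    net = xi_A * p_am - xi_M * p_ma  # net A -> M volume flow
    xi_A, xi_M = xi_A - net, xi_M + net
print(f"near-equilibrium fractions: A = {xi_A:.3f}, M = {xi_M:.3f}")
# The stationary ratio is xi_M / xi_A = exp((dE_MA - dE_AM) / KT):
# the phase with the higher escape barrier accumulates, mimicking
# load-induced transformation when the barriers come from the
# Gibbs energy landscape.
```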

17.
A type‐I interval‐censoring scheme documents only the number of units that failed between two prespecified consecutive examination times, recorded at the later time point, after all units are put on test at the initial time. It is challenging to use the information collected under such a scheme to evaluate unit reliability when not all admitted units are operated or tested from the same initial time and a majority of units are randomly selected to replace failed test units at unrecorded time points. Moreover, the lifetime distribution of all pooled units from the two sources usually follows a mixture distribution. To overcome these two problems, a two‐stage inference process is proposed, consisting of a data‐cleaning step and a parameter estimation step via either a Markov chain Monte Carlo (MCMC) algorithm or the profile likelihood method, based on the contaminated type‐I interval‐censored sample from a mixture distribution with unknown mixing proportion. An extensive simulation study under mixtures of smallest extreme value distributions evaluates the performance of the proposed method for a case study. Finally, the proposed methods are applied to mixture lifetime distribution modeling of video graphics array adapters to support reliability decisions.
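
The two-stage procedure is paper-specific, but the likelihood object at the core of the estimation step can be sketched: for type-I interval-censored counts from a two-component mixture of smallest-extreme-value (Gumbel-min) distributions, each inspection interval contributes a term for the probability mass it captures. The inspection schedule, counts, and parameter values below are illustrative, and the data-cleaning stage that handles the contamination from replaced units is not reproduced.

```python
# Sketch of the interval-censored mixture log-likelihood; all data
# and parameter values are illustrative assumptions.
from math import exp, log

def sev_cdf(x: float, mu: float, sigma: float) -> float:
    """Smallest-extreme-value (Gumbel-min) CDF."""
    return 1.0 - exp(-exp((x - mu) / sigma))

def mix_cdf(x, pi, mu1, s1, mu2, s2):
    return pi * sev_cdf(x, mu1, s1) + (1 - pi) * sev_cdf(x, mu2, s2)

def loglik(params, times, counts, n_survivors):
    # Mass below time 0 is small for these parameters and is ignored.
    ll, f_prev = 0.0, mix_cdf(0.0, *params)
    for t, n in zip(times, counts):      # failures seen in (prev, t]
        f_t = mix_cdf(t, *params)
        ll += n * log(f_t - f_prev)
        f_prev = f_t
    ll += n_survivors * log(1.0 - f_prev)
    return ll

times = [100.0, 200.0, 300.0, 400.0]     # inspection schedule (h)
counts = [3, 7, 12, 9]                   # failures per interval
survivors = 19
print(loglik((0.3, 350.0, 40.0, 250.0, 60.0), times, counts, survivors))
# In practice this would be maximized (profile likelihood) or sampled
# (MCMC) over (pi, mu1, s1, mu2, s2) after the cleaning stage.
```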

18.
To be feasible for computationally intensive applications such as parametric studies, optimization, and control design, large‐scale finite element analysis requires model order reduction. This is particularly true in nonlinear settings that tend to dramatically increase computational complexity. Although significant progress has been achieved in the development of computational approaches for the reduction of nonlinear computational mechanics models, addressing the issue of contact remains a major hurdle. To this effect, this paper introduces a projection‐based model reduction approach for both static and dynamic contact problems. It features the application of a non‐negative matrix factorization scheme to the construction of a positive reduced‐order basis for the contact forces, and a greedy sampling algorithm coupled with an error indicator for achieving robustness with respect to model parameter variations. The proposed approach is successfully demonstrated for the reduction of several two‐dimensional, simple, but representative contact and self‐contact computational models. Copyright © 2015 John Wiley & Sons, Ltd.
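
The greedy sampling and error-indicator machinery is beyond a short example, but the non-negativity idea at the heart of the contact treatment is easy to show: contact forces are non-negative, so the reduced basis for them is built by non-negative matrix factorization rather than SVD. A minimal sketch using scikit-learn's NMF on synthetic force snapshots follows; the snapshot data are entirely illustrative.

```python
# Minimal sketch of the non-negative reduced-basis idea: factor a
# matrix of (non-negative) contact-force snapshots as F ~ W @ H with
# W, H >= 0, so any reconstruction W @ h with h >= 0 keeps forces
# non-negative -- unlike an SVD basis. Snapshot data are synthetic.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_dof, n_snap = 120, 40
# Synthetic snapshots: random mixtures of smooth non-negative
# "pressure bump" profiles along the contact surface.
x = np.linspace(0.0, 1.0, n_dof)
modes = np.stack([np.exp(-((x - c) ** 2) / 0.005) for c in (0.3, 0.5, 0.7)])
F = rng.random((n_snap, 3)) @ modes          # n_snap x n_dof, >= 0

nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(F.T)    # n_dof x 3: non-negative basis vectors
H = nmf.components_           # 3 x n_snap: non-negative coefficients

rel_err = np.linalg.norm(F.T - W @ H) / np.linalg.norm(F.T)
print(f"relative reconstruction error, 3 non-negative modes: {rel_err:.2e}")
```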

19.
20.
With the increase of product reliability, collecting time‐to‐failure data is becoming difficult, and degradation‐based methods have gained popularity. In this paper, a novel multi‐hidden semi‐Markov model is proposed to identify degradation and estimate the remaining useful life of a system. Multiple fused features are used to describe the degradation process so as to improve effectiveness and accuracy. The similarities of the features are captured by a new variable, combined with the forward and backward variables, to reduce computational effort. The degradation state is identified using a modified Viterbi algorithm, in which a linear function describes the contribution of each feature to state recognition. The remaining useful life can then be forecast by backward recursive equations. A case study is presented, and the results demonstrate the validity and effectiveness of the proposed method. Copyright © 2012 John Wiley & Sons, Ltd.
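
The multi-hidden semi-Markov machinery is paper-specific, but the state-identification step rests on the standard Viterbi recursion; a compact sketch for a plain discrete HMM is shown below. The paper's modification (feature contributions weighted by a linear function, and semi-Markov state durations) is not reproduced, and all model numbers are illustrative.

```python
# Compact sketch of the standard Viterbi recursion used (in modified
# form) for degradation-state identification; states, matrices, and
# the observation sequence are illustrative assumptions.
import numpy as np

states = ["healthy", "degrading", "critical"]
pi = np.array([0.90, 0.09, 0.01])       # initial state distribution
A = np.array([[0.95, 0.05, 0.00],       # left-to-right degradation chain
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])
B = np.array([[0.70, 0.25, 0.05],       # P(observed symbol | state)
              [0.20, 0.60, 0.20],
              [0.05, 0.15, 0.80]])
obs = [0, 0, 1, 1, 2, 2, 2]             # quantized feature values

with np.errstate(divide="ignore"):      # log(0) -> -inf is fine here
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)

delta = logpi + logB[:, obs[0]]         # best log-prob ending in each state
psi = []                                # backpointers
for o in obs[1:]:
    trans = delta[:, None] + logA       # delta_i + log a_ij
    psi.append(trans.argmax(axis=0))
    delta = trans.max(axis=0) + logB[:, o]

path = [int(delta.argmax())]            # backtrack the best path
for back in reversed(psi):
    path.append(int(back[path[-1]]))
path.reverse()
print([states[s] for s in path])
```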
