Similar Literature
A total of 20 similar records were found (search time: 31 ms).
1.
Using predeveloped software, a digital safety system is designed that meets the quality standards of a safety system. To demonstrate the quality, the design process and operating history of the product are reviewed along with configuration management practices. The application software of the safety system is developed in accordance with the planned life cycle. Testing, which is a major phase that takes a significant share of the overall life cycle, can be optimized if the testability of the software can be evaluated. The proposed testability measure of the software is based on the entropy of the importance of basic statements and the failure probability from a software fault tree. To calculate testability, a fault tree is used in the analysis of the source code. With a quantitative measure of testability, testing can be optimized. The proposed testability measure can also be used to demonstrate whether test cases based on uniform partitions, such as branch coverage criteria, result in homogeneous partitions, which are known to be more effective than random testing. In this paper, the testability measure is calculated for the modules of a nuclear power plant's safety software. Module testing with branch coverage criteria required fewer test cases for modules with higher testability. The result shows that the testability measure can be used to evaluate whether partitions have homogeneous characteristics.
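As a rough illustration of the entropy-based testability idea summarized above, the Python sketch below computes the entropy of normalized statement-importance weights and combines it with a fault-tree-derived failure probability. The importance values, the failure probability, and the combination rule (normalized entropy scaled by survival probability) are illustrative assumptions, not the paper's exact formulation.

```python
import math

def testability(importances, failure_prob):
    """Illustrative testability score: entropy of normalized statement
    importances, weighted by the fault-tree failure probability.
    (The combination rule is an assumption, not the paper's formula.)"""
    total = sum(importances)
    probs = [w / total for w in importances if w > 0]
    entropy = -sum(p * math.log2(p) for p in probs)        # statement-importance entropy
    max_entropy = math.log2(len(probs)) if len(probs) > 1 else 1.0
    return (entropy / max_entropy) * (1.0 - failure_prob)  # higher = easier to test (assumed)

# Hypothetical module: importance of its basic statements and its fault-tree failure probability
print(testability([0.4, 0.3, 0.2, 0.1], failure_prob=1e-3))
```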

2.
This paper quantitatively presents the results of a case study that examines a fault tree analysis framework for the safety of digital systems. The case study is performed for the digital reactor protection system of nuclear power plants. The broader usage of digital equipment in nuclear power plants gives rise to the need for assessing safety and reliability, because it plays an important role in proving the safety of a designed system in the nuclear industry. We quantitatively explain the relationship between the important characteristics of digital systems and the probabilistic safety assessment (PSA) result using mathematical expressions. We also demonstrate the effect of critical factors on system safety through a sensitivity study; the fault-tree quantification shows that several factors markedly affect system safety: common cause failure, the coverage of fault-tolerant mechanisms, and the software failure probability.
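As a minimal sketch of how the factors highlighted above (common cause failure, fault-tolerance coverage, and software failure probability) enter a fault-tree quantification, the snippet below evaluates the unavailability of a hypothetical two-channel (1-out-of-2) protection logic with a beta-factor CCF model. The 1oo2 structure and all numerical values are assumptions for illustration, not the paper's system model.

```python
def channel_unavailability(q_hw, coverage, q_sw):
    """One digital channel: uncovered hardware faults plus software failure."""
    return q_hw * (1.0 - coverage) + q_sw

def system_unavailability(q_hw, coverage, q_sw, beta):
    """1oo2 logic: independent failure of both channels plus a beta-factor common cause term."""
    q_ch = channel_unavailability(q_hw, coverage, q_sw)
    q_independent = ((1.0 - beta) * q_ch) ** 2
    q_ccf = beta * q_ch
    return q_independent + q_ccf

# Simple sensitivity scan over the CCF beta factor (values are illustrative)
for beta in (0.01, 0.05, 0.10):
    print(beta, system_unavailability(q_hw=1e-3, coverage=0.95, q_sw=1e-4, beta=beta))
```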

3.
This paper introduces a new development for modelling the time-dependent probability of failure on demand of parallel architectures, and illustrates its application to multi-objective optimization of proof testing policies for safety instrumented systems. The model is based on the mean test cycle, which includes the different evaluation intervals that a module passes through periodically during its time in service: test, repair and time between tests. The model is aimed at evaluating explicitly the effects of different test frequencies and strategies (i.e. simultaneous, sequential and staggered). It includes quantification of both detected and undetected failures, and puts special emphasis on the quantification of the contribution of common cause failure to the system probability of failure on demand as an additional component. Subsequently, the paper presents the multi-objective optimization of proof testing policies with genetic algorithms, using this model for quantification of average probability of failure on demand as one of the objectives. The other two objectives are the system spurious trip rate and lifecycle cost. This permits balancing of the most important aspects of safety system implementation. The approach addresses the requirements of the standard IEC 61508. The overall methodology is illustrated through a practical application case of a protective system against high temperature and pressure of a chemical reactor.
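A minimal numerical sketch of the idea: for an assumed 1oo2 architecture with undetected dangerous failure rate lambda_DU, the time-dependent PFD over a proof-test interval is integrated numerically, comparing simultaneous and staggered testing. The linear approximation PFD(t) ≈ lambda_DU·t per channel, the simplified beta-factor CCF treatment under staggering, and all parameter values are assumptions, not the paper's full mean-test-cycle model.

```python
import numpy as np

LAMBDA_DU = 1e-6   # undetected dangerous failure rate per hour (assumed)
TI = 8760.0        # proof test interval in hours (assumed)
BETA = 0.05        # common cause beta factor (assumed)

t = np.linspace(0.0, TI, 10_000)

def pfd_channel(t_since_test):
    return LAMBDA_DU * t_since_test          # linear approximation, valid for lambda*t << 1

# Simultaneous testing: both channels share the same time since their last test
pfd_simultaneous = ((1 - BETA) * pfd_channel(t)) ** 2 + BETA * pfd_channel(t)

# Staggered testing: the second channel is tested half an interval later;
# the CCF term is (crudely) credited to whichever test occurred most recently
t_b = (t + TI / 2) % TI
pfd_staggered = ((1 - BETA) ** 2) * pfd_channel(t) * pfd_channel(t_b) \
    + BETA * pfd_channel(np.minimum(t, t_b))

print("PFDavg simultaneous:", pfd_simultaneous.mean())
print("PFDavg staggered:   ", pfd_staggered.mean())
```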

4.
5.
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.
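The Bayesian quantification and fault-tree propagation described above can be illustrated very roughly as follows: a Beta prior on a failure-mode probability is updated with test evidence, and the posterior means of several failure modes are combined through an OR gate to a function-level probability. The prior parameters, test counts, independence assumption, and the single OR gate are illustrative assumptions, not the FASRE model itself.

```python
def beta_posterior_mean(alpha, beta, failures, trials):
    """Posterior mean of a Beta(alpha, beta) prior updated with binomial test evidence."""
    return (alpha + failures) / (alpha + beta + trials)

# Hypothetical failure modes of one function: (prior alpha, prior beta, failures seen, tests run)
failure_modes = [
    (0.5, 50.0, 0, 200),   # e.g. prior from developers' testing, then 200 failure-free tests
    (1.0, 100.0, 1, 500),  # e.g. prior from operating history of a reused component
]

mode_probs = [beta_posterior_mean(a, b, f, n) for a, b, f, n in failure_modes]

# OR-gate propagation to the function level (independence assumed)
p_function = 1.0
for p in mode_probs:
    p_function *= (1.0 - p)
p_function = 1.0 - p_function

print("failure-mode probabilities:", mode_probs)
print("function-level probability:", p_function)
```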

6.
Numerical simulators are widely used to model physical phenomena, and global sensitivity analysis (GSA) aims at studying the global impact of the input uncertainties on the simulator output. To perform GSA, statistical tools based on input/output dependence measures are commonly used. We focus here on the Hilbert–Schmidt independence criterion (HSIC). Sometimes, the probability distributions modeling the uncertainty of inputs may themselves be uncertain, and it is important to quantify their impact on GSA results. We call this second-level global sensitivity analysis (GSA2). However, GSA2, when performed with a Monte Carlo double loop, requires a large number of model evaluations, which is intractable with CPU-time-expensive simulators. To cope with this limitation, we propose a new statistical methodology based on a Monte Carlo single loop with a limited calculation budget. First, we build a unique sample of inputs and simulator outputs from a well-chosen probability distribution of inputs. From this sample, we perform GSA for various assumed probability distributions of inputs by using weighted HSIC measure estimators. Statistical properties of these weighted estimators are demonstrated. Subsequently, we define second-level HSIC-based measures between the distributions of inputs and GSA results, which constitute the GSA2 indices. The efficiency of our GSA2 methodology is illustrated on an analytical example, comparing several technical options. Finally, an application to a test case simulating a severe accident scenario on a nuclear reactor is provided.
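The single-loop idea rests on reweighting one Monte Carlo sample so that HSIC can be re-estimated under alternative input distributions. The sketch below shows a plain V-statistic HSIC estimator with Gaussian kernels and a likelihood-ratio-weighted variant; the kernel choice, the bandwidth rule, the toy simulator, and the exact weighting scheme are assumptions and do not reproduce the paper's estimators or their proved properties.

```python
import numpy as np
from scipy import stats

def gaussian_gram(x, bandwidth):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def hsic(x, y, weights=None):
    """V-statistic HSIC estimator with Gaussian kernels.
    `weights` are likelihood ratios f_target(x)/f_sampling(x) (assumed weighting scheme)."""
    n = len(x)
    w = np.full(n, 1.0 / n) if weights is None else weights / weights.sum()
    K = gaussian_gram(x, np.std(x))
    L = gaussian_gram(y, np.std(y))
    # Weighted double-centring of the Gram matrices
    Kc = K - w @ K - (K @ w)[:, None] + w @ K @ w
    Lc = L - w @ L - (L @ w)[:, None] + w @ L @ w
    return float(np.sum(np.outer(w, w) * Kc * Lc))

# Illustrative usage: inputs sampled from U(0,1), reweighted to a hypothetical Beta(2, 5)
rng = np.random.default_rng(0)
x = rng.uniform(size=500)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=500)   # toy simulator output
w = stats.beta(2, 5).pdf(x)                               # likelihood ratio vs. the U(0,1) density
print(hsic(x, y), hsic(x, y, weights=w))
```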

7.
Digital instrumentation and control (I&C) systems can provide important benefits in many safety-critical applications, but they can also introduce potential new failure modes that can affect safety. Unlike electro-mechanical systems, whose failure modes are fairly well understood and which can often be built to fail in a particular way, software errors are very unpredictable. There is virtually no nontrivial software that will function as expected under all conditions. Consequently, there is a great deal of concern about whether there is a sufficient basis on which to resolve questions about safety. In this paper, an approach for validating the safety requirements of digital I&C systems is developed which uses the Dynamic Flowgraph Methodology to conduct automated hazard analyses. The prime implicants of these analyses can be used to identify unknown system hazards, prioritize the disposition of known system hazards, and guide lower-level design decisions to either eliminate or mitigate known hazards. In a case study involving a space-based reactor control system, the method succeeded in identifying an unknown failure mechanism.
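Prime implicants of a Boolean hazard condition can be computed with standard two-level minimization; the sketch below uses SymPy's Quine–McCluskey-based SOPform on a toy hazard function standing in for a DFM analysis result. The condition variables and the truth table are made up for illustration, and DFM itself works on multi-valued, time-indexed models, which this sketch does not capture.

```python
from sympy import symbols
from sympy.logic import SOPform

# Hypothetical binary condition variables abstracted from a DFM model
valve_stuck, sensor_bias, sw_cmd_err = symbols("valve_stuck sensor_bias sw_cmd_err")

# Minterms (truth-table rows) for which the hazard occurs -- illustrative only
hazard_minterms = [
    [1, 0, 0],
    [1, 0, 1],
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
]

# SOPform returns a minimal sum-of-products built from prime implicants
print(SOPform([valve_stuck, sensor_bias, sw_cmd_err], hazard_minterms))
```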

8.
王泓 《计测技术》2006, 26(2): 13-15, 35
Black-box testing was applied to the test software of a data acquisition system. Based on an analysis of the software requirements and performance, an operational profile and test cases were established, reliability testing was carried out, and reliability test results were obtained for the input module of the software.
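To make the translated summary above concrete, here is a minimal sketch of reliability testing driven by an operational profile: test inputs are drawn according to assumed usage probabilities and the reliability of the input module is estimated from the observed pass rate. The profile classes, their probabilities, the stand-in module, and the oracle are all hypothetical.

```python
import random

# Hypothetical operational profile of the input module: usage class -> probability
profile = {"normal_range": 0.80, "boundary_value": 0.15, "out_of_range": 0.05}

def generate_input(usage_class):
    if usage_class == "normal_range":
        return random.uniform(0.0, 10.0)
    if usage_class == "boundary_value":
        return random.choice([0.0, 10.0])
    return random.uniform(-5.0, 15.0)

def input_module(value):
    """Stand-in for the software under test: clamp readings to the valid range."""
    return min(max(value, 0.0), 10.0)

trials, failures = 2000, 0
for _ in range(trials):
    cls = random.choices(list(profile), weights=profile.values())[0]
    out = input_module(generate_input(cls))
    if not 0.0 <= out <= 10.0:       # oracle: output must stay within the valid range
        failures += 1

print("estimated reliability:", 1.0 - failures / trials)
```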

9.
The component failure probability estimates from analysis of binomial system testing data are very useful because they reflect the operational failure probability of components in a field environment similar to the test environment. In practice, this type of analysis is often confounded by the problem of data masking: the status of tested components is unknown. Methods that account for this type of uncertainty are usually computationally intensive and impractical for complex systems. In this paper, we consider masked binomial system testing data and develop a probabilistic model to efficiently estimate component failure probabilities. In the model, all system tests are classified into test categories based on component coverage. Component coverage of test categories is modeled by a bipartite graph. Test category failure probabilities conditional on the status of covered components are defined. An EM algorithm to estimate component failure probabilities is developed based on a simple but powerful concept: equivalent failures and tests. By simulation we not only demonstrate the convergence and accuracy of the algorithm but also show that the probabilistic model is capable of analyzing systems in series, parallel and any other user-defined structures. A case study illustrates an application in test case prioritization.
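A highly simplified sketch of the "equivalent failures and tests" idea: for a series system whose test categories cover different subsets of components, each failed test is split into expected (equivalent) component failures in the E-step, and component probabilities are re-estimated in the M-step. The series-system assumption, the coverage sets, and the data are illustrative; the paper's bipartite-graph model is more general.

```python
import numpy as np

# Test categories: (indices of covered components, number of tests, number of failed tests) -- illustrative
categories = [((0, 1), 100, 4), ((1, 2), 80, 6), ((0, 2), 50, 2)]
p = np.full(3, 0.05)   # initial guess for the component failure probabilities

for _ in range(200):                                       # EM iterations
    eq_failures = np.zeros(3)
    eq_tests = np.zeros(3)
    for covered, n_tests, n_failed in categories:
        covered = list(covered)
        p_cat = 1.0 - np.prod(1.0 - p[covered])            # P(category test fails), series logic
        for j in covered:
            # E-step: expected failures of component j among this category's failed tests
            eq_failures[j] += n_failed * p[j] / p_cat
            eq_tests[j] += n_tests                          # every test of the category exercises j
    p = eq_failures / eq_tests                              # M-step

print("estimated component failure probabilities:", p)
```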

10.
When specifying requirements for software controlling hybrid systems and conducting safety analysis, engineers find that requirements are often known only in qualitative terms and that existing fault tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. In this paper, we propose Causal Requirements Safety Analysis (CRSA) as a technique to qualitatively evaluate the causal relationship between software faults and physical hazards. This technique, extending the qualitative formal-methods process and utilizing information captured in the state trajectory, provides specific guidelines on how to identify failure modes and the relationships among them. Using a simplified electrical power system as an example, we describe step-by-step procedures for conducting CRSA. Our experience of applying CRSA to perform fault tree analysis on requirements for the Wolsong nuclear power plant shutdown system indicates that CRSA is an effective technique for assisting safety engineers.

11.
In this work, software dependability under memory faults in the operational phase is predicted by two models: an analytic model and a stochastic activity network (SAN) model. The analytic model is based on simple reliability theory and graph theory, representing the software as a graph composed of nodes and arcs. Through proper transformation, the graph can be reduced to a simple two-node graph from which software reliability can be derived. The SAN model permits the representation of concurrency, timeliness, fault tolerance, and degradable performance of the system and provides a means for determining the stochastic behavior of the software. Using these models, we predict the reliability of application software in a digital system, the Interposing Logic System (ILS), of a nuclear power plant and show the sensitivity of software reliability to the major physical parameters that affect software failure in the normal operation phase. It is found that the effects of hardware faults on software failure should be considered for accurate prediction of software dependability in the operational phase.
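The graph-reduction step of the analytic model can be illustrated with the usual series/parallel rules; the sketch below reduces a small, made-up node/arc structure to a single reliability figure. The structure and node reliabilities are assumptions, and the SAN part of the work is not represented.

```python
from functools import reduce

def series(reliabilities):
    """All elements must work: multiply reliabilities."""
    return reduce(lambda acc, r: acc * r, reliabilities, 1.0)

def parallel(reliabilities):
    """At least one element must work: complement of all failing."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), reliabilities, 1.0)

# Hypothetical software graph: an input handler in series with two redundant processing
# paths (each a series of two nodes), followed by an output node.
r_input = 0.999
r_path_a = series([0.995, 0.998])
r_path_b = series([0.990, 0.999])
r_output = 0.9995

r_software = series([r_input, parallel([r_path_a, r_path_b]), r_output])
print("reduced two-node-equivalent reliability:", r_software)
```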

12.
VVS Sarma, D Vijay Rao 《Sadhana》1997, 22(1): 121-132
In today’s competitive environment for software products, quality is an important characteristic. The development of large-scale software products is a complex and expensive process. Testing plays a very important role in ensuring product quality. Improving the software development process leads to improved product quality. We propose a queueing model based on re-entrant lines to depict the process of software modules undergoing testing/debugging, inspections and code reviews, verification and validation, and quality assurance tests before being accepted for use. Using the re-entrant line model for software testing, bounds on test times are obtained by considering the state transitions for a general class of modules and solving a linear programming model. Scheduling of software modules for tests at each process step yields the constraints for the linear program. The methodology presented is applied to the development of a software system and bounds on test times are obtained. These bounds are used to allocate time for the testing phase of the project and to estimate the release times of software.
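A crude sketch of the workload-based bound implied by the re-entrant-line view: each module class visits the test stations (testing/debugging, inspection, verification and validation, quality assurance) possibly more than once, and the total test time can be bounded below by the busiest station's workload. The routing, visit times, and module counts are illustrative; the paper derives its bounds from a full linear program over the scheduling constraints.

```python
# Stations of the re-entrant test line
stations = ["test_debug", "inspection", "verification_validation", "qa"]

# Hypothetical module classes: (number of modules, routing of (station, hours per visit)),
# with the critical class re-entering test_debug after inspection.
classes = {
    "critical": (12, [("test_debug", 5.0), ("inspection", 2.0), ("test_debug", 3.0),
                      ("verification_validation", 4.0), ("qa", 1.0)]),
    "standard": (30, [("test_debug", 2.0), ("inspection", 1.0),
                      ("verification_validation", 1.5), ("qa", 0.5)]),
}

workload = {s: 0.0 for s in stations}
for count, route in classes.values():
    for station, hours in route:
        workload[station] += count * hours

print("station workloads (h):", workload)
print("lower bound on total test time (h):", max(workload.values()))
```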

13.
Industrial software companies developing safety-critical systems are required to use rigorous safety analysis techniques to demonstrate compliance to regulatory bodies. In this paper, we describe an approach to formal verification of functional properties of requirements for embedded real-time software written in a software cost reduction (SCR)-style language, using the PVS specification and verification system. Key contributions of the paper include the development of an automated method of translating SCR-style requirements into the PVS input language as well as the identification of property templates often needed in verification. Using the specification for a nuclear power plant system currently in operation, we demonstrate how safety demonstration on requirements can be accomplished while taking advantage of the assurance provided by formal methods.

14.
The quantization error of a quantizer (ideal A/D converter) is investigated. Correlation between quantization error and quantizer input is considered. The input signal is taken as a sinusoid due to its importance in instrumentation systems. The cases of no dither, uniform dither, and discrete (digital) dither are considered. Effects of the dither probability density function (PDF) are discussed. The relationship between uniform dithered and discrete dithered quantizer inputs is derived. Spectra of the average quantization error, corresponding to an arbitrary input signal, are investigated. Different dither forms (Gaussian, uniform, and discrete) are compared, and the effects of the dither PDF are discussed. A quantitative basis for comparing dither forms, and hence, selecting the one most appropriate for a particular application, is provided.
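A small numerical sketch of the comparison described above: a sinusoid is quantized with no dither and with non-subtractive uniform dither of one LSB, and the quantization-error spectra are compared. The signal parameters, quantizer resolution, and dither amplitude are arbitrary choices for illustration.

```python
import numpy as np

fs, n, lsb = 10_000.0, 4096, 1.0 / 256            # sample rate, record length, quantizer step (assumed)
t = np.arange(n) / fs
x = 0.9 * np.sin(2 * np.pi * 137.0 * t)            # sinusoidal input

def quantize(signal, step):
    """Ideal uniform mid-tread quantizer."""
    return step * np.round(signal / step)

rng = np.random.default_rng(1)
err_plain = quantize(x, lsb) - x
dither = rng.uniform(-lsb / 2, lsb / 2, n)          # uniform dither, 1 LSB peak-to-peak
err_dithered = quantize(x + dither, lsb) - x        # non-subtractive dither: error vs. clean input

for name, err in [("no dither", err_plain), ("uniform dither", err_dithered)]:
    spectrum = np.abs(np.fft.rfft(err * np.hanning(n))) / n
    print(f"{name}: error RMS = {err.std():.5f}, largest spectral line = {spectrum.max():.6f}")
```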

15.
Statistical analysis of fracture in graphite
A statistical model is proposed to study the fracture of graphite. This model, based on a more general one proposed by She et al., uses a local fracture criterion for a microcrack, a distribution function for microcracks and the weakest link principle to predict failure probability as a function of applied loading and the distribution of microcracks. The model considers the effect of the three-dimensional stress state and therefore gives a general representation of both the effects of complex loading and the microcrack distribution. The inputs to the model can be determined either by studying the microstructural features of the graphite or by choosing inputs that make the model prediction fit a set of experimental results. The latter method is used to apply the model to the failure results of Rose and Tucker. One set of results is used to calibrate the model, which is then applied to predict the failure behavior of a second set of experimental results. Finally, the effect of the various input variables on failure probability is studied, first by considering a graphical representation of failure probability as a function of the input variables and second by writing the equations in terms of nondimensional variables.
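The weakest-link idea behind the model can be illustrated with the familiar two-parameter Weibull form: the failure probability of a volume under a (here uniform) stress follows from integrating the local failure intensity over the volume. The parameter values and the uniform-stress simplification are assumptions; the paper's model uses a local microcrack fracture criterion and the full three-dimensional stress state.

```python
import math

def weakest_link_pf(stress, volume, sigma_0, m, v_0=1.0):
    """Weakest-link failure probability for a uniformly stressed volume (two-parameter Weibull)."""
    return 1.0 - math.exp(-(volume / v_0) * (stress / sigma_0) ** m)

# Hypothetical graphite parameters: characteristic strength 30 MPa, Weibull modulus 8
for stress in (15.0, 20.0, 25.0, 30.0):
    print(stress, "MPa ->", weakest_link_pf(stress, volume=2.0, sigma_0=30.0, m=8.0))
```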

16.
A hard real-time system, such as a fly-by-wire system, fails catastrophically (e.g. losing stability) if its control inputs are not updated by its digital controller computer within a certain timing constraint called the hard deadline. To assess and validate such systems' reliabilities using a semi-Markov model that explicitly contains the deadline information, we propose a path-space approach deriving the upper and lower bounds of the probability of system failure. These bounds are derived by using only simple parameters, and they are especially suitable for highly reliable systems which should recover quickly. Analytical bounds are derived for both the commonly encountered exponential and Weibull failure distributions, and are shown to be effective through numerical examples, while considering three repair strategies: repair-as-good-as-new, repair-as-good-as-old, and repair-better-than-old.

17.
邢月卿, 刘乘, 黄威, 张瑜, 孙德强 《包装工程》2016, 37(13): 71-76
Objective: To introduce a design method for a creep testing system for cushioning materials, covering both the hardware design of the test apparatus and the development of the corresponding test software. Methods: A new apparatus for testing the creep behavior of cushioning materials was designed, and its test software was developed in LabVIEW. The PC's built-in sound card was used as the data acquisition card and, together with a sensor and a voltage-to-frequency (V/F) converter, acquired and analyzed the change in thickness of a cushioning material under a given static load to obtain its creep curve. Results: A displacement sensor measured the time-dependent deformation of the cushioning material; after signal conditioning and low-pass filtering, the signal was fed to a V/F converter built around an LM331 chip. The sound card acquired the frequency signal output by the converter and converted it to digital form, and the computer read this digital signal and displayed creep-time, displacement-time, and stress-time curves in the test software interface. Conclusion: The test apparatus and software system are simple to operate; using the PC's built-in sound card in place of an expensive data acquisition card reduces cost while offering good flexibility and reliability, making creep testing of cushioning materials more convenient and providing a powerful tool for cushioning design based on a better understanding of creep behavior.

18.
This work presents a framework for predicting the unknown probability distributions of input parameters, starting from scarce experimental measurements of other input parameters and the Quantity of Interest (QoI), as well as a computational model of the system. This problem is relevant to aeronautics, an example being the calculation of the material properties of carbon fibre composites, which are often inferred from experimental measurements of the full-field response. The method presented here builds a probability distribution for the missing inputs with an approach based on probabilistic equivalence. The missing inputs are represented with a multi-modal Polynomial Chaos Expansion (mmPCE), a formulation which enables the algorithm to efficiently handle multi-modal experimental data. The parameters of the mmPCE are found through an optimisation process. The mmPCE is used to produce a dataset for the missing inputs; the input uncertainties are then propagated through the computational model of the system using arbitrary Polynomial Chaos (aPC) in order to produce a probability distribution for the QoI. This is in addition to an estimate of the QoI's probability distribution arising from experimental measurements. The coefficients of the mmPCE are adjusted such that the statistical distance between the two estimates of the probability distribution of the QoI is minimised. The algorithm has two key aspects: the metric used to quantify the statistical distance between distributions and the aPC formulation used to propagate the input uncertainties. In this work the Kolmogorov–Smirnov (KS) distance was used to quantify the distance between probability distributions for the QoI, as it allows high-order statistical moments to be matched and is non-parametric. The framework for back-calculating unknown input distributions was demonstrated using a dataset comprising scarce experimental measurements of the material properties of a batch of carbon fibre coupons. The ability of the algorithm to back-calculate a distribution for the shear and compression strength of the composite, based on limited experimental data, was demonstrated. It was found that it was possible to recover reasonably accurate probability distributions for the missing material properties, even when an extremely scarce data set and a fairly simplistic computational model were used.
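The core loop of the framework, adjusting the parameters of the missing-input distribution until the model-predicted QoI distribution matches the measured one in Kolmogorov–Smirnov distance, can be sketched as below. A plain normal distribution stands in for the mmPCE, a trivial algebraic model stands in for the aPC propagation, and the "experimental" QoI sample is synthetic; only the KS-minimization structure is illustrated.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

def model(missing_input, known_input):
    """Stand-in for the propagated computational model of the system."""
    return missing_input * known_input + 0.05 * known_input ** 2

# Scarce "experimental" data: known input and measured QoI (synthetic, for illustration)
known = rng.normal(10.0, 0.5, size=40)
qoi_measured = model(rng.normal(3.0, 0.3, size=40), known)

def ks_objective(params):
    mean, sd = params
    missing = rng.normal(mean, abs(sd), size=2000)        # candidate distribution of the missing input
    qoi_predicted = model(missing, rng.choice(known, size=2000))
    return stats.ks_2samp(qoi_predicted, qoi_measured).statistic

result = optimize.minimize(ks_objective, x0=[1.0, 1.0], method="Nelder-Mead")
print("back-calculated mean and std of the missing input:", result.x)
```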

19.
An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modified Levenberg–Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of our proposed approach has been compared using real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of our proposed approach have greater accuracy in predicting software cumulative failure time compared to existing approaches.
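A bare-bones sketch of the multiple-delayed-input idea: the cumulative failure time series is turned into lagged input vectors and a small multilayer perceptron predicts the next value. scikit-learn's MLPRegressor is used here in place of the paper's GA-optimized, Bayesian-regularized Levenberg–Marquardt training, and the data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic cumulative failure times (hours), for illustration only
cumulative_times = np.cumsum(np.random.default_rng(3).exponential(scale=50.0, size=120))

def lagged_dataset(series, n_lags):
    """Build (X, y) pairs where X holds the previous n_lags values and y the next one."""
    X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

n_lags = 4          # number of delayed inputs (the paper selects this via a genetic algorithm)
X, y = lagged_dataset(cumulative_times, n_lags)
split = int(0.8 * len(X))

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:split], y[:split])
print("next-step predictions:", model.predict(X[split:])[:5])
print("actual values:        ", y[split:split + 5])
```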

20.
The aim of this work is to predict the failure probability of a locking system. This failure probability is assessed using complementary methods: the First-Order Reliability Method (FORM) and Second-Order Reliability Method (SORM) as approximate methods, and Monte Carlo simulations as the reference method. Both types are implemented in a specific software package [Phimeca software. Software for reliability analysis developed by Phimeca Engineering S.A.] used in this study. For the Monte Carlo simulations, a response surface, based on experimental design and finite element calculations [Abaqus/Standard User's Manual vol. I.], is elaborated so that the relation between the random input variables and the structural responses can be established. Application of these reliability methods to two configurations of the locking system shows the high robustness of the first configuration and enables design improvements for the second.
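The Monte Carlo reference calculation described above amounts to sampling the random inputs, evaluating the response-surface limit state, and counting failures; the sketch below does exactly that for a made-up quadratic response surface and normally distributed inputs. The limit-state form, the distributions, and all numerical values are illustrative, not the locking-system model.

```python
import numpy as np

rng = np.random.default_rng(4)

def response_surface(load, strength):
    """Hypothetical quadratic response surface fitted from a design of experiments + FE runs."""
    return strength - (1.2 * load + 0.002 * load ** 2)     # safety margin g; g < 0 means failure

n = 1_000_000
load = rng.normal(100.0, 15.0, n)          # assumed load distribution
strength = rng.normal(180.0, 10.0, n)      # assumed resistance distribution
failures = response_surface(load, strength) < 0.0

pf = failures.mean()
std_err = np.sqrt(pf * (1.0 - pf) / n)
print(f"Monte Carlo failure probability: {pf:.2e} +/- {std_err:.1e}")
```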


