Similar Documents (20 results)
1.
Research in the field of software reliability, dedicated to the analysis of software failure processes, is quite diverse. In recent years, several attractive rate-based simulation approaches have been proposed. Thus far, it appears that most existing simulation approaches do not take into account the number of available debuggers (or developers). In practice, the number of debuggers will be carefully controlled. If all debuggers are busy, they may not address newly detected faults for some time. Furthermore, practical experience shows that fault-removal time is not negligible, and the number of removed faults generally lags behind the total number of detected faults, because fault detection activities continue as faults are being removed. Given these facts, we apply queueing theory to describe and explain possible debugging behavior during software development. Two simulation procedures are developed based on $G/G/\infty$, and $G/G/m$ queueing models, respectively. The proposed methods will be illustrated using real software failure data. The analysis conducted through the proposed framework can help project managers assess the appropriate staffing level for the debugging team from the standpoint of performance, and cost-effectiveness.
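As a rough sketch of this queueing view of debugging, the following simulates an $M/M/m$ special case of the paper's $G/G/m$ setting: fault detections arrive as a Poisson process, a fixed pool of m debuggers repairs them with exponential fix times, and newly detected faults queue whenever all debuggers are busy. The detection rate, fix rate, and debugger count below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu, m = 2.0, 0.5, 5      # assumed detection rate, fix rate, debugger count
n = 2000                      # number of detected faults to simulate

arrivals = np.cumsum(rng.exponential(1 / lam, n))   # fault detection epochs
service = rng.exponential(1 / mu, n)                # fault-removal durations
free_at = np.zeros(m)                               # when each debugger frees up

waits = np.empty(n)
for i, t in enumerate(arrivals):
    k = free_at.argmin()             # earliest-available debugger (FIFO queue)
    start = max(t, free_at[k])       # fix starts once fault and debugger are ready
    waits[i] = start - t             # time the fault sits unaddressed
    free_at[k] = start + service[i]

print(f"mean wait before debugging starts: {waits.mean():.2f}")
print(f"fraction of faults that had to queue: {(waits > 0).mean():.2%}")
```

Re-running with different values of m shows how quickly the queueing fraction falls as staffing grows, which is the performance/cost trade-off the abstract points at.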

2.
Software reliability modeling & estimation plays a critical role in software development, particularly during the software testing stage. Although there are many research papers on this subject, few of them address the realistic time delays between fault detection and fault correction processes. This paper investigates an approach to incorporate the time dependencies between the fault detection, and fault correction processes, focusing on the parameter estimation of the combined model. Maximum likelihood estimates of combined models are derived from an explicit likelihood formula under various time delay assumptions. Various characteristics of the combined model, like the predictive capability, are also analyzed, and compared with the traditional least squares estimation method. Furthermore, we study a direct, useful application of the proposed model & estimation method to the classical optimal release time problem faced by software decision makers. The results illustrate the effect of time delay on the optimal release policy, and the overall software development cost.
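A minimal sketch of the combined detection-and-delayed-correction idea: detections follow a Goel-Okumoto-type NHPP (generated by thinning), each fault's correction lags its detection by an exponential delay, and the delay rate is then recovered by maximum likelihood. The GO form, the exponential-delay assumption, and all parameter values are illustrative; the paper's combined likelihood is more general.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a combined detection/correction history (all values illustrative)
a, b, theta, T = 100, 0.05, 0.2, 60.0   # GO parameters; exponential delay rate

# thinning: accept envelope arrivals of rate a*b with probability exp(-b*t)
t, detections = 0.0, []
while True:
    t += rng.exponential(1 / (a * b))
    if t >= T:
        break
    if rng.random() < np.exp(-b * t):
        detections.append(t)
delays = rng.exponential(1 / theta, len(detections))  # correction lags detection

# maximum likelihood under the assumed exponential-delay model:
#   log L(theta) = n*log(theta) - theta*sum(delays)  =>  theta_hat = n/sum(delays)
theta_hat = len(detections) / delays.sum()
print(f"true delay rate {theta}, MLE {theta_hat:.3f} from {len(detections)} faults")
```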

3.
Over the past 30 years, many software reliability growth models (SRGM) have been proposed. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of personnel, the size of the debugging team, the technique(s) being used, and so on. During software testing, practical experiences show that mutually independent faults can be directly detected and removed, but mutually dependent faults can be removed iff the leading faults have been removed. That is, dependent faults may not be immediately removed, and the fault removal process lags behind the fault detection process. In this paper, we will first give a review of fault detection & correction processes in software reliability modeling. We will then illustrate the fact that detected faults cannot be immediately corrected with several examples. We also discuss the software fault dependency in detail, and study how to incorporate both fault dependency and debugging time lag into software reliability modeling. The proposed models are fairly general models that cover a variety of known SRGM under different conditions. Numerical examples are presented, and the results show that the proposed framework to incorporate both fault dependency and debugging time lag for SRGM has a better prediction capability. In addition, an optimal software release policy for the proposed models, based on cost-reliability criterion, is proposed. The main purpose is to minimize the cost of software development when a desired reliability objective is given.
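One simple instantiation of the removal-lags-detection idea: take a GO-type detection mean value function and delay removal by a constant debugging lag phi. All parameter values are hypothetical; the paper's models also handle fault dependency, which this sketch omits.

```python
import numpy as np

a, b, phi = 100, 0.1, 5.0   # hypothetical: total faults, detection rate, lag

def m_detect(t):
    """GO-type mean number of faults detected by time t."""
    return a * (1 - np.exp(-b * t))

def m_remove(t):
    """Removal lags detection by a constant debugging delay phi."""
    return m_detect(t - phi) if t > phi else 0.0

for t in (10, 30, 60):
    print(f"t={t:>2}: detected {m_detect(t):5.1f}, removed {m_remove(t):5.1f}, "
          f"backlog {m_detect(t) - m_remove(t):4.1f}")
```

The backlog column is precisely the gap between detected and corrected faults that the abstract describes.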

4.
A software reliability model presented here assumes a time-dependent failure rate and that debugging can remove as well as add faults with a nonzero probability. Based on these assumptions, the expected number of faults and mean standard error of the estimated faults remaining in the system are derived. The model treats the capability of correcting errors as a random process under which most of the existing software reliability models become special cases of this proposed one. It, therefore, serves to realize a competing risk problem and to unify much of the current software reliability theory. The model deals with the nonindependence of error correction and should be extremely valuable for a large-scale software project.
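A toy Monte Carlo rendering of the "debugging can remove as well as add faults" assumption, with invented probabilities: each debugging attempt removes the targeted fault with probability p_fix and independently introduces a new fault with probability p_new.

```python
import numpy as np

rng = np.random.default_rng(4)
p_fix, p_new = 0.9, 0.15   # assumed: P(remove the fault), P(introduce a new one)
runs, attempts, start = 10_000, 60, 50

remaining = np.full(runs, start)
for _ in range(attempts):
    active = remaining > 0                          # nothing to debug once clean
    removed = active & (rng.random(runs) < p_fix)
    added = active & (rng.random(runs) < p_new)
    remaining += added.astype(int) - removed.astype(int)

print(f"mean faults remaining after {attempts} attempts: {remaining.mean():.1f}")
```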

5.
Software reliability is often defined as the probability of failure-free software operation for a specified period of time in a specified environment. During the past 30 years, many software reliability growth models (SRGM) have been proposed for estimating the reliability growth of software. In practice, effective debugging is not easy because the fault may not be immediately obvious. Software engineers need time to read, and analyze the collected failure data. The time delayed by the fault detection & correction processes should not be negligible. Experience shows that the software debugging process can be described, and modeled using a queueing system. In this paper, we will use both finite, and infinite server queueing models to predict software reliability. We will also investigate the problem of imperfect debugging, where fixing one bug creates another. Numerical examples based on two sets of real failure data are presented, and discussed in detail. Experimental results show that the proposed framework incorporating both fault detection, and correction processes for SRGM has a fairly accurate prediction capability.
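For the infinite-server case, the expected number of corrections completed by time $T$ has the convolution form $m_c(T) = \int_0^T \lambda(s)\,G(T-s)\,ds$, where $\lambda$ is the detection intensity and $G$ is the repair-time CDF. The sketch below checks this numerically for an assumed GO-type detection process with exponential repairs; every value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, mu, T = 100, 0.1, 0.25, 40.0   # assumed GO detection + exp(mu) repairs

# analytic mean of corrections completed by T (midpoint rule):
#   m_c(T) = integral_0^T a*b*exp(-b*s) * (1 - exp(-mu*(T - s))) ds
ds = T / 20_000
s = (np.arange(20_000) + 0.5) * ds
m_c = (a * b * np.exp(-b * s) * (1 - np.exp(-mu * (T - s)))).sum() * ds

# simulation: every detected fault starts repair at once (infinitely many servers)
done = []
for _ in range(500):
    t, count = 0.0, 0
    while True:
        t += rng.exponential(1 / (a * b))    # thinning envelope of rate a*b
        if t >= T:
            break
        if rng.random() < np.exp(-b * t):    # accept as a real detection
            count += t + rng.exponential(1 / mu) <= T
    done.append(count)

print(f"analytic {m_c:.1f} vs simulated {np.mean(done):.1f} faults corrected by T")
```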

6.
This paper presents a Bayes nonparametric approach for tracking and predicting software reliability. We use the common assumptions on the software operational environment to get a stochastic model where the successive times between software failures are exponentially distributed; their failure rates have Markov priors. Under these general assumptions we give Bayes estimates of the parameters that assess and predict the software reliability. We give algorithms (based on Monte-Carlo methods) to compute these Bayes estimates. Our approach allows the reliability analyst to construct a personal software reliability model simply by specifying the available prior knowledge; afterwards the results in this paper can be used to get Bayes estimates of the useful reliability parameters. Examples of possible prior physical knowledge concerning the software testing and correction environments are given. The maximum-entropy principle is used to translate this knowledge to prior distributions on the failure-rate process. Our approach is used to study some simulated and real failure data sets.
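The paper places Markov priors on a whole sequence of failure rates; the sketch below collapses that to the simplest possible case, a single constant rate with a conjugate gamma prior, so that the Monte Carlo Bayes estimates can be checked against the closed-form posterior. The data and the prior hyperparameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# observed inter-failure times (illustrative data, exponential model)
gaps = np.array([1.2, 0.7, 2.5, 1.9, 3.1, 4.8, 2.2, 6.0, 5.4, 7.3])

# gamma(a0, b0) prior on the failure rate; conjugacy gives the exact posterior
a0, b0 = 2.0, 1.0
post = rng.gamma(a0 + gaps.size, 1 / (b0 + gaps.sum()), 100_000)

# Bayes (posterior-mean) estimates computed by Monte Carlo
x = 5.0
print(f"posterior mean rate: {post.mean():.3f}")
print(f"P(no failure in next {x} h): {np.exp(-post * x).mean():.3f}")
```

Note that the reliability estimate averages $e^{-\lambda x}$ over the posterior rather than plugging in the mean rate, which is the distinction the Monte Carlo machinery exists to handle.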

7.
Fault tolerant software uses redundancy to improve reliability; but such redundancy requires additional resources and tends to be costly, therefore the redundancy level needs to be optimized. Our optimization models determine the optimal level of redundancy within a software system under the assumption that functionally equivalent software components fail independently. A framework illustrates the tradeoff between the cost of using N-version programming and the improved reliability for a software system. The 2 models deal with: a single task, and multitask software. These software systems consist of several modules where each module performs a subtask and, by sequential execution of modules, a major task is performed. Major assumptions are: 1) several versions of each module, each with an estimated cost and reliability, are available, 2) these module versions fail independently. Optimization models are used to select the optimal set of versions for each module such that the system reliability is maximized and total cost remains within budget.
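A brute-force sketch of the selection problem under strong simplifying assumptions: each module may run several redundant versions in parallel and is treated as working if any chosen version works (the paper's N-version voting arrangement may differ), modules execute in series, versions fail independently, and total cost must stay within budget. The catalogs and budget are made up.

```python
from itertools import chain, combinations, product

# assumed catalogs of (reliability, cost) per module version; illustrative only
modules = [
    [(0.90, 2), (0.95, 4)],
    [(0.85, 1), (0.93, 3)],
    [(0.92, 2), (0.97, 5)],
]
budget = 12

def subsets(versions):
    """Every nonempty subset of a module's versions (its redundancy options)."""
    return chain.from_iterable(
        combinations(versions, k) for k in range(1, len(versions) + 1))

best = (0.0, 0, None)
for pick in product(*(list(subsets(m)) for m in modules)):
    cost = sum(c for module in pick for _, c in module)
    if cost > budget:
        continue
    rel = 1.0
    for module in pick:          # modules run in series ...
        fail = 1.0
        for r, _ in module:      # ... chosen versions in parallel, independent
            fail *= 1 - r
        rel *= 1 - fail
    best = max(best, (rel, -cost, pick))

rel, neg_cost, pick = best
print(f"best reliability {rel:.4f} at cost {-neg_cost}: {pick}")
```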

8.
In this paper, we discuss a software reliability growth model with a learning factor for imperfect debugging based on a non-homogeneous Poisson process (NHPP). Parameters used in the model are estimated. An optimal release policy is obtained for a software system based on the total mean profit and reliability criteria. A random software life-cycle is also incorporated in the discussion. Numerical results are presented in the final section.

9.
Past research in software reliability concentrated on reliability growth of one-version software. This work proposes models for describing the dependency of N-version software. The models are illustrated via a logarithmic Poisson execution-time model by postulating assumptions of dependency among nonhomogeneous Poisson processes (NHPPs) of debugging behavior. Two-version, three-version, and general N-version models are proposed. The redundancy techniques discussed serve as a basis for fault-tolerant software design. The system reliability, related performance measures, and estimation of model parameters when N=2 are presented. Based on the assumption of linear dependency among the NHPPs, two types of models are developed. The analytical models are useful primarily in estimating and monitoring software reliability of fault-tolerant software. Without considering dependency of failures, the estimation of reliability would not be conservative.

10.
Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding activities; (2) debugging errors made during attempts to remove faults identified during software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure intensity function to development and debugging error occurrence throughout all software life-cycle phases. Software failure intensity is related to development and debugging errors because data on development and debugging errors are available early in the software life-cycle and can be used to create early predictions of software reliability. Software reliability then becomes a variable which can be controlled up front, viz, as early as possible in the software development life-cycle. The model parameters were derived based on data reported in the open literature. A procedure to account for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of this stochastic model is suggested. This procedure is based on the success likelihood methodology (SLIM). The stochastic model is then used to study the introduction and removal of faults, and to calculate the consequent failure intensity value of a small software application developed using a waterfall software development process.

11.
In this paper, we study the impact of software testing effort & efficiency on the modeling of software reliability, including the cost for optimal release time. This paper presents two important issues in software reliability modeling & software reliability economics: testing effort, and efficiency. First, we propose a generalized logistic testing-effort function that enjoys the advantage of relating work profile more directly to the natural flow of software development, and can be used to describe the possible testing-effort patterns. Furthermore, we incorporate the generalized logistic testing-effort function into software reliability modeling, and evaluate its fault-prediction capability through several numerical experiments based on real data. Secondly, we address the effects of new testing techniques or tools for increasing the efficiency of software testing. Based on the proposed software reliability model, we present a software cost model to reflect the effectiveness of introducing new technologies. Numerical examples & related data analyses are presented in detail. From the experimental results, we obtain a software economic policy which provides a comprehensive analysis of software based on cost & test efficiency. Moreover, the policy can also help project managers determine the right time to stop testing and release the software to market.
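A sketch using one common generalized-logistic parameterization of the testing-effort function, plugged into an exponential-type NHPP mean value function. Both functional forms and every parameter value are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

# one generalized-logistic testing-effort form (illustrative parameters):
#   W(t) = N / (1 + A * exp(-alpha * kappa * t)) ** (1 / kappa)
N, A, alpha, kappa = 1000.0, 20.0, 0.15, 2.0

def W(t):
    """Cumulative testing-effort consumed by time t."""
    return N / (1 + A * np.exp(-alpha * kappa * t)) ** (1 / kappa)

# effort-driven exponential-type NHPP mean value function:
#   m(t) = a * (1 - exp(-r * (W(t) - W(0))))
a, r = 120.0, 0.004

def m(t):
    return a * (1 - np.exp(-r * (W(t) - W(0))))

for t in (0, 10, 20, 40, 80):
    print(f"t={t:>2}  cumulative effort {W(t):7.1f}  expected faults {m(t):6.1f}")
```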

12.
Software-testing (debugging) is one of the most important components in software development. An important question in the debugging process is, when to stop. The choice is usually based on one of two decision criteria: (1) when the reliability has reached a given threshold, and (2) when the gain in reliability cannot justify the testing cost. Various stopping rules and software reliability models are compared by their ability to deal with these two criteria. Two new stopping rules, initiated by theoretical study of the optimal stopping rule based on cost, are more stable than other rules for a large variety of bug structures. The 1-step-ahead stopping rules based on the Musa et al. basic execution and logarithmic Poisson models, as well as the stopping rule by Dalal and Mallows (1990), work well for software with many relatively small bugs (bugs with very low occurrence rates). The comparison was done by simulation.
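A minimal rendering of a 1-step-ahead cost rule, under an assumed already-fitted GO-type model: keep testing while the field cost avoided by the faults expected in the next time unit exceeds the cost of that unit of testing. The rule's exact form, the model, and all costs here are invented for illustration and are not the specific rules compared in the paper.

```python
import numpy as np

# illustrative estimates from a fitted GO-type model, plus invented costs
a_hat, b_hat = 150.0, 0.02   # estimated total faults, detection rate
c_test = 40.0                # cost of one more unit of testing time
c_field = 2000.0             # cost of a fault escaping to the field

def expected_detections(t, dt=1.0):
    """Expected faults found in (t, t+dt] under the fitted model."""
    m = lambda u: a_hat * (1 - np.exp(-b_hat * u))
    return m(t + dt) - m(t)

# 1-step-ahead rule: stop as soon as another unit of testing costs more than
# the expected field cost it saves
t = 0.0
while c_field * expected_detections(t) > c_test:
    t += 1.0
print(f"stop at t = {t:.0f}; expected residual faults: "
      f"{a_hat * np.exp(-b_hat * t):.1f}")
```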

13.
Failure correlation in software reliability models
Perhaps the most stringent restriction in most software reliability models is the assumption of statistical independence among successive software failures. The authors' research was motivated by the fact that although there are practical situations in which this assumption could be easily violated, much of the published literature on software reliability modeling does not seriously address this issue. The research work in this paper is devoted to developing a software reliability modeling framework that can consider the phenomena of failure correlation, and to studying its effects on the software reliability measures. The important property of the developed Markov renewal modeling approach is its flexibility. It allows construction of the software reliability model in both discrete time and continuous time, and (depending on the goals) allows the analysis to be based either on Markov chain theory or on renewal process theory. Thus, their modeling approach is an important step toward more consistent and realistic modeling of software reliability. It can be related to existing software reliability growth models. Many input-domain and time-domain models can be derived as special cases under the assumption of failure s-independence. This paper aims at showing that the classical software reliability theory can be extended to consider a sequence of possibly s-dependent software runs, viz, failure correlation. It does not deal with inference nor with predictions, per se. For the model to be fully specified and applied to estimations and predictions in real software development projects, we need to address many research issues, e.g., the detailed assumptions about the nature of the overall reliability growth, and the way modeling parameters change as a result of fault-removal attempts.
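A small simulation of the phenomenon the paper models: successive runs are modulated by a hidden two-state Markov chain with state-dependent failure probabilities (all numbers invented). The failure rate conditional on the previous run having failed clearly exceeds the marginal rate, which is exactly the s-dependence that s-independent models ignore.

```python
import numpy as np

rng = np.random.default_rng(13)

# two hidden operational states for successive runs, with sticky transitions
P = np.array([[0.95, 0.05],      # state 0: benign usage
              [0.20, 0.80]])     # state 1: stressful usage
p_fail = np.array([0.01, 0.30])  # per-run failure probability in each state

n = 100_000
state, fails = 0, np.empty(n, dtype=bool)
for i in range(n):
    fails[i] = rng.random() < p_fail[state]
    state = int(rng.random() < P[state, 1])    # next run's hidden state

# failures cluster: compare P(fail | previous run failed) to the marginal rate
prev, curr = fails[:-1], fails[1:]
print(f"marginal failure rate:           {fails.mean():.4f}")
print(f"failure rate after a failed run: {curr[prev].mean():.4f}")
```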

14.
This paper presents a new methodology for predicting software reliability in the field environment. Our work differs from some existing models that assume a constant failure detection rate for software testing and field operation environments, as this new methodology considers the random environmental effects on software reliability. Assuming that all the random effects of the field environments can be captured by a unit-free environmental factor, $\eta$, which is modeled as a randomly distributed variable, we establish a generalized random field environment (RFE) software reliability model that covers both the testing phase and the operating phase in the software development cycle. Based on the generalized RFE model, two specific random field environment reliability models are proposed for predicting software reliability in the field environment: the $\gamma$-RFE model, and the $\beta$-RFE model. A set of software failure data from a telecommunication software application is used to illustrate the proposed models, both of which provide very good fits to the software failures in both testing and operation environments. This new methodology provides a viable way to model the user environments, and further makes adjustments to the reliability prediction for similar software products. Based on the generalized software reliability model, further work may include the development of software cost models and the optimum software release policies under random field environments.
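A Monte Carlo sketch of the $\gamma$-RFE idea under assumed values: the field detection rate is scaled by a gamma-distributed, mean-one environment factor $\eta$, and the field mean value function is obtained by averaging over draws of $\eta$. Parameters are invented, and the paper's models admit closed forms that this simulation merely approximates.

```python
import numpy as np

rng = np.random.default_rng(14)

a, b = 120.0, 0.05    # testing-phase GO-type parameters (illustrative)
k, theta = 2.0, 0.5   # assumed gamma(shape, scale) for eta; mean k*theta = 1

# in the field, the detection rate is scaled by the random factor eta:
#   m_field(t | eta) = a * (1 - exp(-eta * b * t))
eta = rng.gamma(k, theta, 200_000)

for t in (10, 50, 100):
    m_field = (a * (1 - np.exp(-eta * b * t))).mean()
    m_test = a * (1 - np.exp(-b * t))    # eta pinned at 1 during testing
    print(f"t={t:>3}: testing {m_test:6.1f}   field (gamma-RFE) {m_field:6.1f}")
```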

15.
A learning curve property, originally prescribed for describing reliability growth when time to failure is observed, is applied to discrete reliability growth processes for systems for which only discrete success or failure events are observed. Derivations leading to a well-known discrete reliability growth model are presented, and the role of implicit assumptions relating to testing strategy is identified. An alternative testing strategy, particularly appropriate for destructive testing of very expensive systems, is proposed, and a new discrete reliability growth model is derived. Extensions to both types of discrete reliability growth models, which account for the distinction between assignable-cause (readily correctable) failure modes and nonassignable-cause (state-of-the-art) failure modes, are also provided. Parameter estimates for each of the models can be obtained via usual maximum likelihood procedures.

16.
Over the last several decades, many Software Reliability Growth Models (SRGM) have been developed to greatly facilitate engineers and managers in tracking and measuring the growth of reliability as software is being improved. However, some research work indicates that the delayed S-shaped model may not fit the software failure data well when the testing-effort spent on fault detection is not a constant. Thus, in this paper, we first review the logistic testing-effort function that can be used to describe the amount of testing-effort spent on software testing. We describe how to incorporate the logistic testing-effort function into both exponential-type, and S-shaped software reliability models. The proposed models are also discussed under both ideal, and imperfect debugging conditions. Results from applying the proposed models to two real data sets are discussed, and compared with other traditional SRGM to show that the proposed models can give better predictions, and that the logistic testing-effort function is suitable for incorporating directly into both exponential-type, and S-shaped software reliability models.

17.
Limitations of some analytic techniques in approximating the reliability of life-critical electronic systems are discussed, and a framework for the specification of recovery and fault-handling submodels is suggested. The framework makes full use of the instantaneous jump theorem by viewing the collection of interfering, premature exits from any fault-handling and recovery submodel as defining a new, competing process submodel. This approach allows a greater flexibility in submodel representation, since submodels may contain arbitrary entrance arcs, exit arcs, and competing, interfering transitions with arbitrary destinations. Since the effects of near-coincident faults need not be represented as system failure events, reliability estimates produced by this approach need not be unduly conservative. Comparisons on small models, where exact results can be computed, show substantial improvement in accuracy over earlier techniques. Implementation of the technique in an X Windows-based system, XHARP, is described. The dual top-down/bottom-up interface of XHARP provides added flexibility by allowing an automated behavioral decomposition that is based on the suggested framework.

18.
This paper describes a different approach to software reliability growth modeling which enables long-term predictions. Using relatively common assumptions, it is shown that the average value of the failure rate of the program, after a particular use-time, t, is bounded by N/(e·t), where N is the initial number of faults. This is conservative since it places a worst-case bound on the reliability rather than making a best estimate. The predictions might be relatively insensitive to assumption violations over the longer term. The theory offers the potential for making long-term software reliability growth predictions based solely on prior estimates of the number of residual faults. The predicted bound appears to agree with a wide range of industrial and experimental reliability data. Less pessimistic results can be obtained if additional assumptions are made about the failure rate distribution of faults.
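The bound has a short worked justification: a fault with constant rate $\lambda$ that is removed on first failure contributes $\lambda e^{-\lambda t}$ to the expected program failure rate at use-time $t$; this is maximized at $\lambda = 1/t$ with value $1/(e t)$, so summing over $N$ faults gives $N/(e \cdot t)$ no matter how the individual rates are distributed. The sketch checks the bound numerically for arbitrary rate draws (the lognormal choice is mine, not the paper's).

```python
import numpy as np

rng = np.random.default_rng(18)
N, t = 100, 50.0

for trial in range(3):
    lams = rng.lognormal(-4.0, 2.0, N)   # arbitrary per-fault failure rates
    # each unfixed fault contributes lam * exp(-lam * t); peak is 1/(e*t)
    avg_rate = (lams * np.exp(-lams * t)).sum()
    print(f"program failure rate {avg_rate:.4f} <= bound {N / (np.e * t):.4f}")
```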

19.
Little work has been done on extending existing models with imperfect debugging to the more realistic situation where new faults are generated from unsuccessful attempts at removing faults completely. This paper presents a software-reliability growth model which incorporates the possibility of introducing new faults into a software system due to the imperfect debugging of the original faults in the system. The original faults manifest themselves as primary failures and are assumed to be distributed as a nonhomogeneous Poisson process (NHPP). Imperfect debugging of each primary failure induces a secondary failure which is assumed to occur in a delayed sense from the occurrence time of the primary failure. The mean total number of failures, comprising the primary and secondary failures, is obtained. The authors also develop a cost model and consider some optimal release-policies based on the model. Parameters are estimated using maximum likelihood, and a numerical example is presented.
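A simulation sketch of the primary/secondary mechanism with invented parameters: primary failures follow a GO-type NHPP (generated by thinning), each debugging attempt is imperfect with probability p_bad, and an imperfect fix plants a secondary failure that occurs after an exponentially distributed delay.

```python
import numpy as np

rng = np.random.default_rng(19)

a, b = 80.0, 0.06   # primary-failure NHPP of GO form (illustrative)
p_bad = 0.2         # assumed probability a debugging attempt is imperfect
mu = 0.1            # rate of the delayed secondary failure
T = 100.0

def simulate():
    t, primary, secondary = 0.0, 0, 0
    while True:                            # thinning with envelope rate a*b
        t += rng.exponential(1 / (a * b))
        if t >= T:
            break
        if rng.random() < np.exp(-b * t):           # a primary failure occurs
            primary += 1
            if rng.random() < p_bad:                # imperfect fix: new fault
                secondary += t + rng.exponential(1 / mu) <= T
    return primary, secondary

res = np.array([simulate() for _ in range(500)])
print(f"mean primary failures by T:   {res[:, 0].mean():.1f}")
print(f"mean secondary failures by T: {res[:, 1].mean():.1f}")
```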

20.
Traditional approaches to software reliability modeling are black-box based; that is, the software system is considered as a whole, and only its interactions with the outside world are modeled without looking into its internal structure. The black-box approach is adequate to characterize the reliability of monolithic, custom, built-to-specification software applications. However, with the widespread use of object-oriented systems design & development, the use of component-based software development is on the rise. Software systems are developed in a heterogeneous (multiple teams in different environments) fashion, and hence it may be inappropriate to model the overall failure process of such systems using one of the several software reliability growth models (black-box approach). Predicting the reliability of a software system based on its architecture, and the failure behavior of its components, is thus essential. Most of the research efforts in predicting the reliability of a software system based on its architecture have been focused on developing analytical or state-based models. However, the development of state-based models has been mostly ad hoc, with little or no effort devoted towards establishing a unifying framework which compares & contrasts these models. Also, to the best of our knowledge, no attempt has been made to offer an insight into how these models might be applied to real software applications. This paper proposes a unifying framework for state-based models for architecture-based software reliability prediction. The state-based models we consider are the ones in which the application architecture is represented either as a discrete time Markov chain (DTMC), or a continuous time Markov chain (CTMC). We illustrate the DTMC-based, and CTMC-based models using examples. A detailed discussion of how the parameters of each model may be estimated, and the life-cycle phases when the model may be applied, is also provided.
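A Cheung-style DTMC computation, one standard instance of the state-based models the paper unifies: control transfers between components according to a sub-stochastic matrix Q, each visit to component i succeeds with probability R_i, and summing over all execution paths yields a closed form. The three-component application and every number here are invented.

```python
import numpy as np

# Cheung-style DTMC model of a hypothetical 3-component application
Q = np.array([[0.0, 0.7, 0.3],       # Q[i, j] = P(control moves i -> j);
              [0.0, 0.0, 0.6],       # the row deficit is P(terminate from i)
              [0.2, 0.0, 0.0]])
R = np.array([0.999, 0.995, 0.990])  # per-visit component reliabilities
exit_p = 1 - Q.sum(axis=1)           # termination probability from each component

# a run succeeds iff every visited component works; summing over all paths:
#   reliability = e1' (I - diag(R) Q)^{-1} (R * exit_p)
M = np.linalg.inv(np.eye(3) - np.diag(R) @ Q)
reliability = (M @ (R * exit_p))[0]        # execution starts in component 1

visits = np.linalg.inv(np.eye(3) - Q)[0]   # expected visits, ignoring failures
print(f"expected visits per component: {np.round(visits, 2)}")
print(f"system reliability: {reliability:.4f}")
```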
