Similar Documents
Found 20 similar documents (search time: 375 ms)
1.
2.
A new approach to estimating the fault tolerance of parallel control computing systems relies on model-based determination of the probability that an arbitrary set of interdependent jobs (tasks) with random execution times and asynchronous job redundancy completes successfully within a given schedule time. Estimates are derived both for standard execution of a task set and for the case of a single malfunction (fault or failure) of any processor of the computing system detected during execution of any job in the set. The key distinction of this approach is that numerical values of the reliability parameters (probabilities or intensities of faults or failures) of the computing resources are neither given nor used.
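The completion-probability estimate described above lends itself to a quick numerical illustration. Below is a minimal Monte Carlo sketch, not the paper's analytic model: it estimates the probability that a set of interdependent jobs with random execution times finishes within a given schedule time. The job DAG, the exponential time distributions, and the assumption of unbounded processors are all illustrative choices, not taken from the paper.

```python
import random

# Illustrative job DAG: job -> list of predecessor jobs (assumed, not from the paper).
DEPS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
# Random execution times: exponential with these per-job means (assumed).
MEAN = {"A": 1.0, "B": 2.0, "C": 1.5, "D": 0.5}

def makespan():
    """One random realization: each job starts when all predecessors finish
    (unbounded processors for simplicity); returns overall completion time."""
    finish = {}
    for job in ("A", "B", "C", "D"):          # topological order
        start = max((finish[p] for p in DEPS[job]), default=0.0)
        finish[job] = start + random.expovariate(1.0 / MEAN[job])
    return max(finish.values())

def p_complete(deadline, trials=100_000):
    """Monte Carlo estimate of P(all jobs done within the schedule time)."""
    return sum(makespan() <= deadline for _ in range(trials)) / trials

print(p_complete(deadline=6.0))
```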

3.
The electrical wiring interconnection system (EWIS) of civil aircraft has received increasing attention in recent years, and intermittent failure detection for electrical connectors in EWIS is a challenging problem. This paper presents a sliding mode observer (SMO) approach for intermittent failure detection in an aircraft electrical system with multiple connector failures. A mathematical model of the aircraft electrical system containing multiple connector failures is established, transforming the intermittent failure detection problem into observer-based multiplicative fault isolation and estimation problems. A set of adaptive sliding mode observers is designed to preliminarily locate the failed connectors; the observers adapt to the unknown upper bound of the faults. Furthermore, a fault-reconstruction scheme applying the equivalent output error injection principle is proposed for fault estimation, in which the characteristic parameters of the connectors are reconstructed to identify the failures. Finally, a numerical example shows the effectiveness of the proposed method.
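To give a feel for the equivalent-output-injection idea, here is a heavily simplified scalar sketch, not the paper's multi-connector design: a first-order plant with an additive intermittent fault is tracked by a sliding mode observer, and low-pass filtering the injection term recovers the fault signal. All gains, rates, and the fault profile are assumptions.

```python
import math

# Toy scalar plant x' = -a*x + f(t) with measured output y = x; an intermittent
# additive fault f is active on 3 s < t < 6 s (all values are assumptions).
a, rho, dt, tau = 1.0, 5.0, 1e-4, 0.01
x = xh = f_hat = 0.0
for k in range(int(10.0 / dt)):
    t = k * dt
    f = 2.0 if 3.0 < t < 6.0 else 0.0
    e = xh - x                                       # output estimation error
    v = -rho * math.copysign(1.0, e) if e else 0.0   # sliding-mode injection, rho > |f|
    x += dt * (-a * x + f)                           # plant (Euler step)
    xh += dt * (-a * xh + v)                         # sliding mode observer
    f_hat += dt * (v - f_hat) / tau                  # low-pass filter: equivalent injection
    if abs(t - 5.0) < dt / 2:
        print(f"f_hat at t=5s: {f_hat:.2f}  (true fault: {f})")
```

While the fault is active, the filtered injection term settles near the true fault magnitude, which is the essence of the fault-reconstruction step.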

4.
Adaptive compensation for infinite number of actuator failures or faults (cited by: 1; self-citations: 0; citations by others: 1)
It is both theoretically and practically important to investigate the problem of accommodating an infinite number of actuator failures or faults when controlling uncertain systems. However, no results are yet available on developing adaptive controllers to address this problem. In this paper, a new adaptive failure/fault compensation control scheme is proposed for parametric strict-feedback nonlinear systems. The techniques of nonlinear damping and parameter projection are employed in the design of the controllers and parameter estimators, respectively. It is proved that the boundedness of all closed-loop signals can still be ensured in the case of an infinite number of failures or faults, provided that the time interval between two successive changes of the failure/fault pattern is bounded below by an arbitrary positive number. The mean-square tracking-error performance with respect to the frequency of failure/fault pattern changes is also established. Moreover, asymptotic tracking can be achieved when the total number of failures and faults is finite.
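The parameter-projection ingredient mentioned above can be sketched in isolation: after each gradient-style update, the estimate is projected back into a known bound set, which is what keeps it bounded no matter how often the failure pattern changes. The update law, gains, and bounds below are generic assumptions, not the paper's controller.

```python
import numpy as np

def project(theta, lo, hi):
    """Componentwise projection of the parameter estimate onto [lo, hi]."""
    return np.clip(theta, lo, hi)

# One adaptive update step (illustrative gradient law with assumed values).
theta = np.array([0.5, -0.2])                     # current parameter estimate
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
gamma, e, phi = 0.1, 0.3, np.array([1.0, 2.0])    # gain, tracking error, regressor
theta = project(theta - gamma * e * phi, lo, hi)  # estimate stays in the bound set
print(theta)
```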

5.
The benefits of analyzing software faults and failures have been widely recognized. However, detailed studies based on empirical data are rare. In this paper, we analyze the fault and failure data from two large, real-world case studies. Specifically, we explore: 1) the localization of faults that lead to individual software failures and 2) the distribution of different types of software faults. Our results show that individual failures are often caused by multiple faults spread throughout the system. This observation is important because it contradicts several heuristics and assumptions used in the past, and it clearly indicates that finding and fixing the faults that lead to such failures in large, complex systems is often difficult and challenging despite advances in software development. Our results also show that requirement faults, coding faults, and data problems are the three most common types of software faults. Furthermore, contrary to popular belief, a significant percentage of failures are linked to late life-cycle activities. Another important aspect of our work is that we conduct intra- and inter-project comparisons, as well as comparisons with the findings of related studies. The consistency of several main trends across the software systems in this paper and several related research efforts suggests that these trends are likely intrinsic characteristics of software faults and failures rather than project specific.

6.
Context: Testing highly configurable software systems is challenging because a large number of test configurations must be carefully selected to reduce the testing effort as much as possible while maintaining high software quality. Finding the smallest set of valid test configurations that ensures sufficient coverage of the system's feature interactions is thus the objective of validation engineers, especially when executing test configurations is costly or time-consuming. This problem is NP-hard in general, and approximation algorithms have often been used to address it in practice.
Objective: In this paper, we explore an alternative exact approach, based on constraint programming, that allows engineers to increase the effectiveness of configuration testing while keeping the number of configurations as low as possible.
Method: Our approach uses a (time-aware) minimization algorithm based on constraint programming. Given a time budget, our solution generates a minimized set of valid test configurations that covers all pairs of feature values (a.k.a. pairwise coverage). The approach has been implemented in a tool called PACOGEN.
Results: PACOGEN was evaluated on 224 feature models against two existing tools based on a greedy algorithm. For 79% of the 224 feature models, PACOGEN generated up to 60% fewer test configurations than the competing tools. We further evaluated PACOGEN in a case study of an industrial video-conferencing product line with a feature model of 169 features and found 60% fewer configurations compared with the manual approach followed by test engineers. The set of test configurations generated by PACOGEN reduced the time test engineers spent on manual test configuration by 85% while increasing feature-pair coverage.
Conclusion: Our experimental evaluation concluded that optimal time-aware minimization of pairwise-covering test configurations is efficiently addressed using constraint programming techniques.
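To make pairwise coverage concrete, the sketch below checks whether a set of test configurations covers every pair of feature values. The feature model is invented, and PACOGEN's actual constraint-programming minimization is not reproduced here; this only verifies the coverage criterion the tool optimizes.

```python
from itertools import combinations

# Hypothetical feature model: feature -> allowed values.
FEATURES = {"codec": ["h264", "vp8"], "res": ["720p", "1080p"], "fec": [True, False]}

def uncovered_pairs(configs):
    """Return all (feature=value, feature=value) pairs not hit by any configuration."""
    needed = {((f1, v1), (f2, v2))
              for f1, f2 in combinations(sorted(FEATURES), 2)
              for v1 in FEATURES[f1] for v2 in FEATURES[f2]}
    for c in configs:
        for f1, f2 in combinations(sorted(c), 2):
            needed.discard(((f1, c[f1]), (f2, c[f2])))
    return needed

configs = [{"codec": "h264", "res": "720p",  "fec": True},
           {"codec": "vp8",  "res": "1080p", "fec": False},
           {"codec": "h264", "res": "1080p", "fec": True},
           {"codec": "vp8",  "res": "720p",  "fec": False}]
print(uncovered_pairs(configs))   # two codec/fec pairs remain uncovered here
```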

7.
Automated test case selection for a new product in a product line is challenging for several reasons. First, the variability within the product line needs to be captured in a systematic way; second, the reusable test cases in the repository must be identified for testing a new product. The objective of such an automated process is to reduce the overall selection effort (e.g., selection time) while achieving an acceptable level of coverage of the tested functionality. In this paper, we propose a systematic and automated methodology using a feature model for testing (FM_T) to capture the commonalities and variabilities of a product line, and a component family model for testing (CFM_T) to capture the overall structure of test cases in the repository. With our methodology, a test engineer does not need to go through the repository manually to select a relevant set of test cases for a new product. Instead, the engineer only needs to select a set of relevant features at a higher level of abstraction using FM_T, and a set of relevant test cases is selected automatically. We evaluated our methodology in three ways: (1) we applied it to a product line of video conferencing systems called Saturn developed by Cisco, and the results show that it reduces the selection effort significantly; (2) we conducted a questionnaire-based study to solicit the views of the test engineers who were involved in developing FM_T and CFM_T, which showed that test engineers are positive about adopting our methodology and models (FM_T and CFM_T) in their current practice; and (3) we conducted a controlled experiment with 20 graduate students to assess the performance (i.e., cost, effectiveness, and efficiency) of our automated methodology compared with the manual approach. The results showed that our methodology is cost-effective compared with the manual approach and that its efficiency is not affected by the increased complexity of products.
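A minimal sketch of the automated selection step: given a mapping from features to the test cases that exercise them, choosing features for a new product yields the relevant tests as a simple union. The feature names, test-case IDs, and flat mapping are invented; the paper's FM_T and CFM_T models are considerably richer.

```python
# Hypothetical repository mapping: feature -> test cases exercising it.
TEST_MAP = {
    "audio":     {"TC_001", "TC_002"},
    "video":     {"TC_002", "TC_003"},
    "recording": {"TC_004"},
}

def select_tests(selected_features):
    """Union of all test cases linked to the features chosen for the new product."""
    tests = set()
    for feature in selected_features:
        tests |= TEST_MAP.get(feature, set())
    return sorted(tests)

print(select_tests({"audio", "video"}))   # ['TC_001', 'TC_002', 'TC_003']
```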

8.
A number of current aircraft control systems have been specified with statecharts. The risk of failures requires a formal testing approach to ensure that all possible faults are considered. However, testing the compliance of an implementation with its specification depends on the specification method, and little work has been reported on statechart-specific methods. This paper describes a modification of a formal testing method for extended finite-state machines to handle this problem. The method allows one to demonstrate correct behaviour of an implementation of a system with respect to its specification, provided certain specific requirements on both are satisfied. A case study illustrates these requirements and shows the applicability of the method. By considering the process used to develop the system, it is possible to reduce the size of the test set dramatically, and the method is easy to automate.
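For intuition, here is a sketch of one basic ingredient of such methods: deriving, for every transition of a (flattened) finite-state machine, an input sequence that reaches and fires it, via breadth-first search. The toy machine is invented, and the paper's method additionally handles extended state and statechart-specific semantics.

```python
from collections import deque

# Toy FSM: (state, input) -> next state (illustrative, not an aircraft controller).
FSM = {("idle", "arm"): "armed", ("armed", "fire"): "active",
       ("armed", "disarm"): "idle", ("active", "reset"): "idle"}

def transition_tests(start="idle"):
    """For each transition, return an input sequence that reaches and fires it."""
    # Shortest input sequence to reach each state (BFS over the FSM graph).
    reach, queue = {start: []}, deque([start])
    while queue:
        s = queue.popleft()
        for (src, inp), dst in FSM.items():
            if src == s and dst not in reach:
                reach[dst] = reach[s] + [inp]
                queue.append(dst)
    return {f"{src}--{inp}->{dst}": reach[src] + [inp]
            for (src, inp), dst in FSM.items()}

for name, seq in transition_tests().items():
    print(name, seq)
```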

9.
A major cause of failures in large database management systems (DBMS) is operator/administrator faults. Although most complex DBMS available today have comprehensive recovery mechanisms, the effectiveness of these mechanisms is difficult to characterize. At the same time, tuning a large database is very complex, and database administrators tend to concentrate on performance tuning while disregarding the recovery mechanisms. Above all, database administrators seldom have feedback on how good a given configuration is with respect to recovery. This paper proposes an experimental approach to characterize both the performance and the recoverability of DBMS. Our approach is presented through a concrete example: benchmarking the performance and recovery of an Oracle DBMS running the standard TPC-C benchmark, extended to include two new elements: a faultload based on operator faults and measures related to recoverability. A classification of operator/administrator faults in DBMS is proposed. A set of tools has been designed and built to reproduce operator faults in an Oracle 8i DBMS, using exactly the same means used in the field by real database administrators. This experimental approach is generic (i.e., it can be applied to any DBMS) and fully automatic. The paper ends with a discussion of the results and proposes guidelines to help database administrators find the balance between performance and recovery tuning.

10.
Software health management (SWHM) is an emerging field that addresses the critical need to detect, diagnose, predict, and mitigate adverse events due to software faults and failures. These faults can arise for numerous reasons, including coding errors, unanticipated faults or failures in hardware, or problematic interactions with the external environment. This paper demonstrates a novel approach to software health management based on a rigorous Bayesian formulation that monitors the behavior of the software and operating system, performs probabilistic diagnosis, and provides information about the most likely root causes of a failure or software problem. Translating the Bayesian network model into an efficient data structure, an arithmetic circuit, makes it possible to perform SWHM on the resource-restricted embedded computing platforms found in aircraft, unmanned aircraft, and satellites. SWHM is especially important for safety-critical systems such as aircraft control systems. In this paper, we demonstrate our Bayesian SWHM system on three realistic scenarios from an aircraft control system: (1) aircraft file-system faults, (2) signal-handling faults, and (3) navigation faults due to inertial measurement unit (IMU) failure or compromised Global Positioning System (GPS) integrity. We show that the method successfully detects and diagnoses faults in these scenarios. We also discuss the importance of verification and validation of SWHM systems.
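A toy version of the probabilistic-diagnosis step might look as follows: two candidate root causes with priors, a noisy-OR alarm model, and the posterior over fault states computed by brute-force enumeration. All probabilities are invented, and the paper compiles far larger Bayesian networks into arithmetic circuits rather than enumerating.

```python
from itertools import product

# Priors for two candidate root causes (assumed numbers for illustration).
P_F1, P_F2 = 0.01, 0.02          # e.g., file-system fault, GPS-integrity fault

def p_alarm(f1, f2):
    """P(monitor alarm | fault states) via a noisy-OR with a small leak term."""
    leak = 0.001                  # alarm rate with no fault present (assumed)
    return 1 - (1 - leak) * (1 - 0.9) ** f1 * (1 - 0.8) ** f2

# Posterior P(f1, f2 | alarm observed) by exhaustive enumeration.
joint = {(f1, f2): (P_F1 if f1 else 1 - P_F1)
                   * (P_F2 if f2 else 1 - P_F2)
                   * p_alarm(f1, f2)
         for f1, f2 in product((0, 1), repeat=2)}
z = sum(joint.values())
for states, p in sorted(joint.items(), key=lambda kv: -kv[1]):
    print(states, round(p / z, 4))   # most likely root-cause assignment first
```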

11.
Detecting, locating, and repairing faults is a hard task, especially when dependent failures occur in practice. In this paper we present a methodology capable of handling dependent failures. For this purpose we extend the model-based diagnosis approach by explicitly representing knowledge about such dependencies, stored in a failure dependency graph. Besides the theoretical foundations, we present algorithms for computing diagnoses and repair actions based on these extensions. Moreover, we introduce a case study that uses a large control program for an autonomous mobile robot. The case study shows that the proposed approach can be used effectively in practice.
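Classical model-based diagnosis, which the paper extends with a failure dependency graph, computes minimal hitting sets of the observed conflict sets; a brute-force sketch with invented conflicts:

```python
from itertools import combinations

# Conflict sets: each set of components cannot all be healthy (illustrative).
CONFLICTS = [{"c1", "c2"}, {"c2", "c3"}, {"c1", "c3"}]
COMPONENTS = set().union(*CONFLICTS)

def diagnoses():
    """All minimal hitting sets: candidate sets of faulty components that
    intersect every conflict, with no proper subset doing so."""
    hits = [set(c) for r in range(1, len(COMPONENTS) + 1)
            for c in combinations(sorted(COMPONENTS), r)
            if all(set(c) & conf for conf in CONFLICTS)]
    return [h for h in hits if not any(o < h for o in hits)]

print(diagnoses())   # [{'c1', 'c2'}, {'c1', 'c3'}, {'c2', 'c3'}]
```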

12.
13.
姚志强 (Yao Zhiqiang), 《自动化博览》 (Automation Panorama), 2013, (11): 80-81, 95
Smart pressure transmitters are increasingly used on long-distance oil and gas pipelines for remote pressure indication and control, and they fail in diverse ways during actual operation. Starting from the reasoning used to diagnose pressure transmitter faults, this paper analyzes how several smart pressure transmitter failures were handled and, drawing on the actual signal loops of pressure transmitters on oil and gas pipelines, derives a fault-diagnosis flowchart for pressure instruments in pipeline SCADA systems. In practical use by pipeline instrumentation engineers and maintenance staff, the flowchart has greatly improved the accuracy and timeliness of handling pressure instrument faults, with good application results.

14.
Test set size, in terms of the number of test cases, is an important consideration when testing software systems. Using too few test cases may result in poor fault detection, while using too many may be expensive and redundant. We define the failure rate of a program as the fraction of test cases in an available test pool that result in execution failure on that program. This paper investigates the relationship between failure rates and the number of test cases required to detect the faults. Our experiments on 11 sets of C programs suggest that an accurate estimate of the failure rates of the potential fault(s) in a program provides a reliable estimate of a test set size adequate for fault detection, and this should therefore be one of the factors considered during test set construction. Furthermore, the proposed model is fairly robust to incorrect estimates of failure rates and can still provide good predictive quality. Experiments were also performed to observe the relationship between multiple faults present in the same program using the concept of a failure rate. When predicting effectiveness against a program with multiple faults, the results indicate that not knowing the number of faults in the program is not a significant concern, as the predictive quality is typically not affected adversely.
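The underlying relationship can be illustrated with a standard probabilistic argument (a simplification of the paper's empirical model): if each randomly selected test case exposes the fault with probability θ, the failure rate, then n tests detect it with probability 1-(1-θ)^n, so reaching confidence C requires n ≥ ln(1-C)/ln(1-θ).

```python
import math

def tests_needed(failure_rate, confidence=0.95):
    """Smallest n with 1 - (1 - theta)^n >= confidence, assuming
    independent random selection from the test pool."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - failure_rate))

for theta in (0.1, 0.01, 0.001):
    print(f"failure rate {theta}: {tests_needed(theta)} tests for 95% detection")
```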

15.
Design patterns are used extensively in the design of software systems. Patterns codify effective solutions to recurring design problems and allow software engineers to reuse these solutions, tailoring them to their particular applications rather than reinventing them from scratch. In this paper, we consider the following question: how can system designers and implementers test whether their systems, as implemented, are faithful to the requirements of the patterns used in their design? A key consideration underlying our work is that the testing approach should let us, when testing whether a particular pattern P has been correctly implemented in different systems designed using P, reuse the common parts of this effort rather than redoing it from scratch for each system. Thus, in the approach we present, each pattern P has a set of pattern test case templates (PTCTs). A PTCT codifies a reusable test case structure designed to identify defects associated with applications of P in all systems designed using P. We then present a process by which, given a system designed using P, the tester can generate from the PTCTs for P a test suite to test that system for bugs in its implementation of P. The tester tailors the PTCTs for P to the needs of the particular system by specifying a set of specialization rules that reflect the scenarios in which the defects codified in the PTCTs are likely to manifest in that system. We illustrate the approach using the Observer pattern.
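As a flavor of what a PTCT might check for the Observer pattern, the sketch below tests that every attached observer is notified of a state change. The class and test names are invented; the paper's templates and specialization rules are more general than this single instantiated test.

```python
class Subject:
    """Minimal Observer-pattern subject: holds state, notifies observers."""
    def __init__(self):
        self._observers, self.state = [], 0
    def attach(self, obs):
        self._observers.append(obs)
    def set_state(self, value):
        self.state = value
        for obs in self._observers:       # the pattern's notification contract
            obs.update(self)

class RecordingObserver:
    """Test double that records every state it is notified of."""
    def __init__(self):
        self.seen = []
    def update(self, subject):
        self.seen.append(subject.state)

def test_all_observers_notified():
    """Template: after a state change, every attached observer saw the new state."""
    subject, observers = Subject(), [RecordingObserver() for _ in range(3)]
    for obs in observers:
        subject.attach(obs)
    subject.set_state(42)
    assert all(obs.seen == [42] for obs in observers)

test_all_observers_notified()
```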

16.
Current usability evaluation methods are essentially holistic in nature. However, engineers who apply a component-based software engineering approach may also be interested in understanding the usability of individual parts of an interactive system. This paper examines the efficiency dimension of usability by describing a method that engineers can use to test, empirically and objectively, the physical interaction effort required to operate components of a single device. The method looks at low-level events, such as button clicks, and attributes the physical effort associated with these interaction events to individual components of the system. This gives engineers a basis for prioritizing their improvement effort. The paper discusses the face validity, content validity, criterion validity, and construct validity of the method, in the context of four usability tests in which 40 users evaluated the efficiency of four different versions of a mobile phone. The results show that the method can provide a valid estimate of the physical interaction effort users expend when interacting with a specific part of a device.
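The attribution step can be sketched as a simple aggregation over a low-level event log: each event carries an effort weight, and weights are summed per component. Event names, components, and weights below are invented; the paper defines and validates the actual effort measure.

```python
# Low-level interaction log: (component, event). Weights approximate physical
# effort per event type (all values illustrative).
LOG = [("menu", "click"), ("menu", "scroll"), ("dialer", "click"),
       ("menu", "click"), ("dialer", "keypress"), ("dialer", "keypress")]
EFFORT = {"click": 1.0, "scroll": 0.5, "keypress": 0.8}

def effort_per_component(log):
    """Sum the per-event effort attributed to each UI component."""
    totals = {}
    for component, event in log:
        totals[component] = totals.get(component, 0.0) + EFFORT[event]
    return totals

print(effort_per_component(LOG))   # components ranked for improvement effort
```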

17.
RELAY is a model of faults and failures that defines failure conditions: descriptions of test data for which execution guarantees that a fault originates erroneous behavior that also transfers through computations and information flow until a failure is revealed. This model of fault detection provides a framework within which the capabilities of other testing criteria can be evaluated. Three test data selection criteria that detect faults in six fault classes are analyzed. The analysis shows that none of these criteria can guarantee detection for these fault classes, and it points out two major weaknesses of the criteria. The first weakness is that the criteria do not consider the potential unsatisfiability of their rules: each criterion includes rules that are sufficient to cause potential failures for some fault classes, yet when such rules are unsatisfiable, many faults may remain undetected. The second weakness is the failure to integrate their proposed rules.

18.
With the rapid development of supercomputers, their scale and complexity keep increasing, and reliability and resilience face ever greater challenges. Many important fault-tolerance techniques exist, such as proactive failure avoidance based on fault prediction, reactive fault tolerance based on checkpointing, and scheduling techniques for improving reliability. Qualitative and quantitative characterization of system faults is critical to these techniques. This study analyzes the sources of failures on two typical petascale supercomputers, Sunway BlueLight (based on multicore CPUs) and Sunway TaihuLight (based on heterogeneous manycore CPUs). It uncovers some interesting fault characteristics and reveals previously unknown correlations among the faults of major components. Finally, the paper analyzes the failure times of the two supercomputers at various granularities of resources and over different time spans, and builds a unified multi-dimensional failure-time model for petascale supercomputers.

19.
System availability is a major performance concern in distributed systems design and analysis. A typical kind of application on distributed systems has a homogeneously distributed software/hardware structure; that is, identical copies of the distributed application software run on the same type of computers. In this paper, the system availability of this type of system is studied. Such a study is useful when determining optimal testing time or allocating testing resources. We consider both a simple two-host system and the more general multi-host case. A Markov model is developed and equations are derived to obtain the steady-state availability. Both software and hardware failures are considered, assuming that software faults are continually identified and removed upon failure. Although a specific software reliability model is used for illustration, the approach is general. Comparisons show that system availability changes in a way similar to single-host software/hardware systems. A sensitivity analysis is also presented, and the assumptions used in the paper are discussed.
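As a minimal numerical counterpart, the sketch below computes the steady-state availability of a single host modeled as a two-state (up/down) continuous-time Markov chain by solving πQ = 0 with Σπ = 1. The rates are invented, and the paper's model additionally distinguishes software from hardware failures and covers multiple hosts.

```python
import numpy as np

lam, mu = 0.01, 1.0           # failure and repair rates per hour (assumed)
Q = np.array([[-lam,  lam],   # generator matrix: state 0 = up, state 1 = down
              [  mu,  -mu]])

# Solve pi @ Q = 0 with sum(pi) = 1 by replacing one balance equation.
A = np.vstack([Q.T[:-1], np.ones(2)])
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(f"steady-state availability: {pi[0]:.6f}")   # mu / (mu + lam) ~ 0.990099
```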

20.
Intermittent faults are the largest source of failures in digital systems. To provide engineers with a method for detecting intermittent faults, a birth-death model of the intermittent fault process is developed. The model is continuous in time and considers the case of multiple intermittent faults. It subsumes several models already reported in the literature.
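One way to realize such a birth-death model numerically, under assumed rates and a truncated state space rather than anything from the paper: let state k be the number of intermittent faults present, with birth rate λ and per-fault death rate μ, and solve for the stationary distribution of the truncated chain.

```python
import numpy as np

lam, mu, K = 0.2, 1.0, 10     # fault birth rate, per-fault death rate, cutoff (assumed)

# Tridiagonal generator of the truncated birth-death chain on states 0..K.
Q = np.zeros((K + 1, K + 1))
for k in range(K + 1):
    if k < K:
        Q[k, k + 1] = lam          # birth: a new intermittent fault appears
    if k > 0:
        Q[k, k - 1] = k * mu       # death: one of k active faults goes away
    Q[k, k] = -Q[k].sum()

# Stationary distribution: pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(K + 1)])
b = np.zeros(K + 1); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(f"P(at least one intermittent fault present) = {1 - pi[0]:.4f}")
```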
