Similar Documents
20 similar documents found (search time: 15 ms)
1.
With the rapid development of software technology, more and more safety-critical systems are software intensive. Safety issues become important when software is used to control such systems. However, there are two important problems in software safety analysis: (1) there is often a significant traceability gap between safety requirements and software design, with the result that safety analysis and software design are often conducted separately; and (2) the growing complexity of safety-critical software makes it difficult to determine whether the software design fulfills the safety requirements. In this paper, we propose a technique to address these two problems at the model level. The technique is based on statecharts, which are used to model the behavior of software, and on fault tree safety analysis. It comprises two parts, corresponding to the two problems respectively. The first part builds a metamodel of traceability between fault trees and statecharts to bridge the traceability gap between them; a collection of rules for the creation and maintenance of traceability links is provided. The second part is a model slicing technique that reduces the complexity of statecharts with respect to the traceability information. The slicing technique can deal with the hierarchy, concurrency, and synchronization characteristics of statecharts. The reduced statecharts are much smaller than the originals, which is helpful for subsequent safety analysis. Finally, we illustrate the effectiveness and importance of the method with a case study of slats and flaps control units in flight control systems.
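The slicing idea in this abstract can be illustrated with a toy sketch. This is not the paper's algorithm: the transition encoding and the single dependence rule (an action raising the trigger of an already-kept transition) are invented for illustration.

```python
# Hedged sketch of slicing a statechart against a fault-tree event:
# starting from the transition tied to the fault, keep every transition
# whose action raises an event that triggers a kept transition.

def slice_statechart(transitions, criterion):
    """transitions: list of (source, trigger, action, target) tuples."""
    kept = {criterion}
    changed = True
    while changed:
        changed = False
        needed = {t[1] for t in kept}  # triggers the slice depends on
        for tr in transitions:
            if tr not in kept and tr[2] in needed:  # action raises a needed event
                kept.add(tr)
                changed = True
    return kept

transitions = [
    ("Idle", "extend_cmd", "drive", "Extending"),  # action raises "drive"
    ("Extending", "drive", "overrun", "Fault"),    # transition tied to the fault event
    ("Idle", "lamp_cmd", "lamp_on", "Lit"),        # unrelated to the fault
]
sliced = slice_statechart(transitions, transitions[1])
print(len(sliced))  # → 2 (the unrelated lamp transition is sliced away)
```

The real technique additionally handles hierarchy, concurrency, and synchronization; this fixed-point over a flat transition list only conveys the pruning intuition.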

2.
Context: The object-oriented (OO) development method is a popular paradigm for developing target systems. However, current practices of OO analysis and design (OOAD) and implementation rely largely on human developers' experience and expertise, making them potentially less efficient and more error-prone. Hence, there is room for improving development efficiency while preserving high program quality.
Objective: Model-driven development (MDD) is a promising approach to developing programs by machine-assisted model transformation, saving human effort and reducing the possibility of introducing program faults. Hence, it is appealing to apply the key disciplines of MDD to OO program engineering.
Method: In this paper, we propose a comprehensive framework for applying MDD to OO program engineering in a rigorous and formal fashion. The framework consists of: (1) a hybrid engineering model of human and machine, (2) meta-models of OOAD artifacts, (3) a traceability map with trace links, and (4) transformation rules.
Results: We identified five platform-independent models and two platform-specific models, and defined formal representations for them. We identified 16 traceability links and, accordingly, 16 transformation rules among the eight artifacts. Through a case study, we showed that our work is feasible and applicable. We assessed our work and concluded that it is sound, complete, and extendable. Our work establishes the foundation for automatic generation of OO programs based on the traceability framework.
Conclusion: It is essential to identify the OOAD artifacts, traceability links, and transformation rules for automatic generation of OO programs. It is also important to understand the nature of human involvement in MDD and to treat it explicitly in the model transformation.

3.
Context: Certification of safety-critical software systems requires submission of safety assurance documents, e.g., in the form of safety cases. A safety case is a justification argument used to show that a system is safe for a particular application in a particular environment. Different argumentation strategies (informal and formal) are applied to determine the evidence for a safety case. For critical software systems, the application of formal methods is often highly recommended for safety assurance.
Objective: The objective of this paper is to propose a methodology that combines two activities: formalisation of the system safety requirements of critical software systems for their further verification, and derivation of structured safety cases from the associated formal specifications.
Method: We propose a classification of system safety requirements in order to facilitate the mapping of informally defined requirements into a formal model. Moreover, we propose a set of argument patterns that aim at enabling the construction of (a part of) a safety case from a formal model in Event-B.
Results: The results reveal that the proposed classification-based mapping of safety requirements into formal models facilitates requirements traceability. Moreover, the detailed guidelines provided on the construction of safety cases aim to simplify the task of argument pattern instantiation for different classes of system safety requirements. The proposed methodology is illustrated by numerous case studies.
Conclusion: Firstly, the proposed methodology allows us to map the given system safety requirements into elements of the formal model to be constructed, which is then used for verification of these requirements. Secondly, it guides the construction of a safety case, aiming to demonstrate that the safety requirements are indeed met. Consequently, the argumentation used in such a safety case can be supported with formal proofs and model checking results as the safety evidence.

4.
Context: It is challenging to develop comprehensive, consistent, analyzable requirements models for evolving requirements. This is particularly critical for certain highly interactive types of socio-technical systems that involve a wide range of stakeholders with disparate backgrounds; system success often depends on how well local social constraints are addressed in system design.
Objective: This paper describes feasibility research combining a holistic social-system perspective provided by Activity Theory (AT), a psychological paradigm, with existing system development methodologies and tools, specifically goal and scenario modeling.
Method: AT is used to understand the relationships between a system, its stakeholders, and the system's evolving context. The User Requirements Notation (URN) is used to produce rigorous, analyzable specifications combining goal and scenario models. First, an AT language was developed, constraining the framework for automation; second, consistency heuristics were developed for constructing and analyzing combined AT/URN models; third, a combined AT/URN methodology was developed and then applied to a proof-of-concept system.
Results: An AT language with limited tool support was developed, as was a combined AT/URN methodology. This methodology was applied to an evolving disease management system to demonstrate the feasibility of adapting AT for use in system development with existing methodologies and tools. Bi-directional transformations between the languages allow proposed changes in system design to be propagated to AT models for use in stakeholder discussions regarding system evolution.
Conclusions: The AT framework can be constrained for use in requirements elicitation and combined with URN tools to provide system designs that include social-system perspectives. The developed AT/URN methodology can help engineers track the impact on system design of requirement changes triggered by changes in the system's social context. The methodology also allows engineers to assess the impact of proposed system design changes on the social elements of the system context.

5.

We propose a framework for requirement-driven test generation that combines contract-based interface theories with model-based testing. We design a specification language, requirement interfaces, for formalizing different views (aspects) of synchronous data-flow systems from informal requirements. The various views of a system, modeled as requirement interfaces, are naturally combined by conjunction. We develop an incremental test generation procedure with several advantages. The test generation is driven by a single requirement interface at a time, so each test assesses a specific aspect or feature of the system, specified by its associated requirement interface. Since we do not explicitly compute the conjunction of all requirement interfaces of the system, we avoid state space explosion while generating tests. However, we incrementally complete a test for a specific feature with the constraints defined by the other requirement interfaces. This allows catching violations of any other requirement during test execution, not only of the one used to generate the test. The framework defines a natural association between informal requirements, their formal specifications, and the generated tests, thus facilitating traceability. Finally, we apply a fault-based test-case generation technique, called model-based mutation testing, to requirement interfaces. It generates a test suite that covers a set of fault models, guaranteeing the detection of any corresponding faults in deterministic systems under test. We implemented a prototype test generation tool and demonstrate its applicability in two industrial use cases.
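The conjunction of views can be sketched in a few lines, under the simplifying and purely illustrative assumption that each requirement interface is just a predicate over one synchronous step; the names `req_speed` and `req_lamp` are invented, not from the paper.

```python
# Each "requirement interface" is modelled, very roughly, as a predicate
# over one synchronous step (a dict of input/output values); separate
# views compose by conjunction. A test derived from one requirement can
# still expose a violation of another when run against the conjunction.

req_speed = lambda s: not s["brake"] or s["throttle"] == 0  # braking forces throttle to 0
req_lamp = lambda s: s["lamp"] == s["brake"]                # lamp mirrors the brake

def conjoin(*reqs):
    return lambda s: all(r(s) for r in reqs)

spec = conjoin(req_speed, req_lamp)
step = {"brake": True, "throttle": 0, "lamp": False}
print(req_speed(step), spec(step))  # → True False: the lamp view is violated
```

This mirrors the point made above: a step that satisfies the requirement a test was generated from can still violate the conjoined specification.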


6.
This paper discusses methodological design and evaluation frameworks that appear to have general applicability. The design methodology has specific relevance for the design of systemic process aids to planning and decision making and, potentially, to other system design efforts as well. A five-phase iterative methodology is suggested. The paper discusses objectives for systemic process aids, requirements to be accomplished in each of the five phases of the design methodology, and leadership requirement considerations as they affect the design of systemic process aids realized by use of the methodological design framework. A framework for evaluation of systemic aids is also presented. The resulting evaluation methodology may be incorporated into the design methodology or used independently to evaluate existing or proposed aids for planning, forecasting and decision support.

7.
Traceability is the mechanism or capability of relating artifacts and the various related elements within them. The development of safety-critical systems involves not only the general system development process but, more importantly, an independent safety analysis that establishes and verifies the system's safety requirements. To date, there has been little research on traceability for the safety analysis process. Safety-related standards such as ARP-4761 and DO-178C provide guidance on the safety analysis process; however, because they involve many concepts and methods, the tracing of some key information is often neglected in practice and research. Moreover, software safety requirements analysis should consider not only system-to-software safety analysis but also software-to-system safety analysis. Establishing bidirectional traceability of safety-related information for the software safety requirements analysis process helps clarify the rationale behind safety requirements and facilitates verification and impact analysis. Following these standards, this paper constructs a traceability model for the software safety requirements analysis process.

8.
Test suites are a valuable source of up-to-date documentation, as developers continuously modify them to reflect changes in the production code and preserve an effective regression suite. While maintaining traceability links between unit tests and the classes under test can be useful to selectively retest code after a change, the value of having traceability links goes far beyond this potential saving. One key use is to help developers better comprehend the dependencies between tests and classes and to help maintain consistency during refactoring. Despite its importance, test-to-code traceability is not common in software development and, when needed, traceability information has to be recovered during software development and evolution. We propose an advanced approach, named SCOTCH+ (Source code and COncept based Test to Code traceability Hunter), to support the developer in identifying links between unit tests and tested classes. Given a test class, represented by a JUnit class, the approach first exploits dynamic slicing to identify a set of candidate tested classes. Then, external and internal textual information associated with the classes retrieved by slicing is analyzed to refine this set and identify the final set of candidate tested classes. The external information is derived from the analysis of the class name, while the internal information is derived from identifiers and comments. The approach is evaluated on five software systems. The results indicate that the accuracy of the proposed approach far exceeds the leading techniques found in the literature.
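Only the naming heuristic can be shown compactly; the sketch below loosely illustrates that one refinement step (dynamic slicing and identifier/comment analysis, which the approach also uses, are omitted, and the helper name is ours, not SCOTCH+'s).

```python
# Refine a candidate set of tested classes using the JUnit naming
# convention: a test class FooTest (or TestFoo) most likely exercises
# a class named Foo. Purely illustrative; the real approach combines
# this with dynamic slicing and textual analysis.

def refine_candidates(test_class, candidates):
    base = test_class.removesuffix("Test").removeprefix("Test")  # Python 3.9+
    exact = [c for c in candidates if c == base]
    return exact if exact else candidates  # fall back if the convention fails

print(refine_candidates("ShoppingCartTest", ["ShoppingCart", "CartItem", "Order"]))
# → ['ShoppingCart']
```

The fallback matters: when no candidate matches the convention, a real tool would defer to the textual-similarity ranking rather than discard the candidate set.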

9.
Purpose: Managing the safety and recall of processed food products is a challenge for industry and governments. Contaminated food items can create a significant public health hazard, with potential for acute and chronic food-borne illnesses. This industry study examines the challenges companies face while managing a processed food recall situation and devises a responsive and reliable knowledge management framework for the product safety and recall supply chain of the focal global manufacturing and distribution enterprise.
Method: Drawing upon the knowledge management and product safety and recall literature and on reliability engineering theory, this study uses a holistic single-case approach to develop a knowledge management framework with a Failure Mode Effects and Criticality Analysis (FMECA) decision model. This knowledge management decision framework facilitates analysis of the root causes of each potential major recall issue and assesses the reliability of the product safety and recall supply chain system and its critical components.
Results: The main reasons for a recall, and the associated failure modes, are highlighted in the knowledge management framework to guide the appropriate deployment of resources, technology, and procedures in the recall supply chain. The study underscores specific information described by managers of a global processed food manufacturer and their perspectives on the product safety and recall process and its complexities. Full-scale implementation of the product safety and recall supply chain in the proposed knowledge management framework will eventually be carried out through expert systems, following the current pilot study. This operational system, when fully implemented, will capture the essence of decision-making environments comprising goals and objectives, courses of action, resources, constraints, technology, and procedures.
Implications: The study recognizes the significance of communication, integration, a failsafe knowledge management process design framework, and leveraging technology such as Radio Frequency Identification (RFID) at all levels of the supply chain for product traceability, as well as the proactive steps that help companies successfully manage a recall process and re-establish trust among consumers. The proposed knowledge management framework can also preempt product recalls by acting as an early warning system. A formal knowledge management framework will enable a company's knowledge to accumulate for product safety and recall and will serve an important integrating and coordinating role for the organization.

10.
Context: This research deals with requirements elicitation technique selection for software product requirements and the overselection of open interviews.
Objectives: This paper proposes and validates a framework to help requirements engineers select the most adequate elicitation techniques at any time.
Method: We have explored both the existing underlying theory and the results of empirical research to build the framework. Based on this, we have deduced and put together justified proposals about the framework components. We have also had to add information not found in theoretical or empirical sources. In these cases, we drew on our own experience and expertise.
Results: A new validated approach for requirements technique selection. This new approach selects techniques other than the open interview, offers a wider range of possible techniques and captures more requirements information.
Conclusions: The framework is easily extensible and changeable. Whenever any theoretical or empirical evidence for an attribute, technique or adequacy value is unearthed, the information can be easily added to the framework.

11.
This paper addresses traceability management in the context of Accord|UML, a MARTE-based methodology for designing distributed real-time embedded systems. Our contribution is twofold: on the one hand, we propose to include requirements directly in the modeling process; on the other hand, we identify potential traceability links, which we model using the SysML requirement profile. We also present the toolbox that supports our contribution. This work is partly funded by the French Research Agency (ANR) through the "Réseau National des Technologies Logicielles" support, within both the MeMVaTEx and Domino projects.

12.
Static slicing has shown itself to be a valuable tool facilitating the verification of hardware designs. In this paper, we present a sharpened notion, antecedent conditioned slicing, that provides a more effective abstraction for reducing the size of the state space. In antecedent conditioned slicing, extra information from the antecedent is used to permit greater pruning of the state space. In a previous version of this paper, we applied antecedent conditioned slicing to safety properties of the form G(antecedent ⇒ consequent), where the antecedent and consequent were written in propositional logic. In this paper, we use antecedent conditioned slicing to handle safety and bounded liveness property specifications written in linear-time temporal logic. We present a theoretical justification of our technique. We provide experimental results on a Verilog RTL implementation of the USB 2.0 functional core, a large design with about 1,100 state elements (roughly 10^331 states). The results demonstrate that the technique provides significant performance benefits over static program slicing when using state-of-the-art model checkers.

13.
Prognostic and health management (PHM) describes a set of capabilities that enable the detection of anomalies, diagnosis of faults, and prediction of remaining useful lifetime (RUL), leading to the effective and efficient maintenance and operation of assets such as aircraft. Prior research has considered the methodological factors of PHM system design, but typically only one or a few aspects are addressed. For example, several studies address systems engineering (SE) principles for application to PHM design methodology and a concept of requirements from a theoretical standpoint, while other papers present requirement specification and flow-down approaches for PHM systems. However, the state of the art lacks a systematic methodology that covers all aspects of designing and comprehensively engineering a PHM system, and the process and concrete implementation of capturing stakeholders' expectations and requirements are usually described in little detail. To overcome these drawbacks, this paper proposes a stakeholder-oriented design methodology for developing a PHM system from a systems engineering perspective, contributing to a consistent and reusable representation of the design. Further, it details the process and deployment of stakeholder expectation definition, involving the steps of identifying stakeholders, capturing their expectations and requirements, and analysing stakeholders and requirements. Two case studies illustrate the applicability of the proposed methodology. The stakeholder-oriented design methodology enables the integration of the bespoke main tasks of designing a PHM system, in which sufficient stakeholder involvement and consideration of stakeholder interests can lead to more precise and better design information. Moreover, the methodology comprehensively covers traceability, consistency, and reusability in capturing and defining stakeholders and their expectations for a successful design.

14.
In industry, communicating automata specifications are mainly used in fields where reliability requirements are high, as this formalism allows the use of powerful validation tools. Still, on large-scale industrial specifications, formal methods suffer from the combinatorial explosion phenomenon. In our contribution, we suggest bypassing this phenomenon by applying slicing techniques prior to the targeted complex analysis. The analysis can thus be performed a posteriori on a reduced (or sliced) specification, which is potentially less exposed to combinatorial explosion. The slicing method is based on dependence relations defined on the specification under analysis, and is mainly founded on the literature on compiler construction and program slicing. A theoretical framework is described for static analyses of communicating automata specifications. This includes formal definitions of the aforementioned dependence relations and of a slice of a specification with respect to a slicing criterion. Efficient algorithms are also described in detail for calculating dependence relations and specification slices. Each of these algorithms has been shown to be polynomial, and sound and complete with respect to its definition. The algorithms have been implemented in a slicing tool, named Carver, which has proven operational in specification debugging and understanding. The experimental results obtained in model reduction with this tool are promising, notably in the area of formal validation and verification methods, e.g., model checking and test case generation.

15.
To face new environmental regulation directives and to improve their customer relationships, enterprises increasingly have to manage their product information during the entire lifecycle. One objective of this paper is to deal with product traceability along the product lifecycle. To meet this objective, the information system has to be designed, and further built, in such a way that all information regarding products is recorded. The IEC 62264 standards define generic logical models for exchanging product and process information between the business and manufacturing levels of enterprise applications, and can therefore serve as a base for product information traceability. However, the standard's complexity comes from the fact that it mixes conceptual and implementation details, while no methodology exists that defines how to instantiate it. For traceability purposes it is thus necessary to raise its level of abstraction, in order to concentrate on its concepts, and to manage its application by providing a methodology for its instantiation. In this paper, we propose to map the IEC 62264 standard models to a particular view of the Zachman framework in order to make the framework concrete, as a guideline for applying the standard, and to provide the key players in information systems design with a methodology for using the standard for traceability purposes.

16.
Context: The increasing adoption of process-aware information systems (PAISs) such as workflow management systems, enterprise resource planning systems, or case management systems, together with the high variability in business processes (e.g., sales processes may vary depending on the respective products and countries), has resulted in large industrial process model repositories. To cope with this business process variability, the proper management of process variants along the entire process lifecycle becomes crucial.
Objective: The goal of this paper is to develop a fundamental understanding of business process variability. In particular, the paper provides a framework for assessing and comparing process variability approaches and the support they provide for the different phases of the business process lifecycle (i.e., process analysis and design, configuration, enactment, diagnosis, and evolution).
Method: We conducted a systematic literature review (SLR) in order to discover how process variability is supported by existing approaches.
Results: The SLR resulted in 63 primary studies, which were analyzed in depth. Based on this analysis, we derived the VIVACE framework. VIVACE allows assessing the expressiveness of a process modeling language regarding the explicit specification of process variability. Furthermore, the support provided by a process-aware information system to properly deal with process model variants can be assessed with VIVACE as well.
Conclusions: VIVACE provides an empirically grounded framework for process engineers that enables them to evaluate existing process variability approaches and to select the variability approach that best meets their requirements. Finally, it helps process engineers implement PAISs supporting process variability along the entire process lifecycle.

17.
Context: The intensive human effort needed to manually manage traceability information has increased interest in using semi-automated traceability recovery techniques. In particular, Information Retrieval (IR) techniques have been widely employed over the last ten years to partially automate the traceability recovery process.
Aim: Previous studies mainly focused on analysing the performance of IR-based traceability recovery methods, and several enhancing strategies have been proposed to improve their accuracy. Very few papers investigate how developers (i) use IR-based traceability recovery tools and (ii) analyse the list of suggested links to validate correct links or discard false positives. We focus on this issue and suggest exploiting link count information in IR-based traceability recovery tools to improve the performance of developers during a traceability recovery process.
Method: Two empirical studies were conducted to evaluate the usefulness of link count information. The studies involved 135 university students who had to perform (with and without link count information) traceability recovery tasks on two software project repositories. We then evaluated the quality of the recovered traceability links in terms of links correctly and erroneously traced by the students.
Results: The results indicate that the use of link count information significantly increases the number of correct links identified by the participants.
Conclusions: The results can be used to derive guidelines on how to effectively use the traceability recovery approaches and tools proposed in the literature.
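As background, the IR machinery such tools build on can be sketched with a tiny vector space model; the corpus and token handling below are deliberately naive (no stemming or stop-word removal), and the code is ours, not the study's.

```python
# A minimal TF-IDF / cosine sketch of IR-based traceability recovery:
# rank code artifacts by textual similarity to a requirement. Real tools
# add stemming, stop words, and (as this study suggests) link counts to
# help developers vet the ranked list.
import math
from collections import Counter

def tfidf_vectors(docs):
    df = Counter(w for d in docs for w in set(d.split()))  # document frequency
    n = len(docs)
    return [{w: tf * math.log(n / df[w])
             for w, tf in Counter(d.split()).items()} for d in docs]

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na, nb = math.hypot(*a.values()), math.hypot(*b.values())
    return dot / (na * nb) if na and nb else 0.0

artifacts = ["cart add item total price",
             "login password session user"]
req = "the cart shall compute the total price of each item"
vecs = tfidf_vectors(artifacts + [req])
scores = [cosine(v, vecs[-1]) for v in vecs[:-1]]
print(scores[0] > scores[1])  # → True: the cart artifact ranks first
```

A developer's job is then exactly the step the study examines: walking the ranked list to confirm true links and discard false positives.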

18.
The variety of design artifacts (models) produced in a model-driven design process results in an intricate relationship between requirements and the various models. This paper proposes a methodological framework that simplifies management of this relationship, which helps in assessing the quality of models, realizations and transformation specifications. Our framework is a basis for understanding requirements traceability in model-driven development, as well as for the design of tools that support requirements traceability in model-driven development processes. We propose a notion of conformance between application models which reduces the effort needed for assessment activities. We discuss how this notion of conformance can be integrated with model transformations.

19.
Product semantics, the "study of the symbolic qualities of man-made forms in the context of their use, and application of this knowledge to industrial design", is an important challenge in product design. Because of its subjectivity, this particular dimension of the user's need is difficult to express, to quantify and to assess. This paper presents a general approach to assessing product semantics in a sound way. It is based on usability tests and involves several classical methods from marketing and decision-making theory, such as multidimensional scaling, the semantic differential method, factor analysis, pairwise comparison and the analytic hierarchy process. As a result, our integrated approach provides designers with a tool that helps understand and specify the semantic part of the need; it rates and ranks new product prototypes according to their closeness to the specified "ideal product", and it underlines the particular semantic dimensions that should be improved. To illustrate our approach, we have performed usability tests and applied our methodology to the design of table glasses. For the sake of clarity, each stage of the methodology is presented in detail for this particular example.

Relevance to industry

The integrated framework proposed in this paper can be readily deployed in companies and used at different stages of product design. Firstly, our methodology provides a frame for describing how a given product family is perceived by users, and for storing and updating these data. Secondly, the data can be used to specify target requirements for a new product by qualitative comparisons to existing products. Finally, emerging product concepts may be directly assessed with regard to the requirements in a simple, qualitative and comparative way.


20.
Context: One of the most important factors in the development of a software project is the quality of its requirements. Erroneous requirements, if not detected early, may cause many serious problems, such as substantial additional costs, failure to meet the expected objectives, and delays in delivery dates. For these reasons, great effort must be devoted in requirements engineering to ensuring that a project's requirements are of high quality. One of the aims of this discipline is the automatic processing of requirements for assessing their quality; this is, however, a complex task because the quality of requirements depends mostly on the interpretation of experts and on the necessities and demands of the project at hand.
Objective: The objective of this paper is to assess the quality of requirements automatically, emulating the assessment that a quality expert of the project would make.
Method: The proposed methodology is based on learning from standard metrics that represent the characteristics an expert takes into consideration when deciding on the good or bad quality of requirements. Using machine learning techniques, a classifier is trained with requirements previously classified by the expert, and is then used to classify newly provided requirements.
Results: We present two approaches to the methodology, corresponding to two variants of the problem that differ in the balance of the requirements corpus used for learning, and we obtain different accuracy and efficiency results in order to evaluate both representations. The paper demonstrates the reliability of the methodology with a case study using requirements provided by the Requirements Working Group of the INCOSE organization.
Conclusions: A methodology that evaluates the quality of requirements written in natural language is presented, in order to emulate the quality assessment the expert would provide for new requirements, achieving an average accuracy of 86.1%.
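A toy version of the learning idea might look as follows; the metrics, the vague-word list, and the nearest-neighbour classifier are all invented stand-ins (the paper uses standard quality metrics and its own expert-labelled corpus).

```python
# Represent each requirement by simple quality metrics (word count,
# presence of "shall", count of vague terms), learn from expert-labelled
# examples, and classify new requirements by nearest neighbour.
# Everything here is illustrative, not the paper's feature set.

VAGUE = {"fast", "easy", "user-friendly", "etc"}

def metrics(req):
    words = req.lower().split()
    return (len(words),
            1 if "shall" in words else 0,
            sum(w in VAGUE for w in words))

def classify(req, labelled):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    m = metrics(req)
    return min(labelled, key=lambda ex: dist(metrics(ex[0]), m))[1]

training = [
    ("The system shall log every failed login within 1 second", "good"),
    ("It should be fast and easy etc", "bad"),
]
print(classify("The pump shall stop within 2 seconds of a fault", training))
# → good
```

The corpus-balancing issue the abstract mentions shows up even here: with far more "good" than "bad" examples, a nearest-neighbour rule drifts toward the majority label, which is why the paper evaluates both balanced and unbalanced variants.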
