Similar Documents
20 similar documents found (search time: 31 ms)
1.
A Survey of Testing Techniques for Autonomous Driving Software   Total citations: 1 (self-citations: 0, citations by others: 1)
An autonomous driving system (ADS) is a cyber-physical system that integrates high-precision sensors, artificial intelligence, and map/navigation modules. The autonomous driving software in such systems carries out the key tasks of perception, localization, prediction, planning, and control, from advanced driver assistance through to fully driverless operation. With the development of artificial intelligence techniques such as deep learning and reinforcement learning, and the continuous upgrading of on-board hardware, high-level autonomous driving software has gradually been deployed in a variety of safety-critical scenarios, and testing techniques that ensure its operational stability and reliability have become a research focus in both academia and industry. Based on an extensive survey of the domestic and international literature, this paper presents an in-depth analysis and organization of testing techniques for autonomous driving software. Considering the architectural and system characteristics of autonomous driving software, it discusses simulation-based testing and real-world (field) testing of autonomous driving systems, as well as component-oriented testing techniques. On the simulation-method side, it analyzes software simulation, semi-physical simulation, and in-the-loop simulation; on the simulation-object side, it discusses static environment simulation, dynamic scenario simulation, sensor simulation, and vehicle dynamics simulation. The paper also reviews the current state of real-world testing, with particular attention to the strengths and weaknesses observed in field-testing case studies. For component-oriented testing of autonomous driving software, it focuses on recent progress in data-driven techniques for testing perception, decision-and-planning, and control components. Finally, the paper summarizes the challenges currently facing autonomous driving software testing and outlines future research directions and priorities.

2.
Mining very large databases   Total citations: 1 (self-citations: 0, citations by others: 1)
Ganti V., Gehrke J., Ramakrishnan R. Computer, 1999, 32(8): 38-45
Established companies have had decades to accumulate masses of data about their customers, suppliers, products and services, and employees. Data mining, also known as knowledge discovery in databases, gives organizations the tools to sift through these vast data stores to find the trends, patterns, and correlations that can guide strategic decision making. Traditionally, algorithms for data analysis assume that the input data contain relatively few records. Current databases, however, are much too large to be held in main memory. To be efficient, the data mining techniques applied to very large databases must be highly scalable. An algorithm is said to be scalable if, given a fixed amount of main memory, its runtime increases linearly with the number of records in the input database. Recent work has focused on scaling data mining algorithms to very large data sets. The authors describe a broad range of algorithms that address three classical data mining problems: market basket analysis, clustering, and classification.
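As a minimal illustration of the scalability property defined in this abstract (not an algorithm from the paper), the sketch below counts single-item frequencies for market basket analysis in one sequential pass over a transaction file, so runtime grows linearly with the number of records while memory stays bounded by the number of distinct items; the file name and comma-separated format are assumptions.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class SinglePassItemCounts {
    public static void main(String[] args) throws IOException {
        // Each line of the (hypothetical) file is one transaction: comma-separated item ids.
        Map<String, Long> counts = new HashMap<>();
        try (BufferedReader in = new BufferedReader(new FileReader("transactions.csv"))) {
            String line;
            while ((line = in.readLine()) != null) {          // one sequential scan => linear in #records
                for (String item : line.split(",")) {
                    counts.merge(item.trim(), 1L, Long::sum); // memory bounded by #distinct items, not #records
                }
            }
        }
        counts.forEach((item, n) -> System.out.println(item + "\t" + n));
    }
}
```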

3.
In medical information systems, the data that describe patient health records are often time-stamped. These data are liable to complexities such as missing data, observations at irregular time intervals, and large attribute sets. Due to these complexities, mining clinical time-series data remains a challenging area of research. This paper proposes a bio-statistical mining framework, named statistical tolerance rough set induced decision tree (STRiD), which handles these complexities and builds an effective classification model. The constructed model is used in developing a clinical decision support system (CDSS) to assist the physician in clinical diagnosis. The STRiD framework provides the following functionalities: temporal pre-processing, attribute selection, and classification. In temporal pre-processing, an enhanced fuzzy-inference-based double exponential smoothing method is presented to impute the missing values and to derive the temporal patterns for each attribute. In attribute selection, relevant attributes are selected using the tolerance rough set. A classification model is constructed with the selected attributes using a temporal-pattern-induced decision tree classifier. For experimentation, this work uses clinical time-series datasets of hepatitis and thrombosis patients. The constructed classification model has proven the effectiveness of the proposed framework, with a classification accuracy of 91.5% for hepatitis and 90.65% for thrombosis.
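The enhanced fuzzy-inference-based variant described above is specific to the paper and is not reproduced here; as a rough sketch of the underlying idea only, standard double exponential smoothing (Holt's method) can produce a one-step-ahead forecast that stands in for a missing observation. The smoothing constants and the example series below are illustrative assumptions.

```java
public class DoubleExponentialSmoothing {
    /** Returns a one-step-ahead forecast that could stand in for a missing observation. */
    static double forecastNext(double[] series, double alpha, double beta) {
        double level = series[0];
        double trend = series[1] - series[0];
        for (int t = 1; t < series.length; t++) {
            double prevLevel = level;
            level = alpha * series[t] + (1 - alpha) * (prevLevel + trend);   // smoothed level
            trend = beta * (level - prevLevel) + (1 - beta) * trend;         // smoothed trend
        }
        return level + trend;   // forecast for the next time point
    }

    public static void main(String[] args) {
        double[] labValues = {42.0, 45.0, 47.5, 46.0, 49.0};   // e.g. a lab value observed at successive visits
        System.out.printf("imputed next value: %.2f%n", forecastNext(labValues, 0.5, 0.3));
    }
}
```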

4.
Context: Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and in the computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective: This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method: We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results: We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software, such as oracle problems, and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community, such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally, we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions: Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of scientific software, make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider the special challenges posed by scientific software, such as oracle problems, when developing testing techniques.

5.
Physicians were interviewed about their routines in everyday use of the medical record. From the interviews, we conclude that the medical record is a well functioning working instrument for the experienced physician. Using the medical record as a basis for decision making involves interpretation of format, layout and other textural features of the type-written data. Interpretation of these features provides effective guidance in the process of searching, reading and assessing the relevance of different items of information in the record. It seems that this is a skill which is an integrated part of diagnostic expertise. This skill plays an important role in decision making based on the large amount of information about a patient, which is exhibited to the reader in the medical record. This finding has implications for the design of user interfaces for reading computerized medical records.

6.
Search based software testing of object-oriented containers   Total citations: 1 (self-citations: 0, citations by others: 1)
Automatic software testing tools are still far from ideal for real-world object-oriented (OO) software. The use of nature-inspired search algorithms for this problem has been investigated recently. Testing complex data structures (e.g., containers) is very challenging, since testing software with simple states is already hard. Because containers are used in almost every type of software, their reliability is of utmost importance. Hence, this paper focuses on the difficulties of testing container classes with nature-inspired search algorithms. We first describe how input data can be automatically generated for testing Java containers. Input space reductions and a novel testability transformation are presented to aid the search algorithms. Different search algorithms are then considered and studied in order to understand when and why a search algorithm is effective for a testing problem. In our experiments, these nature-inspired search algorithms seem to give better results than the traditional techniques described in the literature. In addition, the problem of minimising the length of the test sequences is also addressed. Finally, some open research questions are given.
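To make the search setting concrete, here is a minimal sketch (not the paper's tool) of randomly searching over call sequences for a container under test, with a small key range as a crude input-space reduction and sequence length as the quantity to minimise; the goal predicate used as a stand-in for a real coverage target is an assumption.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.TreeMap;

/** Random search over call sequences for a container under test (here java.util.TreeMap). */
public class RandomContainerSearch {
    record Call(boolean isPut, int key) {}

    public static void main(String[] args) {
        Random rnd = new Random(42);
        List<Call> best = null;
        for (int attempt = 0; attempt < 10_000; attempt++) {
            List<Call> seq = new ArrayList<>();
            TreeMap<Integer, Integer> map = new TreeMap<>();
            int len = 1 + rnd.nextInt(20);
            for (int i = 0; i < len; i++) {
                Call c = new Call(rnd.nextBoolean(), rnd.nextInt(5));  // small key range shrinks the input space
                seq.add(c);
                if (c.isPut()) map.put(c.key(), i); else map.remove(c.key());
            }
            // Illustrative goal predicate (a real tool would target structural coverage of the container code):
            // reach a container state with exactly 3 entries, using the shortest sequence found.
            if (map.size() == 3 && (best == null || seq.size() < best.size())) {
                best = seq;
            }
        }
        System.out.println(best == null ? "no sequence found" : "shortest sequence reaching the goal: " + best);
    }
}
```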

7.
8.
MobiGuide is a ubiquitous, distributed and personalized evidence-based decision-support system (DSS) used by patients and their care providers. Its central DSS applies computer-interpretable clinical guidelines (CIGs) to provide real-time, patient-specific and personalized recommendations by matching CIG knowledge with a highly adaptive patient model, the parameters of which are stored in a personal health record (PHR). The PHR integrates data from hospital medical records, mobile biosensors, data entered by patients, and recommendations and abstractions output by the DSS. CIGs are customized to consider the patients' psycho-social context and their preferences; shared decision making is supported via decision trees instantiated with patient utilities. The central DSS "projects" personalized CIG knowledge to a mobile DSS operating on the patients' smart phones that applies that knowledge locally. In this paper we explain the knowledge elicitation and specification methodologies that we have developed for making CIGs patient-centered and enabling their personalization. We then demonstrate the feasibility of the full architecture that we have designed and implemented in two very different clinical domains and at two different geographic sites, as part of a multi-national feasibility study. We analyze usage patterns and opinions collected via questionnaires from the 10 atrial fibrillation (AF) and 20 gestational diabetes mellitus (GDM) patients and their care providers. The analysis is guided by three hypotheses concerning the effect of the personal patient model on patients' and clinicians' behavior and on patients' satisfaction. The results demonstrate sustainable usage of the system by patients and their care providers, as well as patient satisfaction, which stems mostly from their increased sense of safety. The system has affected the behavior of clinicians, who inspected the patients' models between scheduled visits, resulting in a change of diagnosis for two of the ten AF patients and an anticipated change in therapy for eleven of the twenty GDM patients.

9.
The ability to reason with time-oriented data is central to the practice of medicine. Monitoring clinical variables over time often provides information that drives medical decision making (e.g., clinical diagnosis and therapy planning). Because the time-oriented patient data are often stored in electronic databases, it is important to ensure that clinicians and medical decision-support applications can conveniently find answers to their clinical queries using these databases. To help clinicians and decision-support applications make medical decisions using time-oriented data, a database-management system should (1) permit the expression of abstract, time-oriented queries, (2) permit the retrieval of data that satisfy a given set of time-oriented data-selection criteria, and (3) present the retrieved data at the appropriate level of abstraction. We impose these criteria to facilitate the expression of clinical queries and to reduce the manual data processing that users must undertake to decipher the answers to their queries. We describe a system, Tzolkin, that integrates a general method for temporal-data maintenance with a general method for temporal reasoning to meet these criteria. Tzolkin allows clinicians to use SQL-like temporal queries to retrieve both raw, time-oriented data and dynamically generated summaries of those data. Tzolkin can be used as a standalone system or as a module that serves other software systems. We implement Tzolkin with a temporal-database mediator approach. This approach is general, facilitates software reuse, and thus decreases the cost of building new software systems that require this functionality.

10.
From a statistical perspective, this paper analyzes and compares the data analysis techniques commonly used in software measurement, examining their similarities and differences, and further explains the factors that affect data analysis. It provides guidance for correctly selecting data analysis techniques in software measurement practice, thereby giving objective and effective support to management decision making and project process monitoring in software development.

11.
Analysis techniques, such as control flow, data flow, and control dependence, are used for a variety of software engineering tasks, including structural and regression testing, dynamic execution profiling, static and dynamic slicing, and program understanding. To be applicable to programs in languages such as Java and C++, these analysis techniques must account for the effects of exception occurrences and exception handling constructs; failure to do so can cause the analysis techniques to compute incorrect results and thus limit the usefulness of the applications that use them. This paper discusses the effects of exception handling constructs on several analysis techniques. The paper presents techniques to construct representations for programs with explicit exception occurrences (exceptions that are raised explicitly through throw statements) and exception handling constructs. The paper presents algorithms that use these representations to perform the desired analyses. The paper also discusses several software engineering applications that use these analyses. Finally, the paper describes empirical results pertaining to the occurrence of exception handling constructs in Java programs and their effect on some analysis tasks.
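The following small Java fragment, written for illustration only, shows the kind of explicit exception occurrence and handler that such program representations must model: the throw statement adds a control-flow edge to the catch block that bypasses the normal update, which is exactly what an analysis that ignores exception constructs would miss. The class and method names are invented for the example.

```java
public class WithdrawExample {
    private double balance = 100.0;

    double withdraw(double amount) {
        try {
            if (amount > balance) {
                // Explicit exception occurrence: control jumps from here to the handler,
                // bypassing the assignment below. An analysis that ignores this edge
                // would wrongly assume 'balance' is always updated.
                throw new IllegalArgumentException("insufficient funds");
            }
            balance -= amount;
        } catch (IllegalArgumentException e) {
            // Handler block: a control-flow successor of the throw, not of the normal path.
            System.err.println("rejected: " + e.getMessage());
        }
        return balance;   // reached on both the normal and the exceptional path
    }

    public static void main(String[] args) {
        WithdrawExample acct = new WithdrawExample();
        System.out.println(acct.withdraw(30));    // normal path
        System.out.println(acct.withdraw(500));   // exceptional path
    }
}
```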

12.
Context: An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality level. But there are more techniques than a single developer could master, and it is still impossible to be certain that software is free of defects. So it is crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality results for different products. Some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, but not much is known yet about the relationship between different techniques and their contextual behavior. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code reading by stepwise abstraction) in terms of their fault detection capability. This will be used to strengthen the practical knowledge available on these techniques. Method: The results of eight experiments conducted over 5 years to investigate the effectiveness of three techniques (code reading by stepwise abstraction, equivalence class partitioning, and decision (branch) coverage) were aggregated using a less rigorous aggregation process proposed during the course of this work. Results: Equivalence class partitioning and decision coverage behaved similarly in terms of fault detection capacity (and the types of faults caught), based on the programs and fault classification used in the experiments. Both behaved better than code reading by stepwise abstraction. Conclusion: Overall, it can be deduced from the aggregation results that the equivalence class partitioning and decision coverage techniques used are equally capable in terms of the type and number of faults detected. Nevertheless, more experiments are still required in this field so that this result can be verified using a rigorous aggregation technique.
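For readers unfamiliar with the two testing techniques compared above, the sketch below (an invented example, not one of the experiment programs) shows a small function together with inputs chosen by equivalence class partitioning that also happen to achieve decision (branch) coverage.

```java
public class GradeClassifier {

    /** Valid scores are 0..100; returns "pass" for scores of 50 and above. */
    static String classify(int score) {
        if (score < 0 || score > 100) {      // decision 1
            throw new IllegalArgumentException("score out of range");
        }
        if (score >= 50) {                   // decision 2
            return "pass";
        }
        return "fail";
    }

    public static void main(String[] args) {
        // Equivalence class partitioning: one representative per input class.
        //   invalid-low: -5   invalid-high: 120   valid-fail: 30   valid-pass: 75
        // Decision (branch) coverage: the same four inputs also take both outcomes
        // of decision 1 and decision 2 at least once.
        int[] inputs = {-5, 120, 30, 75};
        for (int s : inputs) {
            try {
                System.out.println(s + " -> " + classify(s));
            } catch (IllegalArgumentException e) {
                System.out.println(s + " -> " + e.getMessage());
            }
        }
    }
}
```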

13.
With the growing complexity of industrial software applications, practitioners in industry are looking for efficient and practical methods to validate their software. This paper develops a model-based statistical testing approach that automatically generates online and offline test cases for embedded software. It discusses an integrated framework that combines solutions for three major software testing research questions: (i) how to select test inputs; (ii) how to predict the expected results of a test; and (iii) when to stop testing software. The automatic selection of test inputs is based on a stochastic test model that accounts for the main particularity of embedded software: time sensitivity. Software test practitioners may design one or more test models when they generate random, user-oriented, or fault-oriented test inputs. A formal framework integrating existing and appropriate specification techniques was developed for the design of automated test oracles (executable software specifications) and the formal measurement of functional coverage. The decision to stop testing software is based on both test coverage objectives and cost constraints. This approach was tested on two representative case studies from the automotive industry. The experiment was performed at the unit-testing level in a simulated environment on a host personal computer (automatic test execution). The two software functionalities tested had previously been unit tested and validated using the test design approach conventionally used in the industry. Applying the proposed model-based statistical testing approach to these two case studies, we obtained significant improvements in performing functional unit testing in a real and complex industrial context: more bugs were detected earlier and in a shorter time. Copyright © 2012 John Wiley & Sons, Ltd.
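A minimal sketch of the general idea, not the paper's framework: random generation of timed test inputs for a hypothetical embedded function, with the decision to stop testing based on a functional coverage objective and a test-budget (cost) constraint. The debounce function, thresholds, and budget are assumptions.

```java
import java.util.Random;

public class StatisticalTestDriver {
    // Hypothetical system under test: debounces a sensor signal.
    static boolean debounce(boolean signal, long holdMillis) {
        return signal && holdMillis >= 50;   // stable only if held at least 50 ms
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        boolean[] outcomesSeen = new boolean[2];   // functional coverage: both outcomes observed
        int executed = 0;
        final int budget = 1_000;                  // cost constraint: maximum number of test cases

        while (executed < budget && !(outcomesSeen[0] && outcomesSeen[1])) {
            boolean signal = rnd.nextBoolean();
            long hold = rnd.nextInt(200);          // time sensitivity: random hold duration in ms
            boolean stable = debounce(signal, hold);
            outcomesSeen[stable ? 1 : 0] = true;
            executed++;
            // A real oracle would compare 'stable' against an executable specification here.
        }
        System.out.println("tests executed: " + executed
                + ", coverage met: " + (outcomesSeen[0] && outcomesSeen[1]));
    }
}
```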

14.
In order to address the rapidly increasing load of air traffic operations, innovative algorithms and software systems must be developed for the next generation air traffic control. Extensive verification of such novel algorithms is key for their adoption by industry. Separation assurance algorithms aim at predicting if two aircraft will get closer to each other than a minimum safe distance; if loss of separation is predicted, they also propose a change of course for the aircraft to resolve this potential conflict. In this paper, we report on our work towards developing an advanced testing framework for separation assurance. Our framework supports automated test case generation and testing, and defines test oracles that capture algorithm requirements. We discuss three different approaches to test-case generation, their application to a separation assurance prototype, and their respective strengths and weaknesses. We also present an approach for statistical analysis of the large numbers of test results obtained from our framework.
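As an illustration of what a separation assurance algorithm predicts (a simplified 2-D, constant-velocity model, not the prototype tested in the paper), the sketch below computes the closest point of approach of two aircraft within a look-ahead horizon and flags a loss of separation when the minimum distance falls below the required separation. Units and example values are assumptions.

```java
public class SeparationCheck {

    /** Returns true if the horizontal distance drops below minSep within the look-ahead horizon. */
    static boolean lossOfSeparation(double[] p1, double[] v1, double[] p2, double[] v2,
                                    double minSep, double horizon) {
        double px = p2[0] - p1[0], py = p2[1] - p1[1];   // relative position
        double vx = v2[0] - v1[0], vy = v2[1] - v1[1];   // relative velocity
        double vv = vx * vx + vy * vy;
        double tCpa = vv == 0 ? 0 : -(px * vx + py * vy) / vv;   // time of closest approach
        tCpa = Math.max(0, Math.min(horizon, tCpa));             // clamp to the look-ahead window
        double dx = px + vx * tCpa, dy = py + vy * tCpa;
        return Math.hypot(dx, dy) < minSep;
    }

    public static void main(String[] args) {
        // Two aircraft converging head-on, 40 nm apart, 5 nm required separation, 20 min horizon.
        boolean conflict = lossOfSeparation(
                new double[]{0, 0},  new double[]{8, 0},     // position (nm), velocity (nm/min)
                new double[]{40, 0}, new double[]{-8, 0},
                5.0, 20.0);
        System.out.println("loss of separation predicted: " + conflict);
    }
}
```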

15.
Interprocedural data flow information is useful for many software testing and analysis techniques, including data flow testing, regression testing, program slicing and impact analysis. For programs with aliases, these testing and analysis techniques can yield invalid results, unless the data flow information accounts for aliasing effects. Recent research provides algorithms for performing interprocedural data flow analysis in the presence of aliases; however, these algorithms are expensive, and achieve precise results only on complete programs. This paper presents an algorithm for performing alias analysis on incomplete programs that lets individual software components such as library routines, subroutines or subsystems be analyzed independently. The paper also presents an algorithm for reusing the results of this separate analysis when the individual software components are linked with calling modules. Our algorithms let us analyze frequently used software components, such as library routines or classes, independently, and reuse the results of that analysis when analyzing calling programs, without incurring the expense of completely reanalyzing each calling program. Our algorithms also provide a way to analyze large systems incrementally.

16.
This article discusses the relevance of using pattern-recognition test methods when developing intelligent decision-support systems for various problem areas. It shows the advantage of fault-tolerant diagnostic tests in intelligent systems, namely as a tool for registering and processing different kinds of errors in databases and knowledge bases. The results of testing two algorithms for constructing the nonredundant matrix of implications are compared, and the technical particulars of the program implementation are discussed, including synchronization means, the test environment, the test-program structure, bottlenecks of the implementation and methods for eliminating them, and further development of parallel algorithms.

17.
Fuzzy set theory, rough set theory and soft set theory are all generic mathematical tools for dealing with uncertainties. There has been some progress concerning practical applications of these theories, especially the use of these theories in decision making problems. In the present article, we review some decision making methods based on (fuzzy) soft sets, rough soft sets and soft rough sets. In particular, we provide several novel algorithms for decision making problems by combining these kinds of hybrid models. It may serve as a foundation for developing more complicated soft set models in decision making.

18.
Today's consumer electronics must be portable, reliable in various operating environments, and power efficient. Thus, semiconductor manufacturers constantly upgrade their production technologies and incorporate intelligent circuit design techniques. With widespread advances in system integration techniques, manufacturers can bundle multiple functionalities onto a single chip, reducing the end product's form factor. However, with higher levels of integration and reduced pin count, test issues are becoming more critical. During high-volume production, variations in process parameters cause devices to vary significantly from their performance metrics, and test engineers have only limited test resources to perform at-speed testing. Generating diagnosis information is also challenging during product ramp-up, as very little information is available from the output pins about the different modules' functionalities. DFT seems to be the only viable solution in such a scenario. DFT can address various issues related to at-speed testing and high-speed test response capture by performing signal conditioning to more easily capture information at lower speeds. The authors present a method that uses embedded DC sensors at test observation nodes to simplify data capture and enhance test quality while performing at-speed tests during production testing. Experiments show that monitoring sensor outputs provides a very good estimate of complex, system-level specifications.

19.
This paper presents an overview of two maintenance techniques widely discussed in the literature: time-based maintenance (TBM) and condition-based maintenance (CBM). The paper discusses how the TBM and CBM techniques work toward maintenance decision making. Recent research articles covering the application of each technique are reviewed. The paper then compares the challenges of implementing each technique from a practical point of view, focusing on the issues of required data determination and collection, data analysis/modelling, and decision making. The paper concludes with significant considerations for future research. Each of the techniques was found to have unique concepts/principles, procedures, and challenges for real industrial practice. It can be concluded that the application of the CBM technique is more realistic, and thus more worthwhile to apply, than the TBM one. However, further research on CBM must be carried out in order to make it more realistic for making maintenance decisions. The paper provides useful information regarding the application of the TBM and CBM techniques in maintenance decision making and explores the challenges in implementing each technique from a practical perspective.

20.
This paper begins by analysing decision making activities and information requirements at three organizational levels and the characteristics of expert systems (ESs) intended for the two different roles of supporting and replacing a decision maker. It goes on to review the evidence from many years of commercial use of ESs at different levels and in different roles, and to analyse the evidence obtained from a pilot experiment involving developing ESs to fulfil two different roles in the same domain. The research finds that ESs in a replacement role prove to be effective for operational and tactical decisions, but have limitations at the strategic level. ESs in a support role, as advisory systems, can help to make better decisions, but their effectiveness can only be fulfilled through their users. In the experiments, an expert advisory system did not save a user's time, contrary to the expectations of many of its users, but an ES in a replacement role did improve the efficiency of decision making. In addition, the knowledge bases of the ESs in the different roles need to be different. Finally, the practical implications of the experience gained from developing and testing two types of ESs are discussed.


