Search results: 4 articles found.
1.

Spectrum-based fault localization (SFL) techniques have shown considerable effectiveness in localizing software faults. They leverage a ranking metric to automatically assign suspiciousness scores to entities in a given faulty program. However, for some programs, current SFL ranking metrics lose effectiveness. In this paper, we introduce ConsilientSFL, which synthesizes a new ranking metric for a given program from a customized combination of a set of existing ranking metrics. ConsilientSFL is significant in that it demonstrates the application of voting systems to a software engineering task. First, several mutated, faulty versions of the program are generated. Then, the mutated versions are executed with the test data. Next, the effectiveness of each existing ranking metric is computed for each mutated version. After that, for each mutated version, the existing metrics are ranked using a preferential voting system. Consequently, several top metrics are chosen based on their ranks across all mutated versions. Finally, the chosen ranking metrics are normalized and synthesized, yielding a new ranking metric. To evaluate ConsilientSFL, we conducted experiments on 27 subject programs from the Code4Bench and Siemens benchmarks. In the experiments, ConsilientSFL outperformed every single ranking metric. In particular, averaged over all programs, recall, precision, f-measure, and percentage of code inspected were nearly 7, 9, 12, and 5 percentage points better, respectively, than with single metrics. The impact of this work is twofold. First, it mitigates the difficulty of choosing a proper ranking metric for the faulty program at hand. Second, it helps debuggers find more faults with less time and effort, yielding higher-quality software.
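The pipeline described in the abstract can be sketched roughly as follows. The three metric formulas (Tarantula, Ochiai, Jaccard) are standard SFL metrics; the Borda-style point assignment is used here only as a simple stand-in for the paper's preferential voting system, and averaging the chosen metrics is an assumed normalization, not necessarily ConsilientSFL's actual one.

```python
import math
from collections import defaultdict

# Spectrum counts for one program entity (e.g. a statement):
# ef/ep = failing/passing tests that execute it; nf/np = those that do not.

def tarantula(ef, ep, nf, np):
    fail = ef / (ef + nf) if ef + nf else 0.0
    ok = ep / (ep + np) if ep + np else 0.0
    return fail / (fail + ok) if fail + ok else 0.0

def ochiai(ef, ep, nf, np):
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def jaccard(ef, ep, nf, np):
    denom = ef + nf + ep
    return ef / denom if denom else 0.0

METRICS = {"tarantula": tarantula, "ochiai": ochiai, "jaccard": jaccard}

def borda_select(per_version_rankings, top_k=2):
    """Pick overall-best metrics from per-mutant rankings (best first).

    A Borda count stands in for the preferential voting system."""
    points = defaultdict(int)
    for order in per_version_rankings:
        for pos, name in enumerate(order):
            points[name] += len(order) - pos
    return sorted(points, key=points.get, reverse=True)[:top_k]

def synthesized_score(ef, ep, nf, np, chosen):
    """Assumed synthesis: mean of the chosen (already 0..1) metrics."""
    return sum(METRICS[m](ef, ep, nf, np) for m in chosen) / len(chosen)
```

In this sketch, each mutated version contributes one ranking of the candidate metrics; the metrics with the most voting points across mutants are then blended into the final suspiciousness score.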

2.
Software piracy is the unauthorized reconstruction or illegal redistribution of licensed software. Distinguishing pirated software from the base software is a major concern, since pirated software can lead to significant financial losses as well as serious security vulnerabilities. To detect software piracy, we have recently proposed Metamorphic Analysis for Automatic Software Piracy Detection (MetaSPD) with a proof-of-concept evaluation. The core idea of MetaSPD was inspired by metamorphic malware detection, owing to its similarity to software piracy detection. MetaSPD works primarily by mining the opcode graph of the base software to extract micro-signatures. It then leverages a classifier model to decide whether a given suspicious file is a pirated version of the base software. This paper extends our prior work in several respects. First, we present a retrospective appraisal of the main literature, laying bare the status quo of software piracy detection and discussing the current problems of the field to motivate our work. We then elaborate on MetaSPD itself and its constituent components. We provide two extensive experiments to demonstrate the effectiveness of MetaSPD. The experiments were carried out on two different datasets, each comprising 1300 morphed variants of the respective base software that act as pirated versions of that software. We conducted our experiments using three different classifiers. The paper also includes a detailed discussion of the different properties and concerns of MetaSPD. The results corroborate that an attacker using a pirated version of the given software can hardly hide illegal usage of the software, even by applying superabundant obfuscations to the code.
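The opcode-graph idea can be sketched as below. The graph here is a normalized frequency map over consecutive opcode pairs, the "micro-signatures" are assumed to be a fixed set of distinctive edges, and a cosine-similarity threshold stands in for the trained classifier; all three are illustrative assumptions rather than MetaSPD's actual mining and classification procedure.

```python
from collections import Counter

def opcode_graph(opcodes):
    """Normalized edge frequencies over consecutive opcode pairs."""
    edges = Counter(zip(opcodes, opcodes[1:]))
    total = sum(edges.values())
    return {e: c / total for e, c in edges.items()}

def features(graph, signature_edges):
    """Project a graph onto a fixed list of micro-signature edges."""
    return [graph.get(e, 0.0) for e in signature_edges]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def looks_pirated(base_ops, suspect_ops, threshold=0.8):
    """Toy decision rule standing in for the classifier model."""
    base = opcode_graph(base_ops)
    sig = sorted(base)  # assume the base's own edges as micro-signatures
    return cosine(features(base, sig),
                  features(opcode_graph(suspect_ops), sig)) >= threshold
```

Because the features are frequencies of short opcode transitions rather than exact byte sequences, many surface-level obfuscations (reordering, junk insertion) only partially disturb them, which is the intuition behind the robustness claim in the abstract.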
3.
In this paper, we find the expected degree of each node in random recursive k-ary trees. The expression found for the expected value is used to determine the exact distribution of the depth of the nth node. It is further shown that the limiting distribution of the normalized depth of this node is the standard normal distribution.
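A random recursive k-ary tree can be simulated to observe the depth of the nth node empirically. The growth rule below, where each new node attaches to a uniformly chosen existing node that still has fewer than k children, is one plausible reading of the model; the abstract does not spell out the attachment rule, so treat this as an assumption.

```python
import random

def kary_depths(n, k, seed=0):
    """Depths of nodes 0..n-1 in a simulated random recursive k-ary tree.

    Assumed model: nodes arrive one at a time; node i attaches to a
    uniformly random earlier node that has fewer than k children."""
    rng = random.Random(seed)
    children = [0]   # children[v] = current out-degree of node v
    depth = [0]      # depth[0] = 0 for the root
    for _ in range(1, n):
        candidates = [v for v in range(len(depth)) if children[v] < k]
        p = rng.choice(candidates)
        depth.append(depth[p] + 1)
        children.append(0)
        children[p] += 1
    return depth
```

Running this for large n and many seeds, a histogram of the nth node's depth concentrates around a logarithmic mean, consistent with the asymptotic normality result stated in the abstract.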
4.
In a critical environment, e.g. a factory where employees face hazardous conditions, monitoring the health status of each employee is important; thus, continuous connectivity of the employee to the network is the main concern of such networks. In this paper, we propose a decentralized approach for mobility management of mobile nodes in hazardous areas such as factories. The proposed mobility structure for hazardous areas, called MSHA, organizes static nodes as a tree for efficient routing, automatic addressing, and handling of mobile-node movement. MSHA is capable of handling multiple failures of static nodes that would disconnect a mobile node from the network, and it is highly scalable with respect to the number of mobile nodes and the size of the covered monitoring area. The proposed scheme is evaluated on several factors, and the results reveal the superiority of MSHA over previous works. The analytical results show a performance gain of about 20% for MSHA, specifically in reducing the packet loss and hand-off delay caused by failures of the static nodes, and this performance does not degrade as the number of mobile nodes increases.
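Tree-based addressing of the static nodes can be sketched as follows. Representing each static node's address as its path of child indices from the root, and recovering from an ancestor failure by falling back to the surviving prefix, are illustrative assumptions inferred from "organizes static nodes as a tree" and "automatic addressing", not MSHA's actual protocol.

```python
def common_prefix_len(a, b):
    """Length of the shared address prefix = depth of the common ancestor."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route_hops(src, dst):
    """Hop count along the tree: up from src to the deepest common
    ancestor, then down to dst. Addresses are child-index paths from
    the root, e.g. (0, 1, 2)."""
    p = common_prefix_len(src, dst)
    return (len(src) - p) + (len(dst) - p)

def reparent(addr, failed_depth):
    """Toy recovery rule: when the ancestor at `failed_depth` fails,
    fall back to the surviving address prefix."""
    return addr[:failed_depth]
```

With such path addresses, forwarding needs no routing tables: a static node compares the destination address with its own and sends the packet either up to its parent or down to the matching child, which is what makes the scheme cheap and decentralized.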