Similar Documents
20 similar documents found.
1.
2.
Reconfigurable SRAM-based FPGAs are highly susceptible to radiation-induced single-event upsets (SEUs) in space applications. A bit flip in an FPGA's configuration memory may permanently alter the user circuit unless the bitstream is repaired, a phenomenon completely different from upsets in traditional memory devices. To understand the impact of this effect, it is important to establish the relationship between each programmable resource and its corresponding control bits. In this paper, a method is proposed to decode the bitstream of Xilinx FPGAs, and an analysis program is developed to parse the netlist of a specific design and obtain the configuration state of the occupied programmable logic and routing. An SEU propagation rule is then established according to resource type to identify the critical logic nodes and paths that could destroy the circuit's topological structure. The decoded relationship is stored in a database, which is queried to obtain the sensitive bits of a specific design. The result can be used to represent the vulnerability of the system and to predict the on-orbit failure rate. The analysis tool was validated through fault injection and accelerator irradiation experiments.
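A minimal sketch of the final query step described above, assuming a hypothetical SQLite schema (a bit_map table mapping configuration-bit addresses to programmable resources); the paper does not specify its database layout:

```python
import sqlite3

def sensitive_bits(db_path, resource_ids):
    """Return configuration bits controlling the given occupied
    resources, from a decoded bitstream-to-resource database
    (hypothetical bit_map schema)."""
    conn = sqlite3.connect(db_path)
    placeholders = ",".join("?" * len(resource_ids))
    rows = conn.execute(
        f"SELECT frame_addr, bit_offset FROM bit_map"
        f" WHERE resource_id IN ({placeholders})",
        resource_ids,
    ).fetchall()
    conn.close()
    return rows

# The ratio of sensitive bits to total configuration bits then serves
# as a vulnerability estimate for on-orbit failure-rate prediction.
```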

3.
To accommodate the explosively increasing amount of data in areas such as scientific computing and e-Business, physical storage devices and control components have been separated from traditional computing systems to become scalable, intelligent storage subsystems that, when appropriately designed, provide a transparent storage interface, effective data allocation, flexible and efficient storage management, and other impressive features. The design goals and desirable features of such a storage subsystem include high performance, high scalability, high availability, high reliability, and high security. Extensive research has been conducted in this field by researchers all over the world, yet many issues remain open and challenging. This paper studies five online massive storage systems and one offline storage system that we have developed with research grant support from China. The storage pool with multiple network-attached RAIDs avoids expensive store-and-forward data copying between the server and the storage system, improving the data transfer rate by a factor of 2-3 over a traditional disk array. Two types of high-performance distributed storage systems for local-area network storage are introduced. One is the Virtual Interface Storage Architecture (VISA), in which VI replaces TCP/IP as the communication protocol; VISA is shown to outperform an IP SAN through the design and implementation of the vSCSI (VI-attached SCSI) protocol, which supports SCSI commands over the VI network. The other is a fault-tolerant parallel virtual file system designed and implemented to provide high I/O performance and high reliability. A global distributed storage system for wide-area network storage is discussed in detail, where a Storage Service Provider is added to provide storage services and act as the user agent for the storage system. Object-based storage systems not only store data but also adopt the attributes and methods of the objects that encapsulate the data. The adaptive policy triggering mechanism (APTM), which borrows proven machine learning techniques to improve the scalability of object storage systems, embodies the idea of smart storage devices and facilitates the self-management of massive storage systems. A typical offline massive storage system is used to back up data or store documents, for which tape virtualization technology is discussed. Finally, a domain-based storage management framework for different types of storage systems is presented.

4.
With the rapid development of network technology, IP packet measurement has become the basis of large-scale network behavior analysis. This paper analyzes the functional and performance requirements of high-speed network measurement and the architecture of the Linux network subsystem. On this basis, it presents a framework design for a high-speed meter in the Linux kernel and an optimized quicksort algorithm that achieves a good balance between space and time complexity. Finally, a meter prototype built according to this design is introduced, together with experimental data and figures obtained in a high-speed CERNET environment, which demonstrate the correctness of the framework design and the high performance the algorithm brings.
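The abstract does not detail the quicksort optimization; a common way to balance space and time is median-of-three pivot selection, recursing only into the smaller partition, and an insertion-sort cutoff, sketched here as one plausible reading:

```python
def quicksort(a, lo=0, hi=None, cutoff=16):
    """In-place quicksort with median-of-three pivoting and an
    insertion-sort cutoff (a common optimization; the paper's exact
    variant is not specified in the abstract)."""
    if hi is None:
        hi = len(a) - 1
    while hi - lo > cutoff:
        mid = (lo + hi) // 2
        # Median-of-three: order a[lo], a[mid], a[hi].
        if a[mid] < a[lo]: a[lo], a[mid] = a[mid], a[lo]
        if a[hi] < a[lo]: a[lo], a[hi] = a[hi], a[lo]
        if a[hi] < a[mid]: a[mid], a[hi] = a[hi], a[mid]
        pivot = a[mid]
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1; j -= 1
        # Recurse into the smaller side to bound stack depth at O(log n).
        if j - lo < hi - i:
            quicksort(a, lo, j, cutoff); lo = i
        else:
            quicksort(a, i, hi, cutoff); hi = j
    # Insertion sort finishes small partitions cheaply.
    for k in range(lo + 1, hi + 1):
        v, m = a[k], k
        while m > lo and a[m - 1] > v:
            a[m] = a[m - 1]; m -= 1
        a[m] = v
```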

5.
Existing data management tools have limitations such as being restricted to specific file systems or lacking transparency to applications. In this paper, we present a new data management tool called AIP, which is implemented via the standard data management API and hence supports multiple file systems while making data management operations transparent to applications. First, AIP provides centralized, policy-based data management for controlling the placement of files in different storage tiers. Second, AIP uses differentiated collections of file states, backed by a file-state caching mechanism, to improve the execution efficiency of data management policies. Third, AIP provides a resource arbitration mechanism for controlling the rate of initiated data management operations. Results from representative experiments demonstrate that AIP delivers high performance, introduces low management overhead, and scales well.
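A minimal sketch of what a centralized placement policy could look like; the rule fields and tier paths are hypothetical, as AIP's policy syntax is not given in the abstract:

```python
import os, shutil, time

# Hypothetical placement rule: demote files older than `age_days`
# from a fast tier to a capacity tier.
POLICY = {"age_days": 30, "src_tier": "/ssd", "dst_tier": "/hdd"}

def apply_policy(policy):
    cutoff = time.time() - policy["age_days"] * 86400
    for name in os.listdir(policy["src_tier"]):
        path = os.path.join(policy["src_tier"], name)
        # A cached file state (here, mtime) drives the placement decision.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(policy["dst_tier"], name))
```

In a real tool the loop over file states would be rate-limited by the resource arbitration mechanism the abstract mentions.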

6.
This work introduces a scalable and efficient topological structure for tetrahedral and hexahedral meshes. The design of the data structure aims at maximal flexibility and high performance. It achieves high scalability by using hierarchical representations of topological elements. The proposed data structure is array-based, and it is a compact adaptation of the half-edge data structure to volume elements and of the half-face data structure to volumetric meshes. This guarantees constant access time to the neighbors of topological elements. In addition, an open-source implementation of the proposed data structure, named Open Volumetric Mesh (OVM), is written in C++ using generic programming concepts.
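A toy illustration of the array-based half-face idea for tetrahedral cells, assuming a simple indexing convention (cell c owns half-faces 4c..4c+3) that is ours rather than necessarily OVM's:

```python
# Array-based half-face connectivity for tetrahedral cells.
# Cell c owns half-faces 4c..4c+3; opposite[h] is the adjacent
# cell's half-face sharing the same face, or -1 on the boundary.
class TetMesh:
    def __init__(self):
        self.cells = []      # cells[c] = 4 vertex indices
        self.opposite = []   # opposite[h] = opposite half-face or -1

    def add_cell(self, vertices):
        self.cells.append(list(vertices))
        self.opposite.extend([-1] * 4)
        return len(self.cells) - 1

    def neighbor(self, c, local_face):
        """Constant-time access to the cell across face `local_face`."""
        h = self.opposite[4 * c + local_face]
        return -1 if h < 0 else h // 4
```

Because all indices are plain array offsets, neighbor queries are O(1) and the storage stays compact, which is the property the abstract emphasizes.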

7.
Interval arithmetic is an elegant tool for practical work with inequalities, approximate numbers, error bounds, and, more generally, certain convex and bounded sets. We give a number of simple examples showing where intervals and ranges of functions over intervals arise naturally. Interval mathematics is a generalization in which interval numbers replace real numbers, interval arithmetic replaces real arithmetic, and interval analysis replaces real analysis. An interval is delimited by two bounds: a lower bound and an upper bound. The present paper introduces some of the basic notions and techniques from interval analysis needed in the sequel for presenting various uses of interval analysis in electric circuit theory and its applications. In this article we address the representation of uncertain and imprecise information, interval arithmetic, and its application to electrical circuits.
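A minimal sketch of the basic interval operations the abstract refers to, with a small circuit-flavored usage example (the resistor tolerance values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float  # invariant: lo <= hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        # Subtraction pairs each bound with the other's opposite bound.
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = (self.lo * o.lo, self.lo * o.hi,
             self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

# E.g., a nominal 100-ohm resistor with a 5% tolerance:
R = Interval(95.0, 105.0)
V = Interval(11.9, 12.1)                 # measured voltage with error bounds
I = Interval(V.lo / R.hi, V.hi / R.lo)   # current bounds V/R (valid as R > 0)
```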

8.
With increasing intelligence and integration, large numbers of two-valued variables (generally stored as 0 or 1) often exist in large-scale industrial processes. However, these variables cannot be effectively handled by traditional monitoring methods such as linear discriminant analysis (LDA), principal component analysis (PCA), and partial least squares (PLS) analysis. Recently, a mixed hidden naive Bayesian model (MHNBM) was developed for the first time to utilize both two-valued and continuous variables for abnormality monitoring. Although the MHNBM is effective, it still has shortcomings. In the MHNBM, variables with greater correlation to other variables receive greater weights, which does not guarantee that greater weights are assigned to the more discriminating variables. In addition, the conditional probability P(x_j | x_j', y = k) must be computed from historical data; when training data are scarce, the conditional probability between continuous variables tends toward a uniform distribution, which degrades the performance of the MHNBM. Here a novel feature-weighted mixed naive Bayes model (FWMNBM) is developed to overcome these shortcomings. In the FWMNBM, variables that are more correlated with the class receive greater weights, so the more discriminating variables contribute more to the model. At the same time, the FWMNBM does not have to calculate conditional probabilities between variables, and it is thus less restricted by the number of training samples. Compared with the MHNBM, the FWMNBM has better performance, and its effectiveness is validated through numerical cases of a simulation example and a practical case from the Zhoushan thermal power plant (ZTPP), China.
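A minimal sketch of feature-weighted naive Bayes scoring with Gaussian likelihoods for the continuous variables; the weights here are placeholders standing in for the FWMNBM's class-correlation weights, which the abstract does not fully specify:

```python
import numpy as np

def fwnb_classify(x, priors, means, stds, weights):
    """Pick argmax_k of log P(y=k) + sum_j w_j * log N(x_j; mu_kj, s_kj).
    Weights w_j let more class-discriminating variables contribute more,
    and no inter-variable conditional probabilities are needed."""
    scores = []
    for k in range(len(priors)):
        ll = (-0.5 * ((x - means[k]) / stds[k]) ** 2
              - np.log(stds[k] * np.sqrt(2 * np.pi)))
        scores.append(np.log(priors[k]) + np.sum(weights * ll))
    return int(np.argmax(scores))
```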

9.
SDN (Software-Defined Networking) and NFV (Network Functions Virtualization) are technologies for enabling innovative network architectures. Nevertheless, a fundamental problem in the instantiation of VNs (Virtual Networks) performed by NFV is the optimal allocation of resources offered by one or more SDN domain networks. The instantiation of VNs is performed in several phases, including splitting and mapping. For each of these phases, researchers have developed algorithms, and different results can be obtained by combining them. This paper introduces a modular and flexible graphical discrete-event simulation tool for solving the complete virtual-resource-allocation problem in SDN domain networks. A Java-based tool has been developed to integrate existing and future algorithms related to each phase of the process. The simulator is a test-bed in which researchers can select the appropriate algorithm for each phase and display the results in graphical form, while obtaining a performance evaluation of the selected and proposed algorithms.

10.
Recently, many countries and regions have enacted data security policies, such as the General Data Protection Regulation proposed by the EU. The release of related laws and regulations has aggravated the problem of data silos, making it difficult to share data among data owners. Data federation is a possible solution: multiple data owners jointly compute query tasks, without leaking the original data, using privacy computing technologies such as secure multi-party computation. This concept has become a research trend in recent years, and a series of representative systems have been proposed, such as SMCQL and Conclave. However, for join queries, which are at the core of relational database systems, existing data federation systems still have the following problems. First, only a single type of join query is supported, which makes it hard to meet query requirements under complex join conditions. Second, there is large room for performance improvement, because existing systems often call a security tool library directly, incurring high runtime and communication overhead. This paper therefore proposes a join algorithm for data federation to address these issues. The main contributions are as follows. First, multi-party federated security operators are designed and implemented that support many operations. Second, a federated θ-join algorithm and an optimization strategy are proposed to significantly reduce the cost of secure computation. Finally, the performance of the proposed algorithm is verified on the TPC-H benchmark. The experimental results show that the proposed algorithm reduces runtime and communication overhead by 61.33% and 95.26%, respectively, compared with the existing data federation systems SMCQL and Conclave.
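For reference, the plaintext semantics of the θ-join that the federated algorithm computes securely; this sketch omits the secure multi-party machinery entirely:

```python
def theta_join(R, S, theta):
    """Plaintext θ-join: all pairs (r, s) with theta(r, s) true.
    The federated algorithm computes the same result without either
    party revealing its rows, which this sketch does not attempt."""
    return [(r, s) for r in R for s in S if theta(r, s)]

# e.g., a band join: |r.key - s.key| < 10
pairs = theta_join([{"key": 3}], [{"key": 8}],
                   lambda r, s: abs(r["key"] - s["key"]) < 10)
```

The quadratic pair enumeration is exactly what makes the secure variant expensive, which is why the paper's optimization strategy targets the cost of the secure comparisons.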

11.
E.P.  U.  M.  F.   《Future Generation Computer Systems》2008,24(6):594-604
This paper deals with the performance optimization of multiple-task applications in GRID environments. Typically such applications are launched by a Resource Manager, which only takes into account the application's resource requirements and the current availability of the GRID. Here a novel approach is presented that performs resource management in user space, making it possible to exploit application modularity and flexibility and to take into account expected performance figures produced by GRID simulation. The objective is to make optimized choices that can lead to reduced application response times. After an introduction to the GRID simulation environment used, the structure of an application launcher able to optimize a number of application tasks and their mapping onto the GRID is sketched, and the encouraging performance results obtained are presented.

12.
One major design approach for parallel program performance analysis tools is source-code instrumentation, and the performance monitoring library is a key component of such tools. This paper presents an event-based implementation technique for parallel program performance monitoring libraries and describes the implementation of the monitoring library of a performance analysis tool for an SVM system.
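A minimal sketch of the event-based monitoring idea, with an invented API and event names, since the paper's interface is not given here:

```python
import time, threading

_events = []
_lock = threading.Lock()

def record(name, **data):
    """Append a timestamped event from instrumented source code.
    A real monitoring library would use per-thread buffers to
    avoid lock contention on the hot path."""
    with _lock:
        _events.append((time.perf_counter(),
                        threading.get_ident(), name, data))

# Instrumentation inserted around a synchronization point:
record("barrier_enter", barrier_id=3)
record("barrier_exit", barrier_id=3)
```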

13.
杨雷  代钰  张斌  王昊 《计算机科学》2015,42(1):47-49
Performance analysis of multi-tier Web applications is one of the key factors in dynamic resource allocation and management and in guaranteeing multi-tier Web application performance. Traditional performance analysis models for multi-tier Web applications often assume that servers are deployed in an environment free of performance interference and ignore the influence of logical resource service capacity on application performance. With the development of cloud computing, underlying physical resources can be virtualized into virtual resources that provide services externally, which effectively supports the performance guarantees of multi-tier Web applications. How to account for performance interference between virtual machines and for logical resource service capacity has therefore become a key problem in the performance analysis of multi-tier Web applications in cloud environments. To this end, this paper builds a queueing-network-based performance analysis model for multi-tier Web applications, which extends existing models with a drop queue to capture concurrency limits, and proposes a parameter estimation method for the model that takes inter-VM performance interference into account. Experimental results validate the effectiveness of the proposed model.
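As a toy illustration of a queue with a concurrency limit (the "drop queue" extension), the standard M/M/1/K formulas; this is our illustrative choice, not the paper's exact queueing-network model:

```python
def mm1k(lam, mu, K):
    """M/M/1/K queue: blocking probability and mean response time.
    Arrivals finding K requests already in the system are dropped."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        probs = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        probs = [rho ** n / norm for n in range(K + 1)]
    p_block = probs[K]
    L = sum(n * p for n, p in enumerate(probs))  # mean number in system
    lam_eff = lam * (1 - p_block)                # accepted throughput
    return p_block, L / lam_eff                  # Little's law: W = L/λ_eff

p_drop, resp_time = mm1k(lam=80.0, mu=100.0, K=50)
```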

14.
15.
An integrated network performance analysis system based on Windows
Network performance analysis is an important part of network design and network management. We designed an integrated network performance analysis tool with functions for collecting, processing, analyzing, storing, and displaying data. It can analyze network topologies, simulation systems, and real networks. The system runs on the Windows platform and displays its results as charts, facilitating network design and management.

16.
Performance analysis of a software specification in a language such as UML can assist a design team in evaluating performance-sensitive design decisions and in making design trade-offs that involve performance. Annotations to the design based on the UML Profile for Schedulability, Performance and Time provide the necessary information, such as workload parameters, for a performance model, and many different kinds of performance techniques can then be applied. The Core Scenario Model (CSM) described here provides a metamodel for an intermediate form that correlates multiple UML diagrams, extracts the behavioural elements together with the performance annotations, attaches important resource information obtained from the UML, and supports the creation of many different kinds of performance models. Models can be built using queueing networks, layered queues, or timed Petri nets, and it is proposed to develop the CSM as an intermediate language for all performance formalisms. This paper defines the CSM and describes how it resolves questions that arise in performance model-building.

17.
We present here a performance analysis of three current architectures that have become commonplace in the High Performance Computing world. Blue Gene/Q is the third generation of systems from IBM that use modestly performing cores, but at large scale, in order to achieve high performance. The XE6 is the latest in a long line of Cray systems that use a 3-D topology, and the first to use its Gemini interconnection network. InfiniBand provides the flexibility of using compute nodes from many vendors, connected in many possible topologies. The performance characteristics of each vary vastly, and the way in which nodes are allocated in each type of system can significantly impact achieved performance. In this work we compare these three systems using a combination of micro-benchmarks and a set of production applications. We also examine the differences in performance variability observed on each system and quantify the lost performance using a combination of empirical measurements and performance models. Our results show that significant performance can be lost in normal production operation of the Cray XE6 and InfiniBand clusters in comparison with Blue Gene/Q.

18.
An increasing demand to work with electronic displays and to use mobile computers emphasises the need to compare visual performance across different screen types. In the present study, a cathode ray tube (CRT) was compared with an external liquid crystal display (LCD) and a Notebook-LCD. The influence of screen type and viewing angle on discrimination performance was studied. Physical measurements revealed that luminance and contrast values change with varying viewing angle (anisotropy). This is most pronounced in Notebook-LCDs, followed by external LCDs and CRTs. Performance data showed that LCD anisotropy has a negative impact on completing time-critical visual tasks. The best results were achieved with a CRT, and the largest deterioration in performance occurred when participants worked with a Notebook-LCD. When it is necessary to react quickly and accurately, LCD screens are at a disadvantage. The anisotropy of LCD-TFTs is therefore considered a limiting factor that deteriorates visual performance.

19.
A feature model is a compact representation of the products of a software product line. The automated extraction of information from feature models is a thriving topic involving numerous analysis operations, techniques and tools. Performance evaluations in this domain mainly rely on the use of random feature models. However, these only provide a rough idea of the behaviour of the tools on average problems and are not sufficient to reveal their real strengths and weaknesses. In this article, we model the problem of finding computationally hard feature models as an optimization problem and solve it using a novel evolutionary algorithm for optimized feature models (ETHOM). Given a tool and an analysis operation, ETHOM generates input models of a predefined size that maximize aspects such as the execution time or the memory consumption of the tool when performing the operation over the model. This allows users and developers to assess the performance of tools in pessimistic cases, providing a better idea of their real power and revealing performance bugs. Experiments using ETHOM on a number of analyses and tools have successfully identified models producing much longer execution times and higher memory consumption than those obtained with random models of identical or even larger size.
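A minimal sketch of the evolutionary loop that ETHOM-style approaches use, with the fitness function (e.g., measured analysis time of the tool under test) and the variation operators left as placeholders of our own choosing:

```python
import random

def evolve(random_model, mutate, crossover, fitness,
           pop_size=20, generations=50):
    """Maximize `fitness` (e.g., a tool's execution time on a generated
    feature model of fixed size) with a basic generational EA; the
    operators are placeholders, not ETHOM's exact encoding."""
    pop = [random_model() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        survivors = scored[: pop_size // 2]     # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        pop = survivors + children
    return max(pop, key=fitness)
```

Because fitness is a measurement of the tool itself, each generation drives the generator toward pessimistic inputs rather than average-case random models.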

20.
Research on a TPM-based method for evaluating the overall technical performance of systems
A Technical Performance Measure (TPM) is defined as a metric for assessing the degree to which a system meets its performance requirements. When evaluating a system, several TPM indicators can be defined according to the requirements placed on the system; these indicators reflect the performance of individual parts of the system but not its overall performance. Building on a discussion of TPMs, this paper proposes the concept of a TPM-based composite technical performance indicator, presents methods for normalizing and aggregating TPM indicators, and applies the approach to a worked example.
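A minimal sketch of the normalize-then-aggregate step the abstract describes, using min-max normalization and a weighted sum as one plausible choice (the paper's exact method is not given in the abstract):

```python
def composite_tpm(values, bounds, weights):
    """Normalize each TPM to [0, 1] against its (worst, best) bounds,
    then aggregate with a weighted sum; weights should sum to 1.
    For lower-is-better TPMs, pass bounds as (worst, best) so the
    normalization direction flips automatically."""
    normed = [(v - worst) / (best - worst)
              for v, (worst, best) in zip(values, bounds)]
    return sum(w * n for w, n in zip(weights, normed))

# Three illustrative TPMs: latency in ms (lower is better),
# throughput in ops/s, and availability as a fraction.
score = composite_tpm(
    values=[120.0, 900.0, 0.995],
    bounds=[(200.0, 50.0), (500.0, 1000.0), (0.99, 0.999)],
    weights=[0.4, 0.4, 0.2],
)
```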
