Similar Literature
20 similar documents found.
1.
2.
The resource management system is the central component of distributed network computing systems. There have been many projects focused on network computing that have designed and implemented resource management systems with a variety of architectures and services. In this paper, an abstract model and a comprehensive taxonomy for describing resource management architectures is developed. The taxonomy is used to identify approaches followed in the implementation of existing resource management systems for very large‐scale network computing systems known as Grids. The taxonomy and the survey results are used to identify architectural approaches and issues that have not been fully explored in the research. Copyright © 2001 John Wiley & Sons, Ltd.

3.
In the cloud computing paradigm, energy-efficient allocation of virtualized ICT resources (servers, storage disks, networks, and the like) is a complex problem because heterogeneous application workloads (e.g., content delivery networks, MapReduce, web applications) have conflicting allocation requirements in terms of ICT resource capacities (e.g., network bandwidth, processing speed, response time). Several recent papers have tried to address the issue of improving energy efficiency in allocating cloud resources to applications, with varying degrees of success. However, to the best of our knowledge, no published literature on this subject clearly articulates the research problem and provides a research taxonomy for succinct classification of existing techniques. Hence, the main aim of this paper is to identify the open challenges associated with energy-efficient resource allocation. The study first outlines the problem and the existing hardware- and software-based techniques available for this purpose. Available techniques from the literature are then summarized based on an energy-efficiency research dimension taxonomy. Finally, the advantages and disadvantages of the existing techniques are comprehensively analyzed against the proposed research dimensions, namely: resource adaptation policy, objective function, allocation method, allocation operation, and interoperability.

4.
Ekmecic I., Tartalja I., Milutinovic V. Computer, 1995, 28(12): 68-70
The field of heterogeneous computing is growing rapidly. New concepts and systems appear daily. Hence, it is important to fit each new contribution into its proper place in the puzzle called heterogeneous computing. This is possible only if an adequate taxonomy/classification exists, one that can show whether or not a new system is heterogeneous, and if so, what kind of heterogeneity it exhibits. We propose a new taxonomy that shows the relative position of each and every heterogeneous system in the overall computer systems world. The proposed taxonomy is intended to be both broad enough to encompass all existing heterogeneous systems and simple enough to be easily accepted. Consequently, our taxonomy includes only four classes of computer systems. We propose that computer systems be classified as follows: SESM (single execution mode, single machine model); SEMM (single execution mode, multiple machine models), MESM (multiple execution modes, single machine model), and MEMM (multiple execution modes, multiple machine models).
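The four classes form a simple 2x2 grid over execution modes and machine models; a minimal sketch in Python, where the class names come from the abstract and the numeric-count inputs are an illustrative assumption:

```python
def classify_system(execution_modes: int, machine_models: int) -> str:
    """Place a computer system in the EM/MM taxonomy:
    SESM, SEMM, MESM, or MEMM."""
    if execution_modes < 1 or machine_models < 1:
        raise ValueError("counts must be positive")
    e = "SE" if execution_modes == 1 else "ME"  # single vs. multiple execution modes
    m = "SM" if machine_models == 1 else "MM"   # single vs. multiple machine models
    return e + m

# A conventional homogeneous machine has one mode and one model:
print(classify_system(1, 1))  # SESM
# A fully heterogeneous system mixes both:
print(classify_system(3, 2))  # MEMM
```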

5.
Systems analysis is applied to a milieu of sports rating systems (SRSs) resulting in a unifying taxonomy. Each SRS operates in three phases: evaluation of sports performance, weighting of the evaluated performance and creation of a rating for each competitor using the weighted evaluated performance. In the first phase, sports are classified as: combat sports where each competitor tries to control the opponent, object sports where each competitor tries to control an object in direct competition with the opponent and independent sports where each competitor is unimpeded by the opponent. Each sports performance is evaluated by judging, measuring and/or scoring. In the second phase, weighting may be represented by matrix operations. In the third phase, there are two combinations of operations that classify SRSs: accumulative and adjustive approaches. Examples of SRSs are presented for boxing, track and field, golf, skiing, Olympic performances, chess, soccer, tennis, and those with the ability to predict future outcomes.
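The third-phase distinction between accumulative and adjustive approaches can be contrasted in a few lines; the function names and constants below are illustrative assumptions, not the paper's notation:

```python
def accumulative_update(rating: float, weight: float, evaluation: float) -> float:
    """Accumulative approach: the weighted evaluated performance is added to the
    running rating (as in, e.g., a season-long points standing)."""
    return rating + weight * evaluation

def adjustive_update(rating: float, k: float, evaluation: float, expected: float) -> float:
    """Adjustive approach: the rating moves by a fraction of the gap between
    observed and expected performance (Elo-style chess ratings are the classic case)."""
    return rating + k * (evaluation - expected)

print(accumulative_update(100.0, 0.5, 10.0))        # 105.0
print(adjustive_update(1500.0, 32.0, 1.0, 0.75))    # 1508.0
```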

6.
A taxonomy of program visualization systems
Roman G.-C., Cox K.C. Computer, 1993, 26(12): 11-24
A taxonomy of program visualization systems that is based on a model of program visualization that maps programs to graphical representations is presented. The taxonomy is illustrated with three program visualization systems representative of research trends: Zeus, Tango, and Pavane.

7.
Chee Shin Yeo, Rajkumar Buyya. Software, 2006, 36(13): 1381-1419
In utility‐driven cluster computing, cluster Resource Management Systems (RMSs) need to know the specific needs of different users in order to allocate resources according to their needs. This in turn is vital to achieve service‐oriented Grid computing that harnesses resources distributed worldwide based on users' objectives. Recently, numerous market‐based RMSs have been proposed to make use of real‐world market concepts and behavior to assign resources to users for various computing platforms. The aim of this paper is to develop a taxonomy that characterizes and classifies how market‐based RMSs can support utility‐driven cluster computing in practice. The taxonomy is then mapped to existing market‐based RMSs designed for both cluster and other computing platforms to survey current research developments and identify outstanding issues. Copyright © 2006 John Wiley & Sons, Ltd.

8.
Task-based programming models for shared memory—such as Cilk Plus and OpenMP 3—are well established and documented. However, with the increase in parallel, many-core, and heterogeneous systems, a number of research-driven projects have developed more diversified task-based support, employing various programming and runtime features. Unfortunately, despite the fact that dozens of different task-based systems exist today and are actively used for parallel and high-performance computing (HPC), no comprehensive overview or classification of task-based technologies for HPC exists. In this paper, we provide an initial task-focused taxonomy for HPC technologies, which covers both programming interfaces and runtime mechanisms. We demonstrate the usefulness of our taxonomy by classifying state-of-the-art task-based environments in use today.
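None of the surveyed runtimes is shown here, but the programming pattern they share (spawn independent tasks, then synchronize on their results) can be sketched with Python's standard `concurrent.futures`; the kernel and pool size are illustrative assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def work(x: int) -> int:
    """Stand-in compute kernel; in a real HPC code this would be the task body."""
    return x * x

# Spawn one task per input, then join: the fork/join structure that Cilk Plus's
# spawn/sync and OpenMP's task/taskwait constructs express natively.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(work, i) for i in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```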

9.
Quantum computing (QC) is an emerging paradigm with the potential to offer significant computational advantage over conventional classical computing by exploiting quantum-mechanical principles such as entanglement and superposition. It is anticipated that this computational advantage of QC will help to solve many complex and computationally intractable problems in several application domains such as drug design, data science, clean energy, finance, industrial chemical development, secure communications, and quantum chemistry. In recent years, tremendous progress in both quantum hardware development and quantum software/algorithms has brought QC much closer to reality. Indeed, the demonstration of quantum supremacy marks a significant milestone in the Noisy Intermediate Scale Quantum (NISQ) era—the next logical step being the quantum advantage, whereby quantum computers solve a real-world problem much more efficiently than classical computing. As quantum devices are expected to steadily scale up in the next few years, quantum decoherence and qubit interconnectivity are two of the major challenges to achieving quantum advantage in the NISQ era. QC is a highly topical and fast-moving field of research with significant ongoing progress in all facets. A systematic review of the existing literature on QC will be invaluable to understand the state-of-the-art of this emerging field and identify open challenges for the QC community to address in the coming years. This article presents a comprehensive review of the QC literature and proposes a taxonomy of QC. The proposed taxonomy is used to map various related studies to identify the research gaps. A detailed overview of quantum software tools and technologies, post-quantum cryptography, and quantum computer hardware development captures the current state-of-the-art in the respective areas. The article identifies and highlights various open challenges and promising future directions for research and innovation in QC.

10.
Applying principles from bionics, a novel broadcast algorithm based on a dynamic membrane computing system is designed. The dynamic membrane computing system is proposed, and a rule set for solving the broadcast problem in ad hoc networks is given. In this system, a node's priority for rebroadcasting a message is determined by the distance between nodes and the number of neighbors, making the algorithm suitable for networks of different densities; in addition, by comparing the number of received copies of a message against a threshold, the rebroadcast right of some nodes is revoked, which improves the reachability and the rebroadcast savings ratio. Simulation tests verify that the system is feasible and efficient for broadcasting, providing a new approach to designing broadcast algorithms for wireless ad hoc networks.
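The two decision rules described in this abstract (rebroadcast priority derived from inter-node distance and neighbor count, plus threshold-based revocation of the rebroadcast right) can be sketched as follows; the scoring formula is an illustrative assumption, not the paper's actual rule set:

```python
def rebroadcast_priority(distance: float, n_neighbors: int) -> float:
    """Illustrative priority score: nodes farther from the sender cover more
    new area, while nodes in dense neighborhoods add less coverage, so distance
    raises priority and neighbor count lowers it."""
    return distance / (1 + n_neighbors)

def keep_rebroadcast_right(copies_received: int, threshold: int) -> bool:
    """Counter rule: once enough copies of a message have been overheard, the
    node's neighborhood is assumed covered and its rebroadcast right is revoked."""
    return copies_received < threshold
```

This kind of scheme trades a small delivery-ratio risk for large savings in redundant rebroadcasts, which is the reachability/savings balance the abstract reports.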

11.
Significant resources invested in information system development (ISD) are wasted due to political manoeuvres. Prior research on ISD politics has contributed mainly through theoretical development and case studies. This has enhanced understanding of relevant concepts, political tactics and conditions facilitating politics. However, there is limited understanding of the different processes through which politics unfold. This paper uses 89 ISD projects to develop a taxonomy of political processes in ISD. The taxonomy includes three distinct processes: Tug of War, wherein multiple parties strive to gain project control; Obstacle Race, which involves efforts to resist and pursue the project; and Empire Building, wherein the project is used as an instrument to enhance political or resource bases. The taxonomy is explained using the non‐proponents' view of the project and the balance of power between system's proponents and non‐proponents. We also discuss the emergent taxonomy's implications for how politics can be managed and studied.

12.
The data center network (DCN), which is an important component of data centers, consists of a large number of hosted servers and switches connected with high speed communication links. A DCN enables the centralized deployment of resources and on-demand access to the information and services of data centers for users. In recent years, the scale of the DCN has constantly increased with the widespread use of cloud-based services and the unprecedented amount of data delivery in/between data centers, whereas the traditional DCN architecture lacks aggregate bandwidth, scalability, and cost effectiveness for coping with the increasing demands of tenants in accessing the services of cloud data centers. Therefore, the design of a novel DCN architecture with the features of scalability, low cost, robustness, and energy conservation is required. This paper reviews the recent research findings and technologies of DCN architectures to identify the issues in the existing DCN architectures for cloud computing. We develop a taxonomy for the classification of the current DCN architectures, and also qualitatively analyze the traditional and contemporary DCN architectures. Moreover, the DCN architectures are compared on the basis of significant characteristics, such as bandwidth, fault tolerance, scalability, overhead, and deployment cost. Finally, we put forward open research issues in the deployment of scalable, low-cost, robust, and energy-efficient DCN architectures for data centers in computational clouds.

13.
Classifying network attacks is a prerequisite for studying their characteristics and the methods of defending against them. This paper reviews the state of research on network attack classification, analyzes and compares existing classification methods, proposes a network attack classification method oriented toward network survivability research, and illustrates its classification process with examples. The method is helpful to network defenders seeking to improve network survivability and provides a basis for their targeted research on survivability techniques.

14.
Virtual network computing
VNC is an ultra-thin client system based on a simple display protocol that is platform independent. It achieves mobile computing without requiring the user to carry any hardware. VNC provides access to home computing environments from anywhere in the world, on whatever computing infrastructure happens to be available, including, for example, public Web browsing terminals in airports. In addition, VNC allows a single desktop to be accessed from several places simultaneously, thus supporting application sharing in the style of computer supported cooperative work (CSCW). The technology underlying VNC is a simple remote display protocol. It is the simplicity of this protocol that makes VNC so powerful. Unlike other remote display protocols such as the X Window System and Citrix's ICA, the VNC protocol is totally independent of operating system, windowing system, and applications. The VNC system is freely available for download from the ORL Web site at http://www.orl.co.uk/vnc/. We begin the article by summarizing the evolution of VNC from our work on thin client architectures. We then describe the structure of the VNC protocol, and conclude by discussing the ways we use VNC technology now and how it may evolve further as new clients and servers are developed.

15.
A taxonomy and current issues in multidatabase systems
Bright M.W., Hurson A.R., Pakzad S.H. Computer, 1992, 25(3): 50-60
A taxonomy of global information-sharing systems is presented, and the way in which multidatabase systems fit into the spectrum of solutions is discussed. The taxonomy is used as a basis for defining multidatabase systems. Issues associated with multidatabase systems are reviewed. Two major design approaches for multidatabases, global schema systems and multidatabase language systems, are described. Existing multidatabase projects and areas for further research are also discussed.

16.
17.
International Journal of Computer Mathematics, 2012, 89(12): 1455-1465
The computation of the reliability of a computer network is one of the important tasks in evaluating its performance. The idea of minimal paths can be used to determine network reliability. This paper presents an algorithm for finding the minimal paths of a given network in terms of its links, and then an algorithm for calculating the reliability of the network from the success probabilities of the links on its minimal paths. The latter algorithm is based on a relation that uses the probabilities of the unions of the minimal paths of the network to obtain the network reliability. The paper also describes a tool built to calculate the reliability of a given network. The tool has two main phases: minimal-path generation and reliability computation. The first phase accepts the links of the network and their probabilities, then implements the first proposed algorithm to determine its minimal paths. The second phase implements the second proposed algorithm to calculate the network reliability. Results of using the tool to calculate the reliability of an example network are given.
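The relation over unions of minimal paths that the second algorithm uses is inclusion-exclusion; a minimal sketch, assuming statistically independent links (the data layout and names are illustrative, not the paper's):

```python
from itertools import combinations

def network_reliability(minimal_paths, link_prob):
    """Probability that at least one minimal path has all of its links working,
    computed by inclusion-exclusion over unions of the minimal paths.
    minimal_paths: list of sets of link names; link_prob: link name -> success
    probability. Assumes links fail independently."""
    total = 0.0
    for k in range(1, len(minimal_paths) + 1):
        for subset in combinations(minimal_paths, k):
            union = set().union(*subset)       # links that must all work
            term = 1.0
            for link in union:
                term *= link_prob[link]
            total += (-1) ** (k + 1) * term    # alternating inclusion-exclusion sign
    return total

# Two parallel single-link paths, each succeeding with probability 0.9:
# R = 0.9 + 0.9 - 0.9*0.9 = 0.99
r = network_reliability([{"a"}, {"b"}], {"a": 0.9, "b": 0.9})
print(round(r, 4))  # 0.99
```

The subset enumeration is exponential in the number of minimal paths, which is why practical tools add pruning or bounding; this sketch shows only the underlying relation.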

18.
Basic concepts and taxonomy of dependable and secure computing
This paper gives the main definitions relating to dependability, a generic concept that includes as special cases such attributes as reliability, availability, safety, integrity, and maintainability. Security brings in concerns for confidentiality, in addition to availability and integrity. Basic definitions are given first. They are then commented upon and supplemented by additional definitions, which address the threats to dependability and security (faults, errors, failures), their attributes, and the means for their achievement (fault prevention, fault tolerance, fault removal, fault forecasting). The aim is to explicate a set of general concepts of relevance across a wide range of situations, therefore helping communication and cooperation among a number of scientific and technical communities, including ones that are concentrating on particular types of system, of system failures, or of causes of system failures.


20.