Similar Literature
A total of 20 similar documents were found (search time: 15 ms).
1.
Task scheduling in heterogeneous distributed computing systems plays a crucial role in reducing the makespan and maximizing resource utilization. The diverse nature of the devices in such systems intensifies the complexity of scheduling the tasks. To overcome this problem, a new list-based static task scheduling algorithm, Deadline-Aware Longest Path of all Predecessors (DA-LPP), is proposed in this article. In the prioritization phase of DA-LPP, the path length of the current task from each of its predecessors at every level is computed, and the longest of these path lengths is assigned as the rank of the task. This strategy emphasizes the tasks on the critical path, and the well-optimized prioritization phase leads to an observable reduction in the makespan of the applications. In the processor selection phase, DA-LPP implements an improved insertion-based policy that effectively utilizes the unoccupied leftover free time slots of the processors, improving resource utilization; a least-computation-cost allocation approach is followed to minimize the overall computation cost of the processors; and a parental prioritization policy is incorporated to further reduce the scheduling length. To demonstrate the robustness of the proposed algorithm, a synthetic graph generator is used to produce a wide variety of graphs. Apart from the synthetic graphs, real-world application graphs such as Montage, LIGO, CyberShake, and Epigenomics are also considered to grade the performance of DA-LPP. Experimental results show improvements in scheduling length ratio, makespan reduction rate, and resource reduction rate compared with the DQWS, DUCO, DCO, and EPRD algorithms. For a 1000-task set with a deadline equal to twice the critical path length, the scheduling length ratio of DA-LPP is better than DQWS by 35%, DUCO by 23%, DCO by 26%, and EPRD by 17%.
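The prioritization rule can be illustrated with a small sketch. The following Python fragment is a hedged reconstruction based only on the abstract above: the toy DAG, the cost values, and the final ordering rule are illustrative assumptions, not the paper's benchmark data or the exact DA-LPP definition.

```python
# Toy task graph: each task maps to a list of (predecessor, communication cost);
# comp holds the tasks' own computation costs. All values are made up for illustration.
preds = {"A": [], "B": [("A", 2)], "C": [("A", 3)], "D": [("B", 1), ("C", 4)]}
comp = {"A": 5, "B": 3, "C": 6, "D": 2}

def rank(task):
    """Longest path from any entry task to `task`, including the task's own cost."""
    if not preds[task]:
        return comp[task]
    return comp[task] + max(rank(p) + c for p, c in preds[task])

# Tasks deeper on the critical path receive larger ranks; the actual DA-LPP list order
# and deadline handling are described in the paper, not reproduced here.
print(sorted(((rank(t), t) for t in comp), reverse=True))
```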

2.
A model for distributed sorting on a local area network (LAN) is presented. Unlike the conventional model, it takes into account both local processing time and communication time, and it is intended to provide a framework within which the performance of various distributed sorting algorithms can be analyzed. The algorithms are implemented on Ethernet-connected Sun workstations, and the empirical results by and large agree with the predictions derived from the model. They show that local processing, particularly the sorting of local subfiles, dominates the whole process as far as response time is concerned. All algorithms examined have similar asymptotic behavior for large files; for medium-sized files, the degree of communication parallelism has a great impact on algorithm performance.
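A hedged sketch of the kind of two-term response-time model the abstract describes, combining local sorting with communication; the per-comparison and per-record costs below are invented placeholders (chosen so that, as the paper reports, local sorting dominates for large files), not the paper's measured parameters.

```python
import math

def response_time(n, p, t_cmp=5e-6, t_xfer=1e-6, parallel_links=1):
    """Two-term model: sort an n/p-record subfile locally on each of p nodes,
    then move the records over the LAN. Costs t_cmp and t_xfer are assumed."""
    local = (n / p) * math.log2(max(n / p, 2)) * t_cmp   # dominates for large files
    comm = n * t_xfer / parallel_links                   # shrinks with communication parallelism
    return local + comm

for n in (10_000, 1_000_000):
    print(n, round(response_time(n, p=8), 3), round(response_time(n, p=8, parallel_links=4), 3))
```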

3.
Heterogeneous computing systems are promising computing platforms, since systems based on a single parallel architecture may not be sufficient to exploit the parallelism available in running applications. In some cases, heterogeneous distributed computing (HDC) systems can achieve higher performance at lower cost than single-machine supersystems. However, in HDC systems, processors and networks are not failure-free, and any kind of failure may be critical to the running applications. One way of dealing with such failures is to employ a reliable scheduling algorithm. Unfortunately, most existing scheduling algorithms for precedence-constrained tasks in HDC systems do not adequately consider the reliability requirements of inter-dependent tasks. In this paper, we design a reliability-driven scheduling architecture that can effectively measure system reliability, based on an optimal reliability communication path search algorithm, and we introduce a reliability priority rank (RRank) that estimates a task's priority by taking reliability overheads into account. Furthermore, based on the directed acyclic graph (DAG) model, we propose a reliability-aware scheduling algorithm for precedence-constrained tasks that achieves high reliability for applications. Comparison studies, based on both randomly generated graphs and the graphs of some real applications, show that our scheduling algorithm outperforms existing scheduling algorithms in terms of makespan, scheduling length ratio, and reliability. Moreover, the improvement gained by our algorithm increases as the data communication among tasks increases.

4.
Object orientation in heterogeneous distributed computing systems   (Total citations: 1; self-citations: 0; citations by others: 1)
Nicol J.R., Wilkes C.T., Manola F.A. Computer, 1993, 26(6): 57-67
The basic properties of object orientation and their application to heterogeneous, autonomous, and distributed systems to increase interoperability are examined. It is argued that object-oriented distributed computing is a natural step forward from client-server systems. To support this claim, the differing levels of object-oriented support already found in commercially available distributed systems are discussed, in particular the Open Software Foundation's Distributed Computing Environment and the Cronus system of Bolt Beranek and Newman (BBN). Emerging object-oriented systems and standards are described, focusing on the convergence toward a least-common-denominator approach to object-oriented distributed computing embodied by the Object Management Group's Common Object Request Broker Architecture (CORBA).

5.
A distributed control algorithm, called MEAL, is presented for achieving mutual exclusion in a distributed computing environment. It requires only (N + 2) messages per critical section entry in the failure-free case, N being the number of nodes in the distributed system. A few assertions are proved to verify the correct functioning of MEAL, and possible modifications to make it resilient to node failures are also suggested.

6.
This paper presents the design and implementation of a message-based distributed operating system kernel, NDOS. The main purpose of the kernel is to support a distributed data processing system and a distributed DBMS. It uses the abstraction of communication between processes as its basic mechanism. In NDOS, services and facilities such as message passing and process synchronization, which are related to IPC and may change the state of a process, are integrated into a single concept, the event. The initial version of the NDOS kernel has been implemented in a fully heterogeneous environment of different machines, LANs, and operating systems, with the original higher-layer systems and applications still provided.

7.
To overcome the disadvantages of traditional centralized video monitoring systems and of distributed systems based on wired networks, we propose a framework for distributed video surveillance in heterogeneous environments. Video flows are compressed with the scalable video encoding standard MPEG-4 and transmitted over the Internet or a wireless network, so video surveillance can be performed wherever there is Internet access or a mobile telephone signal. The feasibility of this framework has been demonstrated with a prototype implementation. The system is cheaper and easier to build with simple equipment, so it can be widely used in practice.

8.
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are becoming increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms, making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. Its runtime system tracks the dependency information enforced by event synchronization to dynamically build a DAG of commands, on which two optimizations are applied automatically: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance on three compute clusters, available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center, and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
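A minimal sketch of the dependency-tracking idea described above, written in Python purely for illustration; the command records, event names, and the copy-removal rule mentioned in the comment are assumptions, not the libWater API.

```python
from collections import defaultdict

# Each enqueued command waits on some events and signals one (names are illustrative).
commands = [
    {"id": "copy_to_dev_A",  "waits": [],              "signals": "evA"},
    {"id": "kernel_1",       "waits": ["evA"],         "signals": "ev1"},
    {"id": "kernel_2",       "waits": ["evA"],         "signals": "ev2"},
    {"id": "copy_to_host_B", "waits": ["ev1", "ev2"],  "signals": "evB"},
]

producer = {c["signals"]: c["id"] for c in commands}
dag = defaultdict(list)
for c in commands:
    for ev in c["waits"]:
        dag[producer[ev]].append(c["id"])  # edge: event producer -> waiting command

# A runtime holding this DAG can scan it for patterns, e.g. a device-to-host copy whose
# only consumer is a host-to-device copy of the same buffer could be elided (copy removal).
print(dict(dag))
```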

9.
Sensor networks and ad hoc networks are nowadays widely used communication facilities, mainly because of their many application settings. Although these types of network are widespread and well-established communication paradigms, they are also among the most important concepts underlying modern, and increasingly expanding, user-centric networks, which can be used to build dynamic middleware services for heterogeneous distributed computing. In this way, the strongly dynamic behavior of user communities, and of the resource collections they use, can be addressed. In this paper we focus on key predistribution for secure communication in these types of networks. We first analyze several schemes proposed in the literature for enabling a group of two or more nodes to compute a common key, which can later be used to encrypt or authenticate exchanged messages; the schemes chosen are representative of different design strategies proposed in the state of the art. In order to find out under which conditions and in which settings one scheme is more suitable than the others, we provide an evaluation and a performance comparison of these schemes. We also look at the problem of identifying optimal values for the parameters of such schemes with respect to a desired security degree and reasonable security assumptions. Finally, we extend one of the schemes, showing both analytically and through experiments the improvement the new scheme provides in terms of security compared to the basic one.
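One representative design strategy from this literature is random key predistribution, in which each node is preloaded with a random subset of a global key pool and two nodes communicate with any key they happen to share. The sketch below is a minimal illustration of that idea under assumed pool and ring sizes; it is not necessarily one of the specific schemes evaluated in the paper.

```python
import random

POOL_SIZE, RING_SIZE = 1000, 50  # illustrative parameters, not the paper's values
key_pool = {i: random.getrandbits(128) for i in range(POOL_SIZE)}

def key_ring():
    """Preload a node with a random subset (its key ring) of the global pool."""
    return set(random.sample(range(POOL_SIZE), RING_SIZE))

node_a, node_b = key_ring(), key_ring()

shared_ids = node_a & node_b
if shared_ids:
    # Any shared key identifier can seed a pairwise key for encryption/authentication.
    print(f"shared key ids: {sorted(shared_ids)}; using key {min(shared_ids)}")
else:
    print("no shared key; the nodes would need to establish a path key via neighbours")
```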

10.
Today, almost everyone is connected to the Internet and uses different Cloud solutions to store, deliver, and process data. Cloud computing assembles large networks of virtualized services such as hardware and software resources. The new era in which ICT has penetrated almost all domains (healthcare, aged care, social assistance, surveillance, education, etc.) creates the need for new multimedia content-driven applications. These applications generate huge amounts of data and require gathering, processing, and aggregation in a fault-tolerant, reliable, and secure heterogeneous distributed system created by a mixture of Cloud systems (public/private), mobile device networks, desktop-based clusters, etc. In this context, dynamic resource provisioning for Big Data application scheduling has become a challenge in modern systems. We propose a resource-aware hybrid scheduling algorithm for different types of applications: batch jobs and workflows. The algorithm performs hierarchical clustering of the available resources into groups in the allocation phase, and task execution is carried out in two phases: in the first, tasks are assigned to groups of resources, and in the second, a classical scheduling algorithm is used within each group. The proposed algorithm is suitable for heterogeneous distributed computing, especially for modern High-Performance Computing (HPC) systems in which applications are modeled with various requirements (both IO- and computation-intensive), with an emphasis on data from multimedia applications. We evaluate its performance in a realistic CloudSim setting with respect to load balancing, cost savings, dependency assurance for workflows, and computational efficiency, and we investigate how these performance metrics are computed at runtime.
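The two-phase idea can be sketched as follows; the grouping criterion, the task model, and the per-group policy (a simple earliest-finish-time rule) are assumptions made for illustration, not the paper's algorithm.

```python
from collections import defaultdict

# Illustrative resources: (name, group label, relative speed).
resources = [("vm1", "cloud", 2.0), ("vm2", "cloud", 2.0),
             ("pc1", "desktop", 1.0), ("gpu1", "hpc", 4.0)]

# (task, work units, target group) -- the target group stands in for IO/compute requirements.
tasks = [("t1", 8, "cloud"), ("t2", 4, "hpc"), ("t3", 6, "cloud"), ("t4", 2, "desktop")]

# Phase 1: cluster resources into groups (here a single level keyed by label).
groups = defaultdict(list)
for name, label, speed in resources:
    groups[label].append((name, speed))

# Phase 2: inside each group, a classical rule (earliest finish time) assigns tasks.
finish = {name: 0.0 for name, _, _ in resources}
schedule = []
for task, work, group in tasks:
    name, speed = min(groups[group], key=lambda r: finish[r[0]] + work / r[1])
    finish[name] += work / speed
    schedule.append((task, name, round(finish[name], 2)))

print(schedule)
```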

11.
In China, rapid urban redevelopment poses the challenge of a frequent refresh cycle for urban traffic noise mapping, and computational complexity and lack of resources are the primary bottlenecks. In this paper, we present a flexible distributed heterogeneous computing method based on GPU-CPU cooperation, which reduces overhead, improves the efficiency of parallel computing, and consistently generates good-quality results for traffic noise mapping. A genetic-algorithm-based large-scale task partition algorithm is employed to solve the load-balancing problem in distributed noise-mapping calculation. The methodology is evaluated on an example whose results show that the proposed task partition method can significantly improve running efficiency: parallel efficiency increases from 54% to 78%. In addition, speed is further improved by 21% with GPU-CPU collaborative computing, even with only low-end GPUs.
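A compact sketch of a genetic-algorithm task partition of the kind described above: chromosomes assign work items to heterogeneous workers and fitness is the resulting makespan. The costs, worker speeds, and GA settings are illustrative assumptions, not the paper's method or data.

```python
import random

task_cost = [4, 7, 2, 9, 5, 3, 8, 6]   # per-tile noise-calculation costs (made up)
speed = [1.0, 1.0, 3.5]                # two CPU workers and one faster GPU worker (assumed)

def makespan(assign):
    """Makespan of one assignment (list of worker indices, one per task)."""
    load = [0.0] * len(speed)
    for cost, worker in zip(task_cost, assign):
        load[worker] += cost / speed[worker]
    return max(load)

def evolve(pop_size=30, generations=200, mutation=0.2):
    pop = [[random.randrange(len(speed)) for _ in task_cost] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(task_cost))
            child = a[:cut] + b[cut:]             # one-point crossover
            if random.random() < mutation:
                child[random.randrange(len(task_cost))] = random.randrange(len(speed))
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, round(makespan(best), 2))
```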

12.
Research on a Distributed Computing Environment Based on Java Technology   (Total citations: 7; self-citations: 2; citations by others: 5)
李治, 任波, 王乘. 《计算机工程与设计》 (Computer Engineering and Design), 2004, 25(6): 912-914, 920
J2EE provides powerful distributed processing capabilities; its RMI, Jini, and JavaSpaces technologies offer a solid technical foundation for building heterogeneous distributed computing environments. JavaSpaces, built on top of Jini, can serve as a mechanism for shared distributed communication and object storage. Addressing the characteristics of distributed computing environments such as task decomposition, parallel synchronization, and communication control, a distributed computing environment based on Java technology is proposed. Application examples show that this environment can carry out distributed computation effectively in heterogeneous, complex systems.
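A minimal sketch of the shared-space coordination pattern behind JavaSpaces (writers deposit task entries, any worker takes and processes them), written in Python only to illustrate the idea; the entry fields and the queue-based stand-in for an object space are assumptions, not the JavaSpaces API.

```python
import queue
import threading

space = queue.Queue()   # stands in for a shared object space

def worker(name):
    while True:
        entry = space.get()          # "take" an entry, blocking until one is written
        if entry is None:
            break
        kind, payload = entry
        print(f"{name} computed {kind}: {sum(payload)}")
        space.task_done()

threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(2)]
for t in threads:
    t.start()

for chunk in ([1, 2, 3], [4, 5], [6, 7, 8, 9]):
    space.put(("partial-sum", chunk))    # "write" a task entry into the space

space.join()                             # wait until all written entries are processed
for _ in threads:
    space.put(None)                      # shut the workers down
for t in threads:
    t.join()
```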

13.
14.
Collaborative computing is a new communication technology that makes it possible to extend model formulation, management, and analysis into a geographically distributed group environment. The forms of communication may vary from asynchronous hypermedia teamwork over the Internet to multipoint desktop video conferencing; the latter offers the greatest potential for integrating shared quantitative model analysis with real-time or asynchronous, geographically distributed electronic meetings. Infeasibility diagnosis and reasoning about conflict resolution are known to be important parts of an evolving approach to linear programming model analysis. In this new distributed environment, several specific decision support issues related to infeasibility analysis emerge. One is the learning mechanism for capturing group knowledge on infeasibility resolution generated during collaborative sessions of model analysis. Another is a coordination protocol capable of linking individual activities and software transactions to support group reasoning about infeasibility analysis. This paper addresses these issues on the basis of committee models for infeasibility resolution and the neural network approach. The modeling cases are based on experiments with a multiple-criteria model supporting resource allocation in a distributed electronic meeting, and on a case of model-based diagnosis in which the group is involved.

15.
This paper describes the design, implementation, and testing of a set of software modules used for remote database access in a distributed computing environment. The goal of our research and development is to implement a client-server model using Structured Query Language (SQL) functions in the Open Software Foundation's (OSF) Distributed Computing Environment (DCE). This design is compared with one that simply uses the sockets application programming interface (API) running over the transmission control protocol/internet protocol (TCP/IP) and is implemented in subroutines that act like the stub code generated for remote procedure calls (RPC). The prototypes for the remote SQL access project are implemented on an IBM RISC System/6000 (the client) running the AIX operating system and an IBM AS/400 (the server) running the OS/400 operating system. We selected the AS/400 for its database capabilities, and the RISC System/6000 because the DCE software is available for it.

16.
This paper analyzes in detail the current state and development trends of distributed heterogeneous database access technology and, drawing on the advantages of Web Services, constructs a Web Services-based access system for distributed heterogeneous databases, describing the system's implementation process. The system effectively supports distributed data queries, makes data sources transparent, and supports cross-platform retrieval.

17.
The Internet has become the global infrastructure supporting information acquisition and retrieval from many heterogeneous data sources containing text and rich multimedia images, audio, and video. AgentRAIDER is an ongoing research project at Texas Tech University to develop a comprehensive architecture for an intelligent information retrieval system over distributed heterogeneous data sources. The system is designed to support intelligent retrieval and integration of information from the Internet. Current systems of this nature focus only on specific aspects of the distributed heterogeneous problem, such as database queries or information filtering; consequently, these concepts have never been successfully integrated into a unified, cohesive architecture. This paper discusses the design and implementation of the AgentRAIDER system and identifies areas where the system can be applied in various domains.

18.
Computer models are used in landscape ecology to simulate the effects of human land-use decisions on the environment. Many socioeconomic as well as ecological factors must be considered, requiring the integration of spatially explicit multidisciplinary data. The Land-Use Change Analysis System (LUCAS) has been developed to study the effects of land use on landscape structure in areas such as the Little Tennessee river basin in western North Carolina and the Olympic Peninsula of Washington state; these effects include land-cover change and species habitat suitability. The map layers used by LUCAS are derived from remotely sensed images, census and ownership maps, topographic maps, and output from econometric models, and a public-domain geographic information system (GIS) is used to store, display, and analyze them. A parallel version of LUCAS (pLUCAS), developed using the Parallel Virtual Machine (PVM) on a network of workstations, gives a speedup factor of 10.77 on 20 nodes. A parallel model is necessary for simulations over larger domains or for maps with much higher resolution.

19.
With the advent of a new generation of mobile access devices such as smartphones and tablet PCs, there is a growing need for ubiquitous collaboration, which allows people to access information systems with their disparate access devices and to communicate with others at any time and from anywhere. As the number of collaborators with a large number of disparate access devices increases in a ubiquitous collaboration environment, it becomes harder to protect secured resources from unauthorized users as well as from unsecured access devices, since the resources can be compromised by inadequately secured people and devices. Therefore, an authentication mechanism for the access of legitimate participants is essential in such an environment. In this paper we present an efficient authentication mechanism for ubiquitous collaboration environments. We show through security analysis that the proposed scheme is secure, and through experimental results obtained from a practical evaluation that it is efficient.

20.
The present work deals with parallelizing the solution of a system of linear equations in the framework of finite element computational analyses. Since the substructuring method is used, the basic idea is to decompose the considered spatial domain, discretized by finite elements, into a finite set of non-overlapping subdomains, each assigned to an individual processor and analysed in parallel. Because personal computers and workstations are still the most frequently used computers, a parallel computational platform can be built by connecting the available computers into a network. As the incorporated computers are usually of different computational power and memory size, the efficiency of parallel computations on such a heterogeneous distributed system depends mainly on proper load balance. To cope with the balance problem, an algorithm for efficient load balancing of structured and free 2D quadrilateral finite element meshes, based on the rearrangement of elements among the respective subdomains, has been developed.
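A minimal sketch of the load-balance idea, under the stated assumption that each machine should receive an element count in proportion to its relative computational power; the power figures and the rearrangement step summarized in the comment are simplifications, not the paper's algorithm.

```python
# Relative computational power of the machines in the heterogeneous network (assumed benchmarks).
power = {"ws1": 1.0, "ws2": 2.5, "pc1": 0.6}
n_elements = 10_000                     # total 2D quadrilateral elements in the mesh

total = sum(power.values())
quota = {m: round(n_elements * p / total) for m, p in power.items()}

# Fix rounding so the quotas sum to the element count; elements adjacent to the sub-domain
# interfaces would then be rearranged toward under-loaded machines to meet the quotas.
quota[max(power, key=power.get)] += n_elements - sum(quota.values())
print(quota)
```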
