Similar Documents
 19 similar documents found (search time: 156 ms)
1.
The Gordon Bell Prize is the highest academic award in the field of high-performance computing (HPC) applications. Whereas the TOP500 list emphasizes measuring the performance of HPC systems, this prize focuses on innovations in HPC techniques used to solve important scientific problems, and it is internationally recognized as a key benchmark of the state of the art in HPC applications. This paper analyzes the Gordon Bell Prize-winning work of recent years, in particular the research characteristics and scientific significance of the peak-performance and special prizes. On this basis, it summarizes common patterns and offers some thoughts on how to advance HPC application research, as a reference for colleagues in China engaged in supercomputing application research.

2.
Parallel Processing for Scientific Computing in Java   Total citations: 6 (self-citations: 0, by others: 6)
This paper discusses the role of Java and Web technologies in scientific and engineering computing. Using three computation-intensive problems, it investigates the feasibility of Java as a language for high-performance parallel and distributed computing, and argues that Java is well positioned to become a leading language in science and engineering.

3.
A Brief Analysis of the Needs and Development of High-Performance Computing Applications   Total citations: 3 (self-citations: 0, by others: 3)
Supported by HPC technology, HPC applications have contributed enormously to scientific and technological innovation, and the two have developed hand in hand. Since 2004, the Supercomputing Center of the Computer Network Information Center, Chinese Academy of Sciences (CAS) has conducted several academy-wide surveys of CAS's HPC needs for the Eleventh Five-Year Plan period, yielding a fairly complete picture of overall demand and its distribution across application domains; the results provide useful guidance for building the CAS HPC environment and developing HPC applications during that period. This paper first reviews the state of HPC applications at home and abroad, then, drawing on the development of the CAS HPC environment and its applications, analyzes CAS's HPC application needs for the Eleventh Five-Year Plan period, and finally offers an outlook on the prospects for HPC applications in China.

4.
High-performance computing is becoming a third mode of research alongside theory and experiment. Compared with developed countries such as the United States and Japan, Chinese universities came late to HPC, and their HPC applications are still at an early stage. What, then, is the principle behind using HPC for research? How do China's practical HPC applications compare with those of international peers? Which construction model is best at turning an HPC center into real productivity for a university? What are the bottlenecks limiting HPC applications at Chinese universities? The story of building the HPC platform at Xiangtan University, told below, helps answer these questions.

5.
Wide-Area High-Performance Metacomputing for the 21st Century   Total citations: 3 (self-citations: 1, by others: 2)
1. Introduction. High-performance parallel and distributed computing is a core technology bearing on national strategic interests, an important foundation for economic development, and a major basic research problem. With ever-faster microprocessors, rapid advances in network communication, the growing scale and speed of networks, and especially the rise and wide adoption of Internet technology, networks now span the entire world. At the same time, application demands are moving toward high performance, diversity, and multi-functionality, and many large-scale scientific comput…

6.
Sun Jun, Fujian Computer, 2007, (2): 210-210, 184
Based on the current state and future trends of high-performance computing at home and abroad, and on its practical application domains, this paper systematically analyzes Fujian Province's current demand for HPC and the direction its development will take, arguing that accelerating HPC application and research in the province is a wise move in step with global progress.

7.
1. Introduction. As performance continues to improve, supercomputer applications have expanded from scientific computing into transaction processing. In 1991, the United States' High Performance Computing and Communications (HPCC) program identified a series of fundamental "grand challenge" problems of broad economic and scientific impact. These challenges have driven the growth of computational science and computational engineering as interdisciplinary fields, which requires combining computer systems with computational solution tech…

8.
Li Huibin, Zhang Wu, Li Ruiyang, Computer Engineering and Design, 2007, 28(5): 1075-1077, 1095
Advances in grid computing and the emergence of Web Services have made it possible to integrate diverse computing resources to tackle grand-challenge problems in science and engineering. Building on a study of Netsolve, an existing grid computing system for HPC, this paper combines Web Services technology to propose a grid system for high-performance computing and, based on a preliminary implementation, discusses the system's strengths and weaknesses.

9.
HPC applications are developing rapidly as the technology advances. As the market grows, vendors and users alike must go through a process of maturing and becoming more rational; only when both sides grow up quickly, without excuses, can high-performance computing truly take off. This roundtable brought together Zhu Mingfa, general manager of Lenovo's high-performance server division and researcher at the Institute of Computing Technology, CAS; associate professor Li Zhenhua of the Department of Chemistry, Fudan University; professor Wang Huazhong of the School of Ocean and Earth Science, Tongji University; and Professor Sun of the Department of Physics, Shanghai Jiao Tong University, to discuss this topic.

10.
As demand for HPC applications grows rapidly, some very large problems can no longer be solved on a single computer or a single cluster; geographically distributed, heterogeneous HPC systems must be connected and integrated over high-speed networks, and grid computing is the technology for doing so. This article surveys current grid research at home and abroad, the broad applications of grid technology, and its future development.

11.
Driven by ever-growing demand from scientific research and commercial applications, the performance and scale of HPC systems have advanced rapidly. However, sharply rising power consumption now severely constrains their design and operation, making low-power techniques a key technology in the HPC field. As a core system component, the job scheduler allocates limited system resources to user-submitted jobs, so its energy efficiency plays a crucial role in controlling and regulating the energy consumption of the whole HPC system. This paper first introduces the main energy-efficiency techniques and common job scheduling strategies, then analyzes the energy efficiency of current HPC job scheduling, and discusses its challenges and future directions.
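To make the idea of energy-aware job-resource allocation concrete, here is a minimal sketch, not any surveyed system's actual policy: idle nodes stay powered off, and each job is placed on the node that adds the least power (already-on nodes cost only per-core power; powering on a node also adds its idle draw). All names and fields (`idle_w`, `per_core_w`, etc.) are hypothetical.

```python
def schedule(jobs, nodes):
    """Energy-aware placement sketch.

    jobs:  list of (name, cores_needed)
    nodes: list of dicts with keys 'id', 'free', 'on',
           'idle_w' (idle power draw) and 'per_core_w' (hypothetical model).
    Returns {job_name: node_id} for the jobs that could be placed."""
    placement = {}
    for name, cores in sorted(jobs, key=lambda j: -j[1]):  # largest jobs first
        # Prefer nodes that are already on: marginal power is per-core only.
        candidates = [n for n in nodes if n["on"] and n["free"] >= cores]
        if not candidates:
            # Otherwise a node must be powered on, adding its idle draw.
            candidates = [n for n in nodes if not n["on"] and n["free"] >= cores]
        if not candidates:
            continue  # job waits in the queue
        best = min(candidates, key=lambda n: n["per_core_w"] * cores
                   + (0 if n["on"] else n["idle_w"]))
        best["on"] = True
        best["free"] -= cores
        placement[name] = best["id"]
    return placement
```

Real schedulers add queues, priorities, and DVFS; this only illustrates the power-aware placement decision the abstract describes.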

12.
With the development of virtualization and cloud computing, more and more HPC applications run on cloud resources. In a virtualization-based HPC cloud, an application runs across multiple virtual machines (VMs), which may be placed on different physical nodes. If the VMs of several communication-intensive jobs are placed on the same physical node, they compete for its network I/O resources; when their aggregate demand exceeds the node's network I/O bandwidth, the performance of those jobs degrades severely. To address this contention, this paper proposes NLPA, a VM placement algorithm based on network I/O load balancing, which spreads network I/O load to reduce contention among VMs. Experiments show that, compared with a greedy algorithm on the same set of HPC test jobs, NLPA performs better on three metrics: job completion time, network I/O throughput, and network I/O load balance.
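The core idea behind a network-I/O-balancing placement like NLPA can be sketched as follows (a simplified illustration, not the paper's actual algorithm): each new VM goes to the physical node whose current network I/O load is lowest, subject to the node's bandwidth cap, so communication-heavy VMs spread out instead of contending on one node. Load units and the cap are hypothetical.

```python
def place_vms(vm_loads, num_nodes, bandwidth_cap):
    """Place VMs on the least network-loaded node (load-balancing sketch).

    vm_loads: per-VM network I/O demand, in arbitrary units.
    Returns (placement list of node indices, final per-node loads)."""
    node_load = [0.0] * num_nodes
    placement = []
    for load in vm_loads:
        node = min(range(num_nodes), key=lambda i: node_load[i])
        if node_load[node] + load > bandwidth_cap:
            raise RuntimeError("no node can host this VM without oversubscribing I/O")
        node_load[node] += load
        placement.append(node)
    return placement, node_load
```

A greedy first-fit would pack the first node until it is full; the balanced variant keeps per-node I/O load roughly even, which is the contrast the abstract's experiments measure.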

13.
In the 1990s the Message Passing Interface Forum defined MPI bindings for Fortran, C, and C++. With the success of MPI these relatively conservative languages have continued to dominate in the parallel computing community. There are compelling arguments in favour of more modern languages like Java. These include portability, better runtime error checking, modularity, and multi-threading. But these arguments have not converted many HPC programmers, perhaps due to the scarcity of full-scale scientific Java codes, and the lack of evidence for performance competitive with C or Fortran. This paper tries to redress this situation by porting two scientific applications to Java. Both of these applications are parallelized using our thread-safe Java messaging system—MPJ Express. The first application is the Gadget-2 code, which is a massively parallel structure formation code for cosmological simulations. The second application uses the finite-difference time-domain method for simulations in the area of computational electromagnetics. We evaluate and compare the performance of the Java and C versions of these two scientific applications, and demonstrate that the Java codes can achieve performance comparable with legacy applications written in conventional HPC languages. Copyright © 2009 John Wiley & Sons, Ltd.

14.
The field of scientific workflow management systems has grown significantly as applications start using them successfully. In 2007, several active researchers in scientific workflow development presented the challenges facing the state of the art in workflow technologies at that time. Many issues have since been addressed, but one of them, named 'dynamic workflows and user steering', retains many open problems despite the contributions of recent years. This article surveys the early and current efforts in this topic and proposes a taxonomy to identify the main concepts involved in addressing dynamic steering of high performance computing (HPC) in scientific workflows. The main concepts relate to putting the human in the loop of the workflow lifecycle, supporting users in real-time monitoring, notification, analysis and interference by adapting the workflow execution at runtime.

15.
Today, various Science Gateways created in close collaboration with scientific communities provide access to remote and distributed HPC, Grid and Cloud computing resources and large-scale storage facilities. However, as we have observed, there are still many entry barriers for new users and various limitations for active scientists. In this paper we present our latest achievements and software solutions that significantly simplify the use of large-scale and distributed computing. We describe several Science Gateways that have been successfully created with the help of our application tools and the QCG (Quality in Cloud and Grid) middleware, in particular Vine Toolkit, QCG-Portal and QCG-Now, and which make the use of HPC, Grid and Cloud more straightforward and transparent. Additionally, we share the best practices and lessons learned from creating, jointly with user communities, many domain-specific Science Gateways, e.g. dedicated to physicists, medical scientists, chemists, engineers and external communities performing multi-scale simulations. As our deployed software solutions have recently reached a critical mass of active users in the PLGrid e-infrastructure in Poland, we also discuss in this paper how changing technologies, visual design and user experience could affect the way we should re-design Science Gateways or even develop new attractive tools, e.g. desktop or mobile-based applications, in the future. Finally, we present information and statistics on user behaviour to help readers understand how new capabilities and functionalities may influence the growth of user interest in Science Gateways and HPC technologies.

16.
The energy consumption of High Performance Computing (HPC) systems, which are the key technology for many modern computation-intensive applications, is rapidly increasing in parallel with their performance improvements. This increase leads HPC data centers to focus on three major challenges: the reduction of overall environmental impacts, which is driven by policy makers; the reduction of operating costs, which are increasing due to rising system density and electrical energy costs; and the 20 MW power consumption boundary for Exascale computing systems, which represent the next thousandfold increase in computing capability beyond the currently existing petascale systems. Energy efficiency improvements will play a major part in addressing these challenges. This paper presents a toolset, called Power Data Aggregation Monitor (PowerDAM), which collects and evaluates data from all aspects of the HPC data center (e.g. environmental information, site infrastructure, information technology systems, resource management systems, and applications). The aim of PowerDAM is not to improve the HPC data center's energy efficiency by itself, but to collect energy-relevant data for analysis, without which energy efficiency improvements would be non-trivial and incomplete. Thus, PowerDAM represents a first step towards a truly unified energy efficiency evaluation toolset for improving the overall energy efficiency of HPC data centers.
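A basic building block of the kind of aggregation a monitoring toolset like PowerDAM performs is turning timestamped power samples from sensors into energy consumed. The sketch below (hypothetical code, not PowerDAM's real API) does this with trapezoidal integration:

```python
def energy_wh(samples):
    """Integrate power samples into energy.

    samples: time-ordered list of (timestamp_seconds, power_watts).
    Returns total energy in watt-hours (trapezoid rule between samples)."""
    joules = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        joules += (p0 + p1) / 2.0 * (t1 - t0)  # average power x interval
    return joules / 3600.0  # 1 Wh = 3600 J
```

For example, a node drawing a steady 100 W for one hour yields 100 Wh; per-node figures like this can then be rolled up per job, per rack, or per data center.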

17.
Abstract

As an alternative to traditional computing architectures, cloud computing is now growing rapidly, though it is generally based on models such as cluster computing. Supercomputers are becoming more and more powerful, helping scientists gain a more in-depth understanding of the world. At the same time, clusters of commodity servers have become mainstream in the IT industry, powering not only large Internet services but also a growing number of data-intensive scientific applications, such as MPI-based deep learning applications. To reduce energy costs, more and more effort is being made to reduce the energy consumption of HPC systems. Because I/O accesses account for a large portion of the execution time of data-intensive applications, it is critical to design energy-aware parallel I/O functions to address the challenges of HPC energy efficiency. The Message Passing Interface, the de facto standard for designing parallel applications in cluster environments, has been widely used in high performance computing; obtaining the energy consumption of MPI applications is therefore critical to improving the energy efficiency of HPC systems. In this work we first present our energy measurement tool, a software framework that eases energy data collection in cluster environments. We then present an approach that can optimise the energy efficiency of parallel I/O operations. The energy scheduling algorithm is evaluated on a cluster.

18.
19.
Cloud computing offers new computing paradigms, capacity and flexible solutions to high performance computing (HPC) applications. For example, Hardware as a Service (HaaS) allows users to provision a large number of virtual machines (VMs) for computation-intensive applications. Due to the large number of VMs and electronic components in an HPC system in the cloud, any fault during execution would result in re-running the applications, which costs time, money and energy. In this paper we present a proactive fault tolerance (FT) approach for HPC systems in the cloud that reduces wall-clock execution time and dollar cost in the presence of faults. We also develop a generic FT algorithm for HPC systems in the cloud; our algorithm does not rely on a spare node prior to the prediction of a failure. We further develop a cost model for executing computation-intensive applications on HPC systems in the cloud, and analyse the dollar cost of provisioning spare nodes and of checkpointing FT to assess the value of our approach. Our experimental results, obtained from a real cloud execution environment, show that the wall-clock execution time and cost of running computation-intensive applications in the cloud can be reduced by as much as 30%. The frequency of checkpointing of computation-intensive applications can be reduced by up to 50% with our FT approach for HPC in the cloud compared with current FT approaches.
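The reported reduction in checkpoint frequency can be illustrated with a standard baseline, Young's approximation for the optimal checkpoint interval, sqrt(2 x C x MTBF), where C is the checkpoint cost. The sketch below is an illustration under that textbook model, not the paper's own cost model; the `prediction_recall` adjustment is a hypothetical way to capture the effect of catching some failures proactively:

```python
import math

def young_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation: optimal checkpoint interval in seconds."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

def predicted_interval(checkpoint_cost_s, mtbf_s, prediction_recall):
    """Hypothetical adjustment: if a fraction `prediction_recall` of
    failures is handled proactively (e.g. by migrating work off a node
    predicted to fail), only the remaining unpredicted failures drive
    periodic checkpointing, stretching the effective MTBF and hence
    lowering the checkpoint frequency."""
    effective_mtbf = mtbf_s / (1.0 - prediction_recall)
    return math.sqrt(2.0 * checkpoint_cost_s * effective_mtbf)
```

With a 50 s checkpoint cost and a 10,000 s MTBF, the baseline interval is 1000 s; if 75% of failures are predicted, the interval stretches to 2000 s, i.e. half as many checkpoints, consistent in spirit with the up-to-50% reduction the abstract reports.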
