Similar Documents
20 similar documents found
1.
The structural model of a trading engine directly determines its working mechanism, computing capacity, and performance metrics. By analyzing the trading mechanisms of multi-mode trading and introducing the concept of a virtual hybrid multiprocessor, a software structure model for a multi-mode trading engine is proposed, based on message sorting and dual-level request queues in a virtual hybrid multiprocessor computing environment. The working mechanisms of the virtual multiprocessors and the message sorter in this model are discussed, together with the communication mechanism of a trading engine built on the model.
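As a rough illustration of the dual-level request queue idea (not the paper's actual engine), the following Python sketch routes trade messages into a high- and a low-priority queue per trading mode and lets one consumer thread per mode stand in for a virtual processor; all class and field names are invented for the example.

```python
import queue
import threading
import time

class MessageSorter:
    """Toy illustration: classify incoming trade messages by trading mode
    and priority, then route them to a two-level request queue per mode."""

    def __init__(self, modes):
        # one (high, low) queue pair per trading mode: the "dual-level request queues"
        self.queues = {m: (queue.Queue(), queue.Queue()) for m in modes}

    def dispatch(self, msg):
        high_q, low_q = self.queues[msg["mode"]]
        (high_q if msg.get("urgent") else low_q).put(msg)

def virtual_processor(mode, queues):
    """Each 'virtual processor' drains its mode's queues, high level first."""
    high_q, low_q = queues[mode]
    while True:
        try:
            msg = high_q.get(timeout=0.1)
        except queue.Empty:
            try:
                msg = low_q.get(timeout=0.1)
            except queue.Empty:
                continue
        print(f"[{mode}] matched order {msg['order_id']}")

sorter = MessageSorter(["auction", "negotiation", "listing"])
for mode in sorter.queues:
    threading.Thread(target=virtual_processor,
                     args=(mode, sorter.queues), daemon=True).start()
sorter.dispatch({"mode": "auction", "order_id": 1, "urgent": True})
time.sleep(0.3)  # give the worker threads a moment to drain the queues
```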

2.
To meet the requirements of centralized auction trading by the trading engine in large-scale commodity electronic trading, and drawing on the working characteristics of message-passing communication middleware, a tree-shaped hierarchical structure model is proposed for the trading-oriented communication middleware system. The communication mechanism and operating characteristics of a middleware system based on this model are analyzed and discussed.
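A minimal sketch of how a tree-shaped middleware hierarchy can relay requests upward and broadcast notifications downward; this is only a toy illustration of the structural idea, with all names invented, not the middleware described in the abstract.

```python
class MiddlewareNode:
    """Toy tree-structured message relay: leaves serve trading clients,
    inner nodes aggregate upward toward the root trading engine."""

    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        if parent:
            parent.children.append(self)

    def send_up(self, msg):
        # forward a trading request toward the root (the trading engine side)
        if self.parent:
            self.parent.send_up(msg)
        else:
            print(f"root {self.name} delivers to trading engine: {msg}")

    def broadcast_down(self, msg):
        # push a market notification from the root toward every leaf client
        print(f"{self.name} received: {msg}")
        for child in self.children:
            child.broadcast_down(msg)

root = MiddlewareNode("hub")
region = MiddlewareNode("region-1", parent=root)
leaf = MiddlewareNode("access-1", parent=region)
leaf.send_up({"order": "buy", "qty": 100})
root.broadcast_down({"quote": 9.87})
```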

3.
To address secure communication for commodity electronic trading in distributed network environments, a framework for secure communication software oriented toward commodity electronic trading is proposed. The structure model and working mechanism of the communication server in this framework are discussed, as is the construction of secure communication channels between user processes and the trading engine. Using TCP/IP communication, Kerberos authentication, and IDEA data encryption on a UNIX platform, combined with UNIX inter-process communication mechanisms, the framework resolves the real-time and security problems present in electronic commodity trading. The results show that the secure communication platform designed and implemented here meets the needs of electronic commodity trading.

4.
This paper analyzes the strengths and weaknesses of group-based reputation models and reduces the extra network overhead introduced by the reputation mechanism by improving the group creation mechanism and introducing virtual nodes. In the model, groups are formed according to nodes' demand preferences, which increases group cohesion and concentrates services and demands inside the group, so that most transactions can be completed within it. At the same time, each group is virtualized as a single node, hiding transaction details and reducing both the complexity of group reputation computation and the network overhead.

5.
The time-varying and complex nature of computing workloads leads to low resource utilization in virtual clusters. To improve the overall utilization of virtual cluster resources, an elastic resource management strategy is adopted to absorb sudden changes in resource demand when multiple computing modes are mixed. Supported by Docker container technology, a dynamic deployment model that follows changes in job demand is proposed. Based on dynamic changes in resource demand, the model adjusts the computing form of the virtual cluster in real time, including the type and scale of compute nodes. The model not only enables dynamic customization of users' job execution environments but also achieves off-peak computing. Simulation experiments show that the model improves the CPU utilization of virtual nodes by 5.3% and optimizes job execution efficiency. The dynamic deployment model is suitable for data centers and large-scale clusters, where it can effectively improve the utilization of computing resources.
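A minimal sketch of the elastic-deployment idea using the Docker SDK for Python: the number of worker containers tracks the job backlog. The image name, label, and sizing policy are placeholders, not the paper's model.

```python
import docker  # pip install docker; assumes a local Docker daemon

client = docker.from_env()
LABEL = {"role": "elastic-worker"}   # hypothetical label to track our workers
IMAGE = "my-batch-worker:latest"     # hypothetical worker image

def desired_workers(pending_jobs, jobs_per_worker=10, max_workers=20):
    """Map the current job backlog to a target number of worker containers."""
    return min(max_workers, max(1, -(-pending_jobs // jobs_per_worker)))

def reconcile(pending_jobs):
    """Grow or shrink the pool of worker containers toward the target size."""
    running = client.containers.list(filters={"label": "role=elastic-worker"})
    target = desired_workers(pending_jobs)
    for _ in range(target - len(running)):          # scale out
        client.containers.run(IMAGE, detach=True, labels=LABEL)
    for c in running[target:]:                      # scale in
        c.stop()

reconcile(pending_jobs=35)   # e.g. a backlog of 35 jobs -> 4 workers
```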

6.
A BPEL-Based Grid Workflow Engine   (Cited: 1; self-citations: 0; by others: 1)
As grid applications grow increasingly complex, multiple grid services need to be orchestrated into a single grid service workflow model, and a workflow engine then executes the invocations of the grid services. To this end, we designed and implemented a BPEL-based grid service workflow engine, BPEL FlowEngine. Taking the various characteristics of grid environments into account, the engine adopts a tiered processing mechanism and can invoke Web services, grid services, and grid schedulers. This paper describes the engine's architecture and implementation techniques, compares its performance with the GWES engine, and finally presents a demonstration application of the engine in bioinformatics computing.

7.
李红波  张寅奇  吴渝  薛亮 《计算机工程》2012,38(18):273-276
The physics models of existing 3D engines cannot faithfully reproduce a vehicle's motion during braking. To address this, a virtual display system for vehicle braking stability is proposed. A vehicle dynamics model is established, including a four-wheel vehicle model and a wheel-tire model; upper-layer simulation software computes the dynamics model and renders the virtual scene; methods for presenting wheel states and virtual instruments are given; and the virtual display system is designed on a 3D engine. Experimental results show that the system can observe the states of the whole vehicle and of its wheels at the same time and reproduce, in real time, wheel lock-up and body yaw and side-slip during braking, and that its dynamics model meets the real-time and continuity requirements that virtual display places on scene rendering.

8.
Research on a Multi-Agent-Based Matchmaking Trading Model and Algorithms   (Cited: 3; self-citations: 1; by others: 3)
Electronic commerce trading models are central to the further development of e-commerce applications. Starting from an analysis of the open-bid double auction model in the Internet environment, and drawing on the distributed cooperative problem-solving characteristics of multi-agent systems, this paper proposes a multi-agent matchmaking trading model, describes the matchmaking mechanism and its algorithm, proves the model's economic efficiency and other properties, and presents simulation experiments and analysis of the matchmaking trading model in a multi-agent environment.
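As a stand-in for the matchmaking mechanism (the paper's own algorithm and its economic-efficiency proof are not reproduced here), a toy continuous double auction in Python that matches crossing bids and asks at the midpoint price:

```python
import heapq

class DoubleAuctionBook:
    """Toy continuous double auction: match the highest bid against the
    lowest ask whenever they cross, clearing at the midpoint price."""

    def __init__(self):
        self.bids = []   # max-heap via negated prices
        self.asks = []   # min-heap

    def submit(self, side, price, qty, trader):
        heap = self.bids if side == "buy" else self.asks
        key = -price if side == "buy" else price
        heapq.heappush(heap, (key, price, qty, trader))
        return self._match()

    def _match(self):
        trades = []
        while self.bids and self.asks and -self.bids[0][0] >= self.asks[0][0]:
            _, bid_px, bid_qty, buyer = heapq.heappop(self.bids)
            _, ask_px, ask_qty, seller = heapq.heappop(self.asks)
            qty = min(bid_qty, ask_qty)
            trades.append((buyer, seller, qty, (bid_px + ask_px) / 2))
            # push back any unfilled remainder
            if bid_qty > qty:
                heapq.heappush(self.bids, (-bid_px, bid_px, bid_qty - qty, buyer))
            if ask_qty > qty:
                heapq.heappush(self.asks, (ask_px, ask_px, ask_qty - qty, seller))
        return trades

book = DoubleAuctionBook()
book.submit("sell", 10.0, 50, "agentA")
print(book.submit("buy", 10.5, 30, "agentB"))  # -> one trade of 30 at 10.25
```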

9.
To address the security of transactions in P2P networks, a trust management model based on resource evaluation is proposed. First, the notion of an approval degree for evaluating node behavior trust is introduced; fuzzy comprehensive evaluation is used to compute a node's single-transaction approval degree, and after each transaction the transaction record table is managed and stored by the parent node of the resource-providing node. When a node selects a resource provider, it considers not only the direct trust in the target node but also the overall approval degree of the resource being traded. The direct trust computation takes into account both timeliness and the importance of the traded resource, while the overall approval degree of a resource is computed from the past evaluations given by the nodes that rated it. Finally, an incentive mechanism based on virtual currency is introduced to effectively increase nodes' willingness to participate. Simulation experiments show that the model can effectively resist attacks by malicious nodes and improve the success rate of network transactions.
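A simplified sketch of the two ingredients named in the abstract: a fuzzy-style weighted per-transaction approval score, and a direct trust value that weights past transactions by recency and by resource importance. The decay form and parameters are illustrative assumptions, not the paper's formulas.

```python
import math
import time

def transaction_approval(ratings, weights):
    """Fuzzy-style single-transaction approval: weighted average of
    per-criterion ratings in [0, 1] (e.g. speed, integrity, authenticity)."""
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

def direct_trust(history, now=None, half_life=86400.0):
    """Direct trust toward a provider: each past transaction's approval is
    weighted by recency (exponential decay) and by resource importance."""
    now = now or time.time()
    num = den = 0.0
    for t in history:   # each t: {"time", "importance", "approval"}
        decay = math.exp(-math.log(2) * (now - t["time"]) / half_life)
        w = decay * t["importance"]
        num += w * t["approval"]
        den += w
    return num / den if den else 0.5   # neutral prior when no history

history = [
    {"time": time.time() - 3600,      "importance": 1.0, "approval": 0.9},
    {"time": time.time() - 7 * 86400, "importance": 0.4, "approval": 0.3},
]
print(direct_trust(history))   # recent, important transactions dominate
```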

10.
The characteristics of distributed virtual battlefield military simulation systems are analyzed, the architecture of an HLA-based artillery unit fire command combat simulation system is designed, and the partitioning of federates, their functional descriptions, and the SOM design are given. A federate model based on RTI and OGRE is designed; the model provides a communication mechanism whose core is to drive communication between the OGRE rendering engine and the RTI simulation-advance engine through three listeners. The two engines interact only through an information layer, which reduces their coupling and preserves the efficiency required for human-computer interaction and RTI communication to proceed concurrently. Experimental results verify the reliability and efficiency of the simulation system design and lay a foundation for developing larger-scale artillery system-of-systems confrontation simulations.

11.
In this paper we introduce a biologically inspired distributed computing model called networks of evolutionary processors with parallel string rewriting rules (NEPPS), which is a variation of the hybrid networks of evolutionary processors introduced by Martin-Vide et al. Such a network contains simple processors that are located in the nodes of a virtual graph. Each processor has strings (each string having multiple copies) and string rewriting rules. The rules are applied in parallel to the strings. After the strings have been rewritten, they are communicated among the processors through filters. We show that we can theoretically break the DES (data encryption standard), which is the most widely used cryptosystem, using NEPPS. We prove that, given an arbitrary <plain-text, cipher-text> pair, one can recover the DES key in a constant number of steps.
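A toy simulation of the NEPPS control flow on a two-node virtual graph, alternating a parallel rewriting step with a filtered communication step; it only illustrates the model's evolution/communication cycle and has nothing to do with the DES key-recovery construction.

```python
def evolution_step(strings, rules):
    """Apply every rewriting rule to every string (keeping all variants),
    mimicking parallel rewriting over multiple copies of each string."""
    out = set()
    for s in strings:
        out.add(s)
        for lhs, rhs in rules:
            if lhs in s:
                out.add(s.replace(lhs, rhs, 1))
    return out

def communication_step(nodes, graph, output_filter, input_filter):
    """Strings passing a node's output filter travel along graph edges and
    enter neighbours whose input filter accepts them."""
    for src, dst in graph:
        for s in list(nodes[src]):
            if output_filter(s) and input_filter(s):
                nodes[dst].add(s)

# two nodes on a tiny virtual graph, rewriting binary strings
nodes = {"A": {"000"}, "B": set()}
rules = [("0", "1")]
for _ in range(3):
    nodes = {k: evolution_step(v, rules) for k, v in nodes.items()}
    communication_step(nodes, [("A", "B")],
                       output_filter=lambda s: s.count("1") >= 2,
                       input_filter=lambda s: len(s) == 3)
print(nodes["B"])   # strings with at least two 1s migrated to node B
```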

12.
A Hybrid Big Data Computing Model Based on Spark   (Cited: 2; self-citations: 0; by others: 2)
Real-world big data applications are complex and diverse and may involve data and computations with different characteristics at the same time; in such cases a single computing mode is usually insufficient for the whole application, so different computing modes need to be mixed and combined. The most comprehensive hybrid computing system is Spark from UC Berkeley's AMPLab, which covers almost all typical big data computing modes, including iterative computation, batch processing, in-memory computation, stream computation (Spark Streaming), data query and analysis (Shark), and graph computation (GraphX). Spark provides a powerful in-memory computing engine, achieves excellent computing performance, and remains compatible with the Hadoop platform. As the system continues to stabilize and mature, Spark is therefore expected to become a new-generation big data processing system and platform coexisting with Hadoop. This paper studies and analyzes the Spark ecosystem in detail, builds a hybrid computing model architecture based on the Spark platform, and shows that the Spark ecosystem can effectively satisfy applications requiring hybrid big data computing modes.
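A minimal PySpark sketch of mixing computing modes on one cached dataset: batch loading, SQL query analysis, and a small iterative refinement over the same in-memory data. The input path and column names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hybrid-demo").getOrCreate()

# batch: load a (hypothetical) log file and cache it in memory for reuse
logs = spark.read.json("hdfs:///data/events.json").cache()

# interactive query analysis (Spark SQL, the successor of Shark)
logs.createOrReplaceTempView("events")
top_users = spark.sql(
    "SELECT user, COUNT(*) AS n FROM events GROUP BY user ORDER BY n DESC LIMIT 10")
top_users.show()

# iterative computation over the same cached data (a toy refinement loop)
rdd = logs.select("latency").rdd.map(lambda row: float(row[0]))
estimate = rdd.mean()
for _ in range(5):   # repeatedly discard outliers and re-estimate the mean
    estimate = rdd.filter(lambda x: x < 2 * estimate).mean()
print("robust latency estimate:", estimate)

spark.stop()
```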

13.
Current multimedia extensions provide a mechanism for general-purpose processors to meet the growing performance demand of multimedia applications. However, the computing performance of these extensions is often limited by a design conceived around a single data stream. This paper presents an architecture called “multi-streaming SIMD” that enables current multimedia extensions to manipulate multiple data streams simultaneously. To realize the proposed architecture efficiently and flexibly, an operation cell is designed by fusing logic gates and storage cells together. Multiple operation cells are then connected to compose a register file capable of performing SIMD operations, called a “Multimedia Operation Storage Unit (MOSU)”. Further, many MOSUs are used to compose a multi-streaming SIMD computing engine that can simultaneously manipulate multiple data streams and exploit the subword parallelism of the elements in each data stream. The paper also designs three instruction modes (global, coupling, and isolated) that let programmers dynamically configure the multi-streaming SIMD computing engine at the instruction level to handle different numbers of data streams. Simulation results show that with four 4-register MOSUs, the multi-streaming SIMD architecture provides a 3.3×–5.5× performance improvement over traditional MMX extensions on 12 multimedia kernels.

14.
Research on a Dynamic Scheduling Algorithm for XEN Virtual Machines on Multi-Core Platforms   (Cited: 1; self-citations: 0; by others: 1)
Existing virtual machine scheduling algorithms do not sufficiently consider the execution efficiency of parallel tasks. Modern processor platforms provide multiple computing cores, making the concurrent execution of multiple virtual machines a reality. To optimize the scheduling of parallel virtual machines on multi-core platforms, a task-characteristic-aware virtual machine scheduling algorithm, CON-Credit, is proposed. When scheduling parallel tasks, the algorithm allocates computing cores dynamically: virtual machines running ordinary tasks are scheduled with the traditional virtual machine scheduling algorithm, while virtual machines running parallel tasks are allocated using a customized synchronization algorithm. Experiments show that the CON-Credit scheduling algorithm significantly improves the execution efficiency of parallel tasks.
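A toy core-allocation sketch of the co-scheduling idea: virtual machines flagged as running parallel tasks get their vCPUs pinned to dedicated cores so they can run simultaneously, while ordinary virtual machines share the remaining cores with credit weights. This is an illustration of the concept, not the CON-Credit algorithm itself.

```python
def allocate_cores(vms, total_cores):
    """Split physical cores between gang-scheduled parallel VMs and
    credit-shared ordinary VMs (toy model of the co-scheduling idea)."""
    parallel = [v for v in vms if v["parallel"]]
    ordinary = [v for v in vms if not v["parallel"]]

    plan, next_core = {}, 0
    for vm in parallel:
        # give every vCPU of a parallel VM its own core so they can
        # run at the same time and avoid synchronization stalls
        cores = list(range(next_core, next_core + vm["vcpus"]))
        if cores and cores[-1] >= total_cores:
            raise RuntimeError("not enough cores for gang scheduling")
        plan[vm["name"]] = cores
        next_core += vm["vcpus"]

    shared = list(range(next_core, total_cores))
    for vm in ordinary:
        # ordinary VMs time-share the remaining cores, weighted by credit
        plan[vm["name"]] = {"cores": shared, "credit": vm.get("credit", 256)}
    return plan

vms = [{"name": "mpi-vm", "parallel": True, "vcpus": 4},
       {"name": "web-vm", "parallel": False, "vcpus": 2, "credit": 512},
       {"name": "db-vm",  "parallel": False, "vcpus": 2}]
print(allocate_cores(vms, total_cores=8))
```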

15.
Providing temporal isolation between critical activities has been an important design criterion in real-time open systems, which can be achieved using resource reservation techniques. As an abstraction of reservation servers, virtual processor is often used to represent a portion of computing power available on a physical platform while hiding the implementation details. In this paper, we present a general framework of partitioning an application comprised of hard real-time tasks with precedence constraints onto multiple virtual processors in consideration of communication latencies between tasks. A novel method is proposed for assigning deadlines and activation times to tasks such that tasks partitioned onto different virtual processors can be analyzed separately using well-established theories for uniprocessor. Extensive simulations have been performed and the results have shown that, compared to existing algorithms, the proposed method achieves better performance in terms of minimizing both total bandwidth and the maximum individual bandwidth.
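A generic sketch of assigning activation offsets and local deadlines along a precedence chain with communication latencies, so that each virtual processor could then be analyzed in isolation. It distributes end-to-end slack proportionally to WCET, a common heuristic that is only a stand-in for the paper's method.

```python
def assign_offsets_and_deadlines(chain, end_to_end_deadline):
    """chain: list of dicts with 'wcet' and 'comm' (latency to the next task).
    Returns per-task (activation offset, local deadline) so that a task's
    deadline plus communication never exceeds its successor's activation."""
    total_wcet = sum(t["wcet"] for t in chain)
    total_comm = sum(t["comm"] for t in chain[:-1])
    slack = end_to_end_deadline - total_wcet - total_comm
    if slack < 0:
        raise ValueError("end-to-end deadline is infeasible")

    assignments, offset = [], 0.0
    for t in chain:
        # share the slack proportionally to each task's WCET
        local_budget = t["wcet"] * (1 + slack / total_wcet)
        assignments.append({"activation": offset,
                            "deadline": offset + local_budget})
        offset += local_budget + t["comm"]
    return assignments

chain = [{"wcet": 2.0, "comm": 0.5},
         {"wcet": 3.0, "comm": 0.5},
         {"wcet": 1.0, "comm": 0.0}]
for a in assign_offsets_and_deadlines(chain, end_to_end_deadline=10.0):
    print(a)
```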

16.
[Objective] With the rapid rise of new high-throughput applications such as cloud computing, the Internet of Things, and artificial intelligence, the main workloads of high-performance computing have gradually shifted from traditional scientific and engineering computation to emerging data processing, which poses enormous challenges to traditional processors; high-throughput many-core processors, as a new processor architecture targeting such applications, have become an important research direction. [Methods] To address this, this paper analyzes the characteristics of typical high-throughput applications and discusses key design techniques for high-throughput many-core processors across three core stages, data processing, data transfer, and data storage, including real-time dynamic task scheduling, high-density on-chip network design, and on-chip memory hierarchy optimization. [Results] Experimental results show that these mechanisms can effectively guarantee task quality of service, improve network data throughput, and simplify the on-chip memory hierarchy. [Conclusions] Given the pressing demand for highly concurrent, strongly real-time processing in the era of the Internet of Everything, high-throughput many-core processors are expected to become the core processing engines of future data centers.

17.
Research on a Hierarchical Explicit Memory Access Mechanism Based on the ESCA System   (Cited: 1; self-citations: 0; by others: 1)
To address the memory wall problem in high-performance hybrid computing systems, and based on an analysis of their computing-model characteristics and the limitations of traditional memory access mechanisms, a hierarchical explicit memory access mechanism suited to hybrid computing systems is proposed, then implemented and evaluated on the ESCA multi-core processor system. Experimental results show that, for the core application DGEMM, latency hiding covers 56% of the total execution time and yields a 1.5x speedup, compensating for the speed gap between computation and memory access and improving the system's computational efficiency.

18.
There are substantial benefits to be gained from building computing systems from a number of processors working in parallel. One of the frequently-stated advantages of parallel and distributed systems is that they may be scaled to the needs of the user. This paper discusses some of the problems associated with designing a general-purpose operating system for a scalable parallel computing engine and then describes the solutions adopted in our experimental parallel operating system. We explain why a parallel computing engine composed of a collection of processors communicating through point-to-point links provides a suitable vehicle in which to realize the advantages of scaling. We then introduce a parallel-processing abstraction which can be used as the basis of an operating system for such a computing engine. We consider how this abstraction can be implemented and retain the ability to scale. As a concrete example of the ideas presented here we describe our own experimental scalable parallel operating-system project, concentrating on the Wisdom nucleus and the Sage file system. Finally, after introducing related work, we describe some of the lessons learnt from our own project.

19.
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
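A minimal data-parallel sketch of the approach using Python multiprocessing in place of a parallel virtual machine: each worker evaluates the error and gradient on its partition of the training set, and the driver sums them and applies the update. A linear model stands in for the neural network.

```python
import numpy as np
from multiprocessing import Pool

def partial_grad(args):
    """Error gradient of a linear model on one partition of the training set
    (stands in for a network's error/gradient on its share of the data)."""
    w, X, y = args
    err = X @ w - y
    return X.T @ err, 0.5 * float(err @ err)

def distributed_train(X, y, n_workers=4, lr=0.1, epochs=100):
    parts = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
    w = np.zeros(X.shape[1])
    with Pool(n_workers) as pool:
        for _ in range(epochs):
            # each worker evaluates gradient and error on its own partition
            results = pool.map(partial_grad, [(w, Xi, yi) for Xi, yi in parts])
            grad = sum(g for g, _ in results)
            w -= lr * grad / len(y)          # driver applies the update
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))
    y = X @ np.array([1.0, -2.0, 0.5])
    print(distributed_train(X, y))           # approaches [1, -2, 0.5]
```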

20.
Hybrid CPU/GPU clusters have recently drawn much attention in high performance computing because of their excellent execution performance and energy efficiency. Many supercomputing sites in the newest TOP 500 and Green 500 are built from hybrid CPU/GPU clusters instead of CPU clusters. However, the programming complexity of hybrid CPU/GPU clusters is so high that most users hesitate to move toward this new cluster computing platform. To resolve this problem, we propose a distributed PTX virtual machine for heterogeneous clusters called BigGPU. As its name suggests, this virtual machine is physically a distributed system aimed at re-compiling and executing PTX code in parallel by aggregating the CPUs and GPUs available in a computational cluster. With the support of this virtual machine, users can regard a hybrid CPU/GPU cluster as a single large-scale GPU. Consequently, they can develop applications using only CUDA, without combining MPI and multithreading APIs, while still using distributed CPUs and GPUs simultaneously to solve the same problem. Moreover, they need not handle load balancing among heterogeneous processors or the device memory and thread configuration constraints of physical GPUs, because BigGPU supports a large-scale virtual device memory space and thread configuration. We also evaluate the execution performance of BigGPU; the experimental results show that BigGPU can indeed effectively exploit the computational power of CPUs and GPUs to enhance the execution performance of users' CUDA programs.
