Similar Documents
20 similar documents found (search time: 46 ms)
1.
A distributed virtual environment (DVE) is a shared virtual environment where multiple users at their workstations interact with each other over a network. Some of these systems may support a large number of users, for example, multiplayer online games. An important issue is how well the system scales as the number of users increases. In terms of scalability, a promising system architecture is a two-level hierarchical architecture. At the lower level, multiple servers are deployed; each server interacts with its assigned users. At the higher level, the servers ensure that their copies of the virtual environment are as consistent as possible. Although the two-level architecture is believed to have good properties with respect to scalability, not much is known about its performance characteristics. In this paper, we develop a performance model for the two-level architecture and obtain analytic results on the workload experienced by each server. Our results provide valuable insights into the scalability of the architecture. We also investigate the issue of consistency and develop a novel technique to achieve weak consistency among copies of the virtual environment at the various servers. Simulation results on the consistency/scalability trade-off are presented.
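The workload trade-off described in this abstract can be illustrated with a toy model: each server pays a cost proportional to its share of the users plus a synchronisation cost that grows with the number of peer servers. This is a minimal sketch under assumed cost constants, not the performance model from the paper.

```python
# Toy sketch of the two-level DVE workload idea: each of k servers handles
# n/k local users plus state synchronisation with the other k-1 servers.
# The cost constants c_local and c_sync are illustrative assumptions.

def server_workload(n_users, k_servers, c_local=1.0, c_sync=0.5):
    """Messages per tick handled by one server (toy model)."""
    local = n_users / k_servers       # updates from its own users
    sync = c_sync * (k_servers - 1)   # exchanges with peer servers
    return c_local * local + sync

print(server_workload(1000, 4))   # 251.5 = 1.0*250 local + 0.5*3 sync
```

Adding servers reduces the per-server user load but increases synchronisation overhead, which is the scalability tension the paper analyses.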

2.
Software-as-a-Service (SaaS) is a new software delivery model with Multi-Tenancy Architecture (MTA). An SaaS system is often mission critical, as it typically supports a large number of tenants, each of which supports a large number of users. This paper proposes a scalable index management algorithm based on the B+ tree but with automated redundancy and recovery management, as the tree maintains two copies of the data. The redundancy and recovery management is done at the SaaS level, where data are duplicated with tenant information, rather than at the PaaS level, where data are duplicated in chunks. Using this approach, an SaaS system can scale out or in based on the dynamic workload. This paper also uses tenant similarity measures to cluster tenants in a multi-level scalability architecture where similar tenants can be grouped together for efficient processing. The scalability mechanism also includes automated migration strategies to enhance SaaS performance. The proposed scheme with automated recovery and scalability has been simulated; the results show that the proposed algorithm scales well with increasing workloads.
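The duplicated-index idea can be sketched as follows. The paper maintains two copies of a B+ tree; a plain dictionary stands in for the tree here, and the tenant identifier is stored with each record to reflect the SaaS-level (rather than chunk-level) redundancy described. This is an illustrative sketch, not the paper's algorithm.

```python
# Hedged sketch: every key is written to two copies, so a lost copy can
# be rebuilt from the survivor. A dict stands in for the B+ tree, and
# tenant_id is kept with each record (SaaS-level redundancy).

class DuplicatedIndex:
    def __init__(self):
        self.primary = {}
        self.mirror = {}

    def put(self, tenant_id, key, value):
        record = (tenant_id, value)
        self.primary[key] = record   # write both copies
        self.mirror[key] = record

    def recover_primary(self):
        """Rebuild the primary copy after a failure."""
        self.primary = dict(self.mirror)

idx = DuplicatedIndex()
idx.put("t1", "k1", "v1")
idx.primary.clear()          # simulate losing one copy
idx.recover_primary()        # restore it from the mirror
```

Because duplication happens at the index level, recovery needs no chunk-level bookkeeping from the underlying platform.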

3.
As the diversity in end-user devices and networks grows, it becomes important to be able to efficiently and adaptively serve media content to different types of users. A key question surrounding adaptive media is how to do Rate-Distortion optimized scheduling. Typically, distortion is measured with a single distortion measure, such as the Mean-Squared Error compared to the original high resolution image or video sequence. Due to the growing diversity of users with varying capabilities such as different display sizes and resolutions, we introduce Multiple Distortion Measures (MDM) to account for a diverse range of users and target devices. MDM gives a clear framework with which to evaluate the performance of media systems which serve a variety of users. Scalable coders, such as JPEG2000 and H.264/MPEG-4 SVC, allow for adaptation to be performed with relatively low computational cost. We show that accounting for MDM can significantly improve system performance; furthermore, by combining this with scalable coding, this can be done efficiently. Given these MDM, we propose an algorithm to generate embedded schedules, which enables low-complexity, adaptive streaming of scalable media packets to minimize distortion across multiple users. We show that using MDM achieves up to 4 dB gains for spatial scalability applied to images and 12 dB gains for temporal scalability applied to video.
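One common way to realise this kind of multi-user Rate-Distortion scheduling is a greedy ordering by distortion reduction per byte, summed over user classes. The sketch below uses that heuristic with made-up packet names and numbers; it illustrates the MDM idea, not the paper's embedded-schedule algorithm.

```python
# Hedged sketch: each packet reduces distortion by a different amount for
# each user class (e.g. phone vs desktop). Packets are scheduled in order
# of total distortion reduction per byte. All values are illustrative.

def mdm_schedule(packets):
    """packets: list of (name, size_bytes, {user_class: distortion_drop})."""
    def utility(p):
        _, size, drops = p
        return sum(drops.values()) / size   # total benefit per byte
    return [name for name, _, _ in sorted(packets, key=utility, reverse=True)]

packets = [
    ("base",        100, {"phone": 30.0, "desktop": 30.0}),
    ("enh-spatial", 200, {"phone":  0.0, "desktop": 25.0}),
    ("enh-detail",  150, {"phone":  5.0, "desktop": 20.0}),
]
print(mdm_schedule(packets))   # ['base', 'enh-detail', 'enh-spatial']
```

The base layer is scheduled first because it benefits every user class, which is exactly why a single distortion measure can mis-rank enhancement layers that only help some devices.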

4.
The Digital Repository of Ireland (DRI) is Ireland’s national trusted digital repository for the social and cultural, historical and contemporary data held by Irish institutions. DRI provides users with a bilingual (Irish and English) user interface at all user access levels, and provides innovative ways to process and display bilingual metadata. This article details our experience in enriching the bilingual metadata and developing the bilingual features of the repository. We present solutions to some of the linguistic and technical challenges we faced and provide recommendations to developers and archivists on how best to prepare bilingual content for contemporary archival repositories.

5.
Virtualization is a common strategy for improving the utilization of existing computing resources, particularly within data centers. However, its use for high performance computing (HPC) applications is currently limited despite its potential for both improving resource utilization as well as providing resource guarantees to its users. In this article, we systematically evaluate three major virtual machine implementations for computationally intensive HPC applications using various standard benchmarks. Using VMWare Server, Xen, and OpenVZ, we examine the suitability of full virtualization (VMWare), paravirtualization (Xen), and operating system-level virtualization (OpenVZ) in terms of network utilization, SMP performance, file system performance, and MPI scalability. We show that the operating system-level virtualization provided by OpenVZ provides the best overall performance, particularly for MPI scalability. With the knowledge gained by our VM evaluation, we extend OpenVZ to include support for checkpointing and fault-tolerance for MPI-based virtual server distributed computing.

6.
7.
The viewing of video increasingly occurs in a wide range of public and private environments via a range of static and mobile devices. The proliferation of content on demand and the diversity of the viewing situations mean that delivery systems can play a key role in introducing audiences to contextually relevant content of interest whilst maximising the viewing experience for individual viewers. However, for video delivery systems to do this, they need to take into account the diversity of the situations where video is consumed, and the differing viewing experiences that users desire to create within them. This requires an ability to identify different contextual viewing situations as perceived by users. This paper presents the results from a detailed, multi-method, user-centred field study with 11 UK-based users of video-based content. Following a review of the literature (to identify viewing situations of interest on which to focus), data collection was conducted comprising observation, diaries, interviews and self-captured video. Insights were gained into whether and how users choose to engage with content in different public and private spaces. The results identified and validated a set of contextual cues that characterise distinctive viewing situations. Four archetypical viewing situations were identified: ‘quality time’, ‘opportunistic planning’, ‘sharing space but not content’ and ‘opportunistic self-indulgence’. These can be differentiated in terms of key contextual factors: solitary/shared experiences, public/private spaces and temporal characteristics. The presence of clear contextual cues provides the opportunity for video delivery systems to better tailor content and format to the viewing situation or additionally augment video services through social media in order to provide specific experiences sensitive to both temporal and physical contexts.

8.
CDNs improve network performance and offer fast and reliable applications and services by distributing content to cache servers located close to users. The Web's growth has transformed communications and business services such that speed, accuracy, and availability of network-delivered content have become absolutely critical, both on their own terms and in terms of measuring Web performance. Proxy servers partially address the need for rapid content delivery by providing multiple clients with a shared cache location. In this context, if a requested object exists in a cache (and the cached version has not expired), clients get a cached copy, which typically reduces delivery time. CDNs act as trusted overlay networks that offer high-performance delivery of common Web objects, static data, and rich multimedia content by distributing content load among servers that are close to the clients. CDN benefits include reduced origin server load, reduced latency for end users, and increased throughput. CDNs can also improve Web scalability and disperse flash-crowd events. Here we offer an overview of the CDN architecture and popular CDN service providers.
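The proxy-cache rule described above (serve a cached copy only if it exists and has not expired, otherwise fetch from the origin) can be sketched in a few lines. The `fetch_origin` callable is a stand-in for a real HTTP fetch, and the TTL-based freshness check is a simplification of real HTTP caching rules.

```python
import time

# Minimal TTL cache sketch of the shared proxy cache described above.
class ProxyCache:
    def __init__(self, ttl_seconds, fetch_origin, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.fetch = fetch_origin
        self.clock = clock
        self.store = {}   # url -> (body, fetched_at)

    def get(self, url):
        hit = self.store.get(url)
        if hit and self.clock() - hit[1] < self.ttl:
            return hit[0]                      # fresh cached copy
        body = self.fetch(url)                 # miss or expired: go to origin
        self.store[url] = (body, self.clock())
        return body

calls = []
cache = ProxyCache(60, lambda u: calls.append(u) or "origin:" + u)
cache.get("/a")
cache.get("/a")
print(len(calls))   # 1: the second request was served from the cache
```

Sharing this cache among many clients is what reduces origin load and delivery time; CDNs generalise the idea by placing such caches near users.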

9.
Performance Analysis of the COSMOS File System
Du Cong, Xu Zhiwei. Chinese Journal of Computers, 2001, 24(7): 702-709
The COSMOS file system provides a single system image, strict UNIX semantics, and application binary compatibility with UNIX file systems. The system introduces no centralized server bottleneck: all data, metadata, and directory files are distributed across the entire system to provide high performance and good scalability. Tests show that the COSMOS file system achieves good system bandwidth and overall performance, and that it scales well. The paper discusses the key factors affecting system performance and scalability and, based on the authors' implementation experience and test data, identifies performance bottlenecks in the existing system and proposes improvements.

10.
The role of modeling in the performance testing of e-commerce applications
An e-commerce scalability case study is presented in which both traditional performance testing and performance modeling were used to help tune the application for high performance. This involved the creation of a system simulation model as well as the development of an approach for test case generation and execution. We describe our experience using a simulation model to help diagnose production system problems, and discuss ways that the effectiveness of performance testing efforts was improved by its use.

11.
Wikis, being major applications of the Web 2.0, are used for a large number of purposes, such as encyclopedias, project documentation, and coordination, both in open communities and in enterprises. At the application level, users are targeted as both consumers and producers of dynamic content. Yet, this kind of peer-to-peer (P2P) principle is not used at the technical level, which is still dominated by traditional client-server architectures. What is lacking is a generic platform that combines the scalability of the P2P approach with, for example, a wiki's requirements for consistent content management in a highly concurrent environment. This paper presents a flexible content repository system that is intended to close the gap by using a hybrid P2P overlay to support scalable, fault-tolerant, consistent, and efficient data operations for the dynamic content of wikis. On the one hand, this paper introduces the generic, overall architecture of the content repository. On the other hand, it describes the major building blocks that enable P2P data management at the system's persistent storage layer, and how these may be used to implement a P2P-based wiki application: (i) a P2P back-end administrates a wiki's actual content resources; (ii) on top, P2P service groups act as indexing groups to implement a wiki's search index. Copyright © 2009 John Wiley & Sons, Ltd.

12.
Modern scientific applications often need to be distributed across Grids. Increasingly, applications rely on services, such as job submission, data transfer or data portal services. We refer to such services as Grid services. While the invocation of Grid services could be hard coded in theory, scientific users want to orchestrate service invocations more flexibly. In enterprise applications, the orchestration of web services is achieved using emerging orchestration standards, most notably the Business Process Execution Language (BPEL). We describe our experience in orchestrating scientific workflows using BPEL. We have gained this experience during an extensive case study that orchestrates Grid services for the automation of a polymorph prediction application. Using this example, we explain the extent to which the BPEL language supports the definition of scientific workflows. We then describe the reliability, performance and scalability that can be achieved by executing a complex scientific workflow with ActiveBPEL, an industrial strength but freely available BPEL engine. *The work has been funded by the UK EPSRC through grants GR/R97207/01 (e-Materials) and GR/S90843/01 (OMII Managed Programme).

13.
Continuous media, such as digital movies, video clips, and music, are becoming an increasingly common way to convey information, entertain and educate people. However, limited system and network resources have delayed the widespread usage of continuous media. Most existing on-demand services are not scalable for a large content repository. In this paper, we propose a scalable and inexpensive video delivery scheme, named Scheduled Video Delivery (SVD). In the SVD scheme, users submit requests with a specified start time. Incentives are provided so that users will specify start times that reflect their real needs. The SVD system combines requests to form multicasting groups and schedules these groups to meet the deadline. SVD scheduling has a different objective from many existing scheduling schemes: its focus shifts from minimizing the waiting time toward meeting deadlines while combining requests to form multicasting groups. SVD complements most existing video delivery schemes, as it can be combined with them, and it requires far fewer resources than other schemes. A simulation study of SVD performance and a comparison with other schemes are presented.
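The request-combining step can be sketched as follows: requests that name the same video and have start times within one delivery window are merged into a single multicast group, scheduled at the earliest requested start. The window size is an illustrative parameter, not a value from the paper.

```python
# Hedged sketch of SVD-style request grouping: same video + start times
# within `window` seconds -> one multicast group. Illustrative only.

def svd_groups(requests, window=300):
    """requests: list of (video, start_time_s); returns multicast groups."""
    groups = []
    for video, start in sorted(requests, key=lambda r: (r[0], r[1])):
        g = groups[-1] if groups else None
        if g and g["video"] == video and start - g["start"] <= window:
            g["members"].append(start)   # join the existing group
        else:
            groups.append({"video": video, "start": start, "members": [start]})
    return groups

reqs = [("movie-a", 0), ("movie-a", 120), ("movie-a", 900), ("movie-b", 60)]
print(len(svd_groups(reqs)))   # 3: two groups for movie-a, one for movie-b
```

Each merged group is then served by a single multicast stream, which is where the resource savings over per-request unicast delivery come from.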

14.
The proliferation of cloud services and other forms of service-oriented computing continues to accelerate. Alongside this development is an ever-increasing need for storage within the data centres that host these services. Management applications used by cloud providers to configure their infrastructure should ideally operate in terms of high-level policy goals, and not burden administrators with the details presented by particular instances of storage systems. One common technology used by cloud providers is the Storage Area Network (SAN). Support for seamless scalability is engineered into SAN devices. However, SAN infrastructure has a very large parameter space: their optimal deployment is a difficult challenge, and subsequent management in cloud storage continues to be difficult. In this article, we discuss our work in SAN configuration middleware, which aims to provide users of large-scale storage infrastructure such as cloud providers with tools to assist them in their management and evolution of heterogeneous SAN environments. We propose a middleware rather than a stand-alone tool so that the middleware can be a proxy for interacting with, and informing, a central repository of SAN configurations. Storage system users can have their SAN configurations validated against a knowledge base of best practices that are contained within the central repository. Desensitized information is exported from local management applications to the repository, and the local middleware can subscribe to updates that proactively notify storage users should particular configurations come to be considered sub-optimal or unsafe. Copyright © 2011 John Wiley & Sons, Ltd.

15.
Content management system (CMS) is an infrastructure for efficient distribution, organization, and delivery of digital content. It is desirable that content be successfully delivered regardless of the end user's location or attachment network. For the end-to-end delivery of content, a virtual open content delivery infrastructure is formed by interconnecting several CDNs. In this paper, we focus on content delivery network interconnection. An efficient and readily implementable hierarchical CDNI architecture, named HCDNI, is proposed to reduce the limitations of CDNIs. Next, a content distribution and redistribution scheme is proposed so that the search time and the round-trip time for content delivery can be minimized. We then present a reliable and fault-tolerant scheme for web server replica placement and content caching. Finally, analysis and simulation studies show that the proposed algorithm results in a significant improvement in terms of data routing, path selection, content distribution and redistribution, load balancing, and network scalability.

16.
17.
We consider on-line construction of the suffix tree for a parameterized string, where we always have the suffix tree of the input string read so far. This situation often arises from source code management systems where, for example, a source code repository is gradually increasing in its size as users commit new codes into the repository day by day. We present an on-line algorithm which constructs a parameterized suffix tree in randomized O(n) time, where n is the length of the input string. Our algorithm is the first randomized linear time algorithm for the on-line construction problem.
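Parameterized suffix trees are typically built over the standard prev-encoding of a parameterized string (due to Baker): each parameter symbol is replaced by the distance to its previous occurrence (0 for the first), so two strings match iff they are identical up to a consistent renaming of parameters. The sketch below shows that encoding only; it is not the paper's randomized on-line construction.

```python
# Hedged sketch of Baker's prev-encoding for parameterized strings:
# parameter symbols become the distance to their previous occurrence
# (0 if first), static symbols pass through unchanged.

def prev_encode(s, params):
    """Encode string s, treating characters in `params` as parameters."""
    last = {}
    out = []
    for i, ch in enumerate(s):
        if ch in params:
            out.append(i - last[ch] if ch in last else 0)
            last[ch] = i
        else:
            out.append(ch)   # static symbols are kept as-is
    return out

print(prev_encode("xyxy", {"x", "y"}))                                  # [0, 0, 2, 2]
print(prev_encode("xyxy", {"x", "y"}) == prev_encode("abab", {"a", "b"}))  # True
```

Because consistently renamed identifiers yield the same encoding, this representation is what makes parameterized suffix trees useful for detecting renamed duplicate code in repositories.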

18.
IT Professional, 2001, 3(2): 41-45
Systems based on a proper public-key infrastructure (PKI) architecture offer the missing trust and interoperability necessary for e-commerce expansion. As the number of services available to users continues to increase, so will the need to maintain the user's identity in a secured, trusted manner. The user name and password concept has worked thus far but lacks the portability and scalability that global e-commerce demands. An interoperable PKI system that offers trust services between users will become a common industry practice. We envision that the future global e-commerce system should work with various devices, from desktops to handheld computers. Eventually one certificate will represent an individual across multiple services and devices.

19.
To improve the efficiency and quality of a service, a network operator may consider deploying a peer-to-peer architecture among controlled peers, also called here nano data centers, which contrast with the churn and resource heterogeneity of peers in uncontrolled environments. In this paper, we consider a prevalent peer-to-peer application: live video streaming. We demonstrate how nano data centers can take advantage of the self-scaling property of a peer-to-peer architecture while significantly improving the quality of a live video streaming service, allowing smaller delays and fast channel switching. We introduce the branching architecture for nano data centers (BAND), where a user can “pull” content from a channel of interest, or content can be “pushed” to it for relaying to other interested users. We prove that there exists an optimal trade-off point between minimizing the number of pushes, or the number of relaying nodes, and maintaining a robust topology as the number of channels and users grows large, which allows scalability. We analyze the performance of content dissemination as users switch between channels, creating migration of nodes in the tree, while flow control ensures continuity of data transmission. We prove that this P2P architecture guarantees throughput independent of the group size. Analysis and evaluation of the model demonstrate that pushing content to a small number of relay nodes can yield significant performance gains in throughput, start-up time, playback lag, and channel switching delay.

20.
Representation components of user modeling servers have been traditionally based on simple file structures and database systems. We propose directory systems as an alternative, which offer numerous advantages over the more traditional approaches: international vendor-independent standardization, demonstrated performance and scalability, dynamic and transparent management of distributed information, built-in replication and synchronization, a rich number of pre-defined types of user information, and extensibility of the core representation language for new information types and for data types with associated semantics. Directories also allow for the virtual centralization of distributed user models and their selective centralized replication if better performance is needed. We present UMS, a user modeling server that is based on the Lightweight Directory Access Protocol (LDAP). UMS allows for the representation of different models (such as user and usage profiles, and system and service models), and for the attachment of arbitrary components that perform user modeling tasks upon these models. External clients such as user-adaptive applications can submit and retrieve information about users. We describe a simulation experiment to test the runtime performance of this server, and present a theory of how the parameters of such an experiment can be derived from empirical web usage research. The results show that the performance of UMS meets the requirements of current small and medium websites already on very modest hardware platforms, and those of very large websites in an entry-level business server configuration. The UMUAI managing editor for this paper was Sandra Carberry, University of Delaware.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
