Similar Literature
20 similar records found.
1.
Running applications in the cloud efficiently requires much more than deploying software in virtual machines. Cloud applications have to be continuously managed: (1) to adjust their resources to the incoming load and (2) to face transient failures, replicating and restarting components to provide resiliency on unreliable infrastructure. Continuous management monitors application and infrastructural metrics to provide automated and responsive reactions to failures (health management) and changing environmental conditions (auto-scaling), minimizing human intervention. In current practice, management functionalities are provided as infrastructural or third-party services. In both cases they are external to the application deployment. We claim that this approach has intrinsic limits, namely that separating management functionalities from the application prevents them from naturally scaling with the application and requires additional management code and human intervention. Moreover, using infrastructure-provider services for management functionalities results in vendor lock-in, effectively preventing cloud applications from adapting to and running on the most effective cloud for the job. In this paper we discuss the main characteristics of cloud-native applications, propose a novel architecture that enables scalable and resilient self-managing applications in the cloud, and report on our experience in porting a legacy application to the cloud by applying cloud-native principles.
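To make the self-management idea above concrete, the following is a minimal sketch of a combined health-management and auto-scaling control loop. It is an illustration only, not the architecture proposed in the paper; the Metrics fields, the cluster methods (observe, restart_failed, scale_to) and the thresholds are hypothetical.

```python
# Minimal sketch of a combined health-management / auto-scaling loop.
# All names (Metrics, cluster methods, thresholds) are hypothetical.
import time
from dataclasses import dataclass

@dataclass
class Metrics:
    cpu_utilization: float      # average CPU load across replicas, 0.0-1.0
    healthy_replicas: int       # replicas passing their health checks
    desired_replicas: int       # replicas we currently want running

def reconcile(cluster, scale_up_at=0.8, scale_down_at=0.3):
    """One iteration of the management loop: heal first, then scale."""
    m: Metrics = cluster.observe()

    # Health management: replace failed replicas before making scaling decisions.
    if m.healthy_replicas < m.desired_replicas:
        cluster.restart_failed(m.desired_replicas - m.healthy_replicas)
        return

    # Auto-scaling: react to sustained load changes.
    if m.cpu_utilization > scale_up_at:
        cluster.scale_to(m.desired_replicas + 1)
    elif m.cpu_utilization < scale_down_at and m.desired_replicas > 1:
        cluster.scale_to(m.desired_replicas - 1)

def run(cluster, interval_s=30):
    while True:
        reconcile(cluster)
        time.sleep(interval_s)
```

Running this loop inside the application itself, rather than in an external service, is what lets the management logic scale together with the managed components.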

2.
3.
Computational clouds are increasingly popular for provisioning computing resources and services on demand. As the backbone of computational clouds, sets of applications are configured on virtual machines running on large numbers of server machines in data center networks (DCNs). Currently, DCNs use tree-based architectures, which suffer from limited bandwidth capacity and low server utilization. This calls for a new design of scalable and inexpensive DCN infrastructure that enables high-speed interconnection for an exponentially increasing number of client devices and provides fault tolerance and high network capacity. In this paper, we propose a novel DCN architecture that uses the Sierpinski triangle fractal to mitigate the throughput bottleneck that accumulates in the aggregation layers of tree-based structures. The Sierpinski Triangle Based (STB) architecture is fault-tolerant and provides at least two parallel paths for each pair of servers. The proposed architecture is evaluated in NS-2 simulations, and its performance is validated by comparing the results with the DCell and BCube DCN architectures. Theoretical analysis and simulation results verify that the proportion of switches to servers is 0.167 in STB, lower than in BCube (3.67); the average shortest path length stays between 5.0 and 6.7 while the node-failure proportion ranges from 0.02 to 0.2, shorter than in DCell and BCube for a four-level architecture. Network throughput is also higher in STB, which takes 87 s to transfer the data, less than DCell and BCube under the same conditions. The simulation results confirm the suitability of the STB-based DCN architecture for data centers in computational clouds.
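As a rough illustration of the fractal wiring pattern only (not the actual STB placement of servers and switches), the sketch below builds a Sierpinski-gasket graph recursively; because every link lies on a triangle, the graph has no bridges, which mirrors the parallel-path property claimed above.

```python
# Recursive construction of a Sierpinski-gasket graph (illustrative only).
from fractions import Fraction

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def sierpinski_edges(a, b, c, level, edges):
    """Collect the links of a Sierpinski-gasket graph; level 0 is one triangle."""
    if level == 0:
        edges.update({frozenset((a, b)), frozenset((b, c)), frozenset((a, c))})
        return
    ab, bc, ac = midpoint(a, b), midpoint(b, c), midpoint(a, c)
    sierpinski_edges(a, ab, ac, level - 1, edges)
    sierpinski_edges(ab, b, bc, level - 1, edges)
    sierpinski_edges(ac, bc, c, level - 1, edges)

def build(level):
    edges = set()
    corners = ((Fraction(0), Fraction(0)), (Fraction(1), Fraction(0)),
               (Fraction(1, 2), Fraction(1)))
    sierpinski_edges(*corners, level, edges)
    nodes = {v for e in edges for v in e}
    return nodes, edges

nodes, edges = build(3)
print(len(nodes), "nodes,", len(edges), "links")
# Every link lies on a triangle, so there are no bridges: any two nodes are
# joined by at least two edge-disjoint paths, mirroring the fault-tolerance
# property the abstract highlights.
```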

4.
For wireless multi-interface, multi-channel mesh networks, two multipath routing models are compared and analyzed. Based on the characteristics of wireless devices supporting multiple interfaces and channels, a more efficient routing scheme is proposed. Using the proposed interface-assignment strategy, the scheme establishes, from the source node to the destination node, multiple routes that can operate simultaneously over different interfaces and channels, plus multiple non-overlapping backup routes, which greatly improves network performance. It fully exploits the performance advantages of multi-interface, multi-channel mobile devices and fits the topological characteristics of mesh networks well. Simulation tests show that the scheme delivers better performance in end-to-end delay, network throughput, and other metrics.
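A hedged sketch of the two ideas in this abstract, interface/channel assignment along a route and selection of a non-overlapping backup path, is given below; the data structures and the round-robin channel rule are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative interface/channel assignment and disjoint-backup selection.
import itertools

def assign_channels(path, channels):
    """Map each hop (u, v) of a path to a channel, avoiding reuse on
    consecutive hops so adjacent links do not interfere."""
    hops = list(zip(path, path[1:]))
    cycle = itertools.cycle(channels)
    return {hop: next(cycle) for hop in hops}

def disjoint_backup(paths):
    """Pick a primary path and the first backup that shares no intermediate
    node with it (node-disjoint except for source and destination)."""
    primary = min(paths, key=len)
    interior = set(primary[1:-1])
    for p in paths:
        if p is not primary and interior.isdisjoint(p[1:-1]):
            return primary, p
    return primary, None

paths = [["S", "A", "B", "D"], ["S", "C", "D"], ["S", "A", "C", "D"]]
primary, backup = disjoint_backup(paths)
print(primary, backup)
print(assign_channels(primary, channels=[1, 6, 11]))
```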

5.
6.
Demanding ever-increasing throughput and processing power, space applications push the outer limits of conventional pattern-recognition technology. Optical correlators offer a siren's song of potential advantages over all-electronic devices. Realizing these advantages, however, will be difficult because of limitations of current spatial light modulators, key elements in an optical correlator. We propose a promising architecture that may overcome these limitations.
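For readers unfamiliar with optical correlation, the sketch below reproduces the matched-filter mathematics of a 4f correlator numerically with FFTs; it says nothing about the spatial-light-modulator architecture the paper proposes, and the scene/reference arrays are made-up test data.

```python
# Numerical analogue of a 4f optical correlator: correlate a scene with a
# reference pattern in the frequency domain and locate the correlation peak.
import numpy as np

def correlate(scene: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Cross-correlation via the frequency domain, as in a matched filter."""
    filt = np.conj(np.fft.fft2(reference, s=scene.shape))   # matched filter
    return np.abs(np.fft.ifft2(np.fft.fft2(scene) * filt))

scene = np.zeros((64, 64)); scene[20:24, 30:34] = 1.0        # small square target
reference = np.ones((4, 4))                                   # pattern to find
peak = np.unravel_index(correlate(scene, reference).argmax(), scene.shape)
print("correlation peak at", peak)                            # near the target's location
```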

7.
InfiniBand (IB) is a high-speed, channel-based interconnect technology between systems and devices. It removes the bottleneck limiting the performance of current servers and storage devices. This article proposes an IP over InfiniBand (IPoIB) architecture and its application. It functions as a bridge between the IP and IB architectures in systems where IP and IB coexist, enabling applications to take advantage of IB without changes to IP and other upper-layer protocols. We also introduce the IPoIB protocol and present some experimental results. The results show that IPoIB is suitable for the front-end servers and interface devices of InfiniBand-based network servers.

8.
9.
In recent times, Internet of Things (IoT) applications, including smart transportation, smart healthcare, smart grid, smart city, etc., generate a large volume of real-time data for decision making. In the past decades, real-time sensory data have been offloaded to centralized cloud servers for data analysis through a reliable communication channel. However, due to the long communication distance between end users and centralized cloud servers, the chances of network congestion, data loss, high latency, and high energy consumption increase significantly. To address these challenges, fog computing has emerged as a distributed environment that extends computation and storage facilities to the edge of the network. Compared to centralized cloud infrastructure, a distributed fog framework can support delay-sensitive IoT applications with minimal latency and energy consumption while analyzing the data on a set of resource-constrained fog/edge devices. Our survey therefore covers the layered IoT architecture, evaluation metrics, and application aspects of fog computing and its progress over the last four years. Furthermore, the layered architecture of the standard fog framework and different state-of-the-art techniques for utilizing the computing resources of fog networks are covered in this study. Moreover, we include an IoT use-case scenario to demonstrate fog data offloading and resource provisioning in heterogeneous vehicular fog networks. Finally, we examine various challenges and potential solutions for establishing interoperable communication and computation for next-generation IoT applications in fog networks.
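The offloading decision sketched below illustrates the trade-off discussed above: estimate completion time on the local device, a fog node, and the remote cloud, and pick the cheapest candidate that meets the deadline. The cost model, the Site fields, and all numbers are hypothetical, not taken from the survey.

```python
# Latency-aware offloading decision among device, fog, and cloud (illustrative).
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cpu_cycles_per_s: float   # processing speed
    uplink_bps: float         # bandwidth from the device to this site
    rtt_s: float              # round-trip propagation latency

def completion_time(task_cycles, task_bits, site: Site) -> float:
    transfer = task_bits / site.uplink_bps if site.uplink_bps else 0.0
    compute = task_cycles / site.cpu_cycles_per_s
    return site.rtt_s + transfer + compute

def choose_site(task_cycles, task_bits, deadline_s, sites):
    costs = [(completion_time(task_cycles, task_bits, s), s) for s in sites]
    feasible = [(t, s) for t, s in costs if t <= deadline_s]
    return min(feasible, key=lambda ts: ts[0], default=(None, None))[1]

sites = [
    Site("device", 1e9,   0,    0.0),    # execute locally, no transfer
    Site("fog",    10e9,  50e6, 0.005),
    Site("cloud",  100e9, 20e6, 0.080),
]
print(choose_site(task_cycles=2e9, task_bits=8e6, deadline_s=0.5, sites=sites))
```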

10.
Design of a cognition-enhanced wireless sensor node
To improve the throughput and spectrum utilization of wireless sensor nodes and their networks, this work introduces cognitive methods at the node-design level to enhance the node's spectrum-sensing capability, and designs a multi-radio-interface, cognition-enhanced wireless sensor node based on a low-power, high-speed processor. The node uses an STR911-series ARM9 microprocessor and has four radio interfaces covering the ISM and ZigBee bands. Experimental results show that, compared with a node based on the Atmega128 processor, this node has stronger cognitive capability: throughput is improved by 68.41% and the average channel-sensing delay is shortened by 1.4782 ms. Compared with CSMA/CA, communication delay is shortened by 11.86%, and the link-layer control scheme effectively prevents interference from disrupting inter-node communication.
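A minimal energy-detection sketch of the cognitive channel-selection idea follows: sample each candidate channel, discard busy ones, and transmit on the quietest free channel. The sample_rssi function is a hypothetical stand-in for the node's radio driver, not the hardware described in the paper.

```python
# Energy-detection-based channel selection for a cognitive sensor node (sketch).
import random
import statistics

CHANNELS = [11, 15, 20, 26]   # e.g. IEEE 802.15.4 channels in the 2.4 GHz ISM band

def sample_rssi(channel, n=16):
    """Placeholder radio driver: return n RSSI samples (dBm) for a channel."""
    return [random.gauss(-85 if channel != 15 else -60, 3) for _ in range(n)]

def sense(channels, busy_threshold_dbm=-75):
    readings = {ch: statistics.mean(sample_rssi(ch)) for ch in channels}
    free = {ch: rssi for ch, rssi in readings.items() if rssi < busy_threshold_dbm}
    # Choose the free channel with the lowest average energy (least interference).
    return min(free, key=free.get) if free else None

print("selected channel:", sense(CHANNELS))
```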

11.
Server performance is one of the critical concerns in the data grid environment. A large number of applications require access to huge volumes of data on grid servers, which calls for efficient, scalable and robust grid servers that can handle many concurrent large-file transfers. In this paper, we analyze the bottlenecks of our grid servers and introduce user-space I/O scheduling, zero copy, and an event-driven architecture to improve their performance. User-space I/O scheduling saves almost 50% of the I/O time when transferring a huge number of small files. Zero copy lets the grid servers eliminate the CPU consumption of copying data between kernel and user space and cuts context-switch time by 63%. The event-driven architecture saves 30% of the CPU usage needed to reach the best performance of a thread-driven architecture. Combining these three optimizations in our grid servers, the full-load throughput of the system is 30% higher than traditional solutions while consuming only 60% of the CPU.
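Two of the optimizations named above, an event-driven accept loop and zero-copy file transfer, can be illustrated in plain Python with selectors and os.sendfile; the sketch below is not the grid server's code, the served file name is hypothetical, and the user-space I/O scheduling layer is omitted.

```python
# Event-driven server that streams a file with zero-copy sendfile (sketch).
import os
import selectors
import socket

sel = selectors.DefaultSelector()
FILE_TO_SERVE = "payload.bin"   # hypothetical file name

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_WRITE, data={"offset": 0})

def send_chunk(conn, state):
    with open(FILE_TO_SERVE, "rb") as f:
        # sendfile copies straight from the page cache to the socket buffer,
        # skipping the usual kernel->user->kernel copies.
        sent = os.sendfile(conn.fileno(), f.fileno(), state["offset"], 1 << 20)
    if sent == 0:                      # whole file delivered
        sel.unregister(conn)
        conn.close()
    else:
        state["offset"] += sent

def serve(port=9000):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen()
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ, data=None)
    while True:
        for key, _ in sel.select():
            if key.data is None:
                accept(key.fileobj)
            else:
                send_chunk(key.fileobj, key.data)

if __name__ == "__main__":
    serve()
```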

12.
The multiple-input multiple-output (MIMO) architecture, which supports smart antennas and MIMO links, is now a popular technique for exploiting multi-path propagation, spatial multiplexing, and diversity gain to provide high spectral efficiency and performance improvements in wireless ad hoc networks. In this work, we propose a new multi-path on-demand quality-of-service (QoS) routing architecture for MIMO ad hoc networks, shaped like a bow and therefore called the bow structure. A bow-based MIMO ad hoc network routing protocol, named BowQR, is also proposed to support QoS requirements and to improve transmission efficiency. Each bow structure is composed of rate-links and/or range-links formed on demand to provide multi-path routing and satisfy the bandwidth requirement. The two types of transmission links, the rate-link and the range-link, exploit spatial multiplexing and spatial diversity to provide extremely high spectral efficiency and to increase the transmission range. Finally, simulation results show that our BowQR protocol achieves performance improvements in throughput, success rate, and average latency.

13.
Emerging applications like C3I systems, real-time databases, data acquisition systems and multimedia servers require access to secondary storage devices under timing constraints. In this paper, we focus on operating system support needed for managing real-time disk traffic with hard deadlines. We present the design and implementation of a preemptive deadline-driven disk I/O subsystem suitable for real-time disk traffic management. Preemptibility is achieved with a granularity that is automatically controlled by the I/O subsystem according to current workload and filesystem data layout. An admission control test checks the current resource availability for a given workload. We show that contiguous layout is necessary to maintain hard real-time guarantees and a reasonable level of disk throughput. Finally, we show how buffering can be used to obtain utilization factors close to the maximum disk bandwidth possible.
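A hedged sketch of a deadline-driven (EDF) request queue with a utilization-based admission test is shown below; the admission rule is a textbook approximation, whereas the paper's test also accounts for data layout and preemption granularity.

```python
# Earliest-deadline-first disk queue with a simple admission test (sketch).
import heapq

class EdfDiskQueue:
    def __init__(self, disk_bandwidth_bps):
        self.bandwidth = disk_bandwidth_bps
        self.queue = []                  # (absolute_deadline, seq, request)
        self._seq = 0
        self.utilization = 0.0

    def admit(self, bytes_per_period, period_s):
        """Admit a periodic real-time stream only if total demand stays
        within the disk's sustainable bandwidth."""
        demand = (bytes_per_period / period_s) / self.bandwidth
        if self.utilization + demand > 1.0:
            return False
        self.utilization += demand
        return True

    def submit(self, request, deadline_s):
        heapq.heappush(self.queue, (deadline_s, self._seq, request))
        self._seq += 1

    def next_request(self):
        """Dispatch the request with the earliest deadline (EDF order)."""
        return heapq.heappop(self.queue)[2] if self.queue else None

q = EdfDiskQueue(disk_bandwidth_bps=100e6)
print(q.admit(bytes_per_period=25e6, period_s=1.0))   # True: 25% utilization
q.submit("read block 42", deadline_s=0.010)
q.submit("read block 7",  deadline_s=0.004)
print(q.next_request())                                # earliest deadline first
```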

14.
A load-balanced cluster is an abstraction for a set of servers configured to share the workload, where each server of the cluster hosts the same set of applications or services. The process of application deployment is laborious, and efficient cluster management is essential. Existing techniques allow automation of the initial deployment and dynamic scaling; however, after the initial deployment, they do not ensure the consistency of cluster members. This is important, as change is inevitable in the life of a software application: a new application may need to be deployed to an existing cluster, or an existing application may need to be upgraded. In this paper, we propose a Model-View-Controller-based Self-Adjusting Cluster Framework (SACF) that enables automatic deployment, automatic upgrades, and cluster-member consistency.
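The reconciliation idea, comparing each member's installed applications against the desired state and redeploying whatever has drifted, can be sketched as follows; the member interface (installed_versions, deploy, undeploy) is a hypothetical stand-in for the framework's controller API.

```python
# Reconciling cluster members against a desired application/version map (sketch).
def reconcile_cluster(desired: dict, members) -> None:
    """desired maps application name -> version every member should run."""
    for member in members:
        actual = member.installed_versions()          # e.g. {"billing": "1.2"}
        for app, version in desired.items():
            if actual.get(app) != version:
                # Member has drifted (missing app or stale version): redeploy.
                member.deploy(app, version)
        for app in set(actual) - set(desired):
            member.undeploy(app)                      # remove apps no longer wanted

class InMemoryMember:
    """Toy member used only to demonstrate the loop."""
    def __init__(self, apps=None):
        self.apps = dict(apps or {})
    def installed_versions(self):
        return dict(self.apps)
    def deploy(self, app, version):
        self.apps[app] = version
    def undeploy(self, app):
        self.apps.pop(app, None)

cluster = [InMemoryMember({"billing": "1.1"}), InMemoryMember()]
reconcile_cluster({"billing": "1.2", "web": "2.0"}, cluster)
print([m.apps for m in cluster])   # both members now run billing 1.2 and web 2.0
```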

15.
Healing Web applications through automatic workarounds
We develop the notion of automatic workaround in the context of Web applications. A workaround is a sequence of operations, applied to a failing component, that is equivalent to the failing sequence in terms of its intended effect, but that does not result in a failure. We argue that workarounds exist in modular systems because components often offer redundant interfaces and implementations, which in turn admit several equivalent sequences of operations. In this paper, we focus on Web applications because these are good and relevant examples of component-based (or service-oriented) applications. Web applications also have attractive technical properties that make them particularly amenable to the deployment of automatic workarounds. We propose an architecture where a self-healing proxy applies automatic workarounds to a Web application server. We also propose a method to generate equivalent sequences and to represent and select them at run time as automatic workarounds. We validate the proposed architecture in four case studies in which we deploy automatic workarounds to handle four known failures in the popular Flickr and Google Maps Web applications. This work has been supported by the project PerSeoS funded by the Swiss National Fund.
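The proxy's core behavior can be sketched as follows: when a sequence of operations fails, retry known equivalent sequences against the same component. The equivalence table is hand-written here, whereas the paper generates and selects equivalent sequences automatically at run time.

```python
# Self-healing proxy that retries equivalent operation sequences (sketch).
class WorkaroundProxy:
    def __init__(self, component, equivalences):
        self.component = component
        # Maps a sequence of (operation, args) pairs to alternative sequences
        # with the same intended effect, e.g.
        #   (("resize", (200, 200)), ("crop", (10, 10))) ->
        #   [(("crop", (10, 10)), ("resize", (200, 200)))]
        self.equivalences = equivalences

    def run(self, sequence):
        candidates = [sequence] + self.equivalences.get(sequence, [])
        last_error = None
        for candidate in candidates:
            try:
                result = None
                for op, args in candidate:
                    result = getattr(self.component, op)(*args)
                return result            # this candidate did not fail
            except Exception as err:     # failure: fall through to a workaround
                last_error = err
        raise last_error                 # no equivalent sequence worked
```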

16.
Context: Most companies, independently of their size and activity type, are facing the problem of managing, maintaining and/or replacing (part of) their existing software systems. These legacy systems are often large applications playing a critical role in the company's information system and with a non-negligible impact on its daily operations. Improving their comprehension (e.g., architecture, features, enforced rules, handled data) is a key point when dealing with their evolution/modernization. Objective: The process of obtaining useful higher-level representations of (legacy) systems is called reverse engineering (RE), and remains a complex goal to achieve. So-called Model Driven Reverse Engineering (MDRE) has been proposed to enhance more traditional RE processes. However, generic and extensible MDRE solutions potentially addressing several kinds of scenarios relying on different legacy technologies are still missing or incomplete. This paper proposes to make a step in this direction. Method: MDRE is the application of Model Driven Engineering (MDE) principles and techniques to RE in order to generate relevant model-based views on legacy systems, thus facilitating their understanding and manipulation. In this context, MDRE is practically used in order to (1) discover initial models from the legacy artifacts composing a given system and (2) understand (process) these models to generate relevant views (i.e., derived models) on this system. Results: Capitalizing on the different MDRE practices and our previous experience (e.g., in real modernization projects), this paper introduces and details the MoDisco open source MDRE framework. It also presents the underlying MDRE global methodology and architecture accompanying this proposed tooling. Conclusion: MoDisco is intended to make easier the design and building of model-based solutions dedicated to legacy systems RE. As an empirical evidence of its relevance and usability, we report on its successful application in real industrial projects and on the concrete experience we gained from that.
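The two MDRE phases, (1) discovering an initial model from legacy artifacts and (2) processing it into derived views, can be illustrated with a toy example that uses Python's ast module as a stand-in parser; MoDisco itself targets Java/EMF models, and nothing below reflects its actual APIs.

```python
# Toy illustration of MDRE's discover/understand phases using Python's ast.
import ast

def discover_model(source: str) -> dict:
    """Phase 1: build a raw, low-level model of the code (classes and methods)."""
    tree = ast.parse(source)
    model = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            model[node.name] = [n.name for n in node.body
                                if isinstance(n, ast.FunctionDef)]
    return model

def derive_view(model: dict) -> list:
    """Phase 2: derive a higher-level view, e.g. classes ranked by API size."""
    return sorted(model, key=lambda cls: len(model[cls]), reverse=True)

legacy_source = """
class OrderService:
    def create(self): pass
    def cancel(self): pass
class AuditLog:
    def append(self): pass
"""
model = discover_model(legacy_source)
print(model)                  # {'OrderService': ['create', 'cancel'], 'AuditLog': ['append']}
print(derive_view(model))     # ['OrderService', 'AuditLog']
```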

17.
Cyber-physical convergence, the fast expansion of the Internet at its edge, and tighter interactions between human users and their personal mobile devices push toward an Internet where the human user becomes more central than ever, and where personal devices become the user's proxies in the cyber world, in addition to acting as a fundamental tool for sensing the physical world. The current Internet paradigm, which is infrastructure-centric, is not the right one to cope with such an emerging scenario and its wider range of applications. This calls for a radically new Internet paradigm, which we name the Internet of People (IoP), where humans and their personal devices are not seen merely as end users of applications but become active elements of the Internet. Note that IoP is not a replacement for the current Internet infrastructure; rather, it exploits legacy Internet services as (reliable) primitives to achieve end-to-end connectivity on a global scale. In this visionary paper, we first discuss the key features of the IoP paradigm along with the underlying research issues and challenges. Then we present emerging networking and computing paradigms that are anticipating IoP.

18.
Until recently, research on cellular networks concentrated only on single-hop cellular networks. The demand for high throughput has driven the move toward architectures that use multiple hops in the presence of infrastructure. We propose an architecture for multihop cellular networks (MCNs). MCNs combine the benefits of having a fixed infrastructure of base stations with the flexibility of ad hoc networks, and are capable of achieving much higher throughput than current cellular systems, which can be classified as single-hop cellular networks (SCNs). In this work, we propose an extended architecture for MCNs using the IEEE 802.11 standard for wireless LANs for connectionless service and a TDMA-based solution for real-time support. We provide a general overview of the architecture and the issues involved in the design of MCNs, in particular the challenges to be met in the design of a routing protocol, a channel-assignment scheme, and a mobility-management scheme. We also propose a routing protocol called Base-Assisted Ad hoc Routing (BAAR) for use in such networks and a model for the performance analysis of MCNs and SCNs. We also conduct extensive experimental studies of the performance of MCNs and SCNs under various load (TCP, UDP, and real-time sessions) and mobility conditions. These studies clearly indicate that MCNs with the proposed architecture and routing protocol are viable alternatives to SCNs; in fact, they provide much higher throughput. MCNs are very attractive for best-effort packet radio, where they can achieve an increase in throughput of up to a factor of four compared to similar SCNs. For real-time traffic, even though they still outperform SCNs, they suffer from a few disadvantages such as frequent hand-offs and throughput degradation at high mobility. We also present results from a detailed comparison of our MCN architecture with the Hybrid Wireless Network (HWN) architecture and the Integrated Cellular Ad hoc Relaying (iCAR) architecture.

19.
QoS (Quality of Service), which covers service qualities such as latency, availability, timeliness and reliability, is important for web applications that provide real-time information, multimedia content, or time-critical services. Many web applications are best implemented by servers with a guaranteed processing capacity. In this research, we study QoS control issues using the current Web services standards. We propose a QoS-capable Web service architecture, QCWS, which deploys a QoS broker between Web service clients and providers. The functions of the QoS broker module include tracking QoS information about servers, making selection decisions for clients, and negotiating with servers to obtain QoS agreements. We have implemented a QCWS prototype using IBM WSDK, enhanced with simple QoS capabilities, and measured its performance under different service priorities.
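The broker's selection step might look like the sketch below: keep per-server QoS observations, filter out servers that violate the client's requirements, and pick the least-loaded survivor. The field names and scoring rule are illustrative assumptions, not the QCWS API.

```python
# QoS-broker-style server selection (illustrative).
from dataclasses import dataclass

@dataclass
class ServerQoS:
    endpoint: str
    avg_latency_ms: float
    availability: float     # fraction of successful calls, 0.0-1.0
    load: float             # current utilization, 0.0-1.0

def select_server(servers, max_latency_ms, min_availability):
    eligible = [s for s in servers
                if s.avg_latency_ms <= max_latency_ms
                and s.availability >= min_availability]
    # Among eligible providers, prefer the least-loaded one.
    return min(eligible, key=lambda s: s.load, default=None)

servers = [
    ServerQoS("http://a.example/ws", 40, 0.999, 0.70),
    ServerQoS("http://b.example/ws", 25, 0.995, 0.40),
    ServerQoS("http://c.example/ws", 90, 0.999, 0.10),
]
print(select_server(servers, max_latency_ms=50, min_availability=0.99))
```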

20.
Recent developments in Graphics Processing Units (GPUs) have enabled inexpensive high-performance computing for general-purpose applications. The Compute Unified Device Architecture (CUDA) programming model provides programmers with C-like APIs to better exploit the parallel power of the GPU. Data mining is widely used and has significant applications in various domains, yet current data mining toolkits cannot meet the speed requirements of applications with large-scale databases. In this paper, we propose three techniques to speed up fundamental problems in data mining algorithms on the CUDA platform: a scalable thread-scheduling scheme for irregular patterns, a parallel distributed top-k scheme, and a parallel high-dimension reduction scheme. They play a key role in our CUDA-based implementations of three representative data mining algorithms: CU-Apriori, CU-KNN, and CU-K-means. These parallel implementations significantly outperform other state-of-the-art implementations on an HP xw8600 workstation with a Tesla C1060 GPU and a quad-core Intel Xeon CPU. Our results show that the GPU + CUDA parallel architecture is feasible and promising for data mining applications.
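The CUDA kernels are beyond a short sketch, but the data-parallel pattern behind CU-KNN and CU-K-means, computing all point-to-centroid distances at once rather than looping, can be illustrated with NumPy broadcasting; this mirrors only the structure of the GPU computation, not the paper's implementation.

```python
# Vectorized k-means assignment step, mimicking the all-pairs distance
# computation a CUDA grid of threads would perform in parallel.
import numpy as np

def assign_clusters(points: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Each point goes to its nearest centroid. points: (n, d), centroids: (k, d)."""
    # (n, 1, d) - (1, k, d) -> (n, k, d): every point paired with every centroid.
    diff = points[:, None, :] - centroids[None, :, :]
    distances = np.einsum("nkd,nkd->nk", diff, diff)   # squared Euclidean distances
    return distances.argmin(axis=1)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 8))
cts = pts[rng.choice(len(pts), size=4, replace=False)]
print(np.bincount(assign_clusters(pts, cts)))   # cluster sizes for one assignment step
```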
