Similar Documents
20 similar documents found.
1.
With the proliferation of Internet of Things (IoT) and edge computing paradigms, billions of IoT devices are being networked to support data-driven and real-time decision making across numerous application domains, including smart homes, smart transport, and smart buildings. These ubiquitously distributed IoT devices send raw data to their respective edge device (e.g., an IoT gateway) or directly to the cloud. The wide spectrum of possible application use cases makes the design and networking of IoT and edge computing layers a very tedious process due to: (i) the complexity and heterogeneity of end-point networks (e.g., Wi-Fi, 4G, and Bluetooth); (ii) the heterogeneity of edge and IoT hardware resources and software stacks; (iii) the mobility of IoT devices; and (iv) the complex interplay between the IoT and edge layers. Unlike cloud computing, where researchers and developers seeking to test capacity planning, resource selection, network configuration, computation placement, and security management strategies have access to public cloud infrastructure (e.g., Amazon and Azure), establishing an IoT and edge computing testbed that offers a high degree of verisimilitude is not only complex, costly, and resource-intensive but also time-intensive. Moreover, testing in real IoT and edge computing environments is often infeasible due to the high cost and the diverse domain knowledge required to reason about their diversity, scalability, and usability. To support performance testing and validation of IoT and edge computing configurations and algorithms at scale, simulation frameworks are needed. Hence, this article proposes a novel simulator, IoTSim-Edge, which captures the behavior of heterogeneous IoT and edge computing infrastructure and allows users to test their infrastructure and framework in an easy and configurable manner. IoTSim-Edge extends the capability of CloudSim to incorporate the different features of edge and IoT devices.
The effectiveness of IoTSim-Edge is demonstrated using three test cases. Results show the capability of IoTSim-Edge in terms of application composition, battery-oriented modeling, heterogeneous protocol modeling, and mobility modeling, along with resource provisioning for IoT applications.

2.
The need for an efficient and scalable community health awareness model has become a crucial issue in today's health care applications. Many health care service providers need to deliver their services over long terms, in real time, and interactively. Many of these applications are based on the emerging Wireless Body Area Network (WBAN) technology. WBANs have developed as an effective solution for a wide range of healthcare, military, sports, general health, and social applications. On the other hand, handling data at large scale (currently known as Big Data) requires an efficient collection and processing model with scalable computing and storage capacity. Therefore, new computing paradigms such as Cloud Computing and the Internet of Things (IoT) are needed. In this paper we present a novel cloud-supported model for efficient community health awareness in the presence of large-scale WBAN data generation. The objective is to process this big data in order to detect abnormal data using a MapReduce infrastructure and user-defined functions with minimum processing delay, so that the large volume of monitored WBAN data is available to the end user or decision maker in a reliable manner. While reducing data packet processing energy, the proposed work minimizes the data processing delay by choosing between a cloudlet and a local cloud model together with the MapReduce infrastructure; the overall delay is thus minimized, enabling detection of abnormal data in the cloud in real time. We present a multi-layer computing model composed of a Local Cloud (LC) layer and an Enterprise Cloud (EP) layer that aims to process the data collected from Monitored Subjects (MSs) at large scale to generate useful facts and observations or to find abnormal phenomena within the monitored data. Performance results show that integrating MapReduce capabilities with the cloud computing model reduces the processing delay.
The proposed MapReduce infrastructure has also been applied at the lower layer, i.e., the LC, in order to reduce the amount of communication and the processing delay. Performance results show that applying the MapReduce infrastructure in the lower tier significantly decreases the overall processing delay.
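The abnormal-data detection that the paper performs with MapReduce and user-defined functions can be sketched as a local map/reduce pass. The record format and the vital-sign thresholds below are illustrative assumptions, not the paper's actual rules:

```python
from collections import defaultdict

# Illustrative vital-sign thresholds (assumed; the paper's user-defined
# functions would encode domain-specific rules instead).
NORMAL_RANGES = {"heart_rate": (50, 110), "temperature": (35.5, 38.0)}

def map_phase(record):
    """Map step: emit (subject_id, reading) only for out-of-range readings."""
    subject, metric, value = record
    lo, hi = NORMAL_RANGES[metric]
    if not lo <= value <= hi:
        yield subject, (metric, value)

def reduce_phase(mapped):
    """Reduce step: group abnormal readings per Monitored Subject (MS)."""
    abnormal = defaultdict(list)
    for subject, reading in mapped:
        abnormal[subject].append(reading)
    return dict(abnormal)

records = [
    ("MS-1", "heart_rate", 72), ("MS-1", "temperature", 39.1),
    ("MS-2", "heart_rate", 130), ("MS-2", "temperature", 36.6),
]
pairs = [kv for rec in records for kv in map_phase(rec)]
alerts = reduce_phase(pairs)
```

In a real deployment the map tasks would run on LC nodes close to the WBAN sensors, so only the (much smaller) abnormal subset travels upward.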

3.
In recent times, Internet of Things (IoT) applications, including smart transportation, smart healthcare, smart grid, and smart city, generate a large volume of real-time data for decision making. In the past decades, real-time sensory data have been offloaded to centralized cloud servers for data analysis through a reliable communication channel. However, due to the long communication distance between end-users and centralized cloud servers, the chances of network congestion, data loss, latency, and energy consumption are significantly higher. To address the challenges mentioned above, fog computing has emerged as a distributed environment that extends computation and storage facilities to the edge of the network. Compared to centralized cloud infrastructure, a distributed fog framework can support delay-sensitive IoT applications with minimum latency and energy consumption while analyzing the data using a set of resource-constrained fog/edge devices. Our survey covers the layered IoT architecture, evaluation metrics, and application aspects of fog computing and its progress in the last four years. Furthermore, the layered architecture of the standard fog framework and different state-of-the-art techniques for utilizing the computing resources of fog networks are covered in this study. Moreover, we include an IoT use case to demonstrate fog data offloading and resource provisioning in heterogeneous vehicular fog networks. Finally, we examine various challenges and potential solutions for establishing interoperable communication and computation for next-generation IoT applications in fog networks.

4.
Cloud computing is a powerful technology for performing massive-scale and complex computing. It eliminates the need to maintain expensive computing hardware, dedicated space, and software. Massive growth in the scale of data, or big data, generated through cloud computing has been observed. Addressing big data is a challenging and time-demanding task that requires a large computational infrastructure to ensure successful data processing and analysis. The rise of big data in cloud computing is reviewed in this study. The definition, characteristics, and classification of big data are introduced, along with some discussion of cloud computing. The relationship between big data and cloud computing, big data storage systems, and Hadoop technology are also discussed. Furthermore, research challenges are investigated, with a focus on scalability, availability, data integrity, data transformation, data quality, data heterogeneity, privacy, legal and regulatory issues, and governance. Lastly, open research issues that require substantial research efforts are summarized.

5.
Edge computing can improve the processing quality of large-scale IoT stream data and reduce network operating costs by moving computation onto edge devices. For stream processing, edge devices typically have only limited computing and storage capacity and clearly cannot support all real-time stream queries and processing. This paper introduces services and flexibly partitions them between the edge and the cloud to achieve cloud-edge integration, with adaptation between cloud services and edge services performed through an event mechanism. In the dynamic IoT environment, dynamic adaptation of cloud-edge services is the key to seamless integration between cloud infrastructure and edge devices. Service adaptation under dynamic integration must choose the right adaptation moment to cope with the uncertainty of edge-service adaptation requests and with incomplete adaptation. To address this problem, the paper proposes a service adaptation method for dynamic cloud-edge integration (Dynamic Adaption cloud Services with Edge Services, DANCE). Its main contributions are: modeling the adaptation between cloud service instances and edge service instances as a dynamic matching problem between the vertices of a bipartite graph, and optimizing the Kuhn-Munkres optimal bipartite matching algorithm with the M/M/c/∞ model from queueing theory, so as to minimize the global average request response time of edge service instances during adaptation. Finally, the effectiveness of the method is validated on a real power-quality monitoring case and its data.
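The central step, optimally matching cloud service instances to edge service instances on a bipartite graph, can be illustrated with a brute-force stand-in for the Kuhn-Munkres algorithm. The response-time matrix is invented for illustration, and the paper's M/M/c/∞-based cost refinement is omitted:

```python
from itertools import permutations

# resp[i][j]: estimated response time if cloud service instance i is
# adapted to edge service instance j (illustrative numbers).
resp = [
    [4.0, 2.0, 8.0],
    [3.0, 7.0, 5.0],
    [9.0, 6.0, 1.0],
]

def optimal_matching(cost):
    """Brute-force stand-in for Kuhn-Munkres: find the assignment of
    cloud instances to edge instances minimizing the total (hence
    average) response time."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

match, total = optimal_matching(resp)
```

The Kuhn-Munkres algorithm computes the same optimum in O(n³) rather than O(n!), which is what makes repeated re-matching feasible as adaptation requests arrive.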

6.
Recently, as mobile phone technology has developed, apps used on mobile phones have also developed rapidly in the P2P cloud computing environment. In particular, as IoT (Internet of Things) technology has been grafted onto mobile phones, data large and small have diversified around P2P computing environments. However, due to the diverse sizes and uses of big data in P2P environments, users have many complaints regarding the low accuracy of big data search results and service delay. Previous studies have actively investigated technologies that can cheaply construct and efficiently utilize IT infrastructure for big data processing, in particular technologies in which a large number of servers distribute and manage the huge amount of data generated or processed by portable devices such as smartphones. However, existing work must not only store and manage big data but also provide additional protection against data latency and service delay in order to cope flexibly with various problems and prevent service interruption. To date, no research has satisfied users' search delay and accuracy requirements by exploiting the size and usage of big data. This paper proposes a fast Fourier transform-based efficient data processing scheme so that users can accurately search for desired data among different kinds (size, use, type, etc.) of big data in a P2P cloud computing environment. The proposed scheme uses the keywords with which users search big data as coefficients of polynomial expressions, with a view to enhancing polynomial transformation speed. In addition, the scheme organizes the coefficients of the polynomial expressions that constitute subnets in pairs with probability values, processed in linkage with each other, to enhance data accessibility.
In particular, the proposed scheme transforms the vectors represented by the polynomial coefficients in pairs with the polynomial expressions so that searched data can be quickly identified, thereby minimizing user service delay. In the performance evaluation, data processing time improved by 7.3% on average and the server's data processing rate per unit time improved by 11.1% on average compared with existing techniques. In addition, depending on data size, the communication delay between server and user improved by an average of 8.9%, and server overhead was on average 10.4% lower than with existing techniques.
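The FFT-based speed-up the scheme relies on, treating keywords as polynomial coefficients and transforming them quickly, can be illustrated by fast polynomial multiplication via a radix-2 FFT. The integer coefficients standing in for hashed keyword terms are an assumption:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def poly_multiply(p, q):
    """Multiply two coefficient vectors via FFT (fast convolution),
    O(n log n) instead of the O(n^2) schoolbook product."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft([complex(x) for x in p] + [0j] * (n - len(p)))
    fq = fft([complex(x) for x in q] + [0j] * (n - len(q)))
    prod = fft([a * b for a, b in zip(fp, fq)], invert=True)
    # Inverse transform needs a single division by n; coefficients are ints.
    return [round(c.real / n) for c in prod[: len(p) + len(q) - 1]]

query = [1, 2, 3]   # assumed: hashed keyword terms as coefficients
index = [4, 5]
coeffs = poly_multiply(query, index)
```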

7.
Edge computing can improve the processing quality of large-scale IoT stream data and reduce network operating costs by moving computation to edge devices. However, integrating cloud computing and edge computing for large-scale stream data faces two challenges. First, edge devices have limited computing and storage capacity and cannot support real-time processing of large-scale stream data. Second, the unpredictability of stream data causes collaboration at the edge to change constantly. It is therefore necessary to partition services flexibly between the edge and the cloud. This paper proposes a service-oriented method for seamless cloud-edge integration to realize the collaboration of cloud and edge computing over large-scale stream data. The method splits a cloud service into two parts that run on the cloud and on the edge, respectively. It also proposes a dynamic service scheduling mechanism based on an improved bipartite graph: when an event occurs, cloud services can be deployed to edge nodes at the appropriate time. The effectiveness of the proposed method is validated on real power-quality monitoring data.

8.
Making resources closer to the user might facilitate the integration of new technologies such as edge, fog, cloud computing, and big data. However, this brings many challenges that must be overcome when distributing real-time stream processing, executing multiple applications in a safe multitenant environment, and orchestrating and managing the services and resources in a hybrid fog/cloud federation. In this article, first, we propose a business process model and notation (BPMN) extension to enable Internet of Things (IoT)-aware business process (BP) modeling. The proposed extension takes into consideration heterogeneous IoT and non-IoT resources, resource capacities, quality of service constraints, and so forth. Second, we present a new IoT-fog-cloud based architecture, which (i) supports distributed inter- and intralayer communication as well as real-time stream processing in order to treat IoT data immediately and improve the reliability of the entire system, (ii) enables multiapplication execution within a multitenancy architecture using the single sign-on technique to guarantee data integrity in a multitenant environment, and (iii) relies on orchestration and federation management services for deploying BPs onto the appropriate fog and/or cloud resources. Third, we model, using the proposed BPMN 2.0 extension, smart autistic child and coronavirus disease 2019 (COVID-19) monitoring systems. We then propose prototypes for these two smart systems and carry out a set of extensive experiments illustrating the efficiency and effectiveness of our work.

9.
It is predicted that by the year 2020 more than 50 billion devices will be connected to the Internet. Traditionally, cloud computing has been used as the preferred platform for aggregating, processing, and analyzing IoT traffic. However, the cloud may not be the preferred platform for IoT devices in terms of responsiveness and immediate processing and analysis of IoT data and requests. For this reason, fog or edge computing has emerged to overcome such problems, whereby fog nodes are placed in close proximity to IoT devices. Fog nodes are primarily responsible for the local aggregation, processing, and analysis of the IoT workload, resulting in notable gains in performance and responsiveness. One of the open issues and challenges in the area of fog computing is efficient scalability, in which a minimal number of fog nodes is allocated based on the IoT workload such that the SLA and QoS parameters are satisfied. To address this problem, we present a queuing mathematical and analytical model to study and analyze the performance of a fog computing system. Our mathematical model determines, under any offered IoT workload, the number of fog nodes needed so that the QoS parameters are satisfied. From the model, we derive formulas for key performance metrics, including system response time, system loss rate, system throughput, CPU utilization, and the mean number of message requests. Our analytical model is cross-validated using discrete-event simulation.
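A minimal sketch of the kind of queueing computation such a model performs, finding the smallest number of fog nodes whose mean response time meets an SLA, using the standard M/M/c Erlang-C formula (the workload figures are illustrative, not from the paper):

```python
from math import factorial

def erlang_c(lam, mu, c):
    """Probability an arriving message must wait in an M/M/c queue."""
    a = lam / mu                     # offered load
    rho = a / c                      # per-server utilization
    if rho >= 1:
        return 1.0                   # unstable: everyone waits
    summ = sum(a**k / factorial(k) for k in range(c))
    top = a**c / factorial(c) * (1 / (1 - rho))
    return top / (summ + top)

def mean_response_time(lam, mu, c):
    """W = Wq + 1/mu for an M/M/c system."""
    pw = erlang_c(lam, mu, c)
    return pw / (c * mu - lam) + 1 / mu

def min_fog_nodes(lam, mu, sla):
    """Smallest number of fog nodes meeting the response-time SLA."""
    c = 1
    while lam >= c * mu or mean_response_time(lam, mu, c) > sla:
        c += 1
    return c

# Illustrative workload: 300 msg/s arriving, each node serves 80 msg/s,
# SLA of 20 ms mean response time.
nodes = min_fog_nodes(lam=300.0, mu=80.0, sla=0.020)
```

Adding nodes monotonically lowers the mean response time, so the linear scan terminates at the first feasible allocation.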

10.
The handling of complex tasks in IoT applications becomes difficult due to the limited availability of resources in most IoT devices, so IoT tasks with heavy processing and storage demands need to be offloaded to the resource-rich edge and cloud. In edge computing, factors such as arrival rate, nature and size of the task, network conditions, platform differences, and the energy consumption of IoT end devices influence the choice of an optimal offloading mechanism. A model is developed to make a dynamic decision on offloading tasks to the edge and cloud or executing them locally by computing the expected time, energy consumption, and processing capacity. This dynamic decision is proposed as the processing capacity-based decision mechanism (PCDM), which makes offloading decisions on new tasks by scheduling all available devices based on processing capacity. Target devices are then selected for task execution with respect to energy consumption, task size, and network time. PCDM is implemented in the EdgeCloudSim simulator for four different applications of various categories, such as time sensitivity, smaller size, and lower energy consumption. The PCDM offloading methodology is compared in simulation with the multi-criteria decision support mechanism for IoT offloading (MEDICI). Strategies based on task weightage, termed PCDM-AI, PCDM-SI, PCDM-AN, and PCDM-SN, are developed and compared against five existing baseline strategies, namely IoT-P, Edge-P, Cloud-P, Random-P, and Probabilistic-P. These nine strategies are also implemented using MEDICI with the same parameters as PCDM. Finally, all the approaches using PCDM and MEDICI are compared against each other for the four applications. From the simulation results, it is inferred that for every application a unique approach performs best in terms of response time, total tasks executed, device energy consumption, and total energy consumption of the application.
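A toy version of a processing-capacity-based offloading decision in the spirit of PCDM (the capacity and link figures are invented, and the paper's energy-consumption criterion is omitted from this sketch):

```python
def estimate_time(task_size_mi, cpu_mips, link_mbps, data_mb):
    """Completion time = transfer time + execution time (seconds)."""
    transfer = 0.0 if link_mbps is None else (data_mb * 8) / link_mbps
    return transfer + task_size_mi / cpu_mips

def pcdm_decide(task_size_mi, data_mb, targets):
    """Pick the target with the lowest estimated completion time.
    `targets` maps name -> (available MIPS, link Mbps, None = local)."""
    return min(targets,
               key=lambda t: estimate_time(task_size_mi, targets[t][0],
                                           targets[t][1], data_mb))

# Illustrative capacities: local device, edge node, remote cloud.
targets = {
    "local": (500, None),       # 500 MIPS, no transfer needed
    "edge":  (4000, 100.0),     # 4000 MIPS over a 100 Mbps link
    "cloud": (20000, 20.0),     # 20000 MIPS over a 20 Mbps WAN link
}
choice = pcdm_decide(task_size_mi=2000, data_mb=5.0, targets=targets)
```

For a compute-heavy task with little data, the edge wins; a small task with a large payload stays local because transfer time dominates.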

11.

The Internet of Things (IoT) has emerged as one of the most revolutionary technological innovations, with a proliferation of applications in almost all fields of human activity. A cloud environment is the main component of IoT infrastructure for making IoT devices efficient, safe, reliable, usable, and autonomous. Reduction in infrastructure cost and on-demand accessibility of shared resources are essential parts of cloud-based IoT (CIoT) infrastructure. Information leakage in cloud-assisted IoT devices may invite dangerous activities. Various cloud-based systems store IoT sensor data and later access it accordingly; some are public and some private. Private cloud services must be secured from external as well as internal adversaries, so there must be a robust mechanism to prevent unauthorized access to devices. This paper proposes a novel and efficient protocol based on the elliptic curve property known as the Elliptic Curve Discrete Logarithm Problem (ECDLP), together with hash and XOR functions, for authentication of cloud-based IoT devices. In comparison to existing protocols, the proposed protocol is resistant to attacks and other security vulnerabilities. The one-way hash function and the XOR function effectively reduce computation cost. AVISPA and BAN logic have been used for formal analysis of the proposed protocol. The performance analysis results make clear that the proposed protocol is well suited for cloud-assisted IoT devices.
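The paper's exact protocol is not reproduced here, but the role of cheap one-way hash and XOR operations in a lightweight challenge-response can be sketched as follows. The message layout and the ECDLP-derived shared key are assumptions of this sketch, not the paper's design:

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """One-way hash shared by both sides (SHA-256 here)."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Long-term secret shared by IoT device and cloud; assumed to come from
# the paper's ECDLP-based key agreement, which is omitted here.
shared_key = h(b"ecdlp-derived-secret")

def device_request(device_id: bytes, key: bytes):
    nonce = secrets.token_bytes(32)
    # Mask the nonce with an XOR pad so it never travels in the clear.
    masked = xor(nonce, h(key, device_id))
    proof = h(key, device_id, nonce)
    return masked, proof, nonce

def cloud_verify(device_id: bytes, key: bytes, masked: bytes, proof: bytes):
    nonce = xor(masked, h(key, device_id))   # unmask with the same pad
    return h(key, device_id, nonce) == proof

masked, proof, _ = device_request(b"sensor-42", shared_key)
ok = cloud_verify(b"sensor-42", shared_key, masked, proof)
```

Both sides compute only hashes and XORs online, which is the cost property the abstract emphasizes for constrained devices.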


12.
Cloud computing is an emerging technology in which information technology resources are virtualized to users as a set of computing resources on a pay-per-use basis. It is seen as an effective infrastructure for high performance applications. Divisible load applications occur in many scientific and engineering domains. However, dividing an application and deploying it in a cloud computing environment faces challenges in obtaining optimal performance due to the overheads introduced by cloud virtualization and the supporting cloud middleware. We therefore provide the results of a series of extensive experiments in scheduling divisible load applications in a cloud environment to decrease the overall application execution time, considering the cloud networking and computing capacities presented to the application's user. We experiment with real applications within the Amazon cloud computing environment. Our experiments analyze the reasons for the discrepancies between a theoretical model and reality and propose adequate solutions. These discrepancies are due to three factors: network behavior, application behavior, and cloud virtualization. Our results show that applying the algorithm yields a maximum ratio of 1.41 between the measured normalized makespan and the ideal makespan for applications in which the communication-to-computation ratio is large, and that the algorithm is effective for such applications in a heterogeneous setting, reaching a ratio of 1.28 for large data sets. For applications following the ensemble clustering model, in which the computation-to-communication ratio is large and variable, we obtained a maximum ratio of 4.7 for large data sets and 2.11 for small data sets. Applying the algorithm also results in an important speedup. These results are revealing for the types of applications we consider in our experiments.
The experiments also reveal the impact of the choice of the platforms provided by Amazon on the performance of the applications under study. Considering the emergence of cloud computing for high performance applications, the results in this paper can be widely adopted by cloud computing developers. Copyright © 2014 John Wiley & Sons, Ltd.
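The basic divisible-load principle behind such scheduling, splitting the load so every worker finishes at the same moment, can be shown in a few lines. Communication overheads, which the experiments above show matter greatly in the cloud, are deliberately ignored in this sketch:

```python
def divide_load(total, speeds):
    """Split a divisible load so all workers finish simultaneously:
    chunk_i is proportional to speed_i (communication costs ignored)."""
    s = sum(speeds)
    return [total * v / s for v in speeds]

speeds = [1.0, 2.0, 5.0]            # illustrative relative node speeds
chunks = divide_load(100.0, speeds)
# Equal finish times: chunk / speed is identical for every worker.
times = [c / v for c, v in zip(chunks, speeds)]
```

The virtualization and network effects measured in the paper are precisely what perturb these ideal proportions in practice.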

13.
Emotion-aware computing represents an evolution in machine learning, enabling systems and devices to process and interpret emotional data and recognize changes in human behavior. As emotion-aware smart systems evolve, there is enormous potential for increasing the use of specialized devices that can anticipate life-threatening conditions, facilitating an early response model for health complications. At the same time, applications developed for diagnostic and therapy services can support recognition of conditions such as depression. Hence, this paper proposes an improved algorithm for emotion-aware smart systems, capable of predicting the risk of postpartum depression in women suffering from hypertensive disorders during pregnancy through biomedical and sociodemographic data analysis. Results show that ensemble classifiers represent a leading solution for predicting psychological disorders related to pregnancy. Merging novel technologies based on IoT, cloud computing, and big data analytics represents a considerable advance in monitoring complex diseases for emotion-aware computing, such as postpartum depression.

14.

Fog computing is considered a formidable next-generation complement to cloud computing. Nowadays, in light of the dramatic rise in the number of IoT devices, several problems have arisen in cloud architectures. By introducing fog computing as an intermediate layer between user devices and the cloud, one can extend cloud computing's processing and storage capability. Offloading can be utilized as a mechanism that transfers computations, data, and energy consumption from resource-limited user devices to resource-rich fog/cloud layers to achieve an optimal experience in the quality of applications and improve system performance. This paper provides a systematic and comprehensive study evaluating current and recent work on fog offloading mechanisms. Each selected paper's pros and cons are explored and analyzed to state and address the present potentialities and issues of offloading mechanisms in a fog environment. We classify offloading mechanisms in a fog system into four groups: computation-based, energy-based, storage-based, and hybrid approaches. Furthermore, this paper explores the offloading metrics, applied algorithms, and evaluation methods related to the chosen offloading mechanisms in fog systems. Additionally, open challenges and future trends derived from the reviewed studies are discussed.


15.
Owing to massive technological developments in the Internet of Things (IoT) and the cloud environment, cloud computing (CC) offers a highly flexible, heterogeneous resource pool over the network, and clients can exploit various resources on demand. IoT-enabled models are resource-restricted yet require crisp responses, minimum latency, and maximum bandwidth, which lie beyond their own capabilities; CC has therefore been treated as a resource-rich solution to this challenge. Since high delay reduces the performance of an IoT-enabled cloud platform, efficient task scheduling (TS) reduces the energy usage of the cloud infrastructure and increases the income of the service provider by minimizing the processing time of user jobs. Therefore, this article concentrates on the design of an oppositional red fox optimization based task scheduling scheme (ORFO-TSS) for an IoT-enabled cloud environment. The presented ORFO-TSS model resolves the problem of allocating resources on an IoT-based cloud platform. It optimizes makespan by performing optimal TS procedures over various aspects of incoming tasks. The design of the ORFO-TSS method adds the idea of oppositional-based learning (OBL) to the traditional RFO approach to enhance its efficiency. A wide-ranging experimental analysis was performed on the CloudSim platform. The experimental outcomes highlight the efficacy of the ORFO-TSS technique over existing approaches.

16.
To increase the degree of automation in oilfield production, ensure safety, lower operating costs, and extend automation coverage to harsh and remote areas, an IoT monitoring system for oilfield production and transportation is designed. The system uses the ARM Cortex-M0 based LPC11C14FBD48 as the main control chip and the M35 as the GPRS communication module, with the kernel software and the remote centralized monitoring center software developed in C++. It can remotely collect data from oilfield production and transportation sites and control the system, while also exchanging data with a cloud computing service center to provide a basis for big data processing. The system enables unattended oilfield production and transportation, covers remote areas, lowers operating costs, and improves the safety, reliability, and efficiency of oilfield production, significantly improving overall production benefits.

17.
Research on Cloud Data Storage Security and Privacy Protection Strategies in the IoT Environment
The IoT relies on the powerful data processing capability of cloud computing to realize intelligent information services, yet the way cloud computing currently manages data and services does not fully deserve users' trust. Addressing cloud data security in the IoT environment, and in order to guarantee the accuracy and privacy of user data in cloud computing, this paper proposes a strategy for cloud data storage security and privacy protection in the IoT environment. Experimental results show that the scheme is effective and flexible, and can withstand Byzantine failures, malicious data modification, and even server collusion attacks.

18.
Cloud Computing: System Instances and Current Research
陈康, 郑纬民. 《软件学报》 (Journal of Software), 2009, 20(5): 1337-1348
This paper surveys the technologies currently adopted in cloud computing, analyzing the technical meaning behind them as well as the cloud computing solutions adopted by participating companies. Cloud computing has two aspects: one is the underlying cloud platform infrastructure, the foundation on which upper-layer applications are built; the other is the cloud applications built on top of this platform. The survey focuses on the research and implementation status of cloud infrastructure, and also touches on cloud applications. Cloud computing has three basic characteristics: first, the infrastructure is built on large-scale clusters of inexpensive servers; second, applications are developed in cooperation with the underlying services to make maximum use of resources; third, high availability is obtained in software through redundancy across multiple inexpensive servers. Cloud computing achieves two important goals of distributed computing: scalability and high availability. Scalability means that cloud computing can scale seamlessly to large clusters, even with thousands of nodes processing simultaneously. High availability means that cloud computing can tolerate node failures; even the failure of a substantial fraction of nodes will not affect the correct execution of programs. This paper presents the current state of cloud computing and future research trends.

19.
Fog computing extends the computing power and data analytics applications of cloud computing to the edge of the network, satisfying the low-latency and mobility requirements of IoT devices, but it also raises data security and privacy concerns. Traditional attribute-based encryption (ABE) from cloud computing is unsuitable for the resource-limited IoT devices in a fog environment and makes attribute changes difficult to manage. This paper therefore proposes an attribute-based encryption scheme supporting outsourced encryption/decryption and revocation, building a three-layer "cloud-fog-terminal" system model. By introducing attribute group keys, the scheme achieves dynamic key updates and satisfies the requirement of immediate attribute revocation in fog computing. On this basis, part of the complex encryption and decryption computation on terminal devices is outsourced to fog nodes to improve computational efficiency. Experimental results show that, compared with schemes such as KeyGen and Enc, the proposed scheme achieves better computational efficiency and reliability.

20.
Cloud computing offers massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available for end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure aware workflows is suggested and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution based on the MapReduce paradigm in the cloud. The provided analysis demonstrates that the methods described to integrate Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
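The MapReduce paradigm that the gateway framework integrates via Hadoop can be illustrated with the canonical word-count job, written here as a single-process sketch rather than with the Hadoop API:

```python
from collections import defaultdict

def map_words(line):
    """Map step: emit (word, 1) pairs, as in Hadoop's classic WordCount."""
    for word in line.lower().split():
        yield word, 1

def reduce_counts(pairs):
    """Reduce step: sum the counts per key after the shuffle phase."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data on the cloud", "big data workflows"]
pairs = [kv for line in lines for kv in map_words(line)]
counts = reduce_counts(pairs)
```

In the gateway setting, the workflow system generates and submits an equivalent Hadoop job to the cloud-hosted cluster instead of running it in-process.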
