Similar Literature
1.
The traditional multi-access edge computing (MEC) capacity is overwhelmed by the increasing demand from vehicles, leading to acute degradation in task-offloading performance. The traffic network contains a tremendous number of resource-rich, idle connected vehicles (CVs), which can be organized into opportunistic ad-hoc edge clouds that alleviate the resource limitations of MEC by providing opportunistic computing services. On this basis, this paper proposes a novel scalable system framework for computation task offloading in opportunistic CV-assisted MEC. In this framework, the opportunistic ad-hoc edge cloud and the fixed edge cloud cooperate to form a hybrid cloud, and the offloading decisions and resource allocation of the user CVs must be determined jointly. The joint offloading-decision and resource-allocation problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) problem that optimizes the task response latency of user CVs under various constraints. The original problem is decomposed into two subproblems: first, the Lagrange dual method is used to obtain the optimal resource allocation under a fixed offloading decision; then, a satisfaction-driven method based on trial-and-error (TE) learning is adopted to optimize the offloading decision. Finally, a comprehensive series of experiments demonstrates that the proposed scheme is more effective than the comparison schemes.
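The Lagrange-dual resource-allocation subproblem admits a simple closed form in a stripped-down setting. As a hedged sketch (not the paper's actual model): if offloaded task i requires cycles_i CPU cycles and the edge cloud has budget F, then minimizing the total latency sum_i cycles_i / f_i subject to sum_i f_i <= F yields, via the stationarity condition of the Lagrangian, an allocation proportional to sqrt(cycles_i):

```python
import numpy as np

def allocate_cpu(cycles, F):
    """Latency-optimal split of an edge CPU budget F among offloaded tasks.

    Minimizing sum_i cycles_i / f_i subject to sum_i f_i <= F has the
    Lagrange-dual closed form f_i proportional to sqrt(cycles_i).
    This is an illustrative simplification, not the paper's full model.
    """
    w = np.sqrt(np.asarray(cycles, dtype=float))
    return F * w / w.sum()

cycles = [4e9, 1e9, 9e9]           # CPU cycles required per offloaded task
f = allocate_cpu(cycles, F=10e9)   # 10 GHz total edge budget
latency = sum(c / fi for c, fi in zip(cycles, f))
```

In the paper this step is solved for a fixed offloading decision; the outer TE-learning loop would then perturb that decision and re-solve.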

2.
Mobile edge cloud networks can offload computationally intensive tasks from Internet of Things (IoT) devices to nearby mobile edge servers, lowering energy consumption and response time for ground mobile users and IoT devices. Integrating Unmanned Aerial Vehicles (UAVs) with mobile edge computing (MEC) servers will significantly benefit small, battery-powered, energy-constrained devices in 5G and future wireless networks. We address the problem of maximizing computation efficiency in UAV-assisted MEC (U-MEC) networks by jointly optimizing the user association and offloading indicator (OI), the computational capacity (CC), the power consumption, the time duration, and the UAV location planning. Heavy tasks can be assigned to the UAV for faster processing, while small ones are executed locally by the mobile users (MUs). This paper uses the k-means clustering algorithm, the interior-point method, and the conjugate gradient method to iteratively solve the resulting non-convex multi-objective resource allocation problem. Simulation results show that both the local and the offloading schemes yield optimal solutions.
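The k-means step groups ground users so that UAV serving locations can be planned near cluster centers. A minimal sketch of that clustering step (toy data and first-k initialization are my simplifications; the paper's feature space and initialization are not specified in the abstract):

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain k-means for grouping ground users (e.g., to plan UAV hover
    locations near cluster centroids). First-k initialization is used for
    brevity; k-means++ would be the more robust choice."""
    points = np.asarray(points, dtype=float)
    centers = points[:k].copy()
    for _ in range(iters):
        # assign each user to its nearest center, then recompute centers
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# toy user positions: two spatial groups of mobile users
users = np.array([[0.0, 0.0], [10.0, 10.0], [0.5, 0.2], [10.2, 9.8], [0.1, 0.4]])
centers, labels = kmeans(users, k=2)
```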

3.
To reduce transmission latency and mitigate the backhaul burden of centralized cloud-based network services, mobile edge computing (MEC) has recently drawn increasing attention from both industry and academia. This paper focuses on the computation offloading problem of mobile users in wireless cellular networks with MEC, with the goal of optimizing the offloading decision-making policy. Since wireless network states and computing requests are stochastic and the environment dynamics are unknown, we formulate and tackle the problem within a model-free reinforcement learning (RL) framework. Each mobile user learns through interaction with the environment, estimating its performance in the form of a value function, and then chooses the overhead-aware optimal offloading action (local computing or edge computing) based on its state. Because the state space in our work is high-dimensional, the value function is impractical to estimate directly. We therefore use a deep reinforcement learning algorithm that combines Q-learning with a deep neural network (DNN) to approximate the value function; the optimal policy is obtained once the value function converges. Simulation results show the effectiveness of the proposed method compared with baseline methods in terms of the total overhead of all mobile users.
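The core learning loop can be sketched with tabular Q-learning in place of the paper's DNN approximator (the cost model, state encoding, and hyperparameters below are all hypothetical; the reward is the negative offloading overhead, as in the abstract's overhead-aware objective):

```python
import random

ACTIONS = ("local", "edge")

def overhead(state, action):
    """Hypothetical cost model: state = (task_size, channel_quality).
    Larger tasks are costly locally; a poor channel penalizes offloading."""
    size, chan = state
    return size * 1.0 if action == "local" else size * 0.3 + (2 - chan)

def train(episodes=3000, alpha=0.1, eps=0.2, seed=1):
    """Tabular Q-learning stand-in for the paper's DNN value approximator;
    each episode is a single offloading decision."""
    random.seed(seed)
    Q = {}
    for _ in range(episodes):
        s = (random.choice((1, 2, 3)), random.choice((0, 1, 2)))
        if random.random() < eps:               # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q.get((s, act), 0.0))
        # one-step episode: reward is the negative offloading overhead
        q = Q.get((s, a), 0.0)
        Q[(s, a)] = q + alpha * (-overhead(s, a) - q)
    return Q

Q = train()

def policy(s):
    return max(ACTIONS, key=lambda act: Q.get((s, act), 0.0))
```

The learned policy offloads large tasks over good channels and keeps small tasks local when the channel is poor, mirroring the overhead trade-off described above.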

4.
The number of mobile devices accessing wireless networks is skyrocketing due to the rapid advancement of sensors and wireless communication technology, and mobile data traffic is anticipated to rise even further in the coming years. The Internet of Things, smart homes, and increasingly sophisticated applications with higher data-rate and latency requirements are driving the development of a new cellular network paradigm, while the steady growth of smartphone devices and multimedia apps is rapidly consuming resources. Offloading computation to distant clouds or to nearby mobile devices has consistently improved the performance of mobile devices, and computation latency can also be reduced by offloading tasks to edge servers with sufficient computing power. Device-to-device (D2D) collaboration can help process small-scale, time-sensitive tasks to further reduce task delays. However, task-offloading performance degrades drastically when the performance capabilities of edge nodes vary. This paper therefore addresses this problem and proposes a new method for D2D communication in which the time delay is reduced by enabling edge nodes to exchange data samples. Simulation results show that the proposed algorithm outperforms the traditional algorithm.

5.
With the rapid development of artificial intelligence, face recognition systems are widely used in daily life. Face recognition applications often need to process large amounts of image data, and maintaining accuracy and low latency is critical. An analysis of two-tier "client-cloud" face recognition architectures shows that they suffer high latency and network congestion when massive numbers of recognition requests must be served, and that deploying and managing the relevant applications at the network edge is inconvenient and inefficient. This paper proposes a flexible and efficient edge-computing-accelerated architecture. By offloading part of the computing tasks to an edge server closer to the data source, edge computing resources are used for image preprocessing to reduce the number of images to be transmitted, thus reducing the network transmission overhead. Moreover, the application code does not need to be rewritten and can be easily migrated to the edge server. We evaluate our scheme on the open-source Azure IoT Edge, and the experimental results show that the three-tier "Client-Edge-Cloud" face recognition system outperforms state-of-the-art face recognition systems in reducing the average response time.
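The transmission-saving idea is that the edge forwards only frames worth recognizing. A toy sketch of one possible edge-side filter (the paper does not specify its preprocessing; this near-duplicate filter, the frame representation, and the threshold are all assumptions):

```python
def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two flattened frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def filter_frames(frames, threshold=10.0):
    """Edge-side preprocessing sketch: forward a frame to the cloud only
    when it differs enough from the last forwarded frame, so near-duplicate
    images never cross the network."""
    sent, last = [], None
    for frame in frames:
        if last is None or mean_abs_diff(frame, last) > threshold:
            sent.append(frame)
            last = frame
    return sent

# three near-identical frames followed by a scene change
frames = [[0, 0, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [200, 180, 190, 210]]
kept = filter_frames(frames, threshold=10.0)
```

Here 4 captured frames shrink to 2 transmitted ones, which is the mechanism behind the reduced network overhead claimed above.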

6.
The Internet of Things (IoT) defines a network of devices connected to the internet that share massive amounts of data with each other and with a central location. Because these devices are networked, they are prone to attacks. Many management tasks and network operations, such as security, intrusion detection, Quality-of-Service provisioning, performance monitoring, resource provisioning, and traffic engineering, require traffic classification. Because traditional classification schemes, such as port-based and payload-based methods, are ineffective, researchers have proposed machine learning-based traffic classification systems built on shallow neural networks; such models, however, tend to misclassify internet traffic due to improper feature selection. In this research, an efficient multilayer deep learning-based classification system is presented to overcome these challenges. The Moore dataset is used to train the classifier. The proposed scheme takes the pre-processed data and extracts flow features using a deep neural network (DNN); a maximum entropy classifier then classifies the internet traffic. Experimental results show that the proposed hybrid deep learning algorithm is effective, achieving 99.23% accuracy for internet traffic classification, the highest compared with support vector machine (SVM) and k-nearest neighbours (KNN) based classification techniques.
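A maximum entropy classifier is equivalent to multinomial logistic (softmax) regression. A compact sketch of that final classification stage, with plain toy features standing in for the DNN-extracted flow features (the data and hyperparameters are illustrative only):

```python
import numpy as np

def maxent_fit(X, y, n_classes, lr=0.5, epochs=500):
    """Maximum-entropy (multinomial logistic) classifier trained by batch
    gradient descent on the cross-entropy loss. In the paper this sits on
    top of DNN-extracted flow features; raw features stand in here."""
    W = np.zeros((X.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                    # one-hot targets
    for _ in range(epochs):
        Z = X @ W
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)       # softmax probabilities
        W -= lr * X.T @ (P - Y) / len(X)        # cross-entropy gradient step
    return W

def maxent_predict(W, X):
    return (X @ W).argmax(axis=1)

# toy "flow features" for two separable traffic classes
X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
y = np.array([0, 0, 1, 1])
W = maxent_fit(X, y, n_classes=2)
```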

7.
Internet of Things (IoT) technology is rapidly evolving, but there is no trusted platform to protect user privacy, protect information exchanged between different IoT domains, and promote edge processing. We therefore integrate blockchain technology into the construction of trusted IoT platforms. However, applying blockchain to IoT is hampered by its computationally heavy processes. To solve this problem, we put forward a blockchain framework based on mobile edge computing, in which blockchain mining tasks can be offloaded to nearby nodes or to edge computing service providers, and the encrypted hashes of blocks can be cached at the edge computing service providers. Moreover, we model the offloading and caching processes using game theory and auction theory so that both edge nodes and edge computing service providers obtain maximum profit. Finally, the proposed mechanism is compared with the centralized mode, mode A (all miners offload their tasks to the edge computing service providers), and mode B (all miners offload their tasks to a group of neighbouring devices). Simulation results show that under our mechanism, mining networks obtain more profit and consume less time on average.

8.
Attacks on websites and network servers are among the most critical threats in network security, and network behavior identification is one of the most effective ways to detect malicious intrusions. Analyzing abnormal traffic patterns and classifying traffic based on labeled data are among the most effective approaches to network behavior identification. Traditional methods for network traffic classification use algorithms such as Naive Bayes, Decision Trees, and XGBoost; however, the classification accuracy required for network behavior identification generally remains low, even with recently proposed deep learning models. To improve classification accuracy, and thereby the intrusion detection rate, this paper proposes a new network traffic classification model, called ArcMargin, which incorporates metric learning into a convolutional neural network (CNN) to make the model more discriminative. ArcMargin maps traffic samples from the same category closer together while mapping samples from different categories as far apart as possible. The metric-learning regularizer, the additive angular margin loss, is embedded in the objective function of a traditional CNN. The proposed model is validated on three datasets and compared with several related algorithms. According to a set of classification indicators, ArcMargin performs better in both network traffic classification tasks and open-set tasks; moreover, in open-set tasks it can cluster unknown classes that did not appear in the training dataset.
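The additive angular margin loss (popularized as ArcFace) replaces the true-class logit s*cos(theta) with s*cos(theta + m), penalizing the target class unless its embedding sits well inside the class cluster. A small sketch of the logit computation (the toy embeddings and class-center weights are illustrative, not from the paper):

```python
import numpy as np

def arc_margin_logits(feats, weights, labels, s=30.0, m=0.5):
    """Additive angular margin logits: features and per-class weight
    columns are L2-normalized, and the true-class logit is shifted from
    s*cos(theta) to s*cos(theta + m), tightening class clusters."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = np.clip(f @ w, -1.0, 1.0)           # cosine similarity to each class
    theta = np.arccos(cos)
    logits = s * cos
    rows = np.arange(len(labels))
    logits[rows, labels] = s * np.cos(theta[rows, labels] + m)
    return logits

feats = np.array([[1.0, 0.2], [0.1, 1.0]])    # toy traffic embeddings
weights = np.array([[1.0, 0.0], [0.0, 1.0]])  # toy class-center directions
logits = arc_margin_logits(feats, weights, labels=np.array([0, 1]))
```

In training these logits feed a standard softmax cross-entropy, which is what the abstract means by embedding the margin in the CNN's objective function.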

9.
罗东华, 余志, 李熙莹, 陈锐祥, 张辉. Opto-Electronic Engineering (《光电工程》), 2007, 34(11): 70-73, 77
To address the susceptibility of video-based traffic flow detection to vehicle shadows and lane changes, this paper proposes a background-difference traffic flow detection method based on edge information. The method uses edge information as the vehicle detection feature, automatically extracts and updates the background edges in real time, and counts vehicles through dynamically opened detection windows. Experimental results show that, compared with the traditional background-difference method, the proposed method is less affected by vehicle shadows and lane changes and achieves a detection accuracy of 97.3%, making it a practical and effective traffic flow detection method.

10.
Today's smartphones offer applications such as face detection, augmented reality, image and video processing, video gaming, and speech recognition, and with the increasing demand for computing resources these applications become more complicated. The Cloud Computing (CC) environment provides access to an effectively unlimited resource pool with features including on-demand self-service, elasticity, wide network access, resource pooling, low cost, and ease of use. Mobile Cloud Computing (MCC) aims to overcome the limitations of smartphone devices; the challenge lies in combining CC technology with mobile devices while extending battery life and achieving significant performance. For remote execution, recent studies suggest offloading all or part of a mobile application from the mobile device. In the offloading process, however, existing frameworks subject the device's energy consumption, Central Processing Unit (CPU) utilization, execution time, remaining battery life, and the amount of data transmitted over the network to one or more constraints. To address these issues, a Heuristic and Bent Key Exchange (H-BKE) method is considered that both optimizes energy consumption and improves security during offloading. First, an energy-efficient offloading model is designed using a Reactive Heuristic Offloading algorithm in which secondary users are allocated the unused spectrum of primary users. Next, a novel AES variant is designed that uses a Bent function and a large-block Rijndael variant, which is hard to interpret and thus ensures security while a secondary user accesses the primary users' unused spectrum. Simulations of offloading in the mobile cloud show that the proposed technique is successful in terms of time consumption and energy consumption, as well as in the security aspects covered during offloading in MCC.

11.
Statistical process control (SPC) is one of the most effective tools of total quality management; its main function is to monitor and minimize process variation. SPC applications typically involve three major tasks in sequence: (1) monitoring the process, (2) diagnosing deviations and (3) taking corrective action. With the movement towards computer-integrated manufacturing, computer-based applications need to be developed to implement the various SPC tasks automatically. However, the pertinent literature shows that nearly all research in this field has focused only on automating process monitoring; the remaining two tasks are still carried out by quality practitioners. This project applies a hybrid artificial intelligence technique to build a real-time SPC system in which an artificial neural network-based control chart monitoring sub-system and an expert system-based control chart alarm interpretation sub-system are integrated to implement the SPC tasks comprehensively. The system provides the quality practitioner with three kinds of information about the current process: (1) its status (in-control or out-of-control), with an alarm signaled when out-of-control; (2) plausible causes of an out-of-control situation and (3) effective actions against it. An example demonstrates that hybrid intelligence can be usefully applied to the problems of a real-time SPC system. Copyright © 2003 John Wiley & Sons, Ltd.
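The monitoring task (1) reduces, in its classical form, to checking points against 3-sigma control limits estimated from an in-control reference run. A minimal Shewhart-style sketch (the neural-network monitoring sub-system described above generalizes this; the reference data are invented):

```python
def control_limits(reference):
    """Shewhart individuals chart: 3-sigma limits computed from an
    in-control reference sample (sample standard deviation, n-1)."""
    n = len(reference)
    mean = sum(reference) / n
    sd = (sum((x - mean) ** 2 for x in reference) / (n - 1)) ** 0.5
    return mean - 3 * sd, mean + 3 * sd

def monitor(x, lcl, ucl):
    # task (1): signal an alarm when an observation falls outside the limits
    return "in-control" if lcl <= x <= ucl else "out-of-control"

reference = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]  # in-control run
lcl, ucl = control_limits(reference)
```

Tasks (2) and (3), diagnosis and corrective action, are what the expert-system sub-system layers on top of this alarm signal.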

12.
Software-defined networking (SDN) algorithms are gaining increasing interest and are making networks flexible and agile. The basic idea of SDN is to move the control plane onto one or more servers, called controllers, and limit the data plane to the forwarding network components, enabling flexible and dynamic network management. A distinctive characteristic of SDN is that the control plane can be logically centralized across many physical controllers, which makes controller deployment, that is, the controller placement problem (CPP), a vital design challenge. Meanwhile, advances in blockchain technology allow data integrity between nodes to be enhanced without a trusted third party. Building on these developments, this article designs a novel sea turtle foraging optimization algorithm for the controller placement problem (STFOA-CPP) with blockchain-based intrusion detection in an SDN environment. The major intention of the STFOA-CPP technique is to maximize lifetime, network connectivity, and load balancing while minimizing latency. The technique is inspired by the food-searching behaviour of sea turtles, which track the odour path of dimethyl sulphide (DMS) released from food sources, and it can adapt the required number of controllers and the switch-to-controller mapping to variable network traffic. Finally, the blockchain can inspect data integrity, detect significantly malicious input, and strengthen the trust relationships between nodes in the SDN. A wide-ranging experimental analysis was carried out to demonstrate the improved performance of the STFOA-CPP algorithm, and the extensive comparison study highlighted its improved outcomes over other recent approaches.

13.
With growing numbers of multi-microgrids, electric vehicles, smart homes, and smart cities connected to the Power Distribution Internet of Things (PD-IoT), greater computing resources and communication bandwidth are required for power distribution, and extreme service delays and data congestion are likely when large volumes of data and business requests arrive in an emergency. This paper presents a service scheduling method based on edge computing to balance the business load of the PD-IoT. The architecture, components, and functional requirements of the PD-IoT with an edge computing platform are proposed, followed by the structure of the service scheduling system. A novel load-balancing strategy and an ant colony algorithm are then investigated for the service scheduling method. The validity of the method is evaluated by simulation tests. Results indicate that the mean load-balancing ratio is reduced by 99.16% and the optimized offloading links can be acquired within 1.8 iterations. The computing load of the nodes in the edge computing platform can be effectively balanced through the service scheduling.
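To make the load-balancing objective concrete, here is a sketch with an assumed metric and a simple baseline scheduler; the paper does not define its ratio in the abstract, so std/mean dispersion is my assumption, and the greedy rule stands in for the ant colony search:

```python
def balance_ratio(loads):
    """Load-balancing ratio taken here as std/mean of per-node load;
    an assumed dispersion metric, not necessarily the paper's definition."""
    mean = sum(loads) / len(loads)
    var = sum((x - mean) ** 2 for x in loads) / len(loads)
    return (var ** 0.5) / mean

def greedy_schedule(tasks, n_nodes):
    """Longest-task-first greedy baseline: place each task on the currently
    least-loaded edge node (the paper's ant colony algorithm searches the
    same assignment space more thoroughly)."""
    loads = [0.0] * n_nodes
    for t in sorted(tasks, reverse=True):
        loads[loads.index(min(loads))] += t
    return loads

tasks = [7, 3, 5, 2, 8, 4]                    # toy service workloads
balanced = greedy_schedule(tasks, n_nodes=3)  # spread across 3 edge nodes
one_node = [sum(tasks), 0.0, 0.0]             # everything on a single node
```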

14.
Security is a vital parameter for conserving energy in wireless sensor networks (WSNs). Trust management in a WSN is crucial because trust underpins the collaboration needed for trustworthy data transmission; yet available routing techniques do not incorporate security into their design. This study develops a novel statistical analysis with dingo optimizer enabled reliable routing scheme (SADO-RRS) for WSNs. The proposed SADO-RRS technique aims to detect attacks and find optimal routes in the WSN. The technique derives a new statistics-based linear discriminant analysis (LDA) for attack detection, and a trust-based dingo optimizer (TBDO) algorithm is applied for optimal route selection, accomplishing secure data transmission in the WSN. The TBDO algorithm involves the derivation of a fitness function over different WSN input variables. A wide range of simulations was carried out, and the outcomes demonstrated the enhanced performance of the SADO-RRS technique.

15.
With recent developments in the Internet of Things (IoT), the amount of data collected has expanded tremendously, creating higher demand for data storage, computational capacity, and real-time processing. Cloud computing has traditionally played an important role in establishing the IoT, but fog computing has recently emerged as a complementary field thanks to its enhanced mobility, location awareness, heterogeneity, scalability, low latency, and geographic distribution. IoT networks, however, are vulnerable to attacks because of their open and shared nature, and various fog computing-based security models have been developed to protect them. A distributed intrusion detection system (IDS) architecture provides a dynamic, scalable IoT environment that can disperse centralized tasks to local fog nodes and successfully detect advanced malicious threats. In this study, we examined the time-related aspects of network traffic data and present an intrusion detection model based on a two-layer bidirectional long short-term memory (Bi-LSTM) network with an attention mechanism for traffic data classification, verified on the UNSW-NB15 benchmark dataset. The suggested model outperforms numerous leading network IDSs based on machine learning models in terms of accuracy, precision, recall, and F1 score.
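The attention mechanism that sits on top of such a Bi-LSTM can be sketched independently of the recurrent layers: score each timestep's hidden state, softmax the scores, and form a weighted context vector. The hidden states and score vector below are toy stand-ins (in the real model they come from the Bi-LSTM and a learned parameter, respectively):

```python
import numpy as np

def attention_pool(H, w):
    """Attention pooling over per-timestep hidden states H (T x d):
    tanh-based scores, softmax over timesteps, weighted sum as the
    context vector fed to the classifier."""
    scores = np.tanh(H) @ w                 # one scalar score per timestep
    a = np.exp(scores - scores.max())
    a /= a.sum()                            # softmax attention weights
    return a @ H, a

H = np.array([[0.1, 0.0], [2.0, 2.0], [0.0, 0.1]])  # toy hidden states
w = np.array([1.0, 1.0])                             # toy score vector
context, weights = attention_pool(H, w)
```

The weights make the model's time-related focus inspectable: here the salient middle timestep dominates the pooled representation.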

16.
Machine learning (ML) has become a familiar topic among decision makers in several domains, particularly healthcare. Effective design of ML models helps detect and classify the occurrence of diseases from healthcare data, and parameter tuning of the models is essential for effective classification results. This article develops a novel red colobuses monkey optimization with kernel extreme learning machine (RCMO-KELM) technique for epileptic seizure detection and classification. The proposed technique first extracts chaotic, time-domain, and frequency-domain features from the raw EEG signals, with min-max normalization employed for pre-processing. The KELM model then detects and classifies epileptic seizures from the EEG signals, and the RCMO technique tunes the KELM parameters so that the overall detection outcomes are considerably enhanced. The experimental results of the RCMO-KELM technique were examined on a benchmark dataset and inspected under several aspects; the comparative analysis reported better outcomes than recent approaches, with a score of 0.956.
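A kernel ELM trains in a single linear solve: with kernel matrix K and one-hot targets T, the output weights are beta = (I/C + K)^{-1} T. A minimal sketch with an RBF kernel on toy data (in the paper, the RCMO optimizer would tune C and gamma; they are fixed here, and the data are invented):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Pairwise RBF kernel matrix between row sets A and B."""
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d)

def kelm_fit(X, y, n_classes, C=100.0, gamma=1.0):
    """Kernel ELM closed form: beta solves (I/C + K) beta = T,
    so training is one regularized linear solve, no iterations."""
    K = rbf(X, X, gamma)
    T = np.eye(n_classes)[y]
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_train, beta, X, gamma=1.0):
    return (rbf(X, X_train, gamma) @ beta).argmax(axis=1)

# toy two-class "EEG feature" points after min-max normalization
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
beta = kelm_fit(X, y, n_classes=2)
```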

17.
The congestion dependence relationship among links is explored using microsimulation, based on data from a real road network. The work is motivated by recent innovations to improve the reliability of Dynamic Route Guidance (DRG) systems, which can be significantly enhanced by adding a function to predict congestion in the road network. The application of spatial econometric modelling to congestion prediction is also explored, using historical Traffic Message Channel (TMC) data stored in the vehicle navigation unit. TMC data take the form of a time series of geo-referenced congestion warning messages, generally collected from various traffic sources, so the prediction of future congestion can be based on the previous year of TMC data. Synthetic TMC data generated by microscopic traffic simulation for the network of Coventry are used in this study to assess the feasibility of spatial econometric modelling techniques for congestion prediction, and the results are presented at the end.

18.
With the rapid development of mobile communication technology, internet of vehicles (IoV) services, such as information services, driving safety, and traffic efficiency, are growing constantly. For services demanding low transmission delay, high data-processing capacity, and large storage capacity, deploying edge computing in the IoV allows data processing, encryption, and decision-making to be completed locally, providing real-time, highly reliable communication. The roadside unit (RSU), an important part of edge computing in the IoV, fulfils an important data-forwarding function, provides an interactive communication channel between vehicles and service providers, and can be configured with additional computing resources to meet users' computing requirements. In this study, a virtual-traffic defense strategy based on a differential game is proposed to address the leakage of users' sensitive information when an RSU is attacked. An incentive mechanism encourages service vehicles within the hotspot range to send virtual traffic to another RSU; by attracting the attackers' attention, this traffic covers the target RSU and protects the system from attack. Simulation results show that the scheme provides the optimal strategy for intelligent vehicles transmitting virtual data and maximizes users' interests.

19.
OBJECTIVE: To estimate the reduction in traffic mortality in the United States that would result from an automatic crash notification (ACN) system. METHODS: 1997 Fatality Analysis Reporting System (FARS) data from 30,875 cases of incapacitating or fatal injury with complete information on emergency medical services (EMS) notification and arrival times were analyzed, considering each case at any time to be in one of four states: (1) alive prior to notification; (2) alive after notification; (3) alive after EMS arrival; and (4) dead. For each minute after the crash, transition probabilities were calculated for each possible change of state. These data were used to construct models with (1) the number of incapacitating injuries ranging from the FARS cases up to an estimated total for the US in 1997; (2) deaths equal to the FARS total; (3) transitions to death from other states proportional to FARS totals and rates; and (4) other state transitions equal to FARS rates. The outcomes from these models were compared with outcomes from otherwise identical models in which all notification times were set to 1 min. RESULTS: FARS data estimated 12,823 deaths prior to notification, 1,800 after notification, and 14,015 between EMS arrival and 6 h. If notification times were all set to 1 min, a model using FARS data only predicted 10,703 deaths prior to notification, 2,306 after notification, and 15,208 after EMS arrival, while a model using an estimated total number of incapacitating injuries for the US predicted 9,569 deaths prior to notification, 2,261 after notification, and 15,134 after arrival. In the first model, overall mortality was reduced from 28,638 to 28,217 (421 per year, or 1.5%), while in the second model mortality was reduced to 26,964 (1,674 per year, or 6%). CONCLUSIONS: A modest but important reduction in traffic mortality should be expected from a fully functional ACN system; imperfect systems would be less effective.
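The mechanism behind the counterfactual can be sketched with a one-state toy version of the transition model: the injured remain "alive prior to notification" until EMS is notified, dying at some per-minute rate. The constant rate and the numbers below are illustrative only; the study uses per-minute transition probabilities estimated from FARS, not a constant rate:

```python
def expected_deaths(p_die_per_min, notify_minute, injured=1000.0):
    """Toy pre-notification stage of the state-transition model: apply a
    constant per-minute death probability until EMS notification occurs.
    (The real model uses minute-by-minute FARS transition rates.)"""
    alive, dead = injured, 0.0
    for _ in range(notify_minute):
        dying = alive * p_die_per_min
        dead += dying
        alive -= dying
    return dead

baseline = expected_deaths(0.02, notify_minute=8)  # slow notification
acn = expected_deaths(0.02, notify_minute=1)       # ACN notifies at 1 min
```

Shortening the pre-notification window shrinks the only interval in which these deaths can occur, which is exactly why setting all notification times to 1 min reduces predicted mortality in the models above.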

20.
In edge computing, a reasonable edge resource bidding mechanism can enable edge providers and users to obtain benefits in a relatively fair fashion. To maximize such benefits, this paper proposes a dynamic multi-attribute resource bidding mechanism (DMRBM). Most previous work relies on a third-party agent to exchange information and obtain optimal benefits, but when edge providers and users trade through third-party agents that are not entirely reliable and trustworthy, their sensitive information is prone to leakage. Moreover, protecting the privacy of edge providers and users in the dynamic pricing and transaction process is very challenging. Therefore, this paper first adopts a privacy protection algorithm to prevent the leakage of sensitive information. With the sensitive data of both edge providers and users protected, the providers' prices fluctuate within a certain range, and users choose appropriate edge providers according to their demands using the price-performance ratio (PPR) standard and the reward of lower price (LPR) standard, each derived from an evaluation function. Furthermore, this paper employs an approximate computing method to obtain an approximate solution of the DMRBM in polynomial time: the bidding process is modeled as a non-cooperative game, and the approximately optimal solution under the two standards is obtained via game theory. Extensive experiments demonstrate that the DMRBM satisfies individual rationality, budget balance, and privacy protection, and that it also increases the task offloading rate and the system benefits.
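A user-side sketch of the PPR selection step; the abstract does not define the evaluation function, so treating PPR as performance per unit price, and the offer attributes, are assumptions:

```python
def choose_provider(offers):
    """Pick the edge provider with the best price-performance ratio.
    PPR is assumed here to mean performance per unit price; the paper's
    actual evaluation functions are not given in the abstract."""
    return max(offers, key=lambda o: o["perf"] / o["price"])

# hypothetical provider bids after the privacy-protected price fluctuation
offers = [
    {"name": "edge-A", "perf": 8.0, "price": 4.0},   # PPR 2.0
    {"name": "edge-B", "perf": 9.0, "price": 3.0},   # PPR 3.0
    {"name": "edge-C", "perf": 5.0, "price": 5.0},   # PPR 1.0
]
best = choose_provider(offers)
```

In the full mechanism this choice is one player's best response inside the non-cooperative game, iterated against the providers' price updates.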
