Similar Documents
20 similar documents found.
1.
Deep learning has attracted considerable attention and has been applied successfully in many areas, such as bioinformatics, image processing, game playing, and computer security. On the other hand, deep learning usually requires large amounts of training data, which a single owner may not be able to provide. As data volumes grow, it is common for users to store their data with a third-party cloud. To preserve confidentiality, the data are usually stored in encrypted form. Applying deep learning to such datasets, owned by multiple data owners and hosted on the cloud, raises two challenges: (i) because the data are encrypted under different keys, all operations, including those on intermediate results, must be secure; and (ii) the computational and communication costs borne by the data owners should be kept minimal. In this work, we propose two schemes to solve these problems. We first present a basic scheme based on multi-key fully homomorphic encryption (MK-FHE), and then an advanced scheme based on a hybrid structure that combines a double decryption mechanism with fully homomorphic encryption (FHE). We also prove that both multi-key privacy-preserving deep learning schemes over encrypted data are secure.
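To make the setting concrete, below is a minimal single-key sketch using the TenSEAL library: a data owner encrypts a feature vector under CKKS and the cloud evaluates a linear layer on the ciphertext. The paper's multi-key scheme (MK-FHE with double decryption) is substantially more involved; this only illustrates inference on encrypted inputs, and all parameter choices are illustrative.

```python
# Minimal single-key sketch with TenSEAL (pip install tenseal); the paper's
# multi-key, multi-owner protocol is more involved than this illustration.
import tenseal as ts

# CKKS context: approximate arithmetic over encrypted real numbers.
context = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

weights = [0.1, -0.2, 0.3]                        # plaintext weights on the cloud
enc_x = ts.ckks_vector(context, [1.0, 2.0, 3.0])  # data owner encrypts features

enc_score = enc_x.dot(weights)   # homomorphic dot product on the ciphertext
print(enc_score.decrypt())       # only the secret-key holder can decrypt
```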

2.
This paper proposes a network security processing model for cloud environments. Each cloud server in the model runs its own intrusion detection system, and all servers share a single anomaly management platform that receives and processes alerts and manages logs. The model uses dynamic alert-level adjustment and attack information sharing to minimize the false negative rate and the likelihood that multiple servers suffer the same attack, effectively improving detection efficiency and the overall security level of the system.
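A toy sketch of the shared alert-management idea follows; all class and method names are hypothetical, not from the paper, and the alert-level rule is a stand-in for the dynamic adjustment described above.

```python
# Hypothetical sketch: per-server IDS instances report to one shared platform,
# which logs alerts and raises the alert level as a signature recurs.
from collections import defaultdict

class AlertPlatform:
    """Receives alerts from per-server IDS instances, keeps a log, and shares
    attack signatures so later reporters start from a higher alert level."""
    def __init__(self):
        self.log = []
        self.seen_attacks = defaultdict(int)

    def report(self, server_id, signature, severity):
        self.log.append((server_id, signature, severity))
        self.seen_attacks[signature] += 1
        # Dynamic adjustment: the more servers report a signature,
        # the higher the shared alert level becomes (capped at 10).
        return min(severity + self.seen_attacks[signature], 10)

platform = AlertPlatform()
print(platform.report("vm-01", "syn-flood", severity=3))  # 4
print(platform.report("vm-02", "syn-flood", severity=3))  # 5: shared knowledge
```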

3.
苏志达  祝跃飞  刘龙 《计算机应用》2017,37(6):1650-1656
Traditional Android malware detection techniques suffer from low detection accuracy and fail to identify malware that uses repackaging or code obfuscation. To address these problems, the DeepDroid algorithm was designed and implemented. First, static and dynamic features of Android applications are extracted and combined into feature vectors. Then, a deep belief network (DBN) is trained on the collected training set to build the detection network. Finally, the trained network is used to classify the Android applications under test. Experimental results show that, on the same test set, DeepDroid's accuracy is 3.96 percentage points higher than that of the support vector machine (SVM) algorithm, 12.16 percentage points higher than Naive Bayes, and 13.62 percentage points higher than K-nearest neighbors (KNN). By combining static and dynamic features with a hybrid static-dynamic detection method, DeepDroid compensates for the limited code coverage of static analysis and the high false positive rate of dynamic analysis; using a DBN for feature recognition keeps network training fast while maintaining high detection accuracy.
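A DBN is typically built from stacked restricted Boltzmann machines; the sketch below approximates that pipeline with scikit-learn's BernoulliRBM layers feeding a logistic classifier. The feature matrix is random here and stands in for the extracted static-plus-dynamic feature vectors; this is not the authors' implementation.

```python
# Rough DBN stand-in: two stacked RBMs for unsupervised feature learning,
# then logistic regression on top. Toy binary features replace the real
# static + dynamic Android feature vectors.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

X = np.random.randint(0, 2, size=(200, 64)).astype(float)  # toy feature vectors
y = np.random.randint(0, 2, size=200)                       # 1 = malicious

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.score(X, y))
```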

4.
Virtual machines (VMs) offer simple and practical mechanisms to address many of the manageability problems of leveraging heterogeneous computing resources. VM live migration is an important feature of virtualization in cloud computing: it allows administrators to transparently tune the performance of the computing infrastructure. However, VM live migration may open the door to security threats. Classic anomaly detection schemes such as Local Outlier Factors (LOF) fail to detect anomalies during VM live migration. To tackle these critical security issues, we propose an adaptive scheme that mines data from the cloud infrastructure to detect abnormal statistics when VMs are migrated to new hosts. Our scheme extends the classic LOF approach with novel dimension reasoning (DR) rules, called DR-LOF, to identify the possible sources of anomalies. We also incorporate Symbolic Aggregate approXimation (SAX) to exploit the timing information that LOF ignores. In addition, we implement the scheme with an adaptive procedure to reduce the chance of performance instability. Unlike LOF, which fails to detect anomalies during VM live migration, our scheme not only detects anomalies but also identifies their possible sources, giving cloud computing operators important clues for pinpointing and clearing them. It further outperforms other classic clustering tools in WEKA (Waikato Environment for Knowledge Analysis), with higher detection rates and lower false alarm rates. The scheme can thus serve as a novel anomaly detection tool to improve the security framework of VM management for cloud computing.
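A simplified sketch of the core idea follows: detect outliers with LOF, then attribute each outlier to the dimensions that deviate most. The per-dimension z-score attribution is a crude stand-in for the paper's DR rules, and the SAX timing analysis is omitted.

```python
# Detect outliers in migration statistics with LOF, then point at the most
# deviant dimension (a simplified stand-in for the paper's DR-LOF rules).
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))          # e.g., CPU, memory, network, disk stats
X[0] = [8.0, 0.1, 0.0, -0.2]           # inject a CPU anomaly during migration

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)            # -1 marks outliers

mu, sigma = X.mean(axis=0), X.std(axis=0)
for i in np.where(labels == -1)[0]:
    z = np.abs((X[i] - mu) / sigma)    # per-dimension deviation
    print(f"sample {i}: suspect dimension {int(z.argmax())}")
```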

5.
This paper reports an application of deep learning to the anomaly detection of defects on concrete structures, to facilitate the visual inspection of civil infrastructure. A convolutional autoencoder was trained as a reconstruction-based model on defect-free images, so that defects can be detected rapidly and reliably in large volumes of image data. Training is unsupervised, with no labels needed, so it requires no prior knowledge and saves the considerable time otherwise spent on label preparation. The anomaly detector is trained to minimize the reconstruction error of defect-free images; defective regions therefore yield high reconstruction errors, which in turn reveal the locations of defects. The assessment shows that the proposed technique is robust and adapts to defects across a wide range of scales. A comparison with segmentation results produced by classical automatic methods shows that the anomaly map outperforms them in precision, recall, F1 measure, and F2 measure, without severe under- or over-segmentation. Moreover, rather than being merely a binary map, each pixel of the anomaly map carries an anomaly score that acts as a risk indicator for alerting inspectors wherever defects on concrete structures are detected.
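A minimal convolutional autoencoder of this kind is sketched below in Keras, trained only on defect-free patches; the patch size and layer widths are illustrative choices, not the paper's architecture.

```python
# Minimal reconstruction-based anomaly detector: train an autoencoder on
# defect-free patches, then score pixels by reconstruction error.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(defect_free_patches, defect_free_patches, epochs=50)

# At test time, the per-pixel anomaly score is the reconstruction error:
# anomaly_map = tf.abs(test_patches - model(test_patches))
```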

6.
In recent times, the machine learning (ML) community has come to regard deep learning (DL) as the gold standard. DL has gradually become the most widely used computational approach in ML, achieving remarkable results on complex cognitive tasks that match or even surpass human performance. A key strength of DL is its ability to learn from vast amounts of data. In recent years the field has expanded rapidly and found successful applications in many conventional areas, outperforming established ML techniques in domains such as cloud computing, robotics, and cybersecurity. Cloud computing itself has become crucial with the constant growth of IoT networks, and it remains the preferred approach for deploying sophisticated computational applications that stress large-scale data processing. Nevertheless, the cloud falls short for cutting-edge IoT applications, which produce enormous amounts of data and demand fast response times and stronger privacy. The latest trend is therefore to adopt a decentralized, distributed architecture and move processing and storage resources to the network edge, eliminating the cloud bottleneck by placing data processing and analytics closer to the consumer. ML is increasingly deployed at the network edge to strengthen applications, in particular by reducing latency and energy consumption while improving resource management and security. Achieving optimal efficiency, space, reliability, and safety with minimal power usage will require intensive research into developing and applying ML algorithms in this setting. This comprehensive examination of prevalent computing paradigms highlights recent advances arising from the integration of ML with emerging computing models, and addresses the underlying open research issues along with potential future directions. Because this convergence is expected to open up new opportunities for both interdisciplinary research and commercial applications, we present a thorough assessment of the most recent work on combining deep learning with cloud, fog, edge, and IoT computing paradigms, draw attention to the main issues, and outline possible lines of future research. We hope this survey will spur additional study and contributions in this exciting area.

7.
Mobile edge computing is one way to meet robots' demand for computation-heavy tasks. Traditional approaches based on intelligent algorithms or convex optimization require long iteration times. Deep reinforcement learning can produce a solution in a single forward pass, but existing solutions only handle a fixed number of robots. Building on an analysis of deep reinforcement learning, this work regularizes the input before the input layer of the network and adds a convolutional layer after the output layer, so that the network adapts to the offloading demands of a dynamically changing number of mobile robots. Simulation experiments comparing the proposed algorithm with an adaptive genetic algorithm and with standard reinforcement learning verify its effectiveness and feasibility.
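The sketch below illustrates our reading of that architecture in PyTorch: per-robot inputs are zero-padded to a fixed width ("input regularization") and a 1-D convolution after the output layer yields one offloading score per robot, so the same network serves a varying robot count. All layer sizes are illustrative assumptions, not the paper's design.

```python
# Schematic network for variable robot counts: pad inputs to a fixed width,
# then a 1x1 Conv1d over the robot axis produces per-robot offloading scores.
import torch
import torch.nn as nn

MAX_ROBOTS, FEAT = 8, 4

class OffloadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(MAX_ROBOTS * FEAT, 128), nn.ReLU(),
            nn.Linear(128, MAX_ROBOTS * 16), nn.ReLU(),
        )
        # convolution over the robot axis: one offloading score per robot
        self.head = nn.Conv1d(16, 1, kernel_size=1)

    def forward(self, tasks):                       # tasks: (batch, n, FEAT)
        b, n, _ = tasks.shape
        padded = torch.zeros(b, MAX_ROBOTS, FEAT)
        padded[:, :n] = tasks                       # pad to the fixed robot count
        h = self.body(padded.flatten(1)).view(b, 16, MAX_ROBOTS)
        return self.head(h).squeeze(1)[:, :n]       # scores for real robots only

net = OffloadNet()
print(net(torch.rand(2, 5, FEAT)).shape)            # torch.Size([2, 5])
```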

8.
In this paper, an unsupervised learning-based approach is presented for fusing bracketed exposures into high-quality images, avoiding the need for interim conversion to intermediate high dynamic range (HDR) images. An objective quality measure, the colored multi-exposure fusion structural similarity index measure (MEF-SSIMc), is optimized to update the network parameters, so unsupervised learning is realized without any ground truth (GT) images. Furthermore, a reference-free gradient fidelity term is added to the loss function to recover and supplement image information in the fused result. As the experiments show, the proposed algorithm performs well in terms of structure, texture, and color. In particular, it maintains the order of variations in the original image brightness, suppresses edge blurring and halo effects, and produces good visual results with strong quantitative evaluation scores. Our code will be publicly available at https://github.com/cathying-cq/UMEF.
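The sketch below shows one way such a gradient fidelity term can be written in PyTorch: penalize the difference between the fused image's Sobel gradients and the strongest gradients anywhere in the exposure stack. This is our reading of the term, not necessarily the authors' exact loss (their code is at the GitHub link above).

```python
# One plausible gradient-fidelity term: match the fused image's gradient
# magnitude to the per-pixel maximum gradient across the exposure stack.
import torch
import torch.nn.functional as F

def sobel_grad(img):                       # img: (B, 1, H, W), grayscale
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    ky = kx.transpose(2, 3)                # Sobel-y is the transpose of Sobel-x
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def gradient_fidelity(fused, exposures):   # exposures: (B, N, 1, H, W)
    target = torch.stack([sobel_grad(exposures[:, i])
                          for i in range(exposures.shape[1])],
                         dim=1).max(dim=1).values
    return F.l1_loss(sobel_grad(fused), target)

loss = gradient_fidelity(torch.rand(2, 1, 64, 64), torch.rand(2, 3, 1, 64, 64))
print(loss.item())
```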

9.
10.
郭晓东  郝思达  王丽芳 《计算机应用研究》2023,40(9):2803-2807+2814
Vehicular edge computing allows vehicles to offload computational tasks to edge servers, meeting the explosive growth in vehicles' demand for computing resources. However, how to make offloading decisions and allocate computing resources remains a key open problem, and task offloading by moving vehicles over continuous time is rarely considered, especially the randomness of task arrivals. To address these issues, a dynamic vehicular edge computing model is established and formulated as a Markov decision process with a 7-dimensional state space and a 2-dimensional action space, and a distributed deep reinforcement learning model is built to solve it. In addition, to counter the poor performance caused by the hybrid discrete-continuous decision problem, the input layer is nested with a first-stage decision network, yielding a staged-decision deep reinforcement learning algorithm. Simulation results show that, compared with the baseline algorithms, the proposed algorithm keeps energy consumption low and has clear advantages in task completion rate, latency, and reward, providing an effective solution to offloading decisions and computing-resource allocation in vehicular edge computing.
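One common way to handle a hybrid discrete-continuous action space is a shared trunk with two heads, one producing discrete offloading logits and one a continuous resource fraction; the toy sketch below illustrates this pattern, not the paper's staged-decision network, and all dimensions are illustrative.

```python
# Toy hybrid-action policy: a discrete head (offload or not) plus a
# continuous head (fraction of computing resources), on a shared trunk.
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    def __init__(self, state_dim=7):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.discrete = nn.Linear(64, 2)                     # logits: local vs. offload
        self.continuous = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, state):
        h = self.trunk(state)
        return self.discrete(h), self.continuous(h)

policy = HybridPolicy()
logits, frac = policy(torch.rand(4, 7))
print(logits.shape, frac.shape)   # torch.Size([4, 2]) torch.Size([4, 1])
```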

11.
Cloud computing is emerging as an increasingly important service-oriented computing paradigm. Management is key to providing accurate service availability and performance data, as well as to enabling real-time provisioning that automatically supplies the capacity needed to meet service demands. In this paper, we present a unified reinforcement learning approach, namely URL, to automate the configuration of virtualized machines and of the appliances running in them. The approach lends itself to real-time autoconfiguration of clouds. It also makes it possible to adapt the VM resource budget and appliance parameter settings to cloud dynamics and changing workloads, providing service quality assurance. In particular, the approach has the flexibility to trade off system-wide utilization objectives against appliance-specific SLA optimization goals. Experimental results on Xen VMs with various workloads demonstrate the effectiveness of the approach: it can drive the system into an optimal or near-optimal configuration within a few trial-and-error iterations.
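In the same spirit, the toy below runs tabular Q-learning over a VM's vCPU budget; the state space, actions, and reward are made up for illustration and are far simpler than the paper's unified formulation.

```python
# Tabular Q-learning toy: learn which vCPU budget balances SLA and utilization.
import random

ACTIONS = [-1, 0, +1]                 # shrink, keep, or grow the vCPU budget
Q = {(s, a): 0.0 for s in range(1, 9) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(vcpus):                    # hypothetical SLA-vs-utilization trade-off
    return -abs(vcpus - 4) + (0 if vcpus <= 6 else -2)

state = 1
for step in range(2000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    nxt = min(max(state + a, 1), 8)   # apply the action, clamp to valid budgets
    Q[(state, a)] += alpha * (reward(nxt) + gamma *
                              max(Q[(nxt, x)] for x in ACTIONS) - Q[(state, a)])
    state = nxt

print("learned vCPU setting:", state)  # hovers near the sweet spot (4)
```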

12.
The task of object detection is to accurately and efficiently recognize and localize instances of a large number of predefined object categories in images. With the wide adoption of deep learning, both the accuracy and the efficiency of object detection have improved considerably, but deep-learning-based detection still faces key challenges: improving and optimizing the performance of mainstream detection algorithms, raising detection accuracy for small objects, detecting many object categories, and building lightweight detection models. Based on an extensive literature survey, this paper addresses these challenges as follows: methods for improving and optimizing mainstream detectors are analyzed from the perspective of improving and combining two-stage and one-stage detection algorithms; methods for raising small-object detection accuracy are analyzed in terms of backbone networks, enlarging the visual receptive field, feature fusion, cascaded convolutional neural networks, and model training strategies; methods for multi-category object detection are analyzed from the perspectives of training strategy and network structure; and methods for lightweight detection models are analyzed from the perspective of network structure. In addition, the common benchmark datasets for object detection are described in detail, the performance of representative algorithms in the field is compared across four aspects, and open problems and future research directions are discussed. Object detection remains a favored hot topic in computer vision and pattern recognition; more accurate and efficient algorithms continue to appear, and the field will keep developing in many directions.

13.
The volunteer computing paradigm, along with the tailored use of peer-to-peer communication, has recently proven capable of solving a wide range of data-intensive problems in distributed scenarios. The Mining@Home framework is based on these paradigms and has been implemented to run a wide range of distributed data mining applications. The efficiency and scalability of the architecture can be fully exploited when the overall task can be partitioned into distinct jobs executed in parallel, and when input data can be reused, which naturally leads to the use of data cachers. This paper explores the opportunities Mining@Home offers for the discovery of classifiers through the bagging approach: multiple learners compute models from the same input data, from which a final model with high statistical accuracy is extracted. The analysis focuses on experiments performed in a real distributed environment, enriched with simulation-based assessment (to evaluate very large environments) and with an analytical investigation based on the iso-efficiency methodology. An extensive set of experiments allowed us to analyze a number of heterogeneous scenarios with different problem sizes, which helps improve performance by appropriately tuning the number of workers and the number of interconnected domains.
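The bagging scheme itself is easy to reproduce on a single machine with scikit-learn, as sketched below; Mining@Home distributes exactly this kind of training, with each bootstrap learner mapped to an independent job and the shared input served by data cachers. The dataset here is synthetic.

```python
# Single-machine analogue of the distributed bagging workload: each of the
# 25 learners trains on a bootstrap sample of the same input data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bag = BaggingClassifier(n_estimators=25, random_state=0).fit(X_tr, y_tr)
print(bag.score(X_te, y_te))   # the aggregated model's accuracy
```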

14.
15.
Remote sensing image analysis has broad application prospects in land resource management, ocean monitoring, and other fields. Deep learning has made breakthrough progress in image processing; however, remote sensing images are inherently large, and their targets small and densely packed, so deep learning methods designed for ordinary images commonly suffer from inaccurate localization, difficulty in detecting small targets, and poor accuracy on large images. To address these problems, this paper proposes a new object detection algorithm for remote sensing images, DFS. Compared with traditional machine learning methods, DFS introduces a new dimension clustering module, customized loss functions, and a sliding-window detection mechanism. The dimension clustering module optimizes customized anchor priors through a clustering mechanism, improving localization accuracy; the customized loss function improves detection accuracy for small targets such as ships; and sliding-window detection addresses the low accuracy of detection on large images. Comparative experiments on classic remote sensing datasets show that DFS improves mAP by 256% over YOLOv2 and greatly improves both small-target detection efficiency and large-image detection effectiveness.
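YOLO-style dimension clustering of the kind the module builds on is sketched below: k-means over ground-truth box shapes with 1 - IoU as the distance, yielding customized anchor priors. The box data here is random and stands in for the remote-sensing labels; the paper's exact clustering mechanism may differ.

```python
# K-means over (width, height) pairs with 1 - IoU distance to derive anchors.
import numpy as np

def iou_wh(boxes, anchors):            # boxes: (N, 2), anchors: (K, 2)
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

rng = np.random.default_rng(0)
boxes = rng.uniform(4, 64, size=(500, 2))       # toy (width, height) pairs
anchors = boxes[rng.choice(len(boxes), 5, replace=False)]

for _ in range(50):
    assign = (1 - iou_wh(boxes, anchors)).argmin(axis=1)
    anchors = np.array([boxes[assign == k].mean(axis=0) if np.any(assign == k)
                        else anchors[k] for k in range(5)])

print(np.round(anchors, 1))            # the learned anchor priors
```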

16.
陈杰  张挺  杜奕 《计算机应用》2020,40(4):1231-1236
Traditional porous media reconstruction methods such as multiple-point statistics (MPS) must scan the training image many times and then perform complex probability calculations to obtain a simulation result, which makes reconstruction slow and the simulation process complicated. This paper therefore proposes a reconstruction method based on adaptive deep transfer learning. First, a deep neural network extracts complex features from the training images of the porous medium; then an adaptive layer is added within deep transfer learning to reduce the difference in data distribution between the training data and the prediction data; finally, adaptive transfer learning replicates these features to obtain reconstruction results structurally similar to the real training data. In comparative experiments against MPS, a typical porous media reconstruction method, the proposed method achieves better reconstruction quality in terms of multiple-point connectivity curves, variogram curves, and porosity, while the average reconstruction time drops from 840 s to 166 s, the average CPU usage falls from 98% to 20%, and the average memory usage falls by 69%. The proposed method thus significantly improves the efficiency of porous media reconstruction while delivering better reconstruction quality.
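Adaptive layers of this kind commonly penalize a maximum mean discrepancy (MMD) between source and target feature distributions; the sketch below shows a standard Gaussian-kernel MMD in PyTorch, which is our assumption about the adaptation term rather than the paper's exact formulation.

```python
# Gaussian-kernel MMD between two feature batches: larger when the
# training-image and prediction-image feature distributions differ.
import torch

def mmd(x, y, sigma=1.0):
    def k(a, b):                        # Gaussian kernel matrix
        d = torch.cdist(a, b) ** 2
        return torch.exp(-d / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

source_feats = torch.randn(64, 128)         # features of the training images
target_feats = torch.randn(64, 128) + 0.5   # shifted: different distribution
print(mmd(source_feats, target_feats).item())

# Hypothetical combined objective for the adaptive layer:
# total_loss = reconstruction_loss + lam * mmd(source_feats, target_feats)
```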

17.
In many domains, the previous decade was characterized by increasing data volumes and growing complexity of data analyses, creating new demands for batch processing on distributed systems. Effective operation of these systems is challenging when facing uncertainties about the performance of jobs and tasks under varying resource configurations, e.g., for scheduling and resource allocation. We survey predictive performance modeling (PPM) approaches to estimate performance metrics such as execution duration, required memory, or wait times of future jobs and tasks based on past performance observations. We focus on non-intrusive methods, i.e., methods that can be applied to any workload without modification, since the workload is usually a black box from the perspective of the systems managing the computational infrastructure. We classify and compare sources of performance variation, predicted performance metrics, limitations and challenges, required training data, use cases, and the underlying prediction techniques. We conclude by identifying several open problems and pressing research needs in the field.
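As a minimal example of the kind of non-intrusive predictor the survey covers, the sketch below regresses a job's runtime on coarse, externally observable features; the feature set and data are made up for illustration.

```python
# Toy non-intrusive runtime predictor: regress duration on job descriptors
# that can be observed without modifying the workload itself.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# features: input size (GB), allocated cores, allocated memory (GB)
X = rng.uniform([1, 1, 2], [100, 32, 256], size=(500, 3))
runtime = 30 + 4 * X[:, 0] / X[:, 1] + rng.normal(0, 5, 500)  # synthetic truth

X_tr, X_te, y_tr, y_te = train_test_split(X, runtime, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2:", round(model.score(X_te, y_te), 3))
```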

18.
Object detection is one of the core and most challenging problems in computer vision research. With the wide application of deep learning, the efficiency and accuracy of object detection have gradually improved, in some respects reaching or even surpassing the discrimination ability of the human eye. However, because small objects cover little area in an image and have low resolution and indistinct features, existing detection methods perform poorly on them, which has motivated many methods designed specifically to improve small-object detection…

19.
Direct word discovery from audio speech signals is a very difficult and challenging problem for a developmental robot. Human infants can discover words directly from speech signals, and to understand this developmental capability through a constructive approach, it is very important to build a machine learning system that can autonomously acquire knowledge about words and phonemes, i.e., a language model and an acoustic model, in an unsupervised manner. To achieve this, the nonparametric Bayesian double articulation analyzer (NPB-DAA) combined with a deep sparse autoencoder (DSAE) is proposed in this paper. The NPB-DAA was previously proposed to achieve fully unsupervised direct word discovery from speech signals; although it outperformed pre-existing unsupervised learning methods, its performance remained unsatisfactory. In this paper, we integrate the NPB-DAA with the DSAE, a neural network model that can be trained in an unsupervised manner, and demonstrate its performance in an experiment on direct word discovery from auditory speech signals. The experiment shows that the combined method, the NPB-DAA with the DSAE, outperforms pre-existing unsupervised learning methods and achieves state-of-the-art performance. The proposed method also outperforms several standard speech-recognizer-based methods that use true word dictionaries.
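A minimal deep sparse autoencoder is sketched below in Keras: an L1 activity penalty on the bottleneck encourages sparse codes. The feature dimensions are illustrative assumptions; in the paper, the DSAE's codes feed the NPB-DAA in place of raw acoustic features.

```python
# Minimal DSAE: reconstruct acoustic frames through a sparsity-penalized
# bottleneck; training is unsupervised (input reconstructs itself).
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

dsae = models.Sequential([
    layers.Input(shape=(39,)),                       # e.g., MFCC frame features
    layers.Dense(64, activation="relu"),
    layers.Dense(16, activation="relu",
                 activity_regularizer=regularizers.l1(1e-4)),  # sparse code
    layers.Dense(64, activation="relu"),
    layers.Dense(39, activation="linear"),
])
dsae.compile(optimizer="adam", loss="mse")
# dsae.fit(frames, frames, epochs=30)   # unsupervised: reconstruct the input
```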

20.