     

Automated Tensor Decomposition to Accelerate Convolutional Neural Networks

Citation: SONG Bing-Bing, ZHANG Hao, WU Zi-Feng, LIU Jun-Hui, LIANG Yu, ZHOU Wei. Automated tensor decomposition to accelerate convolutional neural networks[J]. Journal of Software, 2021, 32(11): 3468-3481.
Authors: SONG Bing-Bing  ZHANG Hao  WU Zi-Feng  LIU Jun-Hui  LIANG Yu  ZHOU Wei
Affiliation: School of Information Science and Engineering, Yunnan University, Kunming 650504, China; National Pilot School of Software, Yunnan University, Kunming 650504, China; Engineering Research Center of Cyberspace, Yunnan University, Kunming 650504, China
Funding: National Natural Science Foundation of China (61762089, 61863036, 61663047)

Abstract: In recent years, convolutional neural networks (CNNs) have demonstrated strong performance and are widely used in many fields. Because CNNs have a large number of parameters and demand substantial storage and computing resources, they are difficult to deploy on resource-constrained devices, so compressing and accelerating CNNs has become an urgent problem. With the research and development of automated machine learning (AutoML), AutoML has profoundly influenced the development of neural networks. Inspired by this, this study proposes two automated CNN acceleration algorithms, one based on parameter estimation and one based on genetic algorithms. The algorithms automatically compute the optimal accelerated CNN model within a given accuracy-loss bound, eliminating the error introduced by manually selecting ranks in tensor decomposition and effectively improving the compression and acceleration of CNNs. In rigorous tests on the MNIST and CIFAR-10 datasets, compared with the original network, accuracy on MNIST drops only slightly, by 0.35%, while the model's running time improves by a factor of 4.1; on CIFAR-10, accuracy drops by 5.13% while running time improves by a factor of 0.8.

Keywords: tensor decomposition; convolutional neural network; automated machine learning; neural network compression; neural network acceleration

Received: 2019-11-01
Revised: 2020-02-05
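The core idea the abstract describes, replacing a layer's weight tensor with a low-rank factorization whose rank is chosen automatically to stay within an accuracy-loss budget, can be illustrated with a minimal sketch. The sketch below uses a truncated SVD of a 2-D weight matrix as a simple stand-in; the paper's method operates on full convolutional weight tensors with tensor decompositions, and the function name `choose_rank` and the error threshold are illustrative assumptions, not the paper's API.

```python
import numpy as np

def choose_rank(weight, max_rel_error=0.05):
    """Pick the smallest rank whose truncated-SVD reconstruction keeps
    the relative Frobenius error under the budget (an illustrative
    stand-in for automated rank selection in tensor decomposition)."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    total = np.sqrt(np.sum(s ** 2))
    for r in range(1, len(s) + 1):
        # Energy in the discarded singular values bounds the error.
        err = np.sqrt(np.sum(s[r:] ** 2)) / total
        if err <= max_rel_error:
            return r, u[:, :r] * s[:r], vt[:r, :]
    return len(s), u * s, vt

rng = np.random.default_rng(0)
# A synthetic near-low-rank "weight matrix" (e.g., a flattened kernel).
w = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 128))
r, a, b = choose_rank(w, max_rel_error=0.05)
# The layer is now represented by two small factors a (64 x r) and
# b (r x 128) instead of the full 64 x 128 matrix, reducing both
# parameter count and multiply-accumulate cost when r is small.
print(r, a.shape, b.shape)
```

The paper's two algorithms differ in how this search is driven: parameter estimation predicts a good rank directly, while the genetic-algorithm variant evolves candidate ranks and keeps those whose accuracy loss stays inside the given bound.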
This article is indexed by Wanfang Data and other databases.