
Design and implementation of an efficient accelerator for sparse convolutional neural network
YU Chengyu, LI Zhiyuan, MAO Wenyu, LU Huaxiang. Design and implementation of an efficient accelerator for sparse convolutional neural network[J]. CAAI Transactions on Intelligent Systems, 2020, 15(2): 323-333.
Authors: YU Chengyu, LI Zhiyuan, MAO Wenyu, LU Huaxiang
Affiliation: 1. Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China; 2. University of Chinese Academy of Sciences, Beijing 100089, China; 3. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; 4. Beijing Key Laboratory of Semiconductor Neural Network Intelligent Sensing and Computing Technology, Beijing 100083, China
Abstract: Convolutional neural networks (CNNs) are difficult to implement efficiently in hardware. Most previous CNN accelerator designs have focused on the computation-performance and bandwidth bottlenecks while ignoring the importance of CNN sparsity to accelerator design, and the few recent designs that do exploit sparsity typically struggle to balance computational flexibility, parallel efficiency, and resource overhead at the same time. In this paper, we first compare how different parallel unrolling schemes affect the ability to exploit sparsity and analyze the different methods of exploiting it. We then propose a parallel unrolling method that accelerates CNN computation by exploiting activation sparsity while achieving higher parallel efficiency and lower additional resource overhead than other designs in this field. Finally, we completed the design of a CNN accelerator based on this method and implemented it on an FPGA. The results show that, running the VGG-16 network on the ImageNet dataset, the sparse CNN accelerator built with this unrolling method improves convolution performance by 108.8% and overall performance by 164.6% compared with a dense-network design on the same device, a clear performance advantage.
Keywords: convolutional neural network; sparsity; embedded FPGA; ReLU; hardware acceleration; parallel computing; deep learning
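The abstract's key observation is that ReLU leaves many activations at exactly zero, so multiply-accumulate (MAC) operations on those activations can be skipped. The paper's actual contribution is a parallel unrolling scheme and FPGA architecture that the abstract only summarizes; as background intuition, the following is a minimal software sketch of zero-skipping accumulation. All names and data here are hypothetical illustrations, not taken from the paper.

```c
#include <stdio.h>

#define N 8  /* toy activation-vector length */

/* Dense dot product: always issues N multiply-accumulates. */
static int dot_dense(const int *act, const int *wgt, int n) {
    int acc = 0;
    for (int i = 0; i < n; i++)
        acc += act[i] * wgt[i];
    return acc;
}

/* Zero-skipping dot product: ReLU leaves many activations at 0,
 * so a MAC is issued only for nonzero activations. *macs reports
 * how many MACs were actually performed. */
static int dot_sparse(const int *act, const int *wgt, int n, int *macs) {
    int acc = 0;
    *macs = 0;
    for (int i = 0; i < n; i++) {
        if (act[i] != 0) {          /* skip zeros produced by ReLU */
            acc += act[i] * wgt[i];
            (*macs)++;
        }
    }
    return acc;
}

int main(void) {
    /* Post-ReLU activations: negative pre-activations became 0. */
    int act[N] = {3, 0, 0, 7, 0, 1, 0, 0};
    int wgt[N] = {2, 5, 1, 1, 4, 2, 3, 1};
    int macs;
    int s = dot_sparse(act, wgt, N, &macs);
    printf("result=%d, MACs issued: %d of %d\n", s, macs, N);
    printf("dense result matches: %d\n", dot_dense(act, wgt, N) == s);
    return 0;
}
```

In hardware there is no per-element `if`: accelerators typically feed the datapath a compressed list of nonzero activations plus their indices so that only useful work reaches the MAC units. The difficulty the abstract points to, balancing parallel efficiency against resource overhead, arises because the nonzeros are irregularly distributed, which makes it hard to keep many parallel units uniformly busy; the paper's unrolling method is aimed at exactly that trade-off.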