Compressing convolutional neural networks with cheap convolutions and online distillation
Abstract: Visual impairment assistance systems play a vital role in improving the standard of living for visually impaired people (VIP). With the development of deep learning technologies and assistive devices, many assistive technologies for VIP have achieved remarkable success in environmental perception and navigation. In particular, convolutional neural network (CNN)-based models have surpassed human-level recognition and achieved strong generalization ability. However, the large memory and computation consumption of CNNs has been one of the main barriers to deploying them in resource-limited systems for visual impairment assistance applications. To this end, various cheap convolutions (e.g., group convolution, depth-wise convolution, and shift convolution) have recently been used to reduce memory and computation, but they typically require a specific architecture design. Moreover, directly replacing the standard convolution with these cheap variants lowers the discriminability of the compressed networks. In this paper, we propose to use knowledge distillation to improve the performance of compact student networks built with cheap convolutions. In our case, the teacher is a network with the standard convolution, while the student is a simple transformation of the teacher architecture that requires no complicated redesign. In particular, we introduce a novel online distillation method, which constructs the teacher network online, without pre-training, and conducts mutual learning between the teacher and student networks to improve the performance of the student model. Extensive experiments demonstrate that the proposed approach simultaneously reduces the memory and computation overhead of cutting-edge CNNs on different datasets, including CIFAR-10/100 and ImageNet ILSVRC 2012, while achieving superior performance compared to previous CNN compression and acceleration methods. The code is publicly available at https://github.com/EthanZhangYC/OD-cheap-convolution.
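The memory savings of the cheap convolutions named in the abstract follow directly from their weight counts. A minimal sketch (the function name and channel sizes are illustrative, not from the paper) comparing a standard 3×3 convolution against its group and depth-wise separable counterparts:

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution with `groups` groups (bias omitted).
    Each output channel sees only c_in // groups input channels."""
    assert c_in % groups == 0 and c_out % groups == 0
    return k * k * (c_in // groups) * c_out

# Standard 3x3 convolution, 256 -> 256 channels.
standard = conv_params(256, 256, 3)                # 589,824 weights

# Group convolution with 8 groups: 8x fewer weights.
grouped = conv_params(256, 256, 3, groups=8)       # 73,728 weights

# Depth-wise separable: depth-wise 3x3 (groups == channels) + point-wise 1x1.
separable = conv_params(256, 256, 3, groups=256) + conv_params(256, 256, 1)
# 2,304 + 65,536 = 67,840 weights, roughly an 8.7x reduction

print(standard, grouped, separable)
```

The same ratios apply to multiply-accumulate counts at a fixed spatial resolution, which is why these substitutions cut both memory and computation.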
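The mutual learning between teacher and student described in the abstract is commonly formulated as each network minimizing its own cross-entropy plus a KL term toward the other's softened prediction. A minimal NumPy sketch of that general formulation, for a single sample; the temperature `T`, weight `alpha`, and all function names are illustrative assumptions, not the paper's exact losses:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    # KL divergence D(p || q) for strictly positive distributions.
    return float(np.sum(p * np.log(p / q)))

def mutual_distillation_losses(logits_t, logits_s, onehot, T=3.0, alpha=0.5):
    """Two-way (mutual) distillation losses for one sample: each network
    combines cross-entropy on the label with KL toward the other's
    softened output. Both losses are returned so the two networks can
    be updated jointly, as in online distillation."""
    p_t, p_s = softmax(logits_t, T), softmax(logits_s, T)
    ce_t = -float(np.sum(onehot * np.log(softmax(logits_t))))
    ce_s = -float(np.sum(onehot * np.log(softmax(logits_s))))
    # T*T rescales KL gradients to match the cross-entropy term.
    loss_t = (1 - alpha) * ce_t + alpha * T * T * kl(p_s, p_t)
    loss_s = (1 - alpha) * ce_s + alpha * T * T * kl(p_t, p_s)
    return loss_t, loss_s

# Example: student mimics teacher on a 3-class problem.
lt, ls = mutual_distillation_losses([2.0, 0.5, -1.0], [1.0, 1.0, 0.0],
                                    np.array([1.0, 0.0, 0.0]))
print(lt, ls)
```

Because the teacher here is constructed online rather than pre-trained, both losses are minimized simultaneously during training instead of freezing the teacher first.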
Keywords:Cheap convolution  Knowledge distillation  Online distillation  CNN compression and acceleration
Indexed in ScienceDirect and other databases.