EdgeGAN: One-way mapping generative adversarial network based on the edge information for unpaired training set
Affiliation: 1. College of Electrical and Information Engineering, Hunan University, Changsha 410082, China; 2. National Engineering Laboratory for Robot Vision Perception and Control, Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China; 3. Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2E1, Canada; 4. Department of Mechanical Engineering, York University, Toronto, ON M3J 1P3, Canada
Abstract: Image conversion has attracted increasing attention because of its practical applications. This paper proposes a lightweight network, built on the generative adversarial network (GAN) and a fixed-parameter edge-detection convolution kernel, that performs one-way image mapping from unpaired training sets. Compared with the cycle-consistent adversarial network (CycleGAN), the proposed network has a simpler structure, fewer parameters (only 37.48% of those in CycleGAN), and a lower training cost (only 35.47% of the GPU memory usage and 17.67% of the single-iteration time of CycleGAN). Remarkably, cycle consistency is no longer mandatory for keeping the image content consistent before and after mapping. The network achieves notable results on several image translation tasks, and its effectiveness is demonstrated through representative experiments. In a quantitative classification evaluation based on VGG-16, the proposed algorithm also achieves superior performance.
Keywords: Lightweight generative adversarial network; Image conversion; Image-to-image translation; Unpaired image-to-image translation
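
The abstract's central idea is that a fixed-parameter edge-detection convolution kernel, rather than a cycle-consistency constraint, keeps the content consistent across the one-way mapping. The sketch below is a minimal illustration of that idea, not the paper's published code: it assumes PyTorch, uses Sobel kernels as the fixed (non-trainable) edge extractor, and defines a hypothetical edge_consistency_loss that penalizes the L1 distance between the edge maps of a source image and its translation. The kernel choice, the loss form, and all names are assumptions for illustration only.

# Minimal sketch (assumption: PyTorch; kernel choice and loss form are illustrative,
# not the paper's implementation). A fixed (non-trainable) edge-detection kernel is
# applied to both the source image and the translated image, supplying a
# content-consistency signal for a one-way generator without a second (cycle) generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedEdgeExtractor(nn.Module):
    """Fixed Sobel kernels; stored as a buffer, so the optimizer never updates them."""
    def __init__(self):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.],
                                [-2., 0., 2.],
                                [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        # Shape (2, 1, 3, 3): two output channels (x- and y-gradient), one input channel.
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)
        self.register_buffer("kernel", kernel)  # buffer, not a learnable parameter

    def forward(self, img):
        # Simple channel mean as grayscale conversion before edge extraction.
        gray = img.mean(dim=1, keepdim=True)
        grads = F.conv2d(gray, self.kernel, padding=1)
        # Gradient magnitude as the edge map.
        return torch.sqrt(grads.pow(2).sum(dim=1, keepdim=True) + 1e-6)

def edge_consistency_loss(edge_extractor, real_src, fake_tgt):
    """L1 distance between edge maps of the source image and its translation."""
    return F.l1_loss(edge_extractor(fake_tgt), edge_extractor(real_src))

# Usage sketch: this term would be added to the usual adversarial generator loss
# with some weight; the generator and weight are placeholders, not from the paper.
if __name__ == "__main__":
    edge_extractor = FixedEdgeExtractor()
    real_src = torch.rand(1, 3, 256, 256)   # dummy source-domain image
    fake_tgt = torch.rand(1, 3, 256, 256)   # stand-in for generator(real_src)
    print(edge_consistency_loss(edge_extractor, real_src, fake_tgt).item())

Registering the Sobel kernels as a buffer rather than a parameter is what makes the edge extractor "fixed-parameter": it shapes the generator's loss surface but is never updated during training, which is how a single-direction generator can preserve content without a cycle constraint.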
This article is indexed in ScienceDirect and other databases.