Fund:National Natural Science Foundation of China (62076073)

Received:2022-01-07
Revised:2022-04-30

Image inpainting algorithm of multi-scale generative adversarial network based on multi-feature fusion
Gang CHEN, Yongwei LIAO, Zhenguo YANG, Wenying LIU. Image inpainting algorithm of multi-scale generative adversarial network based on multi-feature fusion[J]. Journal of Computer Applications, 2023, 43(2): 536-544.
Authors:Gang CHEN  Yongwei LIAO  Zhenguo YANG  Wenying LIU
Affiliation:School of Computer Science and Technology,Guangdong University of Technology,Guangzhou Guangdong 510006,China
Cyberspace Security Research Center,Peng Cheng Laboratory,Shenzhen Guangdong 518005,China
Abstract:To address the problems of the Multi-scale Generative Adversarial Networks Image Inpainting algorithm (MGANII), namely unstable training during image inpainting, poor structural consistency of the inpainted images, and insufficient details and textures, an image inpainting algorithm based on a multi-scale generative adversarial network with multi-feature fusion was proposed. Firstly, to tackle the poor structural consistency and the insufficient details and textures, a Multi-Feature Fusion Module (MFFM) was introduced into the traditional generator, and a perception-based feature reconstruction loss function was introduced to improve the feature extraction ability of the dilated convolutional network, thereby supplying more details and texture features for the inpainted image. Then, a perception-based feature matching loss function was introduced into the local discriminator to enhance its discrimination ability, thereby improving the structural consistency of the inpainted image. Finally, a risk penalty term was introduced into the adversarial loss function to satisfy the Lipschitz continuity condition, so that the network converged rapidly and stably during training. On the CelebA dataset, the proposed multi-feature fusion image inpainting algorithm converges faster than MGANII. Meanwhile, the Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) of the images inpainted by the proposed algorithm are improved by 0.45% to 8.67% and 0.88% to 8.06% respectively compared with those of the images inpainted by the baseline algorithms, and the Fréchet Inception Distance score (FID) of the images inpainted by the proposed algorithm is reduced by 36.01% to 46.97% compared with that of the images inpainted by the baseline algorithms. Experimental results show that the inpainting performance of the proposed algorithm is better than that of the baseline algorithms.
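The "risk penalty term" that enforces the Lipschitz continuity condition is not written out in this abstract. A common form of such a term is the WGAN-GP gradient penalty; the sketch below is in that style, where the penalty weight λ and the interpolated sample x̂ are assumptions rather than details taken from the paper:

```latex
% Adversarial loss with a gradient penalty enforcing 1-Lipschitz continuity
% (WGAN-GP style sketch; \lambda and \hat{x} are assumed, not from the paper)
L_{\mathrm{adv}} = \mathbb{E}_{\tilde{x}\sim P_g}\,[D(\tilde{x})]
                 - \mathbb{E}_{x\sim P_r}\,[D(x)]
                 + \lambda\,\mathbb{E}_{\hat{x}}\big[(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1)^2\big]
```

Driving the gradient norm of the discriminator toward 1 keeps it approximately 1-Lipschitz, which is what allows the adversarial training to converge stably.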
Keywords:multi-scale  feature matching  feature fusion  image inpainting  Generative Adversarial Network (GAN)  
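The abstract reports gains in PSNR, one of the metrics used to compare inpainted images against their references. As a reference point, PSNR is computed from the mean squared error; the following is a minimal NumPy sketch, not code from the paper:

```python
import numpy as np

def psnr(reference: np.ndarray, inpainted: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio (dB): higher means the inpainted
    image is closer to the reference image."""
    mse = np.mean((reference.astype(np.float64) - inpainted.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 10 gray levels on 8-bit images gives an MSE of 100 and a PSNR of about 28.1 dB.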