Perceptually enhanced super-resolution reconstruction model based on deep back projection
Citation: YANG Shuguang. Perceptually enhanced super-resolution reconstruction model based on deep back projection[J]. Journal of Applied Optics, 2021, 42(4): 691.
Author: YANG Shuguang (杨书广)
Affiliation: School of Science, Xi'an University of Architecture and Technology, Xi'an 710055, Shaanxi, China
Funding: National Natural Science Foundation of China (61403298); Natural Science Foundation of Shaanxi Province (2015JM1024)
Abstract: Super-resolution reconstruction models typified by SRCNN (super-resolution convolutional neural network) usually achieve high PSNR (peak signal-to-noise ratio) and SSIM (structural similarity) values, yet their results are visually unsatisfying. Conversely, GAN (generative adversarial network) models with high perceptual quality, typified by SRGAN, tend to produce large numbers of spurious details, which shows up as low PSNR and SSIM values. To address these problems, a perceptually enhanced super-resolution reconstruction model based on deep back projection is proposed. The model extracts features with a dual-scale adaptive weighted fusion module, upsamples them by deep back projection, and refines the result with an enhancement module to produce the final output. Residual and dense connections are used throughout, which facilitates feature sharing and effective training. For evaluation, the learning-based LPIPS (learned perceptual image patch similarity) metric is introduced as a new perceptual image quality index alongside PSNR and SSIM. Experimental results show that the model attains average PSNR, SSIM, and LPIPS values of 27.84, 0.7320, and 0.1258 on the test datasets, outperforming the comparison algorithms on all three metrics.

Keywords: super-resolution reconstruction; perceptual quality; deep back projection; LPIPS metric
Received: 2021-01-25
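The abstract scores reconstructions with PSNR, SSIM, and LPIPS. As a reference point, here is a minimal NumPy sketch of the two classical metrics. The function names `psnr` and `global_ssim` are my own; this computes a single-window (global) SSIM rather than the standard windowed average used in the paper, and LPIPS is omitted because it requires a pretrained network (e.g. the `lpips` Python package).

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images scaled to [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, test, max_val=1.0):
    """Single-window SSIM; the standard metric averages this over local windows."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2  # stabilizing constants
    x, y = ref.astype(np.float64), test.astype(np.float64)
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical inputs give infinite PSNR and SSIM of 1; distortions drive PSNR down and SSIM toward 0, matching the ranges reported in the abstract.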

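The deep back-projection upsampling the model relies on generalizes the classical iterative back-projection (IBP) scheme, in which the residual between the observed low-resolution image and a re-downsampled estimate is repeatedly projected back into high-resolution space. A conceptual NumPy sketch under assumed resampling operators (average pooling down, nearest-neighbour up); in DBPN-style networks these projections are instead learned convolutional units:

```python
import numpy as np

def downsample(img, s):
    """Average-pool by factor s (a crude stand-in for blur + decimation)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s):
    """Nearest-neighbour upsampling by factor s."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def iterative_back_projection(lr, s, n_iter=20, step=1.0):
    """Classical IBP: refine an SR estimate by back-projecting the LR residual."""
    sr = upsample(lr, s)                         # initial high-resolution guess
    for _ in range(n_iter):
        residual = lr - downsample(sr, s)        # error in low-resolution space
        sr = sr + step * upsample(residual, s)   # project the error back to HR space
    return sr
```

The fixed point of this loop is an estimate consistent with the low-resolution observation; the learned up- and down-projection units in a deep back-projection network play the same corrective role at every stage.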