Weather cloud image generation method based on SAU-NetDCGAN
Citation: 杨鹏熙, 侯进, 游玺, 任东升, 杜茂生. Weather cloud image generation method based on SAU-NetDCGAN [J]. Application Research of Computers (计算机应用研究), 2023, 40(5).
Authors: 杨鹏熙  侯进  游玺  任东升  杜茂生
Affiliation: Southwest Jiaotong University (all authors)
Funding: National Key Research and Development Program of China (2020YFB1711902); Sichuan Science and Technology Program (2020SYSY0016)
Abstract: Observatory weather monitoring systems have a large demand for weather cloud images. To address the model instability and loss of image features that arise when a conventional generative adversarial network is used to expand a weather cloud image dataset, this paper proposes SAU-NetDCGAN, a double-layer embedded adversarial network for weather cloud image generation in which two networks are nested within each other. The first embedded layer adds a U-shaped network to the generator of the generative adversarial network; serving as the basic architecture, it uses skip connections between the encoder and the decoder to strengthen the recovery of edge features in the image. The second embedded layer adds a simplified-parameter attention mechanism (simplify-attention, SA) to the U-shaped network; by simplifying its parameters, this attention mechanism reduces model complexity and effectively alleviates the loss of features in dark regions of the image. Finally, a newly designed weight calculation method strengthens the connections between features and improves the extraction of fine texture details from the images. Experimental results show that the generated images surpass those of a conventional generative adversarial network in sharpness and color saturation, with gains of 27.06 dB in peak signal-to-noise ratio (PSNR) and 0.6065 in structural similarity (SSIM).

Keywords: deep learning  image generation  generative adversarial network  U-Net  attention mechanism
Received: 2022-08-15
Revised: 2023-04-13
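
The record above does not include implementation details, but the architecture described in the abstract can be sketched in broad strokes: a DCGAN-style generator rebuilt as a U-shaped encoder-decoder whose skip connections carry edge detail, with a lightweight attention block standing in for SA. The following is a minimal PyTorch illustration under assumed layer sizes and an assumed image-to-image setting; it is not the authors' implementation, and SimpleChannelAttention is a hypothetical stand-in for the paper's simplify-attention module.

# Minimal sketch (not the authors' code): a U-shaped DCGAN-style generator
# with skip connections and a lightweight channel-attention block standing
# in for the simplify-attention (SA) module. All layer sizes are assumed.
import torch
import torch.nn as nn

class SimpleChannelAttention(nn.Module):
    """Assumed stand-in for SA: per-channel gating with few parameters."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)                           # reweight channels

def down(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),   # halve resolution
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up(c_in, c_out):
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),  # double resolution
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """Encoder-decoder generator; skip connections help recover edge features."""
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.enc1 = down(in_ch, base)          # 64x64 -> 32x32
        self.enc2 = down(base, base * 2)       # 32x32 -> 16x16
        self.enc3 = down(base * 2, base * 4)   # 16x16 -> 8x8
        self.att = SimpleChannelAttention(base * 4)
        self.dec3 = up(base * 4, base * 2)     # 8x8  -> 16x16
        self.dec2 = up(base * 4, base)         # 16x16 -> 32x32 (skip concat doubles channels)
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(base * 2, in_ch, 4, stride=2, padding=1),  # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.att(self.enc3(e2))
        d3 = self.dec3(e3)
        d2 = self.dec2(torch.cat([d3, e2], dim=1))    # skip connection
        return self.dec1(torch.cat([d2, e1], dim=1))  # skip connection

if __name__ == "__main__":
    g = UNetGenerator()
    print(g(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])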

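For the reported results, PSNR and SSIM are standard full-reference image quality measures. The short sketch below shows how they are typically computed with scikit-image (version 0.19 or later assumed for the channel_axis argument); it is a generic illustration, not tied to the paper's dataset or evaluation protocol.

# Generic PSNR/SSIM evaluation sketch using scikit-image (not the paper's code).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference: np.ndarray, generated: np.ndarray):
    """Both inputs: uint8 RGB arrays of identical shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(reference, generated, data_range=255)
    ssim = structural_similarity(reference, generated, channel_axis=-1, data_range=255)
    return psnr, ssim

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # placeholder reference image
    gen = np.clip(ref.astype(int) + rng.integers(-10, 10, ref.shape), 0, 255).astype(np.uint8)
    print(evaluate(ref, gen))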