Similar Literature
20 similar records found (search time: 15 ms)
1.
Impressive progress has been made recently in image-to-image translation using generative adversarial networks (GANs). However, existing methods often fail to translate source images containing noise into the target domain. To address this problem, we combine image-to-image translation with image denoising and propose an enhanced generative adversarial network (EGAN). In particular, building upon pix2pix, we introduce residual blocks into the generator network to capture deeper multi-level information between the source and target image distributions. Moreover, a perceptual loss is proposed to enhance the performance of image-to-image translation. As demonstrated through extensive experiments, our proposed EGAN can alleviate the effects of noise in source images and significantly outperforms other state-of-the-art methods. Furthermore, we show experimentally that the proposed EGAN is also effective when applied to image denoising.
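As a rough illustration of the perceptual-loss idea named above (the abstract does not give its exact formulation, so this is a minimal sketch under assumptions): such a loss typically compares feature maps of the translated and target images extracted by a fixed network.

```python
import numpy as np

def perceptual_loss(feats_pred, feats_target):
    """Mean squared error averaged over a list of feature maps.

    feats_pred / feats_target: lists of equally shaped arrays, assumed to be
    activations of a fixed feature extractor (e.g. a pretrained CNN).
    """
    diffs = [np.mean((p - t) ** 2) for p, t in zip(feats_pred, feats_target)]
    return float(np.mean(diffs))
```

Identical feature lists give zero loss; any mismatch in activations raises it.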

2.
马乐  陈峰  李敏 《激光与红外》2020,50(2):246-251
Because of limitations such as hardware cost and shooting conditions, high-resolution infrared images are difficult to acquire directly. Generative adversarial networks can perform super-resolution reconstruction of infrared images, but training remains unstable and may fail to converge. To address these problems, this paper replaces KL divergence with the Wasserstein distance and combines it with the Euclidean distance between images to construct a new loss function, optimizing the original network structure and algorithm flow so that the network learns the feature mapping between low-resolution and reconstructed images more accurately and trains more stably. Experimental results show that edges in the reconstructed images transition smoothly, target details are effectively preserved, and better objective evaluation results are obtained.
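The loss construction described above (a Wasserstein term in place of KL divergence, combined with a Euclidean image-distance term) can be sketched roughly as follows; the weighting `lam` and the exact combination are assumptions, not the paper's specification.

```python
import numpy as np

def critic_loss(d_real, d_fake):
    # The critic approximates the Wasserstein distance: it is trained to
    # maximize E[D(real)] - E[D(fake)], i.e. to minimize the negation.
    return float(np.mean(d_fake) - np.mean(d_real))

def generator_loss(d_fake, sr_image, hr_image, lam=1.0):
    # Adversarial term plus a Euclidean (L2) distance between the
    # reconstructed image and the high-resolution reference.
    adv = -float(np.mean(d_fake))
    l2 = float(np.mean((sr_image - hr_image) ** 2))
    return adv + lam * l2
```

In a full WGAN-style setup the critic would also be constrained (weight clipping or a gradient penalty), which this sketch omits.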

3.
This paper proposes AMEA-GAN, an attention-mechanism-enhanced, cycle-consistency-based generative adversarial network for single-image dehazing that follows the mechanism of the human retina and largely preserves the color authenticity of enhanced images. To address the color distortion and fog artifacts that most dehazing methods introduce in real-world images, we take inspiration from human visual neurons and use an attention mechanism modeled on the retina's horizontal and amacrine cells to improve the structure of the generative adversarial network. With the proposed attention mechanism, haze removal becomes more natural and leaves no artifacts, especially in dense fog areas. We also use an improved symmetrical structure of FUNIE-GAN to improve the visual color perception, i.e. the color authenticity, of the enhanced image and to produce a better visual effect. Experimental results show that our model generates satisfactory results: the output of AMEA-GAN bears a strong sense of reality. Compared with state-of-the-art methods, AMEA-GAN not only dehazes images taken in daytime scenes but can also enhance images taken at night and even optical remote sensing imagery.

4.
Compression of remote-sensing images can be necessary at various stages of an image's life, and especially on board a satellite before transmission to the ground station. Although on-board CPU power is quite limited, it is now possible to implement sophisticated real-time compression techniques, provided that complexity constraints are taken into account at design time. In this paper we consider the class-based multispectral image coder originally proposed in [Gelli and Poggi, Compression of multispectral images by spectral classification and transform coding, IEEE Trans. Image Process. (April 1999) 476–489] and modify it to allow its use in real time with limited hardware resources. Experiments carried out on several multispectral images show that the resulting unsupervised coder has a fully acceptable complexity and a rate–distortion performance that is superior to that of the original supervised coder and comparable to that of the best coders known in the literature.

5.
In this paper, we present a novel deep generative facial-part swapping method: the parts-swapping generative adversarial network (PSGAN). PSGAN handles facial parts independently, such as the eyes (left and right), nose, mouth, and jaw, and achieves facial-part swapping by replacing the target facial parts with the source facial parts and reconstructing the entire face image from these parts. By separately modeling the facial parts in the form of region inpainting, the proposed method achieves highly photorealistic face swapping results and enables users to freely manipulate facial parts. In addition, the method can perform jaw editing guided by sketch information. Experimental results on the CelebA dataset suggest that our method achieves superior performance for facial-part swapping and provides greater user control.

6.
宣萌  刘坤 《光电子.激光》2022,33(7):770-777
To improve the robustness and accuracy of classifying large numbers of unlabeled samples when only a few labeled samples are available, this paper proposes a breast cancer image classification method based on an improved semi-supervised generative adversarial network (SGAN). The method replaces the Sigmoid function with a Softmax function at the output layer to enable multi-class classification. First, random vectors are fed into the generator network to produce pseudo-samples, which are labeled as a pseudo-sample class for training. Then, real labeled samples, real unlabeled samples, and pseudo-samples are fed into the discriminator network, which outputs the probability of each class; the parameters are updated by backpropagation using a semi-supervised training procedure. Finally, breast cancer pathology images are classified; with 25, 50, and 200 labeled samples, the final accuracy reaches 95.5%. Experimental results show that the accuracy of the proposed algorithm is robust when labeled samples are limited, and that it significantly improves accuracy compared with classification methods such as convolutional neural networks and transfer learning (TL).
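The Softmax replacement described above turns the discriminator into a (K+1)-way classifier, with the extra class reserved for generated (pseudo) samples. A minimal sketch of that output head follows; the class count is a hypothetical placeholder, not taken from the paper.

```python
import numpy as np

NUM_REAL_CLASSES = 2           # hypothetical, e.g. benign vs. malignant
FAKE_CLASS = NUM_REAL_CLASSES  # index K is reserved for generated samples

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def discriminator_probs(logits):
    """logits: (batch, K + 1) scores; returns per-class probabilities."""
    return softmax(logits)
```

Real labeled samples supervise the first K outputs, while generated samples supervise the (K+1)-th, which is what lets unlabeled data contribute to training.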

7.
8.
Conventional face image generation using generative adversarial networks (GANs) is limited in the quality of generated images, since the generator and discriminator use the same backpropagation network. In this paper, we discuss algorithms that can improve the quality of generated images, that is, high-quality face image generation. To stabilize the network, we replace the MLP with a convolutional neural network (CNN) and remove the pooling layers. We conduct comprehensive experiments on the LFW and CelebA datasets, and the experimental results show the effectiveness of the proposed method.

9.
Generative adversarial networks (GANs) have been successfully used in many image restoration tasks, including image denoising, super-resolution, and compression artifact reduction. By fully exploiting their characteristics, state-of-the-art image restoration techniques can generate images with photorealistic details. However, many applications require faithful rather than visually appealing image reconstruction, such as medical imaging, surveillance, and video coding. We found that previous GAN training methods that use a loss function in the form of a weighted sum of fidelity and adversarial losses fail to reduce the fidelity loss. This results in non-negligible degradation of objective image quality, including peak signal-to-noise ratio. Our approach is to alternate between the fidelity and adversarial losses so that minimizing the adversarial loss does not deteriorate fidelity. Experimental results on compression-artifact reduction and super-resolution tasks show that the proposed method can perform faithful and photorealistic image restoration.
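The alternation described above can be sketched as a schedule that switches between the two objectives instead of summing them. Whether the paper alternates per step, per epoch, or adaptively is not stated here, so a simple per-step alternation is assumed.

```python
def alternating_schedule(n_steps):
    # Alternate between minimizing the fidelity (e.g. L2/PSNR-oriented)
    # loss and the adversarial loss, rather than a fixed weighted sum.
    return ["fidelity" if i % 2 == 0 else "adversarial"
            for i in range(n_steps)]
```

The point of the alternation is that the fidelity steps repeatedly pull the generator back toward the reference, so adversarial updates cannot accumulate drift in objective quality.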

10.
To enhance the visual effect of fused images, reduce computational complexity, and address the loss of background detail in traditional infrared and visible image fusion algorithms, an infrared and visible image fusion method based on depthwise separable convolution within a generative adversarial network framework is proposed. First, in the generator, depthwise and pointwise convolutions are applied to the source images to obtain their feature maps. Second, the network parameters are updated by forward propagation to obtain a preliminary single-channel fused image. Third, in the infrared and visible discriminators, depthwise separable convolutions are used to perform pixel-level discrimination between the source images and the preliminary fused image. Finally, under the constraint of the loss function, the dual discriminators continually add more detail information to the fused image. Experimental results show that, compared with traditional fusion algorithms, the method improves information entropy, average gradient, spatial frequency, standard deviation, structural similarity loss, and peak signal-to-noise ratio by an average of 1.63%, 1.02%, 3.54%, 5.49%, 1.05%, and 0.23%, respectively, improving the quality of the fused image and enriching background detail.
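The depthwise-plus-pointwise convolution named above factorizes a standard convolution into a per-channel spatial filter followed by a 1×1 channel-mixing step, which is where the computational savings come from. A minimal NumPy sketch (same-padding, stride 1; shapes and the loop-based implementation are illustrative assumptions):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (H, W, C); dw_kernels: (k, k, C); pw_weights: (C, C_out)."""
    h, w, c = x.shape
    k = dw_kernels.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    dw = np.zeros_like(x, dtype=float)
    # Depthwise step: each channel is filtered with its own k x k kernel.
    for ch in range(c):
        for i in range(h):
            for j in range(w):
                dw[i, j, ch] = np.sum(
                    xp[i:i + k, j:j + k, ch] * dw_kernels[:, :, ch])
    # Pointwise step: a 1x1 convolution mixes information across channels.
    return dw @ pw_weights
```

For a k×k kernel, C input channels, and C_out output channels, this costs roughly k²·C + C·C_out multiplies per pixel versus k²·C·C_out for a standard convolution.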

11.
Sketch-based image retrieval (SBIR), which uses free-hand sketches to search for images containing similar objects or scenes, is attracting more and more attention, as sketches have become easier to obtain with the development of touch devices. However, the task is difficult because of the huge differences between sketches and images. In this paper, we propose a cross-domain representation learning framework to reduce these differences for SBIR. The framework transfers sketches to images using information learned in both the sketch domain and the image domain via the proposed domain migration generative adversarial network (DMGAN). Furthermore, to reduce the representation gap between generated images and natural images, a similarity learning network (SLN) is proposed with a newly designed loss function that incorporates semantic information. Extensive experiments have been carried out from different aspects, including comparison with state-of-the-art methods. The results show that the proposed DMGAN and SLN are effective for SBIR.

12.
Underwater images play an essential role in acquiring and understanding underwater information, and high-quality underwater images guarantee the reliability of underwater intelligent systems. Unfortunately, underwater images are characterized by low contrast, color casts, blurring, low light, and uneven illumination, which severely affect the perception and processing of underwater information. To improve the quality of acquired underwater images, numerous methods have been proposed, particularly with the emergence of deep learning. However, the performance of underwater image enhancement methods is still unsatisfactory owing to the lack of sufficient training data and effective network structures. In this paper, we address this problem with a conditional generative adversarial network (cGAN), in which a clear underwater image is produced by a multi-scale generator. In addition, we employ a dual discriminator to capture local and global semantic information, which forces the results generated by the multi-scale generator to be realistic and natural. Experiments on real-world and synthetic underwater images demonstrate that the proposed method performs favorably against state-of-the-art underwater image enhancement methods.

13.
郭伟  庞晨 《电讯技术》2022,62(3):281-287
To address the shortage of image datasets in current deep learning practice, an image dataset augmentation algorithm based on a deep convolutional generative adversarial network (DCGAN) is proposed. The algorithm improves the DCGAN as follows: first, the existing activation function is improved without adding much computation, enriching the diversity of the generated features; then, a relativistic discriminator is introduced to effectively mitigate mode collapse and thereby improve model stability; finally, residual blocks are introduced into the existing generator structure to obtain relatively high-resolution generated images. Experimental results on the MNIST, SAR, and medical blood cell datasets show that the data augmentation effect is significantly better than that of the unimproved DCGAN.
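The relativistic discriminator named above scores how much more realistic a real sample looks than the fakes, rather than scoring each sample in isolation. The sketch below uses the average-relativistic form, which is one common variant and an assumption here, not necessarily the paper's exact choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relativistic_d_loss(d_real, d_fake):
    # Real samples should score higher than the *average* fake score,
    # and fake samples should score lower than the average real score.
    real_term = -np.mean(np.log(sigmoid(d_real - np.mean(d_fake))))
    fake_term = -np.mean(np.log(sigmoid(np.mean(d_real) - d_fake)))
    return float(real_term + fake_term)
```

Because the loss depends only on score differences, the discriminator cannot "win" by pushing all scores to an extreme, which is the mechanism usually credited with reducing mode collapse.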

14.
Underwater image enhancement algorithms have attracted much attention in underwater vision tasks. However, these algorithms are mainly evaluated on different datasets and metrics. In this paper, we use an effective, public underwater benchmark dataset covering diverse underwater degradation scenes to enlarge the test scale, and we propose a fusion adversarial network for enhancing real underwater images. The multiple inputs and well-designed multi-term adversarial loss not only introduce multiple input image features but also balance the impact of the individual loss terms. Tested on the benchmark dataset, the proposed network achieves performance that is better than or comparable to the other state-of-the-art methods in both qualitative and quantitative evaluations. Moreover, an ablation study experimentally validates the contribution of each component and the hyper-parameter settings of the loss functions.

15.
With the development of generative adversarial network (GAN) technology, GAN-based image generation has evolved dramatically, and distinguishing GAN-generated images is challenging for the human eye. Moreover, GAN-generated fake images may enable behavior that endangers society and poses serious security problems. Research on detecting GAN-generated images is still in the exploratory stage, and many challenges remain. Motivated by this problem, we propose a novel GAN-image detection method based on color gradient analysis. We consider the differences in color information between real images and GAN-generated images in multiple color spaces, and we combine the gradient information and the directional texture information of the generated images to extract gradient texture features for detection. Experimental results on the PGGAN and StyleGAN2 datasets demonstrate that the proposed method achieves good performance and is robust to various perturbation attacks.
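As a rough illustration of gradient-based color features (the paper's actual feature design is richer, combining multiple color spaces and directional texture, none of which is specified here), a per-channel mean gradient magnitude can be computed like this:

```python
import numpy as np

def gradient_magnitude(channel):
    # np.gradient returns per-axis finite differences (dy, dx).
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy)

def color_gradient_feature(img):
    """img: (H, W, 3) array; returns the mean gradient magnitude per channel."""
    return [float(gradient_magnitude(img[..., c]).mean()) for c in range(3)]
```

A detector would compute such statistics in several color spaces (e.g. RGB, HSV, YCbCr) and feed the concatenated feature vector to a classifier; that pipeline is the assumption this sketch stands in for.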

16.
To address the difficulty of matching heterogeneous remote sensing images caused by differences in imaging mode, acquisition time, and resolution, a remote sensing image matching algorithm based on a cycle-consistent generative adversarial strategy is proposed. A cycle-consistent generative adversarial network (GAN) for cross-domain image feature transfer is constructed, and a SmoothL loss function is designed to optimize the network and improve the accuracy of remote sensing image feature extraction. Based on the feature transfer results, a triplet margin ranking loss (TMRL) is established to reduce the number of mismatched points and achieve accurate matching of heterogeneous remote sensing images. Experimental results show that the method improves the average matching accuracy of heterogeneous remote sensing images by 33.51% and achieves better matching results than CMM-Net (cross modality matching net). In addition, the method does not require annotation of target-domain images and shortens the matching time by 0.073 s, enabling fast and accurate matching of heterogeneous remote sensing images.
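The triplet margin ranking loss (TMRL) named above follows the standard triplet form: pull an anchor feature toward a matching (positive) sample and push it at least a margin away from a non-matching (negative) one. A minimal sketch, with the margin value and Euclidean distance choice as assumptions:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.linalg.norm(anchor - positive)  # distance to the true match
    d_neg = np.linalg.norm(anchor - negative)  # distance to the mismatch
    # Zero once the negative is at least `margin` farther than the positive.
    return float(max(0.0, d_pos - d_neg + margin))
```

Ranking correct matches above mismatches by a margin is what suppresses spurious correspondences during matching.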

17.
To address the color distortion and detail loss that often appear in images restored by existing dehazing algorithms, an end-to-end image dehazing method based on an improved cycle-consistent generative adversarial network (CycleGAN) is proposed, which does not rely on the constraints of the atmospheric scattering model. The generator adopts an Encoder-Decoder architecture. To effectively learn the mapping between hazy and clear images, enhanced high-frequency loss and feature loss functions are constructed in the training objective by exploiting the images' own properties, enabling feature discrimination across data domains and further preserving image texture. In addition, to constrain the color consistency between restored images and real clear images, a two-stage learning strategy is proposed: the improved CycleGAN is first trained in a weakly supervised manner on an unpaired dataset, and then, in the second stage, the forward generator is trained in a strongly supervised manner on a partially paired dataset, improving the stability of the dehazing network while making the restoration closer to the style of real clear images. Experimental results show that the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the proposed method are 12.43% and 5.53% higher, respectively, than those of a comparable CycleGAN algorithm, and comparisons with other methods on visual quality and quantitative metrics also verify its effectiveness.

18.
Exploiting the collaborative nature of visual sensor networks, an image compression mechanism based on a divergence model is proposed. Theoretical analysis and experimental results show that this compression mechanism not only reduces the amount of image data, but also prevents transmission errors from spreading over large areas of the image, because the information carried by each byte of compressed data is shared evenly among the binary-quantized pixels of the nodes within each cluster. Compared with traditional image compression algorithms, the peak signal-to-noise ratio of the received image remains higher as the average packet loss rate increases.

19.
Infrared image simulation plays a key role in infrared seeker design and simulation training. To generate high-resolution infrared images with controllable visual features, an infrared image simulation method based on a progressive generative adversarial network is proposed. An image synthesis network is trained on an infrared image dataset of ship models, taking random feature vectors as input and outputting high-resolution simulated infrared images; an image encoding network is designed to convert infrared images into feature vectors; using Logis...

20.
Electrocardiogram (ECG) signal analysis is an important measure for preventing cardiovascular disease, and accurate detection of the QRS complex is not only a key step in ECG signal processing but also important for heart-rate calculation and abnormality analysis. To address the low accuracy of common QRS detection methods on ambulatory ECG signals with poor signal quality or abnormal rhythm waveforms, this paper proposes a novel QRS detection algorithm based on a generative adversarial network. The algorithm builds on the Pix2Pix network, with a generator adopting a U-Net...
