Similar Documents
20 similar documents found; search took 15 ms
1.
Advances in generative adversarial network   Cited 1 time in total (self-citations: 0; citations by others: 1)
Generative adversarial networks (GAN) swiftly became the focus of considerable research in generative models soon after their emergence, and their academic study and industrial applications have yielded a stream of further progress alongside the remarkable achievements of deep learning. This paper provides a broad survey of recent advances in generative adversarial networks. First, the research background and motivation of GAN are introduced. Then the recent theoretical advances of GAN in modeling, architectures, training, and evaluation metrics are reviewed. Its state-of-the-art applications and the widely used open-source tools for GAN are introduced. Finally, issues that require urgent solutions and directions that deserve further investigation are discussed.

2.
Guo Wei, Pang Chen. Telecommunication Engineering, 2022, 62(3): 281-287
To address the shortage of image datasets for deep learning, an image dataset augmentation algorithm based on the deep convolutional generative adversarial network (DCGAN) is proposed. The algorithm improves the DCGAN in three ways: first, the existing activation functions are refined without adding much computation, which enriches the diversity of the generated features; second, a relativistic discriminator is introduced to effectively alleviate mode collapse and thereby improve model stability; finally, residual blocks are added to the existing generator structure to obtain generated images of relatively high resolution. Experiments on the MNIST, SAR, and medical blood-cell datasets show that the proposed method achieves significantly better data augmentation than the unmodified DCGAN.
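As a rough illustration of the relativistic discriminator mentioned above (not the authors' implementation; PyTorch is assumed and all names are placeholders), the discriminator can be trained to judge whether real samples look more realistic than the average generated sample:

```python
import torch
import torch.nn.functional as F

def relativistic_d_loss(d_real, d_fake):
    """Discriminator loss: real logits should exceed the mean fake logit."""
    real_loss = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.ones_like(d_real))
    fake_loss = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.zeros_like(d_fake))
    return (real_loss + fake_loss) / 2

def relativistic_g_loss(d_real, d_fake):
    """Generator loss: fake logits should exceed the mean real logit."""
    real_loss = F.binary_cross_entropy_with_logits(
        d_real - d_fake.mean(), torch.zeros_like(d_real))
    fake_loss = F.binary_cross_entropy_with_logits(
        d_fake - d_real.mean(), torch.ones_like(d_fake))
    return (real_loss + fake_loss) / 2
```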

3.
Accurate segmentation of cell nuclei is fundamental to pathological diagnosis. To address the difficulty current segmentation algorithms have in extracting fine features and their loss of detail, this paper proposes a segmentation network that combines a generative adversarial network (GAN) with ResUNet. The ResUNet serves as the generator (G), where the LeakyReLU activation function allows negative-valued features to be activated, and the discriminator (D) guides the generator to learn better through its discrimination loss. Experimental results show that on the breast-cancer cell-nucleus dataset and the DSB dataset, the MIoU, Dice, and accuracy metrics reach 82%, 83%, 95% and 90%, 90%, 97%, respectively, improving on ResUNet by 2.5%, 3.3%, 0.7% and 0.7%, 1.5%, 0.8%. The segmentation results also improve on six commonly used segmentation networks, including SegNet and FCN8s. These results show that the improved network achieves good segmentation accuracy and can provide an important basis for pathological diagnosis.
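A minimal sketch of a residual block activated with LeakyReLU, as described above, might look like the following (PyTorch assumed; this is illustrative only and not the paper's ResUNet):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block using LeakyReLU so that negative activations
    still propagate a small gradient."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))
```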

4.
Infrared image simulation plays a key role in infrared seeker design and simulation-based training. To generate high-resolution infrared images with controllable visual features, an infrared image simulation method based on a progressive generative adversarial network is proposed. An image synthesis network is trained on an infrared image dataset of ship models, taking random feature vectors as input and producing high-resolution simulated infrared images; an image encoding network is designed to convert infrared images back into feature vectors; using Logis...

5.
Because infrared and visible images differ greatly in their features, and no ideal fused image exists to supervise a network in learning the mapping between source and fused images, the application of deep learning to image fusion has been limited. To address this problem, a generative adversarial network framework based on an attention mechanism and an edge loss function is proposed for infrared and visible image fusion. By introducing adversarial training and attention, the fusion problem is treated as an adversarial relationship between the source images and the fused image, while channel and spatial attention mechanisms learn the nonlinear relationships in the channel and spatial domains of the features, strengthening the representation of salient targets. An edge loss function is also proposed, converting the pixel-level mapping between source and fused images into a mapping between their edges. Tests on multiple datasets show that the method effectively fuses infrared targets with visible-light texture, sharpens image edges, and significantly improves image clarity and contrast.
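One common way to realize an edge loss of this kind is to compare Sobel gradient maps of the fused and source images; the sketch below is a hedged illustration (PyTorch assumed, single-channel inputs), not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

# Fixed Sobel kernels for horizontal and vertical gradients.
SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
SOBEL_Y = SOBEL_X.transpose(2, 3)

def edge_map(img):
    """Gradient magnitude of a single-channel image batch of shape (N, 1, H, W)."""
    gx = F.conv2d(img, SOBEL_X.to(img.device), padding=1)
    gy = F.conv2d(img, SOBEL_Y.to(img.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def edge_loss(fused, ir, visible):
    """Penalize the distance between the fused image's edges and the stronger source edges."""
    target_edges = torch.max(edge_map(ir), edge_map(visible))
    return F.l1_loss(edge_map(fused), target_edges)
```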

6.
Underwater images play an essential role in acquiring and understanding underwater information. High-quality underwater images can guarantee the reliability of underwater intelligent systems. Unfortunately, underwater images are characterized by low contrast, color casts, blurring, low light, and uneven illumination, which severely affect the perception and processing of underwater information. To improve the quality of acquired underwater images, numerous methods have been proposed, particularly with the emergence of deep learning technologies. However, the performance of underwater image enhancement methods is still unsatisfactory due to the lack of sufficient training data and effective network structures. In this paper, we address this problem with a conditional generative adversarial network (cGAN), in which the clear underwater image is produced by a multi-scale generator. In addition, we employ a dual discriminator to capture local and global semantic information, which pushes the results generated by the multi-scale generator to be realistic and natural. Experiments on real-world and synthetic underwater images demonstrate that the proposed method performs favorably against state-of-the-art underwater image enhancement methods.
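A hedged sketch of a dual-discriminator setup of this kind (PyTorch assumed; the class names and the choice of a PatchGAN-style critic are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, stride=2):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 4, stride, 1),
        nn.LeakyReLU(0.2, inplace=True))

class PatchDiscriminator(nn.Module):
    """Patch-level critic producing a grid of realism scores."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 64), conv_block(64, 128), conv_block(128, 256),
            nn.Conv2d(256, 1, 4, 1, 1))

    def forward(self, x):
        return self.net(x)

class DualDiscriminator(nn.Module):
    """Two critics: one on the full-resolution image (local detail),
    one on a 2x-downsampled copy (global structure)."""
    def __init__(self):
        super().__init__()
        self.local_d = PatchDiscriminator()
        self.global_d = PatchDiscriminator()
        self.down = nn.AvgPool2d(2)

    def forward(self, x):
        return self.local_d(x), self.global_d(self.down(x))
```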

7.
To address the color distortion and loss of detail that often appear in images restored by existing dehazing algorithms, an end-to-end image dehazing method based on an improved cycle-consistent generative adversarial network (CycleGAN) is proposed that does not rely on the atmospheric scattering model. The generator adopts an overall Encoder-Decoder architecture. To learn the mapping between hazy and clear images effectively, enhanced high-frequency and feature loss functions built from the images' own properties are added to the training objective, discriminating features across data domains and further preserving image texture and structure. In addition, to constrain the color consistency between restored and real clear images, a two-stage learning strategy is proposed: the improved CycleGAN is first trained with weak supervision on unpaired data, and in the second stage the forward generator is trained with strong supervision on a subset of paired data, improving the stability of the dehazing network and bringing the restoration closer to the style of real clear images. Experimental results show that the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the proposed method improve by 12.43% and 5.53%, respectively, over a comparable CycleGAN algorithm, and comparisons with other methods in both visual quality and quantitative metrics also verify its effectiveness.

8.
Generative adversarial networks (GAN) have been used successfully in many image restoration tasks, including image denoising, super-resolution, and compression artifact reduction. By fully exploiting their characteristics, state-of-the-art image restoration techniques can generate images with photorealistic details. However, many applications require faithful rather than merely visually appealing reconstruction, such as medical imaging, surveillance, and video coding. We found that previous GAN training methods that use a loss function in the form of a weighted sum of fidelity and adversarial losses fail to reduce the fidelity loss. This results in non-negligible degradation of objective image quality, including the peak signal-to-noise ratio. Our approach is to alternate between the fidelity and adversarial losses so that minimizing the adversarial loss does not deteriorate fidelity. Experimental results on compression-artifact reduction and super-resolution tasks show that the proposed method can perform faithful and photorealistic image restoration.
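The alternation idea can be sketched as follows (PyTorch assumed; G, D, and the optimizer are placeholders, and the 1:1 alternation schedule is an assumption, not necessarily the authors' schedule):

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, lr_img, hr_img, opt_g, step):
    """Alternate between a fidelity update and an adversarial update,
    instead of minimizing a weighted sum of the two losses."""
    opt_g.zero_grad()
    sr = G(lr_img)
    if step % 2 == 0:
        loss = F.l1_loss(sr, hr_img)                      # fidelity step
    else:
        logits = D(sr)
        loss = F.binary_cross_entropy_with_logits(
            logits, torch.ones_like(logits))              # adversarial step
    loss.backward()
    opt_g.step()
    return loss.item()
```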

9.
To enhance the visual quality of fused images, reduce computational complexity, and address the loss of background detail in traditional infrared and visible image fusion algorithms, a fusion method based on depthwise separable convolution within a generative adversarial network framework is proposed. First, depthwise and pointwise convolutions are applied to the source images in the generator to obtain their feature maps; second, the network parameters are updated through forward propagation to obtain a preliminary single-channel fused image; third, in the infrared and visible-light discriminators, depthwise separable convolutions perform pixel-level discrimination on the source images and the preliminary fused image; finally, under the constraint of the loss functions, the two discriminators continually add more detail to the fused image. Experimental results show that, compared with traditional fusion algorithms, the method improves information entropy, average gradient, spatial frequency, standard deviation, structural similarity loss, and peak signal-to-noise ratio by an average of 1.63%, 1.02%, 3.54%, 5.49%, 1.05%, and 0.23%, respectively, improving the quality of the fused image to a certain extent and enriching background detail.
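For reference, a depthwise separable convolution block and a quick parameter-count comparison against a standard 3x3 convolution (PyTorch assumed; illustrative only):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (per-channel spatial filter) followed by a pointwise
    1x1 conv that mixes channels; far fewer parameters than a full conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter saving versus a standard 3x3 convolution.
std = nn.Conv2d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in std.parameters()),
      sum(p.numel() for p in sep.parameters()))
```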

10.
To address the severe image distortion and lack of detail in face super-resolution algorithms, a face super-resolution reconstruction model based on prior knowledge is proposed. A texture-assistance branch is added to the super-resolution network to provide additional texture-structure priors for reconstruction, generating fine facial texture and recovering a high-resolution texture map. A cascaded stacking module provides feedback to the texture-assistance branch. A feature fusion module merges the texture feature maps with the super-resolution branch's feature maps to obtain better texture detail, and a texture loss is added to the loss function to strengthen the network's ability to recover texture detail. At a 4x upscaling factor, the method improves the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) over existing methods by at least 1.0825 dB and 0.036, and lowers the no-reference natural image quality evaluator (NIQE) score by at least 1.6902; at an 8x upscaling factor, PSNR and SSIM improve by at least 0.7875 dB and 0.04685, and NIQE decreases by at least 3.92.
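A minimal sketch of a feature fusion module that merges texture-branch features with super-resolution-branch features (PyTorch assumed; the concatenate-then-1x1-convolution design is an illustrative assumption, not the paper's exact module):

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuses texture-branch features with super-resolution-branch features
    by concatenation followed by a 1x1 convolution."""
    def __init__(self, sr_ch, tex_ch, out_ch):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(sr_ch + tex_ch, out_ch, 1),
            nn.LeakyReLU(0.2, inplace=True))

    def forward(self, sr_feat, tex_feat):
        return self.fuse(torch.cat([sr_feat, tex_feat], dim=1))
```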

11.
The World Health Organization provides guidelines for managing the particulate matter (PM) level because a higher PM level represents a threat to human health. Managing the PM level first requires a procedure for measuring it. We use a PM sensor that collects the PM level by the laser-based light scattering (LLS) method because it is more cost-effective than a beta attenuation monitor-based or tapered element oscillating microbalance-based sensor. However, an LLS-based sensor has a higher probability of malfunctioning than these higher-cost sensors. In this paper, we regard all malfunctions, including the collection of strange values or missing data, as anomalies, and we aim to detect anomalies for the maintenance of PM measuring sensors. We propose a novel architecture for this purpose that we call the hypothesis pruning generative adversarial network (HP-GAN). Through comparative experiments, we achieve AUROC and AUPRC values of 0.948 and 0.967, respectively, in detecting anomalies in LLS-based PM measuring sensors. We conclude that HP-GAN is a cutting-edge model for anomaly detection.
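For reference, the AUROC and AUPRC metrics reported above can be computed with scikit-learn as follows (the arrays are dummy placeholders, not the paper's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# y_true: 1 for anomalous sensor windows, 0 for normal; scores: model anomaly scores.
y_true = np.array([0, 0, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.2, 0.8, 0.3, 0.7, 0.9, 0.4])

print("AUROC:", roc_auc_score(y_true, scores))
print("AUPRC:", average_precision_score(y_true, scores))
```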

12.
Ma Le, Chen Feng, Li Min. Laser & Infrared, 2020, 50(2): 246-251
Because of hardware cost and imaging-condition constraints, high-resolution infrared images are difficult to acquire directly. Generative adversarial networks can perform super-resolution reconstruction of infrared images, but they still suffer from unstable and non-converging training. To address these problems, this paper replaces the KL divergence with the Wasserstein distance and combines it with the Euclidean distance between images to construct a new loss function, optimizing the original network structure and algorithm flow so that the network learns the feature mapping between low-resolution and reconstructed images more accurately and trains more stably. Experimental results show that the reconstructed images have smooth edge transitions, target details are effectively preserved, and better objective evaluation scores are obtained.
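A hedged sketch of a loss of this form, combining a Wasserstein critic term with a pixel-wise Euclidean term (PyTorch assumed; the critic model and the weighting are placeholders, not the paper's exact settings):

```python
import torch

def critic_loss(critic, real_hr, fake_hr):
    """Wasserstein critic loss: maximize the score gap between real and generated images."""
    return critic(fake_hr).mean() - critic(real_hr).mean()

def generator_loss(critic, real_hr, fake_hr, content_weight=1.0):
    """Adversarial term (negated critic score) plus a pixel-wise Euclidean (L2) term."""
    adv = -critic(fake_hr).mean()
    content = torch.mean((fake_hr - real_hr) ** 2)
    return adv + content_weight * content
```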

13.
To address the difficulty of matching heterologous remote-sensing images that differ in imaging mode, acquisition time, and resolution, a remote-sensing image matching algorithm based on a cycle-consistent generative adversarial strategy is proposed. A cycle-consistent generative adversarial network (GAN) is built to transfer image features across data domains, and a SmoothL1 loss function is designed to optimize the network and improve the precision of remote-sensing feature extraction. Based on the transferred features, a triplet margin ranking loss (TMRL) is established to reduce the number of mismatched points and achieve accurate matching of heterologous remote-sensing images. Experimental results show that the method raises the average matching accuracy of heterologous remote-sensing images by 33.51%, outperforming the CMM-Net (cross modality matching net) method. In addition, the method requires no annotation of target-domain images, shortens matching time by 0.073 s, and can match heterologous remote-sensing images quickly and accurately.
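A triplet margin ranking loss with hardest-negative mining can be sketched as follows (PyTorch assumed; the descriptor shapes and the hard-negative strategy are illustrative assumptions, not necessarily the paper's TMRL definition):

```python
import torch
import torch.nn.functional as F

def triplet_margin_ranking_loss(anchor, positive, negatives, margin=1.0):
    """Pull the matched (positive) descriptor toward the anchor and push the
    hardest non-matching descriptor away by at least `margin`.
    anchor, positive: (N, D); negatives: (M, D)."""
    pos_dist = F.pairwise_distance(anchor, positive)              # (N,)
    neg_dist = torch.cdist(anchor, negatives).min(dim=1).values   # hardest negative per anchor
    return F.relu(pos_dist - neg_dist + margin).mean()
```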

14.
15.
Image conversion has attracted mounting attention due to its practical applications. This paper proposes a lightweight network structure, based on the generative adversarial network (GAN) and a fixed-parameter edge detection convolution kernel, that can be trained on unpaired sets to perform one-way image mapping. Compared with the cycle-consistent adversarial network (CycleGAN), the proposed network has a simpler structure, fewer parameters (only 37.48% of the parameters in CycleGAN), and a lower training cost (only 35.47% of the GPU memory usage and 17.67% of the single-iteration time of CycleGAN). Remarkably, cycle consistency is no longer mandatory for keeping the content consistent before and after image mapping. The network achieves significant results on several image translation tasks, and its effectiveness has been demonstrated through typical experiments. In a quantitative classification evaluation based on VGG-16, the proposed algorithm achieves superior performance.
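A fixed-parameter edge-detection convolution can be implemented as a frozen depthwise layer holding an edge kernel; the sketch below is illustrative (PyTorch assumed), and the choice of a Sobel kernel is an assumption, since the paper does not specify its kernel here:

```python
import torch
import torch.nn as nn

class FixedEdgeConv(nn.Module):
    """A convolution whose weights are a fixed Sobel kernel (no gradients),
    giving the network an explicit, parameter-free edge cue."""
    def __init__(self, channels=3):
        super().__init__()
        sobel = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        weight = sobel.expand(channels, 1, 3, 3).clone()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1,
                              groups=channels, bias=False)
        self.conv.weight.data.copy_(weight)
        self.conv.weight.requires_grad_(False)   # fixed parameters

    def forward(self, x):
        return self.conv(x)
```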

16.
Lu Qinglin, Ye Wei. Telecommunication Engineering, 2020, (1): 121-128
To address the difficulty of training recognition networks caused by the shortage of synthetic aperture radar (SAR) target image data, existing solutions based on deep learning are summarized. The current development of generative adversarial networks (GAN) is reviewed, along with the main derived models and their characteristics and advantages. Applications of GANs to SAR image generation and style transfer are surveyed, and the technical difficulties and problems in these applications are analyzed. Finally, in light of trends in deep learning, the application of GANs to intelligent SAR interpretation is discussed.

17.
Wireless sensor networks (WSNs) have a wide range of applications, and effective WSN design requires good energy optimization techniques, since the nodes in a WSN run on batteries. Existing cluster head selection methods do not take the latency and traffic rate of the wireless network into account when optimizing the nodes' energy constraints. To overcome these issues, a self-attention based generative adversarial network (SabGAN) with the Aquila Optimization Algorithm (AqOA) is proposed for multi-objective cluster head selection and energy-aware routing (SabGAN-AqOA-EgAwR-WSN) for secure data transmission in wireless sensor networks. The proposed method routes traffic through cluster heads. SabGAN classifiers are used to select the cluster head based on firm fitness functions, including delay, detachment, energy, cluster density, and traffic rate. After cluster head selection, a malicious node may gain access to the cluster; therefore, the optimal path is selected using three parameters: trust, connectivity, and degree of amenity, which are optimized by the proposed AqOA. The data are transferred to the base station along the optimal trusted path. The proposed SabGAN-AqOA-EgAwR-WSN method is evaluated in the NS2 simulator. It attains 12.5%, 32.5%, 59.5%, and 32.65% more alive nodes; 85.71%, 81.25%, 82.63%, and 71.96% lower delay; and 52.25%, 61.65%, 37.83%, and 20.63% higher normalized network energy compared with the existing methods.

18.
In the field of affective computing (AC), coarse-grained AC has been developed and widely applied in many fields. Electroencephalogram (EEG) signals contain abundant emotional information. However, it is difficult to develop fine-grained AC because of the lack of finely labeled data and of suitable visualization methods for EEG data with fine labels. To map EEG data directly to facial images at a fine granularity, we propose a conditional generative adversarial network (cGAN) that establishes the relationship between emotion-related EEG data, a coarse label, and a facial expression image. A corresponding training strategy is also proposed to realize fine-grained estimation and visualization of EEG-based emotion. The experiments demonstrate that the proposed method produces reasonable fine-grained facial expressions, and the image entropy of the generated images indicates that it provides a satisfactory visualization of fine-grained facial expressions.
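A minimal sketch of conditioning a generator on a coarse emotion label by concatenating a label embedding with the latent vector (PyTorch assumed; the dimensions, layer sizes, and class count are placeholders, not the paper's network):

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Generator conditioned on a coarse emotion label: the label embedding is
    concatenated with the EEG-derived latent vector before decoding to an image."""
    def __init__(self, latent_dim=128, num_classes=6, img_size=64):
        super().__init__()
        self.embed = nn.Embedding(num_classes, 32)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 32, 256), nn.ReLU(inplace=True),
            nn.Linear(256, img_size * img_size), nn.Tanh())
        self.img_size = img_size

    def forward(self, z, label):
        h = torch.cat([z, self.embed(label)], dim=1)
        return self.net(h).view(-1, 1, self.img_size, self.img_size)
```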

19.
In this paper, we present a novel deep generative facial-parts swapping method: the parts-swapping generative adversarial network (PSGAN). PSGAN handles facial parts independently, such as the eyes (left and right), nose, mouth, and jaw, and achieves facial parts swapping by replacing the target facial parts with the source facial parts and reconstructing the entire face image from these parts. By modeling the facial parts separately as region inpainting, the proposed method achieves highly photorealistic face swapping results and lets users freely manipulate facial parts. In addition, the method can perform jaw editing guided by sketch information. Experimental results on the CelebA dataset suggest that our method achieves superior performance for facial parts swapping and provides greater user control flexibility.

20.
Feature extraction is an important research topic in hyperspectral data processing. The complexity of hyperspectral data acquisition prevents traditional feature extraction methods from handling hyperspectral images well, and the limited number of labeled samples in hyperspectral images also hampers the supervised deep learning methods commonly used for feature extraction. To remove the dependence on labeled samples, a generative adversarial network is introduced on top of a convolutional neural network, yielding an unsupervised feature extraction method for the spectral features of hyperspectral images. To stabilize training and improve the representational power of the discriminator, a gradient penalty term is added to the objective function, pushing the discriminator toward optimality. In the feature extraction stage, a channel max-pooling method designed for the spectral structure of hyperspectral images reduces the data dimensionality while preserving as much spectral information as possible. The extracted features are classified with support vector machines (SVM) and k-nearest neighbors (KNN). Experimental results on two real datasets show that the proposed method outperforms traditional feature extraction methods.
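The gradient penalty term mentioned above can be sketched as follows (PyTorch assumed; spectra are treated as 1-D vectors of shape (batch, bands), and the critic model and the penalty weight are placeholders):

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP style penalty: push the critic's gradient norm toward 1
    on random interpolations between real and generated spectra."""
    alpha = torch.rand(real.size(0), 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=interp,
                                create_graph=True)[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()
```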
