Similar Articles
20 similar articles found; search took 468 ms
1.
易拓源  户盼鹤  刘振 《信号处理》2023,39(2):323-334
Image super-resolution is an important means, in ISAR deception jamming, of overcoming the difficulty of simulating highly realistic false targets for wide-bandwidth ISAR when model samples are incomplete. A generative adversarial network (GAN) can achieve ISAR image super-resolution through end-to-end mapping; however, when the resolution of a test input differs greatly from that of the training inputs, spurious scattering points appear in the super-resolved image and distort the target. Since the cycle-consistent generative adversarial network (CycleGAN) adapts well to differences among input samples, this paper proposes a super-resolution sample generation method for ISAR deception jamming based on an improved CycleGAN. The CycleGAN is improved in three respects, namely the loss function, the optimization procedure, and the discriminator structure, which speeds up convergence and improves generalization to ISAR images with large input-resolution differences. Anechoic-chamber measurement data verify the effectiveness of the proposed method: compared with the GAN approach, for test inputs whose resolution differs greatly from the training inputs, the scattering-point positions of the generated super-resolution samples match the real data better.

2.
Owing to their powerful ability to generate high-quality images, generative adversarial networks have attracted wide attention in computer-vision research such as image fusion and image super-resolution. Existing GAN-based remote-sensing image fusion methods only use the network to learn mappings between images and fail to exploit the pan-sharpening domain knowledge specific to remote sensing. This paper proposes an optimized GAN fusion method for remote-sensing images that incorporates the spatial-structure information of the panchromatic image. Spatial-structure information is extracted from the panchromatic image with gradient operators and fed into both the discriminator and a generator with a multi-stream fusion architecture; corresponding optimization objectives and fusion rules are designed to improve the quality of the fused image. Experiments on imagery acquired by the WorldView-3 satellite show that the proposed method produces high-quality fused images and outperforms most state-of-the-art remote-sensing image fusion methods in both subjective visual quality and objective evaluation metrics.

3.
To address the poor quality and slow speed of the cycle generative adversarial network (CycleGAN) in enhancing images of turbid water, this paper proposes a scalable, selectable, and lightweight feature extraction unit, BSDK (Bottleneck Selective Dilated Kernel), and uses it to design a new generator network, BSDKNet. A multi-scale loss function, MLF (Multi-scale Loss Function), is also proposed. On a self-built turbid-water image enhancement dataset, TC (Turbid and Clear), the proposed BM-CycleGAN improves accuracy by 3.27% over the original CycleGAN while reducing generator parameters by 4.15 MB and computation time by 0.107 s. The experimental results show that BM-CycleGAN is well suited to the turbid-water image enhancement task.

4.
To address the loss of contours, textures, and other features, and the resulting poor translation quality, when traditional generative adversarial networks (GANs) are used for image translation, an image translation algorithm based on a GAN with an improved U-Net model is proposed. First, experiments examine how the optimization algorithm, learning rate, and number of iterations of the Pix2Pix GAN affect translation quality, determining the model parameters and optimization method. Second, feature representation is strengthened by increasing the number of repeated deconvolution skip connections. Finally, model parameters are determined through experiments on the CUFS face database. Experiments show that with five repetitions of the deconvolution skip connection, the user-study satisfaction score for image translation reaches 42% and translation quality is optimal.

5.
To improve image super-resolution reconstruction, this paper introduces an attention mechanism into a multi-level residual network (Multi-level Residual Attention Network, MRAN) used as the reconstruction network of CycleGAN, and proposes MRA-GAN, a super-resolution reconstruction model based on CycleGAN. In MRA-GAN, the reconstruction network is responsible for mapping low-resolution (LR) images...

6.
Style transfer between images is a family of methods for translating images across domains. With the rapid development of generative adversarial networks in deep learning, their application to image style transfer has drawn growing attention. However, classical algorithms suffer from the difficulty of obtaining paired training data and from poor generation quality. This paper proposes an improved cycle-consistent generative adversarial network (CycleGAN++) that removes the ring-shaped network and, at the image generation stage, concatenates prior information of the target and source domains depthwise with the corresponding images; the loss function is optimized by replacing the cycle-consistency loss with a classification loss, achieving image style transfer that does not rely on a mapping between paired training data. Evaluations on the CelebA and Cityscapes datasets show that on two classical metrics, AMT perceptual studies and FCN score, the proposed algorithm achieves higher accuracy than classical algorithms such as CycleGAN, IcGAN, CoGAN, and DIAT.

7.
Event-based cameras generate sparse event streams and capture high-speed motion information; however, as time resolution increases, spatial resolution decreases sharply. Although generative adversarial networks have achieved remarkable results in traditional image restoration, using them directly for event inpainting obscures the fast-response characteristics of the event camera and fails to exploit the sparsity of the event stream. To tackle these challenges, an event-inpainting network is proposed. The number and structure of the network layers are redesigned to suit the sparsity of events, and the dimensionality of the convolutions is increased to retain more spatiotemporal information. To ensure the temporal consistency of the inpainted images, an event-sequence discriminator is added. Tests were performed on the DHP19 and MVSEC datasets. Compared with the state-of-the-art traditional image inpainting method, the proposed method reduces the number of parameters by 93.5% and increases inference speed sixfold without significantly degrading the quality of the restored images. A human pose estimation experiment further shows that the model can fill in human motion information in high-frame-rate scenes.

8.
Existing deep-network image enhancement models directly learn the mapping between degraded and clear images and ignore the fidelity constraint of the observation model, so the restored images contain false textures and lose detail. This paper proposes an improved deep network for infrared image enhancement that embeds a deep learning network into an iterative image enhancement task; by alternately optimizing an image enhancement module and a back-projection module, it enforces data-consistency constraints. The proposed network exploits not only deep feature-learning priors but also the consistency prior of the observation model. Experimental results show that the proposed algorithm achieves highly competitive reconstruction results on image denoising and deblurring tasks, with clear reconstructions even in low-contrast regions.

9.
Deep-neural-network methods for automatic content analysis and target recognition in multi-source images have achieved continual breakthroughs in recent years and are being widely deployed in fields such as intelligent security, medical image-assisted diagnosis, and autonomous driving. However, the adversarial vulnerability of deep neural networks poses serious security risks for their deployment in safety-sensitive domains. An effective way to improve adversarial robustness is to retrain the network with adversarial examples that maximize the network loss, but existing adversarial training requires class labels to generate adversarial examples and substantially degrades generalization on attack-free data. This paper proposes a method for improving the adversarial robustness of deep neural networks based on self-supervised contrastive learning, making full use of abundant unlabeled data to improve prediction stability and generalization in adversarial settings. Using a Siamese architecture, it maximizes the similarity of multi-layer representations between training samples and their unsupervised adversarial examples, strengthening the model's intrinsic robustness. The method can be used to improve the robustness of pretrained models and can also be combined with adversarial training to maximize "pretrain + fine-tune" robustness; experiments on a remote-sensing scene classification dataset demonstrate its effectiveness and flexibility.
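The abstract above describes maximizing multi-layer representation similarity between a sample and its adversarial counterpart; the paper's exact loss is not given, so the following is a minimal sketch, assuming a negative mean cosine similarity over paired hidden-layer features:

```python
import numpy as np

def cosine_similarity(a, b, eps=1e-8):
    """Cosine similarity between two flattened feature arrays."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def multilayer_similarity_loss(feats_clean, feats_adv):
    """Negative mean cosine similarity across paired hidden-layer features.

    feats_clean / feats_adv: lists of per-layer feature arrays for a sample
    and its adversarial counterpart. Minimizing this loss maximizes the
    agreement between the two representations, layer by layer.
    """
    sims = [cosine_similarity(fc, fa) for fc, fa in zip(feats_clean, feats_adv)]
    return -float(np.mean(sims))
```

Identical representations drive the loss to its minimum of -1; orthogonal representations give 0.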

10.
An unsupervised domain adaptation classification model based on generative adversarial networks   Cited by: 1 (self-citations: 0; citations by others: 1)
王格格  郭涛  余游  苏菡 《电子学报》2020,48(6):1190-1197
Generative-adaptation models implement their structure with generative adversarial networks and have achieved breakthroughs in domain adaptation learning, but parts of their network structure lack information interaction, and adversarial learning alone is insufficient to fully reduce the inter-domain distance, which degrades classification accuracy. This paper therefore proposes an unsupervised domain adaptation classification model based on GANs (Unsupervised Domain Adaptation classification model based on GAN, UDAG). The model jointly uses a generative adversarial network and a multi-kernel maximum mean discrepancy (MMD) criterion to optimize the inter-domain discrepancy, and fully exploits the information transfer between unsupervised adversarial training and supervised classification training to learn the features shared by the source and target distributions. Experiments in four domain adaptation settings show that UDAG learns better shared feature embeddings, achieves domain-adapted image classification, and improves classification accuracy markedly.
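The multi-kernel MMD criterion named above can be sketched as follows; this is a standard biased estimator with a sum of Gaussian kernels, and the bandwidths are illustrative assumptions, not the paper's values:

```python
import numpy as np

def _pairwise_sq_dists(X, Y):
    """Squared Euclidean distances between rows of X and rows of Y."""
    d = X[:, None, :] - Y[None, :, :]
    return np.sum(d * d, axis=-1)

def mk_mmd2(X, Y, bandwidths=(0.5, 1.0, 2.0)):
    """Biased estimate of squared multi-kernel MMD with Gaussian kernels.

    X: (n, d) source-domain features; Y: (m, d) target-domain features.
    The kernel is a sum of Gaussians over the given bandwidths (sigma).
    """
    def k(A, B):
        D = _pairwise_sq_dists(A, B)
        return sum(np.exp(-D / (2.0 * s * s)) for s in bandwidths)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean())
```

Features drawn from the same distribution yield a small value; a shifted target distribution yields a larger one, which is what the model minimizes to align domains.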

11.
Impressive progress has been made recently in image-to-image translation using generative adversarial networks (GANs). However, existing methods often fail when translating noisy source images to the target domain. To address this problem, we combine image-to-image translation with image denoising and propose an enhanced generative adversarial network (EGAN). In particular, building upon pix2pix, we introduce residual blocks into the generator network to capture deeper multi-level information between the source and target image distributions. Moreover, a perceptual loss is proposed to enhance the performance of image-to-image translation. As demonstrated through extensive experiments, the proposed EGAN alleviates the effects of noise in source images and significantly outperforms other state-of-the-art methods. Furthermore, we show experimentally that EGAN is also effective when applied to image denoising.

12.
Inconsistency caused by factors such as different camera imaging methods, complex imaging environments, and changes in lighting presents a huge challenge to person re-identification (re-ID). Unsupervised domain adaptation (UDA) can alleviate the inconsistency to a certain extent, but different datasets may not share any person identities, so identity must be taken into account when addressing domain dissimilarity. A camera imaging style transformation with preserved self-similarity and domain-dissimilarity (CSPSD) is proposed to solve the cross-domain issue in person re-ID. First, CycleGAN is applied to learn the style conversion between the source and target domains. Intra-domain identity constraints maintain identity consistency between the source and target domains during the image style transformation, and maximum mean discrepancy (MMD) is used to reduce the difference in feature distribution between the two domains. Then, a one-to-n mapping method is proposed to map positive pairs and distinguish negative pairs: a source-domain image and its transformed image, or a transformed image with the same identity information, compose a positive pair, while a transformed image and any image from the target domain compose a negative pair. Next, a circle loss function is used to improve the learning speed of positive and negative pairs. Finally, the proposed CSPSD, which effectively reduces the inter-domain difference, is combined with an existing feature-learning network to learn a person re-ID model. The method is evaluated on three public datasets, Market-1501, DukeMTMC-reID, and MSMT17, and the comparative experimental results confirm that it achieves highly competitive recognition accuracy in person re-ID.
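The circle loss used above for positive and negative pairs has a standard form (Sun et al.'s formulation); a minimal numpy sketch, with the usual default margin and scale rather than the paper's specific settings:

```python
import numpy as np

def circle_loss(sp, sn, m=0.25, gamma=64.0):
    """Circle loss over positive-pair similarities sp and negative-pair
    similarities sn (1-D arrays of cosine similarities in [-1, 1]).

    Each similarity gets an adaptive weight proportional to how far it is
    from its optimum, so poorly-separated pairs dominate the gradient.
    """
    sp, sn = np.asarray(sp, float), np.asarray(sn, float)
    ap = np.clip(1.0 + m - sp, 0.0, None)   # adaptive positive weights
    an = np.clip(sn + m, 0.0, None)         # adaptive negative weights
    delta_p, delta_n = 1.0 - m, m           # decision margins
    logit_p = -gamma * ap * (sp - delta_p)
    logit_n = gamma * an * (sn - delta_n)
    return float(np.log1p(np.exp(logit_n).sum() * np.exp(logit_p).sum()))
```

Well-separated pairs (high positive similarity, low negative similarity) produce a much smaller loss than confused pairs.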

13.
Haze is an aggregation of very fine, widely dispersed solid and/or liquid particles suspended in the atmosphere. In this paper, we propose an end-to-end network for single image dehazing that enhances the CycleGAN model by introducing a haze-specific feature transformer within the generator. The proposed model is trained in an unpaired fashion on clear and hazy images together and does not require pairs of hazy images and corresponding ground-truth clear images. Furthermore, it does not depend on estimating the parameters of the atmospheric scattering model; rather, it uses a K-estimation module as the generator's transformer for complete end-to-end modeling. The feature transformer maps the encoded features into the desired feature space and feeds them into the CycleGAN decoder to create a clear image. We further modify the cycle-consistency loss to combine an SSIM loss with a pixel-wise mean loss, producing a new loss function specific to the reconstruction task that enhances the model's performance. The model performs well even on the high-resolution images provided in the NTIRE 2019 single image dehazing challenge dataset. Further experiments on the NYU-Depth and RESIDE-β datasets show the efficacy of the proposed approach compared with the state of the art in removing haze from the input image.
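A cycle-consistency loss mixing SSIM with a pixel-wise term, as described above, can be sketched as follows. This is a simplified single-window SSIM (not the usual sliding-window version), and the L1 pixel term and mixing weight are illustrative assumptions:

```python
import numpy as np

def ssim_global(x, y, L=1.0, k1=0.01, k2=0.03):
    """Single-window SSIM over whole images with values in [0, L]
    (a simplification of the standard sliding-window SSIM)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def cycle_loss(x, x_rec, alpha=0.84):
    """Cycle-consistency loss mixing an SSIM term with pixel-wise L1."""
    return float(alpha * (1.0 - ssim_global(x, x_rec)) +
                 (1.0 - alpha) * np.abs(x - x_rec).mean())
```

A perfect reconstruction gives zero loss; any structural or pixel-level deviation increases it.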

14.
Semantic segmentation of high-resolution remote-sensing images is a current research hotspot in remote-sensing image processing. Traditional supervised segmentation methods require large amounts of labeled data, and the labeling process is difficult and time-consuming. To address this, a semi-supervised semantic segmentation method for high-resolution remote-sensing images based on generative adversarial networks is proposed, which obtains good segmentation results with only a small number of labels. The method attaches a fully convolutional auxiliary adversarial network to the segmentation network to help preserve label continuity in the segmentation results of high-resolution remote-sensing images. Furthermore, a novel adversarial loss capable of attention selection is proposed to address the imbalance between easy and hard examples that arises, once segmentation is already good, when the discriminator constrains the segmentation-network updates. Experiments on the ISPRS Vaihingen 2D semantic labeling challenge dataset show that, compared with other existing semantic segmentation methods, the proposed method improves the semantic segmentation accuracy of remote-sensing images by a large margin.
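The paper's attention-selective adversarial loss is not specified here; one common way to rebalance easy versus hard pixels is focal-style down-weighting of confident predictions, sketched below as an assumption, not the paper's exact formulation:

```python
import numpy as np

def focal_weighted_adv_loss(confidence, gamma=2.0, eps=1e-7):
    """Focal-style weighting of a per-pixel adversarial loss.

    confidence: (H, W) array, the discriminator's estimate that each
    pixel's prediction already looks 'real' (i.e., is easy). Easy pixels
    are down-weighted by (1 - p)**gamma so hard pixels dominate the
    segmentation-network update. This mirrors the focal-loss idea; the
    paper's attention-selective form may differ.
    """
    p = np.clip(confidence, eps, 1.0 - eps)
    return float(np.mean(-((1.0 - p) ** gamma) * np.log(p)))
```

Regions the discriminator already accepts contribute almost nothing, while hard regions keep a large gradient signal.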

15.
In this paper, we propose a solution for transforming spatially variant blurry images into the photo-realistic sharp manifold. Image deblurring is a valuable and challenging task in computer vision, but existing learning-based methods cannot produce images with clear edges and fine details, which poses a significant challenge for the generation-oriented loss functions these methods use. Instead of only designing architectures and loss functions for the generator, we propose a generative adversarial network (GAN) framework based on an edge adversarial mechanism and a partial weight-sharing network. To propel the entire network to consciously learn image edge information, we propose an edge reconstruction loss function and an edge adversarial loss function that constrain the generator and the discriminator respectively. We further introduce a partial weight-sharing structure in which sharp features from clean images encourage the recovery of image details in deblurred images; this structure improves image details effectively. Experimental results show that our method generates photo-realistic sharp images from real-world blurred images and outperforms state-of-the-art methods.
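An edge reconstruction loss of the kind named above can be sketched as an L1 distance between edge maps; the Sobel operator here is an assumption, since the abstract does not name the edge extractor:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def _conv2_valid(img, k):
    """2-D 'valid' correlation of a grayscale image with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

def edge_map(img):
    """Sobel gradient magnitude of a grayscale image."""
    gx, gy = _conv2_valid(img, SOBEL_X), _conv2_valid(img, SOBEL_Y)
    return np.sqrt(gx * gx + gy * gy)

def edge_reconstruction_loss(deblurred, sharp):
    """L1 distance between the edge maps of the deblurred and sharp images."""
    return float(np.abs(edge_map(deblurred) - edge_map(sharp)).mean())
```

A deblurred output whose edges match the sharp target drives this term to zero; missing edges are penalized directly.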

16.
Military equipment such as infrared warning systems and infrared imaging guided missiles requires large numbers of infrared simulation images for performance evaluation and simulated training, but current infrared simulation software generally suffers from low image realism and poor general applicability, and foreign technology embargoes have slowed the development of domestic infrared simulation software. Given the growing maturity of domestic visible-light image simulation, and to improve the quality of infrared simulation images, this paper proposes a method for generating simulated infrared images from visible-light images using a cycle-consistent generative adversarial network, and verifies experimentally that the algorithm is effective and feasible. The algorithm first extracts maritime targets from collected visible-light images with a region-growing algorithm to build a training dataset for visible-to-infrared generation of maritime targets, and then uses the trained network to generate simulated infrared images. Test experiments show that the simulated infrared images of maritime targets generated by this method are visually close to real infrared images, and the method can be applied in naval infrared military equipment simulation testing and training systems.

17.
黄攀  杨小冈  卢瑞涛  常振良  刘闯 《红外与激光工程》2021,50(12):20210281-1-20210281-10
To address the scarcity and acquisition difficulty of infrared ship target images, a data augmentation method for infrared ship images that combines geometric space and feature space is proposed, coupling geometric image transformations with the feature fitting of a pyramid generative adversarial network. First, infrared ship target images are augmented with geometric-space image transformations such as geometric transforms, image mixing, and random erasing. Then, according to the characteristics of infrared ship images, the pyramid generative adversarial network (SinGAN) is improved by introducing an In-SE inter-channel attention module into the generator to strengthen feature representation in small receptive fields, making it more suitable for infrared ship targets. Finally, at the dataset level, the geometric-space transformations and the feature-space GAN are combined to augment the original dataset. With the YOLOv3, SSD, R-FCN, and Faster R-CNN detection algorithms as baseline models, simulation experiments on infrared ship image data augmentation show that models trained on the augmented data improve the mean average precision (mAP) of ship target detection by about 10%, verifying the feasibility of the proposed method for small-sample infrared ship image data augmentation and providing a data foundation for improving infrared ship target detection algorithms.
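The In-SE module above is not detailed in the abstract; a standard squeeze-and-excitation channel attention block, which it plausibly builds on, can be sketched as follows (weight shapes and the reduction ratio are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_attention(x, w1, w2):
    """Squeeze-and-excitation channel attention on features x of shape
    (C, H, W). w1: (C//r, C) reduction FC weights; w2: (C, C//r)
    expansion FC weights, for reduction ratio r."""
    s = x.mean(axis=(1, 2))          # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)      # excitation: reduce + ReLU
    a = sigmoid(w2 @ z)              # per-channel attention weights in (0, 1)
    return x * a[:, None, None]      # recalibrate each channel
```

Each channel is rescaled by a learned weight in (0, 1), so informative channels can be emphasized relative to the rest.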

18.
To improve the visual quality of defogged aerial images, this work proposes a novel defogging algorithm based on a conditional generative adversarial network. More specifically, training is carried out through an end-to-end trainable deep neural network, and we upgrade the traditional adversarial loss function by incorporating an L1-regularized gradient term to encode the rich, detailed visual information inside each aerial image. In practice, to the best of our knowledge, existing image quality assessment algorithms can exhibit deviation and supersaturation distortion on aerial images. To alleviate this problem, we leverage a random forest classification model to learn the mapping between aerial-image features and quality-ranking results, thereby transforming defogged-image quality assessment into a classification problem. Comprehensive experimental results on our compiled fogged aerial image quality dataset clearly demonstrate the effectiveness of the proposed algorithm.

19.
An infrared image generation method based on a dual-discriminator relativistic cycle-consistent generative adversarial network (DDR-CycleGAN) is proposed. To counter the performance degradation caused by over-optimization that tends to arise under dual-discriminator supervision, the method introduces the idea of relative probability into the dual-discriminator CycleGAN (DD-CycleGAN): the discriminator estimates the relative rather than the absolute probability that an image is real, making the generated images closer to real ones. Trained and tested on the FLIR dataset, the proposed method improves the peak signal-to-noise ratio by 3.91% and reduces the FID (Fréchet Inception Distance) by 3.81% compared with DD-CycleGAN on visible-to-infrared image generation quality.
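The relative-probability idea above matches the standard relativistic average discriminator; a minimal sketch of that loss (the paper's exact dual-discriminator arrangement is not reproduced here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relativistic_d_loss(c_real, c_fake):
    """Relativistic average discriminator loss (RaGAN-style).

    c_real / c_fake: the discriminator's raw (pre-sigmoid) outputs on real
    and generated images. Instead of the absolute probability sigmoid(c),
    the discriminator scores how much MORE realistic a real image is than
    the average fake, and vice versa.
    """
    c_real = np.asarray(c_real, float)
    c_fake = np.asarray(c_fake, float)
    p_real_rel = sigmoid(c_real - c_fake.mean())  # real vs. average fake
    p_fake_rel = sigmoid(c_fake - c_real.mean())  # fake vs. average real
    eps = 1e-12
    return float(-(np.log(p_real_rel + eps).mean() +
                   np.log(1.0 - p_fake_rel + eps).mean()))
```

A discriminator that cleanly separates real from generated scores gets a much lower loss than one that cannot tell them apart.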

20.
In recent years, convolutional neural networks (CNNs) have been widely applied to synthetic aperture radar (SAR) target recognition. Because SAR target training datasets are usually small, CNN-based SAR image target recognition is prone to overfitting. A generative adversarial network (GAN) is an unsupervised training network in which a game between the generator and the discriminator makes the generated images hard for the discriminator to distinguish from real ones. This paper proposes a SAR target recognition method based on an improved convolutional neural network (ICNN) and an improved generative adversarial network (IGAN): the IGAN is first pretrained without supervision on the training samples, the trained IGAN discriminator parameters are then used to initialize the ICNN, the ICNN is fine-tuned on the training samples, and finally the trained ICNN classifies the test samples. MSTAR experiments show that the proposed method not only achieves a recognition rate of 96.37% even when the training set is reduced to 30% of its original size, but is also more robust to noise than using the ICNN directly.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
