Similar Documents
20 similar documents found (search time: 328 ms)
1.
Semantic segmentation of high-resolution remote sensing images is currently one of the research hotspots in remote sensing image processing. Traditional supervised segmentation methods require large amounts of labeled data, yet labeling is difficult and time-consuming. To address this problem, a semi-supervised semantic segmentation method for high-resolution remote sensing images based on a generative adversarial network is proposed, which achieves good segmentation results with only a small number of labeled samples. The method attaches a fully convolutional auxiliary adversarial network to the segmentation network, which helps preserve label continuity in the segmentation results of high-resolution remote sensing images. Furthermore, a novel adversarial loss capable of attention selection is proposed to address the imbalance between easy and hard samples that arises when the discriminator constrains the updates of the segmentation network once segmentation results are already good. Experimental results on the ISPRS Vaihingen 2D Semantic Labeling Challenge dataset show that, compared with existing semantic segmentation methods, the proposed method substantially improves the semantic segmentation accuracy of remote sensing images.
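A minimal PyTorch sketch of one plausible reading of such an attention-selective adversarial loss, using a focal-style weight that down-weights pixels the discriminator already accepts (the weighting scheme, names, and `gamma` are illustrative assumptions, not the paper's exact formulation):

```python
# Hypothetical attention-selective adversarial loss for the generator of a
# semi-supervised segmentation GAN (the focal-style weighting is an assumption).
import torch
import torch.nn.functional as F

def attention_adversarial_loss(d_confidence, gamma=2.0):
    """d_confidence: per-pixel discriminator output in (0, 1); values near 1
    mean the segmentation map already looks 'real' (easy pixels)."""
    weight = (1.0 - d_confidence).pow(gamma)        # down-weight easy pixels
    target = torch.ones_like(d_confidence)          # the generator wants D -> 1
    bce = F.binary_cross_entropy(d_confidence, target, reduction="none")
    return (weight * bce).mean()

# usage sketch: total = seg_ce_loss + lambda_adv * attention_adversarial_loss(D(G(x)))
```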

2.
In recent years, convolutional neural networks (CNNs) have been widely applied to synthetic aperture radar (SAR) target recognition. Because SAR training datasets are usually small, CNN-based SAR target recognition is prone to overfitting. A generative adversarial network (GAN) is an unsupervised training network in which a game between the generator and the discriminator makes the generated images hard for the discriminator to distinguish from real ones. This paper proposes a SAR target recognition method based on an improved convolutional neural network (ICNN) and an improved generative adversarial network (IGAN): the IGAN is first pre-trained on the training samples in an unsupervised manner, the trained IGAN discriminator parameters are then used to initialize the ICNN, the ICNN is fine-tuned on the training samples, and finally the trained ICNN classifies the test samples. Experimental results on MSTAR show that the proposed method achieves a recognition rate of 96.37% even when the training set is reduced to 30% of its original size, and that it is more robust to noise than using the ICNN directly.
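A minimal sketch of the weight-transfer step, assuming a discriminator and classifier that share the same convolutional trunk (module names, input size, and channel counts are illustrative assumptions):

```python
# Reusing GAN discriminator weights to initialize a classifier before fine-tuning.
# Assumes 64x64 single-channel SAR chips; the architecture is illustrative.
import torch
import torch.nn as nn

def conv_trunk():
    return nn.Sequential(
        nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
        nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
    )

discriminator = nn.Sequential(conv_trunk(), nn.Flatten(), nn.Linear(64 * 16 * 16, 1))
classifier    = nn.Sequential(conv_trunk(), nn.Flatten(), nn.Linear(64 * 16 * 16, 10))

# ... after unsupervised GAN pre-training of `discriminator` ...
# copy the shared convolutional trunk, then fine-tune `classifier` with labels
classifier[0].load_state_dict(discriminator[0].state_dict())
```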

3.
Salient region detection can be applied to object recognition, image segmentation, and video/image compression, and is an important research topic in computer vision. However, detection methods based on different visual saliency features often fail to detect salient objects accurately and are computationally expensive. Recently, convolutional neural network models have achieved great success in image analysis and processing. To improve salient region detection, this paper proposes an image saliency detection method based on a supervised generative adversarial network. A supervised GAN is built from deep convolutional neural networks; through continuous adversarial training between the generator and discriminator networks, the convolutional networks accurately learn the features of salient image regions, so that the generator outputs an accurate saliency map. In addition, the loss function of the supervised GAN is defined by combining the network's own adversarial error with the L1 distance between the generator output and the ground-truth map, which improves detection accuracy. Experimental results on the MSRA10K and ECSSD databases show that the method achieves precision of 94.19% and 96.24%, recall of 93.99% and 90.13%, and F-measure values of 94.15% and 94.76%, respectively, outperforming previously popular saliency detection models.
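A short sketch of a generator objective of this form, combining an adversarial term with an L1 term against the ground-truth map (`lambda_l1` and the logit-based formulation are assumptions):

```python
# Generator loss = adversarial term + weighted L1 term against the ground truth.
import torch
import torch.nn.functional as F

def generator_loss(d_fake_logits, pred_saliency, gt_saliency, lambda_l1=100.0):
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))   # fool the discriminator
    l1 = F.l1_loss(pred_saliency, gt_saliency)           # stay close to ground truth
    return adv + lambda_l1 * l1                          # lambda_l1 is an assumed weight
```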

4.
To address the low efficiency of manual processing of massive surveillance video and the low recognition rate of blurry images during the loading of hazardous petrochemical goods, an intelligent restoration and detection method for blurred surveillance images is proposed, combining a generative adversarial network (GAN), a convolutional neural network (CNN), and an extreme learning machine (ELM). First, a deep learning network is used as the object detection framework, and the zero-sum game between the GAN's generator and discriminator is used to restore blurred images into clear, complete images of the loading operation. Second, the CNN's ability to adaptively learn image features is used to extract features from the restored images. Finally, the extracted features are fed into an ELM classifier for target recognition and classification to determine whether the operation violates regulations. Experimental results show that the proposed method restores images quickly with a natural visual appearance, achieves high recognition accuracy, and generalizes well.
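For the classification stage, a minimal extreme learning machine sketch, assuming the CNN features are already stacked into a matrix `X` and the labels are one-hot rows of `Y` (hidden size and activation are illustrative assumptions):

```python
# Minimal ELM classifier: random hidden layer + closed-form least-squares output.
import numpy as np

def elm_train(X, Y, n_hidden=512, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random, untrained input weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # random hidden-layer features
    beta = np.linalg.pinv(H) @ Y                      # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta                  # class = scores.argmax(axis=1)
```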

5.
To address the degradation of downstream image processing caused by noise introduced during image acquisition and transmission, a multi-channel image denoising algorithm based on generative adversarial networks (GANs) is proposed. The noisy color image is separated into R, G, and B channels, and each channel is denoised by an end-to-end trainable GAN with an identical architecture. The GAN generator is built from a U-Net-derived network and residual blocks, so that low-level feature information can be referenced to extract deep features effectively without losing details; the discriminator is built as a fully convolutional network, yielding pixel-level classification and improving discrimination accuracy. In addition, to improve denoising while preserving image details as much as possible, the denoising network uses a composite loss function built from three loss terms: adversarial loss, visual perceptual loss, and mean squared error loss. Finally, the outputs of the three channels are fused by arithmetic averaging to obtain the final denoised image. Experimental results show that, compared with mainstream algorithms, the proposed algorithm effectively removes image noise and better restores the details of the original image.
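A sketch of a composite per-channel loss of the kind described, mixing adversarial, perceptual, and MSE terms (`feat` stands for an assumed fixed feature extractor such as a truncated VGG, and the weights are placeholders, not the paper's values):

```python
# Composite loss for one channel's denoising GAN: adversarial + perceptual + MSE.
import torch
import torch.nn.functional as F

def channel_loss(d_fake_logits, denoised, clean, feat,
                 w_adv=1e-3, w_per=1.0, w_mse=1.0):      # weights are assumptions
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))    # adversarial term
    perceptual = F.mse_loss(feat(denoised), feat(clean))  # feature-space (perceptual) term
    mse = F.mse_loss(denoised, clean)                     # pixel-space term
    return w_adv * adv + w_per * perceptual + w_mse * mse
```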

6.
于贺  余南南 《信号处理》2019,35(12):2045-2054
To improve data augmentation for deep learning, the generative adversarial network (GAN) model is improved into a fast-converging GAN that overcomes the instability, slow convergence, and mode collapse commonly seen in GAN training. Multi-scale convolutions are used in the discriminator to strengthen its feature extraction capability; residual units are added to the generator so that it can quickly fit the distribution of the real data; and the discriminator is pre-trained, which improves the stability of early generator training and accelerates the training process. Experiments on the CIFAR-10 benchmark dataset, compared against several GAN-based models, confirm that the improved algorithm performs better, with higher image quality and diversity. The improved algorithm is further applied to the chest X-ray dataset from the US NIH clinical database to generate augmented samples, and a Turing test confirms its effectiveness.
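A minimal sketch of a multi-scale convolution block of the kind described for the discriminator, with parallel kernels of several sizes concatenated along the channel dimension (kernel sizes and channel split are assumptions):

```python
# Multi-scale convolution block: parallel 3x3 / 5x5 / 7x7 branches, concatenated.
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3                      # illustrative channel split
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, 5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, 7, padding=3)

    def forward(self, x):
        # each branch sees the same input at a different receptive-field scale
        return torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1)
```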

7.
Conventional face image generation using generative adversarial networks (GANs) is limited by the quality of the generated images, since the generator and discriminator share the same backpropagation network. In this paper, we discuss algorithms that can improve the quality of generated images, that is, high-quality face image generation. To stabilize the network, we replace the MLP with a convolutional neural network (CNN) and remove the pooling layers. We conduct comprehensive experiments on the LFW and CelebA datasets, and the results show the effectiveness of the proposed method.
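A minimal DCGAN-style sketch of the idea: an all-convolutional generator built from strided transposed convolutions instead of an MLP, with no pooling layers (latent size, depth, and output resolution are illustrative assumptions):

```python
# All-convolutional generator: 100-d noise -> 64x64 RGB face image (illustrative sizes).
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(),  # -> 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(),  # -> 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),    # -> 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),     # -> 32x32
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                          # -> 64x64
)

fake = generator(torch.randn(8, 100, 1, 1))   # shape (8, 3, 64, 64)
```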

8.
程小龙  胡煦航  张斌 《激光与红外》2023,53(12):1928-1934
Water leakage is one of the most hazardous defects threatening shield tunnel safety, and fast, accurate leakage detection is the basis for effectively controlling and remediating it. Existing leakage detection methods have achieved a degree of automation, but suffer from low data acquisition efficiency, demanding on-site acquisition conditions, and large training sample requirements. To address these problems, this paper uses shield tunnel intensity images acquired by mobile LiDAR as the data source and proposes a leakage detection method based on generative adversarial networks. Starting from the existing V-GAN model and using only a small number of labeled samples, a U-Net model with Dense blocks as the encoder and residual blocks as the decoder is built as the generator network, and an improved deep residual U-Net (Improved ResUnet) is used as the discriminator network, forming a DRUnet-IRUnet GAN for leakage detection in shield tunnel LiDAR intensity images. Experimental results show that with only 500, 200, or 100 input samples, the constructed DRUnet-IRUnet GAN detects leakage in shield tunnel intensity images better than V-GAN, demonstrating the good performance of the improved network.

9.
黄梦涛  高娜  刘宝 《红外技术》2022,44(1):41-46
The original generative adversarial network (GAN) is prone to vanishing gradients and mode collapse during training, and its deblurring performance is poor. This paper therefore proposes a dual discriminator weighted generative adversarial network (D2WGAN)...

10.
Compared with optical images, for which large amounts of annotated data exist, synthetic aperture radar (SAR) images lack sufficient labeled samples, limiting the performance of supervised SAR target recognition algorithms, while unsupervised recognition methods struggle to meet practical needs. This paper therefore proposes a semi-supervised generative adversarial network based on self-attention feature fusion. First, self-attention layers are introduced when constructing the generator and discriminator to overcome the convolution operator's...

11.
In this paper, we propose a hybrid model that maps the input noise vector to the label of the image generated by a generative adversarial network (GAN). The model consists mainly of a pre-trained deep convolutional generative adversarial network (DCGAN) and a classifier. Using the model, we visualize the distribution of two-dimensional input noise that leads to a specific type of generated image after each training epoch of the GAN. The visualization reveals the distribution of the input noise vector and the performance of the generator. With this feature, we build a guided generator (GG) able to produce a desired fake image. Two methods are proposed to build the GG: the most significant noise (MSN) method and a labeled-noise method. The MSN method generates images precisely but with less variation; in contrast, the labeled-noise method offers more variation but is slightly less stable. Finally, we propose a criterion to measure the performance of the generator, which can be used as a loss function to train the network effectively.
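A short sketch of the noise-to-label visualization, assuming a pretrained `generator` that takes 2-D noise and a pretrained `classifier` (grid range, grid size, and function names are arbitrary assumptions):

```python
# Sweep a 2-D grid of noise vectors, generate an image for each grid point,
# classify it, and record which label that noise point produces.
import torch

@torch.no_grad()
def noise_label_map(generator, classifier, grid_size=50, span=3.0):
    axis = torch.linspace(-span, span, grid_size)
    labels = torch.empty(grid_size, grid_size, dtype=torch.long)
    for i, z1 in enumerate(axis):
        for j, z2 in enumerate(axis):
            z = torch.tensor([[z1.item(), z2.item()]])    # one 2-D noise vector
            img = generator(z)                            # fake image for this noise
            labels[i, j] = int(classifier(img).argmax())  # label this noise maps to
    return labels                                         # plot e.g. with imshow
```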

12.
孙浩  陈进  雷琳  计科峰  匡纲要 《雷达学报》2021,10(4):571-594
In recent years, deep recognition models represented by convolutional neural networks have made major breakthroughs, repeatedly raising the state of the art in optical and SAR image scene classification, object detection, semantic segmentation, and change detection. However, deep recognition models are fundamentally statistical learners that depend on large-scale, high-quality training data and can offer only limited reliability guarantees. Deep convolutional neural network image recognition models are easily fooled by small adversarial perturbations that are imperceptible to human vision, posing serious risks to their wide deployment in security-sensitive domains such as healthcare, security, autonomous driving, and the military. This paper first analyzes, from an information security perspective, the potential security risks of image recognition systems based on deep convolutional neural networks, focusing on the characteristics of poisoning attacks and evasion attacks and the causes of adversarial vulnerability. It then gives a basic definition of adversarial robustness, establishes adversary models for adversarial attack and defense, and systematically reviews progress in adversarial example attacks, active and passive adversarial defenses, and adversarial robustness evaluation, analyzing representative methods through an adversarial attack case study on SAR image target recognition. Finally, drawing on the authors' own work, it identifies open problems, providing a reference for improving the robustness of deep convolutional neural network image recognition models in open, dynamic, adversarial environments.
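As a concrete example of the evasion attacks the survey discusses, a minimal FGSM sketch that adds an imperceptible gradient-sign perturbation to an input (`model`, `image`, `label`, and `epsilon` are assumed inputs; this is the standard FGSM formulation, not a method from the survey itself):

```python
# Fast Gradient Sign Method: one-step adversarial perturbation of an input image.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # small perturbation in the gradient-sign direction, clipped to valid range
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```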

13.
It is becoming increasingly easy to obtain abundant hyperspectral images (HSIs); even so, achieving high spatial resolution remains critical. In this paper, a method named hyperspectral image super-resolution generative adversarial network (HSI-RGAN) is proposed to enhance the spatial resolution of HSIs without decreasing their spectral resolution. Different from existing methods with the same purpose, which are based on convolutional neural networks (CNNs) and driven by a pixel-level loss function, the new generative adversarial network (GAN) has a redesigned framework and a targeted loss function. Specifically, the discriminator uses the structure of the relativistic discriminator, which provides feedback on how much the generated HSI looks like the ground truth. The generator achieves more authentic details and textures by removing the pooling and batch normalization layers and using smaller filter sizes and two-step upsampling layers. The loss function is improved to explicitly take spectral differences into account, avoiding artifacts and minimizing the spectral distortion that neural networks may introduce. In addition, pre-training with the visual geometry group (VGG) network helps the entire model initialize more easily. Benefiting from these changes, the proposed method obtains significant advantages over the original GAN, and experimental results show that it performs better than several state-of-the-art methods.
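A sketch of the standard relativistic-average discriminator and generator losses, which is one common reading of the relativistic discriminator mentioned above (the exact details of HSI-RGAN's loss are not reproduced here):

```python
# Relativistic-average GAN losses: the critic scores how much more realistic
# real samples look than generated ones, and vice versa.
import torch
import torch.nn.functional as F

def relativistic_d_loss(real_logits, fake_logits):
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def relativistic_g_loss(real_logits, fake_logits):
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel))
            + F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel)))
```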

14.
In this paper, we propose a solution for transforming spatially variant blurry images into the photo-realistic sharp manifold. Image deblurring is a valuable and challenging task in computer vision; however, existing learning-based methods cannot produce images with clear edges and fine details, which poses significant challenges for the generation-based loss functions used in those methods. Instead of only designing architectures and loss functions for the generator, we propose a generative adversarial network (GAN) framework based on an edge adversarial mechanism and a partial weight sharing network. To push the entire network to learn image edge information consciously, we propose an edge reconstruction loss function and an edge adversarial loss function to constrain the generator and the discriminator, respectively. We further introduce a partial weight sharing structure in which sharp features from clean images encourage the recovery of image details in deblurred images; this structure improves image details effectively. Experimental results show that our method generates photo-realistic sharp images from real-world blurred images and outperforms state-of-the-art methods.
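One plausible form of an edge reconstruction loss, comparing Sobel edge maps of the deblurred and sharp images with an L1 penalty (the Sobel operator is an illustrative choice, not necessarily the paper's):

```python
# Edge reconstruction loss: L1 distance between Sobel edge maps.
import torch
import torch.nn.functional as F

def sobel_edges(img):                                   # img: (N, C, H, W)
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      dtype=img.dtype, device=img.device)
    k = torch.stack([kx, kx.t()]).unsqueeze(1)          # (2, 1, 3, 3): x and y kernels
    gray = img.mean(dim=1, keepdim=True)                # collapse color channels
    g = F.conv2d(gray, k, padding=1)                    # horizontal + vertical gradients
    return torch.sqrt((g ** 2).sum(dim=1, keepdim=True) + 1e-6)

def edge_reconstruction_loss(deblurred, sharp):
    return F.l1_loss(sobel_edges(deblurred), sobel_edges(sharp))
```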

15.
Academic paper recommendation aims to provide users with personalized paper resources. To address the high data sparsity and lack of negative samples faced by collaborative filtering, an adversarial recommendation model for academic papers that fuses fine-grained semantic features, TAGAN (title and abstract GAN), is proposed. First, starting from the semantically rich titles and abstracts, a convolutional neural network (CNN) extracts global features of the title, and a two-layer long short-term memory (LSTM) network models the word sequence and sentence sequence of the abstract separately, while an attention mechanism links the title and abstract semantically. The semantic features of the papers are then incorporated into a recommendation framework based on generative adversarial networks (GANs) and trained; the generative model fits the user's interest preferences and can effectively replace negative sampling. Finally, experimental comparisons on public datasets show that TAGAN outperforms the baseline models on all metrics, verifying its effectiveness.

16.
To address the differing costs of acquiring images in different spectral bands, an image translation method based on generative adversarial networks is proposed. The translation is designed so that image contours remain unchanged to the naked eye. First, the generator and discriminator are trained alternately on paired training data, continually optimizing the loss function until the model reaches a Nash equilibrium. The trained model is then evaluated on test data, with translation quality judged both subjectively by visual inspection and objectively by the mean absolute error and mean squared error, thereby achieving translation between images of different spectral bands. The generator follows the U-Net architecture, the discriminator uses a conventional convolutional neural network architecture, and an L1 loss is added to the loss function to preserve high- and low-frequency image features before and after translation. Experiments on translation between infrared and visible-light images show that the designed generative adversarial network achieves good translation between the two.
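A tiny U-Net-style generator sketch with a single downsampling stage and skip connection, illustrating the architecture choice for infrared-to-visible translation (channel counts and depth are illustrative, not the paper's configuration):

```python
# Minimal U-Net-style generator: encode, downsample, upsample, and concatenate
# the skip connection before decoding (1-channel IR in, 3-channel visible out).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(64, out_ch, 3, padding=1)   # 64 = 32 (skip) + 32 (upsampled)

    def forward(self, x):
        e = self.enc(x)                       # (N, 32, H, W)
        u = self.up(self.down(e))             # back to (N, 32, H, W)
        return torch.tanh(self.dec(torch.cat([e, u], dim=1)))   # skip connection
```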

17.
A survey of image classification algorithms based on convolutional neural networks   (Cited by: 1; self-citations: 0; citations by others: 1)
杨真真  匡楠  范露  康彬 《信号处理》2018,34(12):1474-1489
With the arrival of big data and the growth of computing power, deep learning (DL) has swept the globe. Traditional image classification methods struggle to handle massive image data and cannot meet demands on classification accuracy and speed; image classification based on convolutional neural networks (CNNs) has broken through this bottleneck and become the mainstream approach, and how to use CNNs effectively for image classification has become a research hotspot in computer vision in China and abroad. After a systematic study of convolutional neural networks and their application to image processing, this paper presents the mainstream CNN architectures used for image classification, their advantages and disadvantages, their time/space complexity, problems that may arise during model training, and corresponding solutions; it also introduces generative adversarial networks and capsule networks as extensions of deep-learning-based image classification. Simulation experiments then verify that CNN-based image classification outperforms traditional methods in accuracy, compare the performance of currently popular CNN models, and further confirm the strengths and weaknesses of each. Finally, experiments and analyses are conducted on overfitting, dataset construction methods, and the performance of generative adversarial networks and capsule networks.
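A minimal CNN classifier sketch of the kind the survey covers, for 32x32 RGB inputs such as CIFAR-style data (architecture and sizes are illustrative, not any specific model from the survey):

```python
# Small CNN image classifier: two conv/pool stages, then a dropout-regularized head.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Dropout(0.5),       # dropout against overfitting
    nn.Linear(128, 10),                                           # 10-class logits
)

logits = cnn(torch.randn(4, 3, 32, 32))   # shape (4, 10)
```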

18.
This paper proposes a novel model for saliency detection using adversarial learning networks, in which the generator produces the saliency map and the discriminator guides the training of the overall network. Concretely, training consists of three steps: training the generator, training the discriminator, and training the overall network end to end. The key to the training process lies in the discriminator, which is designed to provide feedback that accelerates the generator and refines the saliency map. During the training of the overall network, the generator's output, i.e. the coarse saliency map, is fed into the discriminator, which yields the corresponding feedback; in this way, we obtain a final generator with higher performance. For testing, the trained generator is used to perform saliency detection. Extensive experiments on four challenging saliency detection datasets show that our model not only achieves favorable performance against state-of-the-art saliency models but also converges faster during training.
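A sketch of the alternating update scheme described above, where the discriminator is trained on ground-truth versus coarse maps and its feedback then updates the generator (models, optimizers, and data are assumed; a conditional discriminator that also sees the input image is omitted for brevity):

```python
# One alternating training step: update the discriminator, then the generator.
import torch
import torch.nn.functional as F

def train_step(G, D, opt_g, opt_d, image, gt_map):
    # 1) discriminator step: ground-truth maps as "real", coarse maps as "fake"
    coarse = G(image).detach()
    real_logits, fake_logits = D(gt_map), D(coarse)
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
              + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) generator step: the discriminator's feedback refines the coarse map
    fake_logits = D(G(image))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```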

19.
易拓源  户盼鹤  刘振 《信号处理》2023,39(2):323-334
Image super-resolution is an important means, in ISAR deception jamming, of simulating high-fidelity false targets against wide-bandwidth ISAR when model samples are incomplete. Generative adversarial networks (GANs) can achieve ISAR image super-resolution through end-to-end mapping; however, when the resolution of the test inputs differs greatly from that of the training inputs, spurious scattering points appear in the super-resolved images and distort the target. Considering that cycle-consistent generative adversarial networks (CycleGAN) adapt better to differences in input samples, this paper proposes a super-resolution sample generation method for ISAR deception jamming based on an improved CycleGAN, improving the network in three respects: the loss function, the optimization process, and the discriminator structure. The improvements accelerate convergence and generalize better to ISAR images whose input resolutions differ greatly. The effectiveness of the method is verified with anechoic chamber measurement data: compared with the GAN method, for test inputs whose resolution differs greatly from that of the training inputs, the scattering point positions of the generated super-resolution samples match the real data better.
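A short sketch of the cycle-consistency term at the core of CycleGAN, here read as mappings between low- and high-resolution ISAR images (generator names and `lambda_cyc` are assumptions; the paper's improved loss is not reproduced):

```python
# Cycle-consistency loss: mapping to the other domain and back should recover the input.
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, low_res, high_res, lambda_cyc=10.0):
    forward_cycle = F.l1_loss(F_inv(G(low_res)), low_res)     # low -> high -> low
    backward_cycle = F.l1_loss(G(F_inv(high_res)), high_res)  # high -> low -> high
    return lambda_cyc * (forward_cycle + backward_cycle)
```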

20.
Underwater images play an essential role in acquiring and understanding underwater information, and high-quality underwater images can guarantee the reliability of underwater intelligent systems. Unfortunately, underwater images are characterized by low contrast, color casts, blurring, low light, and uneven illumination, which severely affect the perception and processing of underwater information. Numerous methods have been proposed to improve the quality of acquired underwater images, particularly with the emergence of deep learning; however, the performance of underwater image enhancement methods is still unsatisfactory due to the lack of sufficient training data and effective network structures. In this paper, we address this problem with a conditional generative adversarial network (cGAN) in which a clear underwater image is produced by a multi-scale generator. In addition, we employ a dual discriminator to capture local and global semantic information, which pushes the results generated by the multi-scale generator to be realistic and natural. Experiments on real-world and synthetic underwater images demonstrate that the proposed method performs favorably against state-of-the-art underwater image enhancement methods.
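A sketch of the dual-discriminator idea: the generator must satisfy both a global discriminator scoring the whole enhanced image and a local one scoring random patches (patch size, crop count, and weighting are assumptions):

```python
# Generator-side adversarial loss combining a global and a local (patch) discriminator.
import torch
import torch.nn.functional as F

def dual_discriminator_g_loss(global_d, local_d, enhanced, patch=64, n_patches=4):
    g_logits = global_d(enhanced)                          # whole-image realism
    loss = F.binary_cross_entropy_with_logits(g_logits, torch.ones_like(g_logits))
    _, _, h, w = enhanced.shape
    for _ in range(n_patches):                             # random local crops
        y = torch.randint(0, h - patch + 1, (1,)).item()
        x = torch.randint(0, w - patch + 1, (1,)).item()
        l_logits = local_d(enhanced[:, :, y:y + patch, x:x + patch])
        loss = loss + F.binary_cross_entropy_with_logits(
            l_logits, torch.ones_like(l_logits)) / n_patches
    return loss
```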
