Similar Documents
 19 similar documents found (search time: 187 ms)
1.
Because low-light images suffer from low contrast, severe loss of detail, and heavy noise, existing object detection algorithms perform poorly on them. This paper therefore proposes a low-light object detection method combining a spatial-aware attention mechanism and multi-scale feature fusion (SAM-MSFF). The method first fuses multi-scale features through a multi-scale interactive memory pyramid to enhance the useful information in low-light image features, and sets up memory vectors to store sample features and capture latent correlations between samples. Next, a spatial-aware attention mechanism is introduced to obtain long-range contextual and local information of features in the spatial domain, strengthening target features in low-light images while suppressing background information and noise. Finally, a multi-receptive-field enhancement module dilates the receptive fields of the features and performs grouped reweighting over features with different receptive fields, allowing the detection network to adaptively adjust its receptive-field size according to the multi-scale input. Experiments on the ExDark dataset show that the method reaches a mean Average Precision (mAP) of 77.04%, which is 2.6% to 14.34% higher than existing mainstream object detectors.

2.
Low-light images captured in complex environments exhibit uneven illumination and the superposition of multiple light sources, so enhanced results often look unrealistic and contain extra noise. To address these characteristics, a low-light image enhancement method based on a deep attention mechanism is proposed. A generative adversarial global self-attention low-light enhancement network (GSLE-GAN) is designed to perform the enhancement. An attention module is designed and used in the generator to improve the model's ability to capture illumination-distribution features and the realism of the generated images; a local discriminator and a global discriminator act jointly so that the images retain richer detail; and the model is trained on unpaired data to improve robustness and further guarantee realism. Comparative experiments demonstrate the superiority of the proposed method, and its effectiveness is further verified on an object detection task.

3.
Photographs taken under low illumination typically suffer from the coupled problems of low brightness, color distortion, high noise, and degraded detail, which makes low-light image enhancement a challenging task. Existing deep-learning-based methods usually focus on improving brightness and color, leaving defects such as noise in the enhanced images. To address this, this paper proposes a task-decoupled low-light image enhancement method: according to the different demands the task places on high-level and low-level features, it is decoupled into brightness-and-color enhancement and detail reconstruction, yielding a Two-Branch Low-light Image Enhancement Network (TBLIEN). The brightness-and-color branch adopts a U-Net structure with global features, extracting deep semantic information to improve brightness and color; the detail-reconstruction branch uses a fully convolutional network that preserves the original resolution to restore detail and remove noise. In addition, for the detail-reconstruction branch this paper proposes a semi-dual attention residual module that preserves contextual features while strengthening them via spatial and channel attention, enabling finer detail reconstruction. Extensive experiments on synthetic and real datasets show that the model outperforms state-of-the-art low-light enhancement methods, generalizes better, and also transfers to other enhancement tasks such as underwater image enhancement.

4.
Images captured in low-light scenes are dim, low in contrast, noisy, and lacking in detail, so existing detection models mislocalize and misclassify objects in them, lowering the final detection accuracy. This paper proposes a low-light object detection method based on Night-YOLOX. First, a Low-level Feature Gathering Module (LFGM) is designed and merged into the backbone: capturing more effective low-level features in low-light scenes aids target localization, and the module gathers discriminative low-level features from shallow feature maps and feeds them into the high-level feature maps and deeper convolution stages, compensating for the loss of low-level features such as edges, contours, and textures during feature extraction from low-light images. Then, an Attention Guidance Block (AGB) is embedded into the neck of the detector to reduce the influence of noise, guiding the model to infer complete object regions in the feature maps and extract more useful object features, which improves classification accuracy. Finally, experiments on the real low-light dataset ExDark show that Night-YOLOX delivers better detection performance in low-light scenes than other mainstream object detection methods.

5.
To address the dark colors, low and uneven brightness, post-enhancement color cast, and high noise of low-light images of underground spaces, an improved AM-RetinexNet enhancement algorithm fused with a non-physical model is proposed. The algorithm converts the RGB image into HSV components and exploits the mutual independence of the HSV space to enhance brightness and color separately: the S component is adaptively adjusted using information extracted from the V component, while the V component is enhanced by a RetinexNet optimized with histogram equalization and an attention mechanism. The processed HSV components are then converted back to RGB, and adaptive color restoration yields the illumination-enhanced image. Comparative experiments show that the method performs well in detail handling, overall brightness enhancement, denoising, and color correction, achieving the best scores for average mutual information (MI), standard deviation (STD), structural similarity (SSIM), average gradient (AG), spatial frequency (SF), and peak signal-to-noise ratio (PSNR), with mean values of 6.18, 70.62, 0.56, 13.29, 36.53, and 39.22 respectively.
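The V-channel histogram-equalization step mentioned above can be sketched in a few lines. This is the generic textbook routine, not the authors' AM-RetinexNet code; 8-bit values and a flattened channel are assumed for brevity.

```python
def equalize_channel(values, levels=256):
    """Classic histogram equalization over a flat list of 0..levels-1 ints."""
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    cdf, total = [], 0
    for h in hist:                # cumulative distribution of intensities
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(values)
    if n == cdf_min:              # constant channel: nothing to equalize
        return list(values)
    scale = (levels - 1) / (n - cdf_min)
    return [round((cdf[v] - cdf_min) * scale) for v in values]

# Dark, low-contrast V channel: values bunched near the bottom of the range
# get stretched across the full 0..255 interval.
v = [10, 10, 12, 12, 14, 14, 16, 16]
print(equalize_channel(v))  # → [0, 0, 85, 85, 170, 170, 255, 255]
```

In the paper's pipeline the equalized V component would then be fed to the attention-optimized RetinexNet rather than used directly.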

6.
To tackle the difficulties of brightness improvement, noise suppression, and texture/color consistency in low-light image enhancement models, this paper proposes a method based on shifted-window self-attention. Using a U-shaped structure as the basic framework and the shifted-window multi-head self-attention model as the building block, an enhancement network composed of an encoder, a decoder, and skip connections is constructed. The network brings the feature-extraction strengths of self-attention to low-light image enhancement, establishing long-range dependencies between image features and effectively capturing global features. The method is compared quantitatively and qualitatively with currently popular algorithms. Subjectively, it noticeably brightens images, clearly suppresses noise, and preserves texture detail and color well; objectively, its peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and learned perceptual image patch similarity (LPIPS) improve on the best competing values by 0.35 dB, 0.041, and 0.031 respectively. The experimental results show that the method effectively improves both the subjective quality and the objective metrics of low-light images and has practical value.
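Shifted-window self-attention restricts attention to local windows and cyclically shifts the partition between layers so that information can cross window borders. A toy sketch of that window bookkeeping follows; it is illustrative only (the window size, shift, and cyclic-index formulation are assumptions, not the paper's implementation).

```python
def partition_windows(h, w, win, shift=0):
    """Map each (row, col) of an h x w feature map to a window index,
    after an optional cyclic shift of the grid by `shift` positions."""
    cols = w // win
    index = {}
    for r in range(h):
        for c in range(w):
            sr, sc = (r + shift) % h, (c + shift) % w
            index[(r, c)] = (sr // win) * cols + sc // win
    return index

# Without a shift, opposite corners of a 4x4 map sit in different windows;
# after shifting, positions near the borders fall into shared windows, which
# is what lets successive layers exchange information across windows.
plain = partition_windows(4, 4, 2)
shifted = partition_windows(4, 4, 2, shift=1)
print(plain[(0, 0)], plain[(3, 3)], shifted[(0, 0)] == shifted[(3, 3)])
```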

7.
Imaging devices in dark environments produce images with low contrast, lost detail, and color distortion, which severely interferes with applications such as video surveillance, intelligent transportation, and face recognition. To solve this problem, this paper proposes a composite residual network fused with an attention mechanism for low-light image enhancement. The algorithm first converts the color space (RGB to HSV) and feeds the luminance component V into the constructed neural network, after which the neural network...

8.
肖鹏  王红茹 《激光杂志》2022,43(4):114-119
To address the detail loss in underwater images caused by locally low illumination, and the over-enhancement produced by existing whole-image underwater enhancement methods, an underwater image enhancement method based on an improved Retinex-Net is proposed. Low-light regions at arbitrary positions in the image are obtained by binarization using HSV color-space thresholds; a convolutional neural network learns and decomposes the low-light regions, and the decomposition results are trained end to end; in the enhancement network, U-Net is used to...

9.
A Low-Light Image Enhancement Method Based on a U-Net Generative Adversarial Network   (Times cited: 3; self-citations: 0; citations by others: 3)
江泽涛  覃露露 《电子学报》2020,48(2):258-264
Images captured in low-light environments have low signal-to-noise ratio, low contrast, and low resolution, making them hard to recognize and use. To improve the quality of low-light images, this paper proposes a low-light image enhancement method based on a U-Net generative adversarial network. First, the generator of the GAN is implemented with a U-Net framework; the GAN then learns the feature mapping from low-light images to normal-light images, finally enhancing the illumination of the low-light images. Experimental results show that, compared with mainstream algorithms, the proposed method more effectively improves the brightness and contrast of low-light images.

10.
For low-light surveillance images containing heavy noise, the HSV color space is chosen according to visual-psychological perception characteristics, and a new denoising algorithm for low-light color images is proposed in this space: the hue is kept unchanged, while median filtering and edge-extraction-based denoising are applied to the saturation and value components according to their different characteristics. Experimental results show that the algorithm improves the image signal-to-noise ratio and denoises effectively while keeping hue and edges unchanged.
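The median-filtering step applied to one of the components can be illustrated in one dimension. This is a minimal generic sketch, not the paper's code; the 3-tap window and edge replication are assumptions.

```python
def median_filter_1d(seq, k=3):
    """k-point median filter with edge replication (k odd)."""
    r = k // 2
    padded = [seq[0]] * r + list(seq) + [seq[-1]] * r
    return [sorted(padded[i:i + k])[r] for i in range(len(seq))]

# The impulse spikes (200 and 0) typical of sensor noise in dark frames are
# removed while the smooth trend of the signal is preserved.
noisy = [10, 11, 200, 12, 13, 0, 14]
print(median_filter_1d(noisy))  # → [10, 11, 12, 13, 12, 13, 14]
```

A 2-D version would slide a k x k window over the saturation or value plane in the same way.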

11.
Most low-light image enhancement methods only adjust brightness and contrast and reduce noise, making it difficult to recover the information lost in darker areas of the image, and they can even cause color distortion and blurring. To solve these problems, a global attention-based Retinex network (GARN) for low-light image enhancement is proposed in this paper. We propose a novel global attention module that computes multi-dimensional information in the channel attention module to facilitate inference learning. The global attention module is embedded into different layers of the network to extract richer shallow texture features and deep semantic features. These richer features are more conducive to learning the mapping between low-light and normal-light images, so detail recovery in dark regions is enhanced. We also collected a low/normal-light image dataset covering multiple scenes, whose paired images serve as a training set applicable to low-light image enhancement under different lighting conditions. Experimental results on publicly available datasets show that our method is more effective and more general than state-of-the-art methods on evaluation metrics such as PSNR, SSIM, NIQE, and entropy.
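Channel attention of the kind GARN's global attention module builds on pools each channel to a scalar, derives a per-channel weight, and rescales the channels accordingly. A minimal pure-Python sketch; the softmax weighting and the list-of-lists layout are illustrative choices, not necessarily the paper's.

```python
import math

def channel_attention(feature_maps):
    """Rescale each channel by a weight derived from its global average.
    `feature_maps` is a list of channels, each a flat list of activations."""
    pooled = [sum(ch) / len(ch) for ch in feature_maps]   # global average pool
    m = max(pooled)                                       # stabilized softmax
    exps = [math.exp(p - m) for p in pooled]
    total = sum(exps)
    weights = [e / total for e in exps]
    rescaled = [[w * x for x in ch] for w, ch in zip(weights, feature_maps)]
    return rescaled, weights

# A channel with a stronger mean response receives a larger weight.
rescaled, weights = channel_attention([[1.0, 1.0], [3.0, 3.0]])
print(weights)
```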

12.
The sensing light source of a line-scan camera cannot be fully exposed in a low-light environment because of the extremely small number of photons and high noise, which reduces image quality. A multi-scale fusion residual encoder-decoder (FRED) is proposed to solve this problem. By directly learning the end-to-end mapping between dark and normal-light images, FRED enhances the image's brightness while fully restoring the details and colors of the original image. A residual block (RB) is added to the network structure to increase feature diversity and speed up training. Moreover, a dense context feature aggregation module (DCFAM) compensates for the lack of spatial information in the deep network by aggregating the context's global multi-scale features. Experimental results show that FRED is superior to most other algorithms both in visual effect and in quantitative evaluation of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Because FRED restores image brightness while effectively preserving edges and colors, it yields satisfactory visual quality when enhancing low-light images.

13.
To address the insufficient attention to edge information and the imprecise pixel-level localization of existing image-splicing detection network models, a DeepLabV3+ splicing-forgery forensics method incorporating residual attention is proposed; it uses an encoder-decoder structure to localize splicing tampering at the pixel level. In the encoding stage, an efficient attention module is fused into the residual blocks of ResNet101, and the stacking of residual blocks reduces the weight of unimportant features and highlights splicing traces; a spatial pyramid pooling module with atrous (dilated) convolution then extracts multi-scale features, and the concatenated feature maps are modeled semantically through spatial and channel attention. In the decoding stage, multi-scale shallow and deep image features are fused to improve the localization accuracy of spliced forgery regions. Experimental results show splicing-localization accuracies of 0.761, 0.742, and 0.745 on the CASIA 1.0, COLUMBIA, and CARVALHO datasets respectively; the proposed method outperforms several existing approaches in localizing spliced forgery regions and is also more robust to JPEG compression.
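The atrous (dilated) convolutions inside the spatial pyramid pooling module enlarge the receptive field without adding parameters by spacing the kernel taps `dilation` positions apart. A 1-D sketch with "same" zero padding; the kernel and padding scheme are illustrative and unrelated to the paper's code.

```python
def dilated_conv1d(x, kernel, dilation):
    """'Same'-padded 1-D dilated convolution over a flat list of floats."""
    k = len(kernel)
    pad = (k - 1) * dilation // 2     # center the dilated kernel
    out = []
    for i in range(len(x)):
        acc = 0.0
        for j, w in enumerate(kernel):
            p = i - pad + j * dilation
            if 0 <= p < len(x):       # zero padding outside the signal
                acc += w * x[p]
        out.append(acc)
    return out

# With dilation 2, a 3-tap kernel reaches positions i-2, i, i+2, so the single
# impulse influences outputs two samples away on either side.
print(dilated_conv1d([0, 0, 1, 0, 0], [1, 1, 1], 2))
```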

14.
This paper presents a novel approach for low-light image enhancement. We propose a deep simultaneous estimation network (DSE-Net), which simultaneously estimates the reflectance and illumination for low-light image enhancement. The proposed network contains three modules: an image decomposition module, an illumination adjustment module, and an image refinement module. DSE-Net uses a novel branched encoder-decoder based image decomposition module for simultaneous estimation, with a separate decoder estimating illumination and reflectance. DSE-Net improves the estimated illumination using the illumination adjustment module and feeds it to the proposed refinement module, which aims to produce sharp and natural-looking output. Extensive experiments conducted on a range of low-light images demonstrate the efficacy of the proposed model and show its superiority over various state-of-the-art alternatives.
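Retinex-style decomposition, which DSE-Net estimates with learned networks, models an image as reflectance multiplied by illumination; brightening then amounts to lifting the illumination map and recomposing. A hand-written sketch over flattened channels, with a simple gamma curve standing in for the learned illumination-adjustment module (the gamma value is an illustrative assumption).

```python
def retinex_recompose(reflectance, illumination):
    """I = R * L, element-wise over flattened maps."""
    return [r * l for r, l in zip(reflectance, illumination)]

def adjust_illumination(illumination, gamma=0.5):
    """Gamma-lift an illumination map with values in [0, 1]; gamma < 1 brightens."""
    return [l ** gamma for l in illumination]

# Dark illumination lifted, then recombined with the (light-invariant)
# reflectance to produce the enhanced image.
dark_illum = [0.25, 0.04, 1.0]
reflectance = [0.8, 0.9, 0.5]
print(retinex_recompose(reflectance, adjust_illumination(dark_illum)))
```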

15.
To improve the accuracy of generating textual descriptions from images, this paper proposes a method based on the traditional encoder-decoder framework that incorporates visual attention at both ends: the encoder combines a spatial attention mechanism with channel-wise attention, while the decoder uses adaptive visual attention, adding an extra "visual sentinel" module on top of the traditional decoder. During caption generation, the proposed method automatically decides whether to rely on image features or semantic features and passes them to the corresponding attention mechanism. Experiments show that, compared with a single visual attention mechanism, the method achieves higher caption accuracy and better image-captioning performance.

16.
To address traditional dehazing algorithms' reliance on prior knowledge and the color distortion in their recovered images, this paper proposes a dehazing algorithm based on a dual attention mechanism. The hazy image is first fed into an encoder and downsampled into feature maps. A feature-extraction module chains several basic blocks, each composed of local residual learning and a feature-attention module, which improves image quality and the utilization of feature information and stabilizes training. The feature maps then pass through a parallel structure of channel attention and multi-scale spatial attention, making the network focus on detail features, extract more key information, and run more efficiently. Finally, the fused features enter a decoder and, through multi-level mapping, yield a haze-density map matching the input size. Experimental results show that the algorithm dehazes both synthetic and real hazy images efficiently, producing more natural clear images.

17.

Images captured under low-light conditions often suffer from severe loss of structural details and color; therefore, image-enhancement algorithms are widely used in low-light image restoration. Algorithms based on the traditional Retinex model consider only the change in image brightness, ignoring the noise and color deviation introduced during restoration. In view of these problems, this paper proposes an image-enhancement network based on multi-stream information supplement, which contains a mainstream structure and two branch structures at different scales. To obtain richer feature information, an information-complementary module is designed to realize information supplement across the three structures, whose features are then concatenated to perform the final image-recovery operation. To restore more abundant structures and realistic colors, we define a joint loss function combining the L1 loss, structural-similarity loss, and color-difference loss to guide network training. Experimental results show that the proposed network achieves satisfactory performance in both subjective and objective aspects.
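The joint loss described above can be sketched as a weighted sum of its terms. For brevity the SSIM term is omitted here and the color-difference term is simplified to a mean-intensity difference over flattened images, so both the weights and that simplification are assumptions, not the paper's definition.

```python
def joint_loss(pred, target, w_l1=1.0, w_color=0.5):
    """Weighted sum of an L1 term and a simplified color-difference term
    over two flattened images of equal length."""
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
    color = abs(sum(pred) / len(pred) - sum(target) / len(target))
    return w_l1 * l1 + w_color * color

# Per-pixel error contributes through L1; a global brightness/color shift
# contributes through the mean-difference term.
print(joint_loss([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```

In training, each term would be computed per color channel and backpropagated through the enhancement network.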


18.
The fully convolutional Siamese network (SiamFC) has demonstrated high performance in the visual tracking field, but its learned CNN features are redundant and not discriminative enough to separate the object from the background. To address this problem, this paper proposes a dual attention module that is integrated into the Siamese network to select features in both the spatial and channel domains. Specifically, a non-local attention module follows the last layer of the network, which helps obtain a self-attention feature map of the target along the spatial dimension. In addition, a channel attention module is proposed to adjust the importance of different channels' features according to the responses generated between each channel's feature and the target. The GOT10k dataset is employed to train our dual attention Siamese network (SiamDA), improving the target-representation ability and thus the discrimination of the model. Experimental results show that the proposed algorithm improves accuracy by 7.6% and success rate by 5.6% compared with the baseline tracker.

19.
In low-light conditions, images are corrupted by low contrast and severe noise, but event cameras capture event streams with clear edge structure. We therefore propose an event-guided low-light image enhancement method using a dual-branch generative adversarial network that recovers clear structure under the guidance of events. To overcome the lack of paired training datasets, we first synthesize three datasets containing low-light event streams, low-light images, and the ground-truth normal-light images. Then, in the generator, we develop an end-to-end dual-branch network consisting of an image-enhancement branch, which enhances the low-light images, and a gradient-reconstruction branch, which learns the gradient from events. Moreover, we develop an attention-based event-image feature-fusion module that selectively fuses the event and low-light image features; the fused features are concatenated into the image-enhancement and gradient-reconstruction branches, which respectively generate enhanced images with clear structure and more accurate gradient images. Extensive experiments on synthetic and real datasets demonstrate that the proposed event-guided method produces visually more appealing enhanced images and performs well in structure preservation and denoising compared with state-of-the-art methods.
