Similar Articles
19 similar articles found (search time: 156 ms)
1.
Images captured in low-light environments often lack brightness, making them difficult to use effectively in downstream vision tasks. Most previous low-light image enhancement methods fail in extremely dark scenes and may even amplify the underlying noise in the image. To address this problem, this paper proposes a new end-to-end deep neural network that suppresses color deviation and noise mainly through a dual spatial and channel attention mechanism: the spatial attention module exploits the non-local self-similarity of the image for denoising, while the channel attention module guides the network to refine redundant color features. Experimental results show that, compared with other mainstream algorithms, the proposed method achieves further improvements in both subjective visual quality and objective evaluation metrics.
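The channel-attention idea described in the abstract above can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' network: the learned attention weights are replaced by a parameter-free sigmoid of each channel's global average.

```python
import numpy as np

def channel_attention(feat):
    """Rescale each channel of an (H, W, C) feature map by a weight
    derived from its global average. A toy, parameter-free stand-in for
    a learned channel-attention module: channels with larger mean
    activation receive larger gating weights."""
    desc = feat.mean(axis=(0, 1))            # (C,) per-channel descriptor
    weights = 1.0 / (1.0 + np.exp(-desc))    # sigmoid gating, no learned params
    return feat * weights                    # broadcast over H and W

feat = np.random.rand(4, 4, 3)
out = channel_attention(feat)
assert out.shape == feat.shape
```

A real module would learn the mapping from descriptor to weights (e.g. with small fully connected layers), but the rescaling step is the same.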

2.
Photos taken under low-light conditions often suffer from multiple coupled degradations such as low brightness, color distortion, heavy noise, and loss of detail, which makes low-light image enhancement a challenging task. Existing deep-learning-based methods usually focus on improving brightness and color, leaving defects such as noise in the enhanced image. To address this, this paper proposes a task-decoupled low-light image enhancement method. According to the different demands that the task places on high-level and low-level features, it decouples the task into two groups, brightness and color enhancement versus detail reconstruction, and builds a two-branch network model (Two-Branch Low-light Image Enhancement Network, TBLIEN). The brightness and color enhancement branch adopts a U-Net structure with global features to extract deep semantic information and improve brightness and color; the detail reconstruction branch uses a fully convolutional network that preserves the original resolution to restore details and remove noise. In addition, a semi-dual attention residual module is proposed for the detail reconstruction branch, which strengthens features through spatial and channel attention while preserving contextual features, enabling finer detail reconstruction. Extensive experiments on synthetic and real datasets show that the model outperforms state-of-the-art low-light enhancement methods, generalizes better, and is also applicable to other enhancement tasks such as underwater image enhancement.

3.
Many low-light images contain saturated regions of varying extent, mainly caused by a large brightness gap between foreground and background. For such images, enhancing the dark regions while preserving the detail and texture of the saturated regions remains a difficult research problem. This paper proposes a low-light image enhancement algorithm based on illumination remapping. Starting from the camera imaging principle, the algorithm uses a camera response model and readjusts the brightness information through region-wise processing and nonlinear transformation. Experimental results show that the proposed algorithm enhances a wide range of regions, preserves texture with high fidelity, and runs fast, achieving good results in both subjective visual evaluation and objective metrics.
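The abstract above does not give the exact remapping curve; a minimal region-wise nonlinear remap in the same spirit can be sketched as below. The gamma values and threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def remap_brightness(img, gamma_dark=0.5, gamma_bright=1.2, thresh=0.6):
    """Toy region-wise nonlinear remap: brighten dark pixels (gamma < 1)
    while compressing near-saturated pixels (gamma > 1) so their texture
    is not clipped away. `img` is a float array in [0, 1]."""
    img = np.clip(img, 0.0, 1.0)
    dark = img ** gamma_dark       # lifts low intensities
    bright = img ** gamma_bright   # tames near-saturated intensities
    mask = (img >= thresh).astype(img.dtype)
    return (1 - mask) * dark + mask * bright
```

A camera-response-model approach would derive these curves from the sensor's response function rather than fixing them by hand.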

4.
To address the insufficient noise suppression of low-light enhancement algorithms that focus on detail enhancement, this paper proposes a no-reference low-light image enhancement method based on a deep convolutional neural network. First, based on Retinex theory, the illumination component and reflectance component are extracted from the input low-light image and optimized separately; the optimized illumination and reflectance are then multiplied to obtain the enhanced image. Meanwhile, the denoising effect of 3D block matching (BM3D) is incorporated into the optimization of the reflectance component. Finally, the network parameters are updated with a no-reference training scheme combined with an improved trend-consistency loss. Experimental results show that, compared with current mainstream algorithms, the proposed method effectively improves the contrast and brightness of low-light images while preserving their naturalness.
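The Retinex split used above (image = illumination × reflectance) can be sketched with a hand-crafted illumination estimate. This is a classical stand-in, assuming a simple local-mean filter instead of the paper's learned decomposition:

```python
import numpy as np

def retinex_split(img, ksize=5):
    """Classical single-scale Retinex split I = L * R: estimate the
    illumination L with a local mean filter over a ksize x ksize window
    and recover the reflectance R = I / L. `img` is a 2D float array."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode='edge')
    L = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            L[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    R = img / (L + 1e-6)                     # avoid division by zero
    return L, R
```

In a Retinex-based enhancer, L would then be brightened (e.g. gamma-corrected) and R denoised before recombining them by multiplication.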

5.
To enrich the effective information in program frames and improve the accuracy of intelligent broadcast-television monitoring, this paper proposes an image enhancement network based on a convolutional autoencoder. Considering that the definition of normal illumination varies across scenes, the network achieves multi-level brightness mapping and color fidelity. Qualitative and quantitative results show that the network can enhance low-light images to a reference brightness level while alleviating color distortion.

6.
Low-light images in complex environments exhibit uneven illumination and the overlapping effect of multiple light sources, which makes enhanced images look unrealistic and increases noise. Targeting these characteristics, this paper proposes a low-light image enhancement method based on a deep attention mechanism. A generative adversarial global self-attention low-light enhancement network (GSLE-GAN) is designed. An attention module is built into the generator to improve the model's ability to capture the illumination distribution and the realism of generated images; a local discriminator and a global discriminator work together so that the images retain richer detail; and the model is trained on unpaired data to improve robustness and further ensure the realism of the generated images. Comparative experiments demonstrate the superiority of the proposed method, and its effectiveness is further verified on an object detection task.

7.
Low-light color image enhancement plays an important role in daily life, but traditional algorithms often introduce varying degrees of distortion. To enhance low-light color images without distortion, this paper proposes a new adaptive contrast enhancement algorithm. It combines fractional-order calculus, the traditional variational Retinex method, and a piecewise logarithmic saturation enhancement to construct a fractional-order Retinex enhancement algorithm. Experimental results show that the method enhances image contrast while preserving edges and texture detail. Compared with traditional low-light enhancement algorithms, it highlights the fine texture of the image and clearly improves chroma and brightness.
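The logarithmic transform underlying the saturation-enhancement step above can be sketched in its basic (non-piecewise) form. The gain value is an illustrative assumption:

```python
import numpy as np

def log_enhance(img, gain=5.0):
    """Basic logarithmic brightness lift: the log curve compresses the
    dynamic range so dark values receive the largest boost. The paper
    applies a piecewise variant to the saturation channel; this sketch
    shows only the base curve. `img` is float in [0, 1]."""
    img = np.clip(img, 0.0, 1.0)
    return np.log1p(gain * img) / np.log1p(gain)   # normalized to [0, 1]
```

A piecewise version would switch between curves of different gain at one or more pivot intensities, which is what lets dark detail rise without washing out midtones.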

8.
To address the severe color cast and low contrast of underwater images caused by light absorption and scattering, this paper proposes an underwater image enhancement method combining a lightweight feature-fusion network with multi-color-model correction. First, a feature-fusion network with an encoder-decoder structure, in which self-constructed blocks replace plain convolutional layers, corrects the color cast of the underwater image. The improved feature-fusion module reduces the damage that fully connected layers do to the spatial structure, protecting spatial features and reducing the module's parameter count, while the improved attention module uses parallel pooling to extract texture details from feature maps and protect background information. A multi-color-model correction module then performs correction according to the relationships between pixels, further reducing color cast and improving contrast and brightness. Experimental results show that, compared with the latest enhancement methods, on datasets with reference images the average NRMSE, PSNR, and SSIM of our method improve on the second-best method by 9.3%, 3.7%, and 2.3%, respectively; on datasets without reference images, the average UCIQE, IE, and NIQE improve on the second-best by 6.0%, 2.9%, and 4.5%. Combining subjective perception and objective evaluation, the method corrects the color cast of underwater images and improves their contrast, brightness, and overall quality.

9.
To address the low brightness, weak contrast, heavy noise, and missing detail of low-light aerial images, this paper proposes a low-light aerial image enhancement method (MARNet) based on Retinex and multiple attention mechanisms. First, the low-light aerial image is decomposed into an illumination map and a reflectance map, and the CBAM attention mechanism is introduced into a noise-adjustment network so that the network focuses on high-noise regions and removes most of the noise in the reflectance map. Next, an illumination-adjustment network composed of up- and down-sampling structures is designed with channel attention to brighten the illumination map, and a region loss function is added to improve detail contrast. Finally, to support low-light near-ground object detection and tracking, a paired low-light aerial dataset is built using a low-light image synthesis method with real noise added. Experimental results show that the proposed method restores detail while increasing brightness and reducing noise, with improvements in the three metrics PSNR, SSIM, and NIQE as well as in perceived visual quality.

10.
江泽涛, 钱艺, 伍旭, 张少钦. 《电子学报》, 2021, 49(11): 2160-2165
To address noise amplification, insufficient detail, and poor color restoration in low-light image enhancement, this paper proposes a method based on an attention residual dense generative adversarial network (Attention Residual Dense Generative Adversarial Network, ARD-GAN). First, a global exposure attention map is generated in the Global Illumination Estimation Module (GIEM) to guide subsequent modules in illumination enhancement. Next, a Convolution and Residual Module (CRM) and a Channel Attention Residual Dense Module (CARDM) extract shallow and deep features, respectively, and features from different levels are fused to obtain better detail. Then, dense connections are combined with batch normalization on top of the CARDM to suppress noise. Finally, the loss function is improved so that the enhanced images show better color restoration. Experiments show that ARD-GAN achieves better results than mainstream algorithms in both subjective visual quality and objective metrics.

11.
Most existing image enhancement methods enhance the luminance channel as a whole, which leads to over-enhancement, loss of detail, and color distortion. To overcome these problems, this paper proposes SFPGAN, a low-light image enhancement method based on generative adversarial networks (GAN) and feature self-preservation. First, the realism of generated images is judged along three aspects: color, brightness, and texture. Second, a feature self-preservation loss is introduced to retain the features of the original image. Finally, the model is trained with a certain proportion of normal-brightness and over-exposed images to improve its robustness. Extensive experiments show that the proposed method outperforms other methods in both visual quality and objective metrics and adapts better to real images.

12.

Images captured under low-light conditions often suffer from severe loss of structural details and color; therefore, image-enhancement algorithms are widely used in low-light image restoration. Image-enhancement algorithms based on the traditional Retinex model only consider the change in the image brightness, while ignoring the noise and color deviation generated during the process of image restoration. In view of these problems, this paper proposes an image enhancement network based on multi-stream information supplement, which contains a mainstream structure and two branch structures with different scales. To obtain richer feature information, an information complementary module is designed to realize the information supplement for the three structures. The feature information from the three structures is then concatenated to perform the final image recovery operation. To restore more abundant structures and realistic colors, we define a joint loss function by combining the L1 loss, structural similarity loss, and color-difference loss to guide the network training. The experimental results show that the proposed network achieves satisfactory performance in both subjective and objective aspects.


13.
Most low-light image enhancement methods only adjust the brightness and contrast of low-light images and reduce their noise, making it difficult to recover the information lost in darker areas of the image, and they may even cause color distortion and blurring. To solve these problems, a global attention-based Retinex network (GARN) for low-light image enhancement is proposed in this paper. We propose a novel global attention module which computes multi-dimensional information in the channel attention module to facilitate inference learning. The global attention module is then embedded into different layers of the network to extract richer shallow texture features and deep semantic features. These rich features are more conducive to learning the mapping from low-light images to normal-light images, so that detail recovery in dark regions is enhanced. We also collected a low/normal-light image dataset covering multiple scenes, whose paired images serve as a training set for low-light image enhancement under different lighting conditions. Experimental results on publicly available datasets show that our method is more effective and more general than state-of-the-art methods in terms of evaluation metrics such as PSNR, SSIM, NIQE, and entropy.

14.
A noisy low-light image enhancement algorithm based on structure-texture-noise (STN) decomposition is proposed in this work. We split an input image into structure, texture, and noise components, and enhance the structure and texture components separately. More specifically, we first enhance the contrast of the structure image, by extending a 2D-histogram-based image enhancement scheme based on the characteristics of low-light images. Then, we reconstruct the texture image by retrieving residual texture components from the noise image and enhance it by exploiting the perceptual response of the human visual system (HVS). Experimental results on both synthetic and real-world images demonstrate that the proposed STN algorithm sharpens the texture and enhances the contrast more effectively than conventional algorithms, while providing robust performance under various noise and illumination conditions.
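The paper above extends a 2D-histogram scheme; plain 1D histogram equalization conveys the underlying contrast-stretching idea and can be sketched as follows (the 2D extension and low-light-specific weighting are not reproduced here):

```python
import numpy as np

def hist_equalize(img):
    """Plain histogram equalization on an 8-bit grayscale image: map
    each gray level through the normalized cumulative histogram so the
    output levels spread across the full [0, 255] range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[np.nonzero(hist)[0][0]]          # cdf at lowest occupied level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-9) * 255)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]
```

A 2D-histogram variant additionally counts co-occurrences of neighboring gray levels, which lets the mapping favor transitions that carry visible structure.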

15.
A Low-Light Image Enhancement Method Based on a U-Net Generative Adversarial Network
江泽涛, 覃露露. 《电子学报》, 2020, 48(2): 258-264
Images captured in low-light environments have a low signal-to-noise ratio, low contrast, and low resolution, making them difficult to recognize and use. To improve the quality of low-light images, this paper proposes a low-light image enhancement method based on a U-Net generative adversarial network. The generator of the adversarial network is implemented with a U-Net framework; the network then learns the feature mapping from low-light images to normal-light images, finally achieving illumination enhancement of low-light images. Experimental results show that, compared with mainstream algorithms, the proposed method more effectively improves the brightness and contrast of low-light images.

16.
In order to enhance the contrast of low-light images and reduce their noise, we propose an image enhancement method based on Retinex theory and the dual-tree complex wavelet transform (DT-CWT). The method first converts an image from the RGB color space to the HSV color space and decomposes the V-channel by the dual-tree complex wavelet transform. Next, an improved local adaptive tone mapping method is applied to the low-frequency components of the image, and a soft-threshold denoising algorithm is used to denoise the high-frequency components. Then, the V-channel is rebuilt and the contrast is adjusted using a white-balance method. Finally, the processed image is converted back into the RGB color space as the enhanced result. Experimental results show that the proposed method effectively improves contrast enhancement, noise reduction, and color reproduction.
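The color-space step above (enhance only the value channel, leave hue and saturation untouched) can be sketched without a full HSV round-trip: scaling RGB by the ratio of enhanced to original V changes only V. The DT-CWT processing is replaced here by a simple gamma curve, so this is a structural sketch, not the paper's pipeline:

```python
import numpy as np

def enhance_v_channel(rgb, gamma=0.6):
    """Brighten only the HSV value channel of an (H, W, 3) float image
    in [0, 1]. With V = max(R, G, B), multiplying RGB by V_new / V
    changes V while leaving hue and saturation unchanged. The paper's
    DT-CWT + tone mapping is replaced by a gamma curve here."""
    v = rgb.max(axis=2, keepdims=True)
    v_new = np.clip(v, 1e-6, 1.0) ** gamma           # gamma < 1 brightens
    return np.clip(rgb * (v_new / np.clip(v, 1e-6, None)), 0.0, 1.0)
```

Because all three channels are scaled by the same factor per pixel, the ratios between channels, and hence the hue, are preserved.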

17.
To improve the accuracy of feature-point-based binocular visual localization in low-light environments, this paper proposes a low-light image enhancement algorithm for visual simultaneous localization and mapping (SLAM) based on online estimation. By estimating image brightness online and updating the parameters of the enhancement algorithm in real time, it solves the problem that fixed-parameter enhancement algorithms are unsuitable for images that are too bright or too dark. First, the factors affecting localization accuracy are identified with the ORB-SLAM2 system, and the relevant parameters are updated in real time through online estimation. Second, the low-light image enhancement algorithm LIME is used to improve image quality. Finally, feature points are extracted from the enhanced images, improving feature-matching accuracy and thus localization accuracy. Comparative experiments against the widely used ORB-SLAM2 algorithm on the public EuRoC dataset show that the proposed visual SLAM system achieves better localization accuracy and robustness.

18.
For light-stripe images on highly reflective surfaces that suffer from severe noise such as speckle or compound speckle, this paper proposes a denoising algorithm based on fractional-order differential enhancement, which highlights the granular character of the noise. Valid continuous stripes are separated and the speckle noise removed by counting the areas of connected regions, yielding a clean stripe image, and the stripe center is finally extracted with the gray-level centroid method. Experimental comparison shows that both the information entropy and the stripe-center extraction accuracy improve significantly, reflecting the ability of the fractional-order differential algorithm to enhance high-frequency information while effectively preserving more low-frequency information and texture detail, and markedly improving the extraction accuracy of the feature stripe center.

19.
The sensing light source of a line scan camera cannot be fully exposed in a low-light environment due to the extremely small number of photons and high noise, which reduces image quality. A multi-scale fusion residual encoder-decoder (FRED) is proposed to solve this problem. By directly learning the end-to-end mapping between dark and normally lit images, FRED can enhance an image's brightness while fully restoring the details and colors of the original image. A residual block (RB) is added to the network structure to increase feature diversity and speed up network training. Moreover, a dense context feature aggregation module (DCFAM) makes up for the deficiency of spatial information in the deep network by aggregating the context's global multi-scale features. The experimental results show that FRED is superior to most other algorithms in visual effect and in quantitative evaluation of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Because FRED restores image brightness while effectively representing the edges and colors of the image, it achieves satisfactory visual quality when enhancing low-light images.
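The residual block mentioned above follows the standard pattern out = x + F(x), where the skip connection lets the block learn only a correction. A minimal numpy sketch with dense (not convolutional) layers, purely illustrative:

```python
import numpy as np

def residual_block(x, w1, w2):
    """Toy residual block: two linear maps with a ReLU between them,
    plus the identity skip connection out = x + F(x). The skip path
    means the block only needs to learn a residual correction, which
    eases training of deep networks."""
    h = np.maximum(x @ w1, 0.0)   # first transform + ReLU
    return x + h @ w2             # identity skip connection
```

In FRED the transforms would be convolutions over feature maps rather than matrix products, but the skip-connection structure is the same.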


