Similar Documents
1.
To address the problem that existing pan-sharpening networks cannot simultaneously preserve spatial and spectral information, a wavelet-coefficient-guided pan-sharpening network composed of a fusion network and a guidance network is proposed. The fusion network extracts multi-level features from the PAN and MS images separately, selects and fuses the features at each level, and uses the fused features to guide feature extraction at the next level. The guidance network learns the mapping between the HRMS image and the wavelet coefficients of the known input images, and uses the learned mapping to provide additional supervision for the output of the fusion network. Experimental results show that the method recovers as much spatial information as possible while preserving the spectral information of the MS image. Comparative experiments on simulated and real datasets also show that its fusion quality surpasses other traditional and deep learning methods, demonstrating practical value.

2.
Infrared and visible image fusion aims to synthesize a single fused image containing salient targets and abundant texture details even under extreme illumination conditions. However, existing image fusion algorithms fail to take the illumination factor into account in the modeling process. In this paper, we propose a progressive image fusion network based on illumination awareness, termed PIAFusion, which adaptively maintains the intensity distribution of salient targets and preserves texture information in the background. Specifically, we design an illumination-aware sub-network to estimate the illumination distribution and calculate the illumination probability. Moreover, we utilize the illumination probability to construct an illumination-aware loss to guide the training of the fusion network. The cross-modality differential aware fusion module and halfway fusion strategy fully integrate common and complementary information under the constraint of the illumination-aware loss. In addition, a new benchmark dataset for infrared and visible image fusion, i.e., Multi-Spectral Road Scenarios (available at https://github.com/Linfeng-Tang/MSRS), is released to support network training and comprehensive evaluation. Extensive experiments demonstrate the superiority of our method over state-of-the-art alternatives in terms of target maintenance and texture preservation. In particular, our progressive fusion framework can integrate meaningful information from the source images around the clock according to illumination conditions. Furthermore, the application to semantic segmentation demonstrates the potential of PIAFusion for high-level vision tasks. Our codes will be available at https://github.com/Linfeng-Tang/PIAFusion.
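To make the illumination-aware idea concrete, here is a minimal sketch of how an illumination probability could gate an intensity loss between the visible and infrared inputs. The function name, the L1 form, and the weighting scheme are assumptions for illustration, not the exact loss used in PIAFusion.

```python
import torch
import torch.nn.functional as F

def illumination_aware_loss(fused, vis, ir, p_day):
    """Sketch of an illumination-gated intensity loss (assumed form).

    fused, vis, ir : (B, 1, H, W) tensors -- fused, visible, infrared images.
    p_day          : (B,) probability that each scene is well lit, e.g.
                     predicted by a small illumination sub-network.
    Bright scenes pull the fused image toward the visible intensities,
    dark scenes toward the infrared intensities.
    """
    loss_vis = F.l1_loss(fused, vis, reduction="none").mean(dim=(1, 2, 3))
    loss_ir = F.l1_loss(fused, ir, reduction="none").mean(dim=(1, 2, 3))
    return (p_day * loss_vis + (1.0 - p_day) * loss_ir).mean()
```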

3.
In the fusion of multispectral (MS) and panchromatic (PAN) remote-sensing images, a key issue for improving the quality of the fused image is how to effectively extract texture features from the PAN image and inject that information into the MS image in a targeted way. This paper therefore proposes a sparse fusion algorithm for MS and PAN images based on phase constraints from the phase stretch transform (PST). The MS and PAN images are first Gaussian filtered. For the low- and mid-frequency information, exploiting the sensitivity of the PST phase difference to edge and texture regions, fusion weight constraints are obtained from the PST phase difference of the high-frequency information. For the high-frequency information, a training dictionary is learned from the high-frequency content of the PAN image and used to sparsely represent and fuse the high-frequency content of the MS and PAN images, improving the accuracy of the fused high-frequency information. The algorithm alleviates, to a certain extent, the poor fusion of edge and texture regions and the spectral distortion seen in traditional fusion methods, yielding better fusion results. Extensive simulation experiments verify the effectiveness of the algorithm.

4.
Remote-sensing image fusion based on curvelets and ICA (cited 2 times: 0 self-citations, 2 by others)
Improving the quality of pan-sharpened multispectral (MS) bands is the main aim of recent research on pan-sharpening. In this article, we present a novel image fusion method based on combining the curvelet transform and independent component analysis (ICA). The idea is to map the MS bands onto a statistically independent domain to determine the intensity component, which contains the common information of the MS bands, and then to pan-sharpen it using curvelets and a modified adaptive fusion rule. The proposed method is evaluated by visual and statistical analyses and compared with the curvelet (CVT)-based method using a context-based decision model, the CVT-based method using the Dempster–Shafer evidence theory, the improved ICA method, and the combined adaptive principal component analysis (PCA)–Contourlet method. The experimental results using QuickBird and WorldView-2 data show that the proposed method effectively reduces the spectral distortion while injecting spatial details into the fused bands as much as possible.
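As a rough illustration of the ICA step described above (mapping the MS bands onto a statistically independent domain and picking the component that carries their common information), the sketch below uses scikit-learn's FastICA; the selection rule based on correlation with the band mean is an assumption, and the curvelet sharpening step is not shown.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_intensity_component(ms):
    """Map MS bands to an independent domain and flag the intensity-like one.

    ms : array of shape (H, W, B) holding B multispectral bands.
    Returns the independent components as (H, W, B), the fitted FastICA
    model (so the transform can be inverted after sharpening), and the
    index of the component most correlated with the band mean.
    """
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(np.float64)
    ica = FastICA(n_components=b, random_state=0)
    sources = ica.fit_transform(x)                    # (H*W, B) sources
    mean_band = x.mean(axis=1)
    corr = [abs(np.corrcoef(sources[:, i], mean_band)[0, 1]) for i in range(b)]
    idx = int(np.argmax(corr))                        # most intensity-like
    return sources.reshape(h, w, b), ica, idx
```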

5.
In this paper, an unsupervised learning-based approach is presented for fusing bracketed exposures into high-quality images without interim conversion to intermediate high dynamic range (HDR) images. Because an objective quality measure, the colored multi-exposure fusion structural similarity index measure (MEF-SSIMc), is optimized to update the network parameters, unsupervised learning is realized without any ground truth (GT) images. Furthermore, a reference-free gradient fidelity term is added to the loss function to recover and supplement image information in the fused result. As shown in the experiments, the proposed algorithm performs well in terms of structure, texture, and color. In particular, it maintains the order of variations in the original image brightness, suppresses edge blurring and halo effects, and produces good visual results with strong quantitative evaluation indicators. Our code will be publicly available at https://github.com/cathying-cq/UMEF.
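The gradient fidelity idea can be sketched as follows; the per-pixel choice of the largest-magnitude gradient across the exposure stack and the L1 penalty are assumptions for illustration, and the MEF-SSIMc term would be computed separately.

```python
import torch
import torch.nn.functional as F

def gradient_fidelity_loss(fused, exposures):
    """Sketch of a reference-free gradient fidelity term (assumed form).

    fused     : (B, C, H, W) fused image from the network.
    exposures : (B, N, C, H, W) bracketed input exposures.
    The fused gradients are pulled toward the per-pixel largest-magnitude
    gradient over the exposure stack.
    """
    kx = torch.tensor([[[[-1.0, 1.0]]]], device=fused.device)   # (1,1,1,2)
    ky = torch.tensor([[[[-1.0], [1.0]]]], device=fused.device) # (1,1,2,1)

    def grads(img):
        c = img.shape[1]
        gx = F.conv2d(img, kx.repeat(c, 1, 1, 1), groups=c)
        gy = F.conv2d(img, ky.repeat(c, 1, 1, 1), groups=c)
        return gx, gy

    gx_f, gy_f = grads(fused)
    gxs, gys = [], []
    for i in range(exposures.shape[1]):
        gx, gy = grads(exposures[:, i])
        gxs.append(gx)
        gys.append(gy)
    gx_s, gy_s = torch.stack(gxs, dim=1), torch.stack(gys, dim=1)
    # pick, per pixel, the exposure whose gradient magnitude is largest
    tgt_x = torch.gather(gx_s, 1, gx_s.abs().argmax(1, keepdim=True)).squeeze(1)
    tgt_y = torch.gather(gy_s, 1, gy_s.abs().argmax(1, keepdim=True)).squeeze(1)
    return F.l1_loss(gx_f, tgt_x) + F.l1_loss(gy_f, tgt_y)
```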

6.
ABSTRACT

The pan-sharpening scheme combines high-resolution panchromatic imagery (HRPI) data and low-resolution multispectral imagery (LRMI) data to produce a single merged high-resolution multispectral image (HRMI). The pan-sharpened image carries extensive information that improves the efficiency of image analysis methods. Pan-sharpening is a pixel-level fusion scheme used to enhance the LRMI with the HRPI while keeping the LRMI spectral information. In this article, an efficient optimized pan-sharpening method integrating adaptive principal component analysis (APCA) and high-pass modulation (HPM) is proposed to obtain excellent spatial resolution in the fused image with minimal spectral distortion. The proposed method is tuned by multi-objective optimization to determine the optimal window size and σ of the Gaussian low-pass filter (GLPF) and the gain factor used for adding the high-pass details extracted from the HRPI to the LRMI principal component of maximum correlation. Optimization results show that if the spatial resolution ratio of HRPI to LRMI is 0.50, a GLPF with a 5 × 5 window and σ = 1.640 yields an HRMI with low spectral distortion and high spatial quality. If the HRPI/LRMI spatial resolution ratio is 0.25, a GLPF with a 7 × 7 window and σ = 1.686 yields an HRMI with low spectral distortion and high spatial quality. Simulation tests demonstrate that the proposed optimized APCA–HPM fusion scheme balances spectral and spatial quality and has small computational and memory complexity.
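A minimal sketch of the Gaussian low-pass filtering and high-pass modulation injection described above, using SciPy; the truncate value used to approximate a 5 × 5 window and the gain handling are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hpm_inject(pan, pc, sigma=1.640, gain=1.0, eps=1e-6):
    """Sketch of high-pass modulation (HPM) detail injection.

    pan  : high-resolution panchromatic band (2-D array).
    pc   : LRMI principal component most correlated with the PAN band,
           already resampled to the PAN grid.
    sigma: Gaussian low-pass std. dev. (the abstract reports sigma = 1.640
           with a 5x5 window at a 0.50 resolution ratio); truncate = 1.25
           approximates that window size for this sigma -- an assumption.
    The PAN band is split into low- and high-pass parts, and the high-pass
    detail is modulated by pc / pan_low before injection.
    """
    pan_low = gaussian_filter(pan.astype(np.float64), sigma=sigma, truncate=1.25)
    detail = pan - pan_low
    return pc + gain * detail * (pc / (pan_low + eps))
```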

7.
An MTF- and variational-based model for panchromatic and multispectral image fusion (cited 1 time: 0 self-citations, 1 by others)
Pan-sharpening injects the spatial detail of the high-resolution panchromatic (Pan) band into the multispectral (MS) bands to produce a multispectral image with both high spectral and high spatial resolution. To improve the fusion result, the modulation transfer functions (MTFs) of the multispectral and panchromatic bands need to be taken into account. This paper proposes a new pan-sharpening model based on the MTF and a variational formulation. Its energy functional contains two terms: a detail-injection term, which extracts detail from the Pan band with a high-pass filter and injects it into the fused image, and a spectral-fidelity term, which preserves the multispectral information of the MS bands using an à trous wavelet low-pass filter designed from the MTF. Fusion results on QuickBird, IKONOS, and GeoEye datasets show that the model produces fused images with both high spatial and high spectral quality, outperforming AWLP, IHS_BT, HPM-CC-PSF, NAWL, and fast variational algorithms.
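A schematic form of the two-term energy described above is shown below; the exact operators, norms, and weights in the paper may differ, so this only illustrates the structure (detail injection plus MTF-based spectral fidelity).

```latex
% u_k: k-th band of the fused image, P: Pan band, M_k: k-th MS band,
% H: high-pass filter applied to the Pan band,
% L_k^{MTF}: a trous wavelet low-pass filter matched to band k's MTF,
% lambda: balance parameter (all choices here are schematic).
E(u) = \sum_k \int_\Omega \bigl( H(u_k) - H(P) \bigr)^2 dx
     \;+\; \lambda \sum_k \int_\Omega \bigl( L_k^{\mathrm{MTF}}(u_k) - M_k \bigr)^2 dx
```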

8.
9.
In the fusion of multispectral (MS) and panchromatic (PAN) images of different spatial resolutions, a spectrally mixed MS pixel overlays multiple mixed PAN pixels and multiple pure PAN pixels. This confirms that, with increased spatial resolution in imaging, a spectrally mixed low spatial resolution subpixel may be unmixed into a pure pixel. However, spectral unmixing of mixed MS subpixels is rarely considered in current remote-sensing image fusion methods, resulting in blurred fused images. The image fusion method proposed in this article realizes such spectral unmixing. In this method, the MS and PAN images are jointly segmented into image objects, the image objects are classified to obtain a classification map of the PAN image, and each MS subpixel is fused into a pixel matching the class of the corresponding PAN pixel. Tested on spatially degraded IKONOS MS and PAN images with a significant spatial resolution ratio of 8:1, the fusion method produced fused images with high spectral quality and deblurred visualization.

10.
Recent studies show that hybrid panchromatic sharpening (pan-sharpening) methods using the non-sub-sampled contourlet transform (NSCT) and classical pan-sharpening methods such as intensity, hue and saturation (IHS), principal component analysis (PCA), and adaptive principal component analysis (APCA) reduce spectral distortion in pan-sharpened images. The NSCT is a shift-invariant multi-resolution decomposition. It is based on non-sub-sampled pyramid (NSP) decomposition and non-sub-sampled directional filter banks (NSDFBs). We compare the performance of the APCA–NSCT using different NSP filters, NSDFB filters, numbers of decomposition levels, and numbers of orientations in each level on SPOT 4 data with a spatial resolution ratio of 1:2 and QuickBird data with a spatial resolution ratio of 1:4. Experimental results show that the quality of pan-sharpening of remote-sensing images of different spatial resolution ratios using the APCA–NSCT method is affected by the NSCT parameters. For the NSP, the ‘maxflat’ filters give the best quality, while the ‘sk’ filters give the best quality for the NSDFB. Changing the number of orientations at the same level of decomposition in the NSCT has a small effect on both the spectral and spatial qualities. The spectral and spatial qualities of pan-sharpened images mainly depend on the number of decomposition levels. Too few decomposition levels result in poor spatial quality, while excessive levels of decomposition result in poor spectral quality. Two levels of decomposition achieve the best results for the SPOT 4 data with a spatial resolution ratio of 1:2, and three levels of decomposition give the best results for the QuickBird data with a spatial resolution ratio of 1:4.

11.
In this article, we propose a new regularization-based approach for pan-sharpening based on the concepts of self-similarity and the Gabor prior. The given low spatial resolution (LR) and high spectral resolution multi-spectral (MS) image is modelled as a degraded and noisy version of the unknown high spatial resolution (HR) version. Since this problem is ill-posed, we use regularization to obtain the final solution. In the proposed method, we first obtain an initial HR approximation of the unknown pan-sharpened image using self-similarity and sparse representation (SR) theory. Using self-similarity, we obtain the HR patches from the given LR observation by searching for matching patches in its coarser resolution, thereby obtaining LR–HR pairs. An SR framework is used to obtain patch pairs for those patches in the LR observation for which no matches are available. The entire set of matched HR patches constitutes the initial HR approximation (initial estimate) of the final pan-sharpened image, which is used to estimate the degradation matrix in our model. A regularization framework is then used to obtain the final solution, in which we propose a new prior, which we refer to as the Gabor prior, that extracts the bandpass details from the registered panchromatic (Pan) image. In addition, we also include a Markov random field (MRF) smoothness prior that preserves the smoothness of the final pan-sharpened image. The MRF parameter is derived using the initial estimate image. The final cost function consists of a data-fitting term and two prior terms corresponding to the Gabor and MRF priors (see the schematic form below). Since the derived cost function is convex, a simple gradient-based method is used to obtain the final solution. The efficacy of the proposed method is evaluated by conducting experiments on degraded as well as un-degraded datasets from three different satellites, i.e., Ikonos-2, Quickbird, and Worldview-2. The results are compared on the basis of traditional measures as well as the recently proposed quality with no reference (QNR) measure, which does not require a reference image.
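The cost function described above can be written schematically as below; the squared norms, the clique-potential notation, and the weights are assumptions meant only to show the three-term structure (data fitting, Gabor prior on the registered Pan image, and MRF smoothness).

```latex
% z_m: m-th band of the unknown pan-sharpened image, y_m: observed LR MS band,
% A: degradation matrix estimated from the initial HR approximation,
% G_i: i-th Gabor filter, P: registered Pan image,
% V_c: MRF clique potentials, lambda_g, lambda_s: weights (schematic).
\hat{z}_m = \arg\min_{z_m} \; \lVert y_m - A z_m \rVert^2
          + \lambda_g \sum_i \lVert G_i * z_m - G_i * P \rVert^2
          + \lambda_s \sum_c V_c(z_m)
```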

12.
To combine the detail information of the high-resolution panchromatic (PAN) image and the spectral information of the low-resolution multispectral (MS) image more effectively, an improved pan-sharpening algorithm is proposed. First, the intensity channel of the low-resolution MS image is downsampled and then upsampled to obtain its low-frequency component. Second, the low-frequency component is subtracted from the intensity channel to obtain the high-frequency component, and patches are randomly sampled from the resulting high- and low-frequency components to build a dictionary. Then, the constructed over-complete dictionary is used to decompose the high-resolution PAN image patch by patch to extract its high-frequency information. Finally, the extracted high-frequency information is injected into the low-resolution MS image to reconstruct a high-resolution MS image. Multiple experiments show that the proposed algorithm subjectively preserves spectral information while injecting a large amount of spatial detail. Comparisons show that, relative to component-substitution, multi-resolution-analysis, and sparse-representation-based algorithms, the reconstructed high-resolution MS image is sharper and superior on multiple objective metrics such as the correlation coefficient.
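A minimal sketch of the dictionary construction step (split the intensity channel into low- and high-frequency parts via downsample/upsample, then randomly sample patches); the nearest-neighbour resampling and the coupled sampling at identical locations are assumptions.

```python
import numpy as np

def build_patch_dictionary(intensity, scale=4, patch=8, n_atoms=256, seed=0):
    """Sketch of building coupled low-/high-frequency patch dictionaries.

    intensity : intensity channel of the low-resolution MS image (2-D array).
    The channel is decimated and block-replicated back to size (a stand-in
    for the paper's down/upsampling), giving its low-frequency part; the
    residual is the high-frequency part. Randomly chosen patch locations
    are stacked column-wise as dictionary atoms.
    """
    rng = np.random.default_rng(seed)
    low = np.kron(intensity[::scale, ::scale], np.ones((scale, scale)))
    low = low[: intensity.shape[0], : intensity.shape[1]]
    high = intensity - low

    ys = rng.integers(0, intensity.shape[0] - patch, size=n_atoms)
    xs = rng.integers(0, intensity.shape[1] - patch, size=n_atoms)
    d_low = np.stack([low[y:y + patch, x:x + patch].ravel()
                      for y, x in zip(ys, xs)], axis=1)   # (patch*patch, n_atoms)
    d_high = np.stack([high[y:y + patch, x:x + patch].ravel()
                       for y, x in zip(ys, xs)], axis=1)  # coupled atoms
    return d_low, d_high
```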

13.
Objective: Enhancing the spatial detail of the panchromatic image and preserving the spectral information of the multispectral image are usually conflicting goals, and achieving the best fusion under this trade-off has long been a hot and difficult topic in remote-sensing image fusion. To combine spectral information with spatial detail effectively and further improve the quality of multispectral and panchromatic image fusion, a fusion method in the non-subsampled shearlet transform (NSST) domain based on morphological filtering and an improved pulse-coupled neural network (PCNN) is proposed. Method: The multispectral and panchromatic images are first decomposed with the NSST. Their low-frequency components are fused using morphological filtering and a high-pass modulation (HPM) framework, injecting the detail of the panchromatic low-frequency sub-band into the multispectral low-frequency sub-band to obtain the fused low-frequency sub-band. Their high-frequency components are fused with the improved PCNN to further enhance the spatial detail of the fused image. Finally, the fused image is obtained by the inverse NSST. Results: Simulation experiments show that the fused images produced by this method have clear detail and high spectral fidelity, with a clear advantage in visual quality, and their evaluation metrics are generally better than those of the other methods. Compared with the best of the per-metric averages over three groups of fusion results for the five compared methods, clarity and spatial frequency are 0.5% and 1.0% higher than those of the NSCT-PCNN method, spectral distortion is 4.2% lower than that of the NSST-PCNN method, the correlation coefficient is 1.4% higher than that of the NSST-PCNN method, and the information entropy is only 0.08% lower than that of the NSST-PCNN method. The correlation coefficient and spectral distortion results show that the method preserves spectral information better than the other five methods, while the clarity and spatial frequency results show that its spatial detail injection is superior to the compared methods; the information entropy is not the best value but is very close to it. Conclusion: Visual inspection and the objective metrics show that the method improves the spatial resolution of the fused image while preserving spectral information well. Overall, the method outperforms five classical and popular methods, namely intensity-hue-saturation (IHS), principal component analysis (PCA), coupled non-negative matrix factorization (CNMF), the non-subsampled contourlet transform with PCNN (NSCT-PCNN), and the non-subsampled shearlet transform with PCNN (NSST-PCNN), both subjectively and objectively.
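A rough sketch of the low-frequency fusion rule outlined above; the opening-closing operator, the structuring-element size, and the exact HPM formula are assumptions, and the NSST decomposition and PCNN-based high-frequency fusion are not shown.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def fuse_low_frequency(ms_low, pan_low, size=3, eps=1e-6):
    """Sketch of morphological-filter + HPM fusion of low-frequency sub-bands.

    ms_low, pan_low : low-frequency NSST sub-bands of the MS intensity and
                      PAN images (2-D arrays on the same grid).
    An opening-closing filter estimates the smooth part of the PAN sub-band;
    the residual detail is injected into the MS sub-band via an HPM rule.
    """
    smooth = grey_closing(grey_opening(pan_low, size=(size, size)),
                          size=(size, size))
    detail = pan_low - smooth
    return ms_low * (1.0 + detail / (smooth + eps))
```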

14.
Objective: Remote-sensing image fusion merges a high spatial resolution panchromatic image and a low spatial resolution multispectral image of the same scene into a multispectral image with high resolution in both the spectral and spatial domains. To preserve a high spatial resolution while alleviating spectral distortion, an adaptive weight injection mechanism is proposed; to address the inaccuracy of prior information caused by the degradation of the upsampled image, a channel-gradient constraint and a spectral-relationship correction constraint are introduced. Method: The fusion problem is handled with a variational approach. Considering the physical characteristics of the sensor, the adaptive weight injection mechanism injects different amounts of spatial information into each band of the multispectral image to handle inter-band differences and to avoid the spectral distortion caused by injecting too much spatial information. Because the upsampled image is degraded, a local spectral-consistency constraint and the channel-gradient constraint serve as priors, and, based on an image degradation model, the spectral-relationship correction constraint preserves the inter-band relationships of the fused result more accurately. Results: In comparisons with six well-performing algorithms on GeoEye and Pleiades satellite data, the proposed model achieves the best values on the relative global error (ERGAS, erreur relative globale adimensionnelle de synthèse), peak signal-to-noise ratio (PSNR), relative average spectral error (RASE), root mean squared error (RMSE), and spectral information divergence (SID), while the correlation coefficient (CC) and spectral angle mapper (SAM) are less stable and occasionally only second best. Conclusion: Compared with the competing algorithms, the proposed model performs well in both spatial resolution enhancement and spectral preservation.

15.
In remote-sensing image processing, pan-sharpening is used to obtain a high-resolution multi-spectral image by combining a low-resolution multi-spectral image with a corresponding high-resolution panchromatic image. In this article, to preserve the geometry, spectrum, and correlation information of the original images, three hypotheses are presented: (1) the geometry information contained in the pan-sharpened image should also be contained in the panchromatic band; (2) the upsampled multi-spectral image can be seen as a blurred form of the fused image with an unknown kernel; and (3) the fused bands should keep the correlation between the bands of the upsampled multi-spectral image. A variational energy functional is then built on these assumptions, whose minimizer is the target fused image (a schematic form is given below). The existence of a minimizer of the proposed energy is analysed, and a numerical scheme based on the split Bregman framework is presented. To verify its validity, the proposed method is compared with several state-of-the-art techniques using QuickBird data in terms of subjective quality, objective metrics, and efficiency. The results show that the proposed approach performs better than some of the compared methods according to the performance metrics.
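The three hypotheses can be collected into an energy of the following schematic form; the norms, the pairwise correlation term, and the weights are assumptions for illustration, not the paper's exact functional.

```latex
% u_k: bands of the fused image, P: Pan image, M_k: upsampled MS bands,
% k_b: unknown blur kernel, alpha, beta: balance parameters (schematic).
E(u) = \sum_k \int_\Omega \lvert \nabla u_k - \nabla P \rvert \, dx            % (1) geometry from Pan
     + \alpha \sum_k \int_\Omega \bigl( (k_b * u_k) - M_k \bigr)^2 dx          % (2) blur consistency
     + \beta \sum_{k<l} \int_\Omega \bigl( u_k M_l - u_l M_k \bigr)^2 dx       % (3) inter-band correlation
```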

16.
Combining the spectral information of a low-resolution multispectral (LRMS) image and the spatial information of a high-resolution panchromatic (HRP) image to generate a high-resolution multispectral (HRMS) image has become an important and interesting issue. Local dissimilarities between the LRMS image and the HRP image affect the performance of the pan-sharpening technique. This paper presents a model-based pan-sharpening method with global and nonlocal spatial similarity regularisers to reduce the effects of the local dissimilarities. The degradation model relating the LRMS image to the unknown HRMS image is employed as the data-fitting term to keep spectral fidelity. Two spatial similarity constraints are utilized to further enhance the spatial resolution of the unknown HRMS image. The first regularisation term assumes that the high-pass component of each HRMS band has a geometric structure similar to that of the adjusted high-pass component of the HRP image; a modulation matrix is constructed to reduce the contrast differences. Moreover, the nonlocal self-similarity of the high-pass component extracted from each HRMS band is used as another regulariser, an effective structural prior for improving the local spatial quality of the HRMS image. The weights of the nonlocal similarity model are learned from the high-pass component of the available HRP image. Experiments conducted on QuickBird and IKONOS data validate that the proposed pan-sharpening method achieves better performance than several traditional and state-of-the-art pan-sharpening algorithms in terms of quantitative evaluation and visual analysis.

17.
18.
In order to investigate the impacts of different information fusion techniques on change detection, a sequential fusion strategy combining pan-sharpening with decision-level fusion is introduced into change detection from multi-temporal remotely sensed images. Generally, a change map derived from multi-temporal remote-sensing images using any single method or a single kind of data source may contain a number of omission/commission errors, greatly degrading detection accuracy. To take advantage of the merits of multi-resolution imagery and multiple information fusion schemes, the proposed procedure consists of two steps: (1) change detection from pan-sharpened images, and (2) generation of the final change detection map by decision-level fusion. The impacts of different fusion techniques on the change detection results are evaluated by an unsupervised similarity metric and supervised accuracy indices. Multi-temporal QuickBird and ALOS images are used for the experiments. The experimental results demonstrate the positive impacts of the different fusion strategies on change detection. In particular, pan-sharpening improves spatial resolution and image quality, which effectively reduces omission errors in change detection, while decision-level fusion integrates the change maps from the spatially enhanced fused datasets and effectively reduces commission errors. Therefore, the overall accuracy of change detection can be increased step by step by the proposed sequential fusion framework.

19.
Pansharpening fuses a high spatial resolution panchromatic image with a simultaneously acquired multispectral image of lower spatial resolution. In this paper, we propose a Laplacian pyramid pansharpening network architecture for accurately fusing a high spatial resolution panchromatic image and a low spatial resolution multispectral image, aiming to obtain a higher spatial resolution multispectral image. The proposed architecture considers three aspects. First, we use the Laplacian pyramid method, whose blur kernels are designed according to the sensors’ modulation transfer functions, to separate the images into multiple scales for fully exploiting the crucial spatial information at different spatial scales. Second, we develop a fusion convolutional neural network (FCNN) for each scale, combining them to form the final multi-scale network architecture. Specifically, we use recursive layers for the FCNN to share parameters across and within pyramid levels, thus significantly reducing the network parameters. Third, a total loss consisting of multiple across-scale loss functions is employed for training, yielding higher accuracy. Extensive experimental results based on quantitative and qualitative assessments on benchmark datasets demonstrate that the proposed architecture outperforms state-of-the-art pansharpening methods. Code is available at https://github.com/ChengJin-git/LPPN.
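To illustrate the MTF-matched Laplacian pyramid decomposition mentioned above, here is a short sketch; the Gaussian approximation of the blur kernel, the Nyquist-gain formula, and the bilinear upsampling are assumptions and do not reproduce the paper's exact kernels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def mtf_sigma(nyquist_gain, ratio):
    """Std. dev. (in PAN pixels) of a Gaussian whose frequency response
    equals the sensor's MTF gain at the MS Nyquist frequency (schematic)."""
    return ratio * np.sqrt(-2.0 * np.log(nyquist_gain)) / np.pi

def laplacian_pyramid(img, levels=3, nyquist_gain=0.3, ratio=4):
    """Sketch of an MTF-matched Laplacian pyramid decomposition.

    Each level stores the detail removed by an MTF-matched Gaussian blur
    followed by decimation; the last entry is the coarsest approximation.
    """
    sigma = mtf_sigma(nyquist_gain, ratio)
    pyramid, current = [], img.astype(np.float64)
    for _ in range(levels):
        low = gaussian_filter(current, sigma=sigma)
        down = low[::2, ::2]
        up = zoom(down, 2, order=1)[: current.shape[0], : current.shape[1]]
        pyramid.append(current - up)   # band-pass detail at this scale
        current = down
    pyramid.append(current)            # coarsest approximation
    return pyramid
```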

20.
Due to the huge gap between the high dynamic range of natural scenes and the limited (low) range of consumer-grade cameras, a single-shot image can hardly record all the information of a scene. Multi-exposure image fusion (MEF) has been an effective way to solve this problem by integrating multiple shots with different exposures, which is in essence an enhancement problem. During fusion, two perceptual factors, informativeness and visual realism, should be considered simultaneously. To achieve this goal, this paper presents a deep perceptual enhancement network for MEF, termed DPE-MEF. Specifically, the proposed DPE-MEF contains two modules, one of which gathers content details from the inputs while the other handles color mapping/correction for the final result. Extensive experiments and ablation studies show the efficacy of our design and demonstrate its superiority over state-of-the-art alternatives both quantitatively and qualitatively. We also verify the flexibility of the proposed strategy in improving the exposure quality of single images. Moreover, our DPE-MEF can fuse 720p images at more than 60 pairs per second on an Nvidia 2080Ti GPU, making it attractive for practical use. Our code is available at https://github.com/dongdong4fei/DPE-MEF.
