Similar Documents
20 similar documents found (search time: 0 ms)
1.
To address the lack of detail preservation and the structural discontinuities in multi-focus image fusion, a fusion algorithm based on image decomposition is proposed. First, the source images are split into cartoon and texture parts by cartoon-texture decomposition. Next, the cartoon parts are fused with convolutional sparse representation, and the texture parts are fused with dictionary learning. Finally, the fused cartoon and texture parts are combined into the fused image. Experiments were conducted on standard fusion datasets and compared with traditional and recent fusion methods. The results show that the fused images obtained by the algorithm perform better in variance and information entropy, that the algorithm effectively overcomes the lack of detail preservation and the structural discontinuities in multi-focus image fusion, and that it yields better visual quality.
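The cartoon-texture pipeline above can be sketched with a box filter standing in for the actual decomposition, averaging standing in for the CSR fusion of the cartoon parts, and a max-absolute rule standing in for the dictionary-based fusion of the texture parts. All function names and parameters below are illustrative, not the paper's:

```python
import numpy as np

def box_blur(img, r=3):
    """Box filter, used here as a crude stand-in for cartoon-texture decomposition."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_cartoon_texture(a, b, r=3):
    """Split each source into cartoon (smooth) + texture (residual) parts,
    fuse cartoon parts by averaging and texture parts by max-absolute value,
    then recombine."""
    ca, cb = box_blur(a, r), box_blur(b, r)
    ta, tb = a - ca, b - cb
    cartoon = 0.5 * (ca + cb)                             # smooth structure: average
    texture = np.where(np.abs(ta) >= np.abs(tb), ta, tb)  # details: max-abs
    return cartoon + texture
```

Fusing an image with itself should return (numerically) the same image, a quick sanity check on the split-and-recombine logic.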

2.
Constructing a good dictionary is the key to a successful sparsity-based image fusion technique. An efficient dictionary learning method based on joint patch clustering is proposed for multimodal image fusion. To construct an over-complete dictionary with a sufficient number of useful atoms for representing a fused image that conveys information from different sensor modalities, all patches from the different source images are clustered together by their structural similarities. To keep the dictionary compact but informative, only a few principal components that effectively describe each joint patch cluster are selected and combined to form the over-complete dictionary. Finally, sparse coefficients are estimated by a simultaneous orthogonal matching pursuit algorithm to represent the multimodal images with the common learned dictionary. Experimental results with various pairs of source images validate the effectiveness of the proposed method for the image fusion task.
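A minimal sketch of the joint-clustering idea, assuming plain k-means for the clustering step and a per-cluster SVD for the principal components; the cluster and atom counts are illustrative, not the paper's settings:

```python
import numpy as np

def joint_cluster_dictionary(patches, n_clusters=4, atoms_per_cluster=3, iters=10, seed=0):
    """Cluster patches from all source images jointly with k-means, keep a few
    principal components per cluster, and stack them into one dictionary
    whose columns are unit-norm atoms."""
    rng = np.random.default_rng(seed)
    X = patches - patches.mean(axis=1, keepdims=True)   # remove DC per patch
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):                              # plain k-means
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    atoms = []
    for k in range(n_clusters):                         # per-cluster PCA via SVD
        Xk = X[labels == k]
        if len(Xk) == 0:
            continue
        _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
        atoms.append(Vt[:atoms_per_cluster])
    D = np.vstack(atoms).T                              # columns are atoms
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)
```

The resulting dictionary has one column per retained principal component, so its width is at most `n_clusters * atoms_per_cluster`.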

3.
The goal of image fusion is to describe the complementary information of multiple source images accurately and comprehensively in a new scene. Traditional fusion methods are prone to side effects such as artifacts and blurred edges. To solve these problems, a novel fusion algorithm based on robust principal component analysis (RPCA) and the guided filter is proposed. The guided filter preserves edges effectively and is often used to enhance images without distorting details. Because the guided filter treats edges and flat areas differently, the sparse component of each source image is filtered by it to generate an enhanced image with preserved edges and an enhanced background. The focused regions of the source images are then detected from the spatial-frequency map of the difference image between the enhanced image and the corresponding source image. Finally, a morphological algorithm is used to obtain a precise fusion decision map. Experimental results show that the proposed method clearly improves fusion performance and outperforms current fusion methods.
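The guided filter itself has a compact closed form: the output is a local linear transform of the guide, q = a*I + b per window, so edges of the guide survive. The sketch below follows that standard formulation with a box filter for the local means; the radius and epsilon values are illustrative:

```python
import numpy as np

def _mean(img, r):
    """Box-filter local mean with edge padding."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def guided_filter(I, p, r=4, eps=1e-2):
    """Guided filter: q = a*I + b, with a, b fit per local window and then
    averaged, smoothing p while preserving the edges of the guide I."""
    mI, mp = _mean(I, r), _mean(p, r)
    corr_Ip, corr_II = _mean(I * p, r), _mean(I * I, r)
    a = (corr_Ip - mI * mp) / (corr_II - mI * mI + eps)  # per-window slope
    b = mp - a * mI                                      # per-window offset
    return _mean(a, r) * I + _mean(b, r)
```

On a constant image the local variance is zero, so a = 0, b = the constant, and the filter returns the input unchanged, a useful sanity check.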

4.
To address the inaccurate capture of focused/defocused boundary (FDB) information by existing multi-focus image fusion methods, a new method based on linear sparse representation and image matting is proposed. First, a focus measure based on linear sparse representation is introduced: it exploits the linear relationship, over local windows, between a dictionary formed from natural images and the input image, and represents the focus information of the image by solving for the linear coefficients. The focus measure then yields a focus map of the source images and a trimap consisting of the focused region, the defocused region, and an unknown region containing the FDB; taking the trimap as one input, image matting is applied to the FDB region of the source images to obtain a fairly accurate all-in-focus image. Finally, to further improve the quality of the fused image, the obtained all-in-focus image is used as a new dictionary so that the fusion process is iterated, and the final all-in-focus fused image is obtained after a preset number of updates. Experimental results show that, compared with 11 state-of-the-art multi-focus image fusion methods, the method achieves better fusion performance and visual quality with higher computational efficiency.

5.
To address the limited preservation of detail information in multi-focus image fusion algorithms, a fusion algorithm combining an improved sparse representation with a product-of-energy-sums rule is proposed. First, the source images are decomposed by the non-subsampled shearlet transform into low-frequency and high-frequency sub-band coefficients. Next, image patches are extracted from the low-frequency coefficients with a sliding window to construct a joint local adaptive dictionary; sparse representation coefficients are computed with the orthogonal matching pursuit algorithm, the fused sparse coefficients are obtained with a variance-energy weighting rule, and the fused low-frequency coefficients are recovered by the inverse sliding-window operation. For the high-frequency coefficients, a product-of-energy-sums fusion rule is proposed. Finally, the fused image is obtained by the inverse transform. Experimental results show that the algorithm preserves finer detail and has advantages in both visual quality and objective evaluation.

6.
To address the high computational complexity of current sparse-representation fusion algorithms and their neglect of local image features, a multi-scale sparse representation (MSR) fusion method is proposed. It effectively combines over-complete sparse representation with wavelet multi-scale analysis, which highlights local image features well: the images to be fused are decomposed into multiple levels in the wavelet domain, the features at each scale are sparsely coded with OMP (orthogonal matching pursuit) over a K-SVD-trained multi-scale dictionary, and fusion is carried out at each scale in the wavelet domain. Experimental results show that, compared with traditional wavelet-transform, contourlet-transform and sparse-representation fusion algorithms, the algorithm better preserves the integrity of local image features and achieves better performance.
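The OMP sparse-coding step used at each scale can be sketched as follows; this is a textbook OMP over a fixed dictionary, not the paper's exact implementation:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the atom most correlated with
    the current residual, then re-fit all chosen atoms by least squares."""
    residual, support = y.astype(float), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef         # orthogonal re-projection
    x[support] = coef
    return x
```

With an orthonormal dictionary (here the identity), OMP recovers a 2-sparse signal exactly in two iterations.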

7.
邹佳彬  孙伟 《计算机应用》2018,38(3):859-865
To suppress the pseudo-Gibbs artifacts produced by the traditional wavelet transform in multi-focus image fusion, to overcome the tendency of traditional sparse-representation fusion methods to smooth texture, edge, and other detail features of the fused image, and to improve the efficiency and quality of multi-focus image fusion, a fusion algorithm based on the lifting stationary wavelet transform (LSWT) and joint structured-group sparse representation is adopted. First, the experimental images undergo the lifting stationary wavelet transform; according to the different physical characteristics of the resulting low- and high-frequency coefficients, different fusion strategies are applied. The low-frequency coefficients are selected with a scheme based on joint structured-group sparse representation; the high-frequency coefficients are selected with a scheme combining the directional-region sum of Laplacian energy (DRSML) with a matching measure. Finally, the fused image is reconstructed by the inverse transform. Experimental results show that the improved algorithm effectively improves metrics such as mutual information and average gradient, preserves texture, edge and other detail information intact, and yields better fusion results.

8.
To address the high time complexity of traditional K-SVD dictionary learning, the limited representational power of the learned dictionary for the source images, and the resulting poor performance on medical image fusion, a new dictionary learning method is proposed: before dictionary learning, the feature information of the medical images is screened, and image patches rich in energy and detail are selected as the training set. A sparse representation model of the source images is then built from the learned dictionary, the sparse coefficients of each patch are solved with the orthogonal matching pursuit algorithm (OMP), the sparse representation coefficients of the fused image are constructed with the max-absolute-value rule, and the fused image is finally obtained. Experimental results show that the proposed method is effective on different medical images.
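The patch-screening step can be sketched with patch variance as an illustrative stand-in for the paper's energy and detail criteria:

```python
import numpy as np

def select_training_patches(patches, keep_ratio=0.5):
    """Rank patches by a simple informativeness score (variance here, standing
    in for the paper's energy/detail criteria) and keep the top fraction as
    the dictionary-learning training set."""
    scores = patches.var(axis=1)
    n_keep = max(1, int(len(patches) * keep_ratio))
    idx = np.argsort(scores)[::-1][:n_keep]   # highest-variance patches first
    return patches[idx]
```

A flat patch carries no detail and should be dropped before a textured one.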

9.
In the image fusion literature, multi-scale transform (MST) and sparse representation (SR) are the two most widely used signal/image representation theories. This paper presents a general image fusion framework that combines MST and SR to simultaneously overcome the inherent defects of both MST- and SR-based fusion methods. In our framework, the MST is first performed on each pre-registered source image to obtain its low-pass and high-pass coefficients. Then, the low-pass bands are merged with an SR-based fusion approach, while the high-pass bands are fused using the absolute values of the coefficients as the activity-level measurement. The fused image is finally obtained by performing the inverse MST on the merged coefficients. The advantages of the proposed framework over individual MST- or SR-based methods are first exhibited in detail from a theoretical point of view, and then verified experimentally with multi-focus, visible-infrared and medical image fusion. In particular, six popular multi-scale transforms, namely the Laplacian pyramid (LP), ratio of low-pass pyramid (RP), discrete wavelet transform (DWT), dual-tree complex wavelet transform (DTCWT), curvelet transform (CVT) and nonsubsampled contourlet transform (NSCT), with decomposition levels ranging from one to four, are tested in our experiments. By comparing the fused results subjectively and objectively, we identify the best-performing fusion method under the proposed framework for each category of image fusion. The effect of the sliding window's step length is also investigated. Furthermore, experimental results demonstrate that the proposed framework can achieve state-of-the-art performance, especially for the fusion of multimodal images.
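A toy version of the MST+SR framework: a crude decimate/duplicate pyramid stands in for a real multi-scale transform (it still reconstructs perfectly), and plain averaging of the base band stands in for the SR-based low-pass rule. Both substitutions are ours, for illustration only:

```python
import numpy as np

def _down(img):
    return img[::2, ::2]

def _up(img, shape):
    big = np.kron(img, np.ones((2, 2)))     # nearest-neighbour 2x upsampling
    return big[:shape[0], :shape[1]]

def lap_pyramid(img, levels=3):
    """Laplacian-style pyramid: store high-pass residuals plus a base band.
    Perfect reconstruction holds for any down/up pair."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = _up(_down(cur), cur.shape)
        pyr.append(cur - low)               # high-pass band
        cur = _down(cur)
    pyr.append(cur)                         # base (low-pass) band
    return pyr

def fuse_pyramids(pa, pb):
    """High-pass bands: max-absolute coefficient (the framework's activity
    measure); base band: averaging, standing in for the SR-based rule."""
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    return fused

def reconstruct(pyr):
    cur = pyr[-1]
    for high in reversed(pyr[:-1]):
        cur = _up(cur, high.shape) + high
    return cur
```

Decompose-then-reconstruct returns the input, and fusing a pyramid with itself returns the original image, checking both the transform and the fusion rules.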

10.
Objective: A key problem in multi-focus image fusion is how to judge the sharpness of the images to be fused accurately. This paper proposes a sharpness criterion based on the normalized number of structural extreme points. Method: Based on the local-extremum properties of images, the normalized number of structural extreme points is defined as the sharpness criterion, and a fast multi-focus image fusion method is given that combines this criterion with a fast estimation technique for the fusion decision matrix. Results: Experiments show that the above problem is solved well by the proposed criterion and fusion method. Conclusion: A new image sharpness criterion is proposed; it judges sharpness with high accuracy and is robust to impulse noise. Subjective and objective comparisons with traditional fusion methods on two sets of test images show that the proposed method clearly improves on existing multi-focus fusion methods in both fusion speed and quality.
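One way to realize a sharpness measure in this spirit is to count strict local extrema among the 4-neighbours and normalize by the number of interior pixels; the exact definition below is an illustrative guess, not the paper's formula:

```python
import numpy as np

def extrema_count(img):
    """Fraction of interior pixels that are strict local extrema among their
    4-neighbours: a sketch of using the density of structural extreme points
    as a focus/sharpness measure (blur removes extrema, lowering the score)."""
    c = img[1:-1, 1:-1]
    up, down = img[:-2, 1:-1], img[2:, 1:-1]
    left, right = img[1:-1, :-2], img[1:-1, 2:]
    is_max = (c > up) & (c > down) & (c > left) & (c > right)
    is_min = (c < up) & (c < down) & (c < left) & (c < right)
    return (is_max | is_min).mean()
```

A monotone ramp has no local extrema (score 0), while a checkerboard makes every interior pixel a strict extremum (score 1).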

11.
Based on the human visual system and the theory of over-complete sparse representation of signals, a new multi-focus image fusion algorithm is proposed. First, patches are sampled randomly from the images to be fused to form a training set, and an over-complete dictionary is obtained by iterative training. Image patches are then sparsely decomposed with the orthogonal matching pursuit algorithm; fusion coefficients are selected by the saliency of the decomposition coefficients and the image patches are reconstructed. The reconstructed patches are reassembled and averaged to obtain the final fused image. Experimental results show that the algorithm inherits the fusion quality of the currently leading multi-scale geometric analysis methods; in the presence of noise it shows good noise suppression, and as the noise variance increases, both the subjective quality and the objective metrics of the fused image remain better than those of traditional methods.

12.
Recent research has shown that sparse-representation-based techniques can achieve state-of-the-art super-resolution image reconstruction (SRIR). They rely on the idea that low-resolution (LR) image patches can be regarded as down-sampled versions of high-resolution (HR) images, whose patches are assumed to have a sparse representation with respect to a dictionary of prototype patches. To avoid a large database of training patches and to recover HR images more accurately, this paper introduces example-aided redundant dictionary learning into single-image super-resolution reconstruction and proposes a multiple-dictionary learning scheme inspired by multitask learning. Compact redundant dictionaries are learned from samples classified by K-means clustering, so that each sample is given a more appropriate dictionary for image reconstruction. Compared with available SRIR methods, the proposed method (1) introduces example-patch-aided dictionary learning into sparse-representation-based SRIR to reduce the intensive computation brought by an enormous dictionary, (2) uses multitask learning and priors from HR image examples to reconstruct similar HR images with better results, and (3) adopts offline dictionary learning with online reconstruction, making rapid reconstruction possible. Experiments on natural images show that a small set of randomly chosen raw patches from training images and a small number of atoms can produce good reconstruction results. Both the visual results and the numerical guidelines prove its superiority to some state-of-the-art SRIR methods.

13.
Image sparse representation over an over-complete dictionary is a relatively new image representation theory: the redundancy of an over-complete dictionary can effectively capture the various structural features of an image and thus represent it efficiently. Here, SAR image compression is implemented via over-complete-dictionary sparse representation: to retain the information needed to represent the image, only the sparse decomposition coefficients and their corresponding positions need to be stored, achieving compression. The over-complete dictionary is constructed with the K-SVD algorithm, a learning-based method; because the training samples all come from the image itself, the dictionary better approximates the image's own structure and achieves a sparse representation. Simulations show that for SAR image compression the algorithm is effective and outperforms the DCT-based JPEG algorithm and the wavelet-based EZW and SPIHT algorithms.
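Storing only the sparse coefficients and their positions, as the entry describes, is straightforward:

```python
import numpy as np

def compress(coeffs):
    """Keep only the nonzero sparse coefficients and their indices: the
    storage scheme behind dictionary-based compression."""
    idx = np.flatnonzero(coeffs)
    return idx.astype(np.int32), coeffs[idx]

def decompress(idx, vals, n):
    """Rebuild the full coefficient vector from the stored pairs."""
    out = np.zeros(n)
    out[idx] = vals
    return out
```

The round trip is lossless, and for a k-sparse vector of length n the storage drops from n values to 2k.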

14.
For the fusion of multispectral and panchromatic images, a fusion algorithm combining the wavelet transform and sparse representation is proposed. Exploiting the wavelet transform's ability to preserve spectral information, the multispectral image first undergoes an IHS (intensity-hue-saturation) transform; a single-level wavelet transform is then applied to the intensity component and the panchromatic image to obtain the corresponding high- and low-frequency coefficients. Analysing the characteristics of these coefficients, the low-frequency coefficients, which cannot be considered "sparse", are fused with sparse representation, while the high-frequency coefficients, which can be considered "sparse", are fused with conventional image-fusion rules. Finally, the inverse wavelet transform and inverse IHS transform yield the fusion result. Experimental results show that the algorithm preserves spectral information to the greatest possible extent while improving spatial resolution.
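For the linear IHS model with I = (R+G+B)/3, substituting the panchromatic band for the intensity collapses to adding (pan - I) to every band. This standard fast-IHS shortcut is shown below as a simple stand-in for the entry's wavelet/SR merging of the intensity component:

```python
import numpy as np

def ihs_pansharpen(rgb, pan):
    """Fast IHS component substitution: replace the intensity I = mean(R,G,B)
    with the panchromatic band, which for the linear model is equivalent to
    adding (pan - I) to each band. rgb has shape (H, W, 3), pan (H, W)."""
    I = rgb.mean(axis=2)
    return rgb + (pan - I)[..., None]
```

By construction, the per-pixel intensity of the sharpened result equals the panchromatic band.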

15.
16.
In this paper, we propose a novel sparse-representation-based framework for classifying complicated human gestures captured as multi-variate time series (MTS). The novel feature extraction strategy, CovSVDK, can overcome the problem of inconsistent lengths among MTS data and is robust to the large variability within human gestures. Compared with PCA and LDA, the CovSVDK features are more effective at preserving discriminative information and more efficient to compute over large-scale MTS datasets. In addition, we propose a new approach to kernelizing sparse representation. Through kernelization, the dictionary atoms become more separable for sparse coding algorithms, and nonlinear relationships among data are conveniently transformed into linear relationships in the kernel space, which leads to more effective classification. Finally, the superiority of the proposed framework is demonstrated through extensive experiments.

17.
Multi-focus image fusion methods can be divided into two main categories: transform-domain methods and spatial-domain methods. Recently emerged deep-learning (DL)-based methods fit this taxonomy as well. In this paper, we propose a novel DL-based multi-focus image fusion method that combines the complementary advantages of transform-domain and spatial-domain methods. Specifically, a residual architecture that includes a multi-scale feature extraction module and a dual-attention module is designed as the basic unit of a deep convolutional network, which is first used to obtain an initial fused image from the source images. Then, the trained network is further employed to extract features from the initial fused image and the source images for a similarity comparison, aiming to detect the focus property of each source pixel. The final fused image is obtained by selecting the corresponding pixels from the source images and the initial fused image according to the focus property map. Experimental results show that the proposed method effectively preserves the original focus information of the source images and prevents visual artifacts around boundary regions, leading to more competitive qualitative and quantitative performance compared with state-of-the-art fusion methods.

18.
Fast multi-focus image fusion combining the wavelet transform and adaptive block partitioning
A fast multi-focus image fusion algorithm combining the wavelet transform with adaptive block partitioning is proposed. Within a wavelet-transform framework, the low-frequency wavelet coefficients are fused block-wise with adaptively sized blocks, the block size being optimized by a differential evolution algorithm; the low-frequency fusion result is then refined into a label map that traces every coefficient to its source, and the high-frequency wavelet coefficients are fused by combining local wavelet energy with this label map; finally, reconstruction yields the fusion result. Experiments show that the fusion results approach or even match the best level in the image fusion field in both subjective visual quality and objective evaluation criteria, and that the algorithm strikes a good trade-off between improving fusion quality and reducing computational cost.
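Block-wise selection by a clarity measure, together with a label map recording each block's origin, can be sketched as follows. A fixed block size and variance as the clarity measure are simplifications; the paper optimizes the block size with differential evolution and refines the map per coefficient:

```python
import numpy as np

def block_fuse(a, b, bs=8):
    """Divide the sources into bs x bs blocks and copy, per block, the one with
    higher variance (a common clarity measure); also return a label map that
    records the chosen source, analogous to the entry's coefficient-origin map."""
    fused = a.astype(float).copy()
    labels = np.zeros(a.shape, dtype=np.uint8)   # 0 = source a, 1 = source b
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            pa, pb = a[i:i+bs, j:j+bs], b[i:i+bs, j:j+bs]
            if pb.var() > pa.var():
                fused[i:i+bs, j:j+bs] = pb
                labels[i:i+bs, j:j+bs] = 1
    return fused, labels
```

When one source is flat and the other textured everywhere, every block should come from the textured source.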

19.
Objective: Sparse representation has achieved striking results in remote-sensing image fusion, but because classical sparse representation ignores the similarity between image patches, the solved sparse coefficients are not accurate enough and dictionary learning is computationally expensive. To improve the quality and speed of sparse-representation remote-sensing image fusion, a fusion method based on structured-group sparse representation is proposed. Method: First, similar image patches are grouped into structure groups, and a group sparse representation algorithm computes adaptive group dictionaries and group sparse coefficients for the intensity component and the panchromatic image separately. Then, following the max-absolute-value rule, part of the panchromatic sparse coefficients are substituted to obtain new sparse coefficients, and a high-spatial-resolution intensity image is reconstructed from the panchromatic group dictionary and the new coefficients. Finally, the fused high-resolution multispectral image is computed within the generalized component substitution (GCOS) framework. Results: Degraded and non-degraded fusion experiments were conducted on panchromatic and multispectral pairs from three different types of remote-sensing imagery. In the degraded experiments, the method outperformed traditional fusion algorithms on the correlation coefficient, root-mean-square error, relative global fusion error, universal image quality index and spectral angle; its relative global fusion errors of 2.3261, 1.8885 and 1.8168 were far below those of traditional algorithms. In the non-degraded experiments, the results remained superior except on green vegetation, where the method was slightly worse than the AWLP (additive wavelet luminance proportional) method. Thanks to its superior dictionary learning, its computational complexity is far below that of classical sparse-representation remote-sensing fusion algorithms. Conclusion: The algorithm better preserves spectral characteristics and spatial information and is suitable for fusing panchromatic and multispectral images of different remote-sensing types.

20.
The depth of field (DOF) of camera equipment is generally limited, so it is very difficult to obtain a fully focused image, with all objects clear, from a single shot. One way to obtain a fully focused image is multi-focus image fusion, which fuses multiple images with different focusing depths into one image. However, most existing methods focus too much on the fusion accuracy of single pixels, ignoring the integrity of the target and the importance of shallow features, resulting in internal errors and boundary artifacts that require lengthy post-processing to repair. To solve these problems, we propose a cascade network based on the Transformer and an attention mechanism, which obtains the decision map and fusion result for the focused/defocused regions directly through end-to-end processing of the source images, avoiding complex post-processing. To improve fusion accuracy, a joint loss function is introduced that optimizes the network parameters from three aspects. Furthermore, to enrich the shallow features of the network, a global attention module with shallow features is designed. Extensive experiments were conducted, including a large number of ablation experiments, six objective measures and a variety of subjective visual comparisons. Compared with nine state-of-the-art methods, the results show that the proposed network structure improves the quality of multi-focus fusion images and achieves the best performance.
