Similar Documents
20 similar documents found (search time: 22 ms)
1.
An image enhancement algorithm based on multiscale top-hat by reconstruction is proposed in this paper. First, multiscale top-hat by reconstruction using multiscale structuring elements is discussed. Then, multiscale bright and dark image regions are extracted. Next, the image regions useful for enhancement are obtained from the extracted multiscale bright and dark regions. Finally, after a base image is calculated from the results of the opening and closing by reconstruction operations, the original image is enhanced by combining the useful image regions with the base image. Experimental results on different types of images show that the proposed algorithm is efficient.
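As a rough illustration of the operators this entry describes, the sketch below builds opening- and closing-by-reconstruction with scikit-image and derives bright/dark top-hat-by-reconstruction regions at several scales. The structuring-element sizes, the maximum-based combination of scales, and the use of the original image as the base are illustrative assumptions, not necessarily the exact construction used in the paper.

```python
# Sketch: multiscale top-hat by reconstruction for enhancement (illustrative).
import numpy as np
from skimage.morphology import erosion, dilation, reconstruction

def tophat_by_reconstruction(img, size):
    """Bright and dark top-hat by reconstruction for one structuring-element size."""
    se = np.ones((size, size), dtype=bool)
    # Opening by reconstruction: erode, then reconstruct by dilation under the original image.
    open_rec = reconstruction(erosion(img, se), img, method='dilation')
    # Closing by reconstruction: dilate, then reconstruct by erosion above the original image.
    close_rec = reconstruction(dilation(img, se), img, method='erosion')
    bright = img - open_rec        # bright regions smaller than the structuring element
    dark = close_rec - img         # dark regions smaller than the structuring element
    return bright, dark

def enhance(img, sizes=(3, 7, 11)):             # scale set is an assumption
    img = img.astype(np.float64)
    brights, darks = zip(*(tophat_by_reconstruction(img, s) for s in sizes))
    # One simple way to combine scales: keep the strongest response at each pixel.
    bright = np.maximum.reduce(brights)
    dark = np.maximum.reduce(darks)
    base = img                                   # stand-in for the paper's reconstructed base image
    return np.clip(base + bright - dark, 0, 255)
```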

2.
A fusion algorithm for infrared and visible images based on regional characteristics (cited 2 times: 2 self-citations, 0 by others)
叶传奇, 王宝树, 苗启广. 光子学报 (Acta Photonica Sinica), 2009, 38(6): 1498-1503
An infrared and visible image fusion algorithm based on region segmentation and the à trous wavelet transform is proposed. First, the infrared and visible images are segmented into regions and the regions are associated, and energy and gradient information of the two images is extracted for each region defined by the joint region map. Then, the infrared and visible images are decomposed by the multiscale à trous wavelet transform; the low-frequency components are fused region by region according to the region energy ratio and region clarity ratio proposed in the paper, while the high-frequency components are fused with an absolute-maximum operator. Finally, the fused image is obtained by reconstruction. The results show that the algorithm preserves the spectral information of the visible image while effectively capturing the thermal target information of the infrared image.
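The region-level low-frequency rule in this entry can be pictured with a sketch like the one below: given a shared region label map, a per-region energy and a per-region average gradient ("clarity") are computed for each source's low-frequency band, and the region with the larger combined score is copied into the fused band. The scoring formula and the equal weighting of energy versus clarity are assumptions for illustration.

```python
# Sketch: region-wise selection of low-frequency coefficients (illustrative).
import numpy as np

def region_score(low, labels, region_id):
    """Energy and mean gradient magnitude ("clarity") of one region of a low-frequency band."""
    mask = labels == region_id
    energy = np.sum(low[mask] ** 2)
    gy, gx = np.gradient(low)
    clarity = np.mean(np.hypot(gx, gy)[mask])
    return energy, clarity

def fuse_lowfreq(low_ir, low_vis, labels):
    fused = np.zeros_like(low_ir, dtype=np.float64)
    for rid in np.unique(labels):
        e_ir, c_ir = region_score(low_ir, labels, rid)
        e_vis, c_vis = region_score(low_vis, labels, rid)
        # Region energy ratio and region clarity ratio, combined with equal weights (assumed).
        score_ir = e_ir / (e_ir + e_vis + 1e-12) + c_ir / (c_ir + c_vis + 1e-12)
        mask = labels == rid
        fused[mask] = low_ir[mask] if score_ir >= 1.0 else low_vis[mask]
    return fused
```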

3.
The purpose of image fusion is to combine useful image features from different original images into one final fused image that serves different applications. One of the main difficulties of image fusion is extracting the useful features of the different original images; in some cases these are local features of the whole image. To extract local image features efficiently and produce an effective fusion result, an image fusion algorithm based on local image features extracted by multi-scale top-hat by reconstruction operators is proposed in this paper. First, multi-scale local feature extraction using multi-scale top-hat by reconstruction operators is discussed. Then, the useful features for image fusion are constructed from the extracted multi-scale local features of the different original images. Finally, the constructed features are combined into the final fused image. Experimental results on different types of images show that the proposed algorithm performs well for image fusion.
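A compact way to picture the feature-combination step in this entry: extract bright/dark features from each source at each scale with top-hat by reconstruction, keep the strongest response across the sources, accumulate over scales, and inject the result into a base image. The scale set, the maximum/sum combination, and the plain averaging used for the base image are assumptions.

```python
# Sketch: fusion from local features extracted by multi-scale top-hat by reconstruction (illustrative).
import numpy as np
from skimage.morphology import erosion, dilation, reconstruction

def _bright_dark(img, size):
    """Bright/dark top-hat by reconstruction for one structuring-element size."""
    se = np.ones((size, size), dtype=bool)
    open_rec = reconstruction(erosion(img, se), img, method='dilation')
    close_rec = reconstruction(dilation(img, se), img, method='erosion')
    return img - open_rec, close_rec - img

def fuse_by_local_features(images, sizes=(3, 7, 11)):
    """`images`: list of aligned grayscale arrays of the same shape."""
    images = [im.astype(np.float64) for im in images]
    bright_total = np.zeros_like(images[0])
    dark_total = np.zeros_like(images[0])
    for size in sizes:
        feats = [_bright_dark(im, size) for im in images]
        # At each scale keep, pixel-wise, the strongest feature among the source images.
        bright_total += np.maximum.reduce([b for b, _ in feats])
        dark_total += np.maximum.reduce([d for _, d in feats])
    base = np.mean(images, axis=0)               # assumed stand-in for the paper's base image
    return np.clip(base + bright_total - dark_total, 0, 255)
```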

4.
Integration of infrared and visible images is an active and important topic in image understanding and interpretation. In this paper, a new fusion method is proposed based on an improved multi-scale center-surround top-hat transform, which can effectively extract the feature information and detail information of the source images. First, the multi-scale bright (dark) feature regions of the infrared and visible images are extracted at different scale levels by the improved multi-scale center-surround top-hat transform. Second, the feature regions at the same scale in both images are combined by a multi-judgment contrast fusion rule, and the final feature images are obtained by simply adding the feature images of all scales together. Then, a base image is calculated by applying a Gaussian fuzzy logic combination rule to the two smoothed source images. Finally, the fused image is obtained by importing the extracted bright and dark feature images into the base image with a suitable strategy. Both objective assessment and subjective visual inspection of the experimental results indicate that the proposed method is superior to current popular MST-based and morphology-based methods for infrared-visible image fusion.
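The core operator here is a center-surround ("ring") top-hat. The sketch below uses one common formulation, which is an assumption and not necessarily the improved transform of this entry: the surround is estimated by a dilation with a ring-shaped structuring element followed by an erosion with the inner element, and the bright (dark) response is the positive part of the difference with the image. Only a single scale is shown.

```python
# Sketch: a center-surround ("ring") top-hat, one common formulation (assumed).
import cv2
import numpy as np

def ring_tophat(img, inner=3, outer=9):
    img = img.astype(np.float32)
    se_inner = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (inner, inner))
    se_outer = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (outer, outer))
    # Ring-shaped structuring element: outer disc minus the centered inner disc.
    pad = (outer - inner) // 2
    se_ring = se_outer.copy()
    se_ring[pad:pad + inner, pad:pad + inner] -= se_inner
    # Surround estimate: max over the ring, then min over the inner region.
    surround_bright = cv2.erode(cv2.dilate(img, se_ring), se_inner)
    bright = np.maximum(img - surround_bright, 0)    # regions brighter than their surround
    surround_dark = cv2.dilate(cv2.erode(img, se_ring), se_inner)
    dark = np.maximum(surround_dark - img, 0)        # regions darker than their surround
    return bright, dark
```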

5.
This paper presents a fusion method for infrared-visible and infrared-polarization images based on the multi-scale center-surround top-hat transform, which can effectively extract the feature information and detail information of the source images. First, the multi-scale bright (dark) feature regions of the source images at different scale levels are extracted by the multi-scale center-surround top-hat transform. Second, the bright (dark) feature regions at different scale levels are refined across spatial scales to eliminate redundancy. Third, the refined bright (dark) feature regions from the different scales are combined by addition into the fused bright (dark) feature regions. Then, a base image is calculated by performing dilation and erosion on the source images with the largest-scale outer structuring element. Finally, the fused image is obtained by importing the fused bright and dark features into the base image with a reasonable strategy. Experimental results indicate that the proposed fusion method achieves state-of-the-art performance in both objective assessment and subjective visual quality.
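The base image in this entry comes from dilation and erosion with the largest-scale outer structuring element; one plausible reading, and it is only an assumption here, is to average the dilated and eroded envelopes of the averaged source images, as sketched below.

```python
# Sketch: a morphological base image from two source images (one plausible reading, assumed).
import cv2
import numpy as np

def base_image(img_a, img_b, largest_outer=15):
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (largest_outer, largest_outer))
    smoothed = (img_a.astype(np.float32) + img_b.astype(np.float32)) / 2.0
    # Average of the morphological upper (dilation) and lower (erosion) envelopes.
    return (cv2.dilate(smoothed, se) + cv2.erode(smoothed, se)) / 2.0
```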

6.
陈龙, 郭宝龙, 孙伟. 光子学报 (Acta Photonica Sinica), 2014, 39(11): 2101-2106
To address the fusion of multi-focus images of the same scene, a Contourlet-domain multi-focus image fusion algorithm based on directional region characteristics is proposed. The images are decomposed by the Contourlet transform into low- and high-frequency subbands at different scales and in different directions; the low-frequency and high-frequency subbands are fused using, respectively, the variance match degree and the energy of directional regions as fusion rules; the fused image is finally obtained by the inverse transform. The results show that the proposed directional-region approach better captures curved or linear edge features in two-dimensional images and is an effective and feasible image fusion algorithm.

7.
Multifocus image fusion aims at overcoming the finite depth of field of imaging cameras by combining information from multiple images of the same scene. For this problem, a novel algorithm is proposed based on multiscale products of the lifting stationary wavelet transform (LSWT) and an improved pulse coupled neural network (PCNN) in which the linking strength of each neuron is chosen adaptively. To select the coefficients of the fused image properly when the source multifocus images are noisy, the selection principles for the low-frequency subband coefficients and the bandpass subband coefficients are discussed separately. For the low-frequency subband coefficients, a new sum-modified-Laplacian (NSML) of the low-frequency subband, which effectively represents the salient features and sharp boundaries of the image in the LSWT domain, is the input that motivates the PCNN neurons; for the high-frequency subband coefficients, a novel local neighborhood sum of the Laplacian of multiscale products is developed and taken as a high-frequency feature to motivate the PCNN neurons. The coefficients in the LSWT domain with large firing times are selected as the coefficients of the fused image. Experimental results demonstrate that the proposed approach outperforms traditional discrete wavelet transform (DWT)-based, LSWT-based, and LSWT-PCNN-based image fusion methods in both visual quality and objective evaluation, even when the source images are noisy.
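The sum-modified-Laplacian used here as the PCNN input is the windowed sum of |2I(x,y) - I(x-1,y) - I(x+1,y)| + |2I(x,y) - I(x,y-1) - I(x,y+1)|. The sketch below computes that quantity for a coefficient band; the window size and the absence of the entry's exact "new" weighting are assumptions.

```python
# Sketch: sum-modified-Laplacian (SML) over a sliding window (illustrative focus measure).
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def sum_modified_laplacian(band, window=3):
    band = band.astype(np.float64)
    # Modified Laplacian: horizontal and vertical second differences, taken in absolute value.
    kern_h = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], dtype=np.float64)
    kern_v = kern_h.T
    ml = np.abs(convolve(band, kern_h, mode='nearest')) + \
         np.abs(convolve(band, kern_v, mode='nearest'))
    # Summing over the window is a box filter up to a constant factor.
    return uniform_filter(ml, size=window, mode='nearest') * window * window
```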

8.
Research on image fusion with different fusion rules based on the wavelet transform (cited 17 times: 6 self-citations, 11 by others)
A new pixel-level image fusion method based on multiscale decomposition is obtained by using the wavelet transform to construct the wavelet coefficients of the fused image according to different fusion rules and fusion operators. Comparison of the statistical parameters of the fused images shows that the proposed wavelet-domain rule based on local pixel energy gives better fusion results: it avoids the information loss of traditional fusion rules and improves the spatial resolution and clarity of the fused image, which conforms to human visual characteristics and is more favorable for machine vision.
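A minimal sketch of the local-energy rule this entry favors, using PyWavelets: decompose both images, and for each detail band pick, pixel-wise, the coefficient whose local window energy is larger. The window size, wavelet, decomposition depth, and the averaging of the approximation band are assumptions.

```python
# Sketch: wavelet-domain fusion with a pixel local-energy rule (illustrative).
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse_wavelet_local_energy(img_a, img_b, wavelet='db2', level=3, window=3):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation: simple average (assumed)
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        fused_bands = []
        for da, db in zip(bands_a, bands_b):
            ea = uniform_filter(da * da, size=window)    # local energy around each coefficient
            eb = uniform_filter(db * db, size=window)
            fused_bands.append(np.where(ea >= eb, da, db))
        fused.append(tuple(fused_bands))
    rec = pywt.waverec2(fused, wavelet)
    return rec[:img_a.shape[0], :img_a.shape[1]]         # crop any padding from reconstruction
```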

9.
An algorithm is presented for multi-sensor image fusion using the discrete wavelet frame transform (DWFT). The source images to be fused are first decomposed by the DWFT, and the fusion process combines the source coefficients. Before fusion, image segmentation is performed on each source image to obtain its region representation, and the salience of each region is calculated. By overlapping the region representations of all the source images, a shared region representation is produced to label all the input images, and the fusion process is guided by it. A region match measure between the source images is calculated for each region in the shared representation: similar regions are fused in weighted-averaging mode, while dissimilar regions are fused in selection mode. Experimental results using real data show that the proposed algorithm outperforms traditional pyramid-transform-based and discrete wavelet transform (DWT)-based algorithms in multi-sensor image fusion.
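The "average vs. select" decision in this entry resembles the classic match-measure rule; the sketch below computes, per region, a normalized correlation M = 2·Σ(A·B) / (Σ A² + Σ B²) between corresponding coefficient regions and either averages them (high match) or selects the more salient one (low match). The threshold value and the energy-based salience are assumptions.

```python
# Sketch: region match measure with "average vs. select" fusion (illustrative).
import numpy as np

def fuse_regions(coef_a, coef_b, labels, threshold=0.75):
    fused = np.zeros_like(coef_a, dtype=np.float64)
    for rid in np.unique(labels):
        mask = labels == rid
        a = coef_a[mask].astype(np.float64)
        b = coef_b[mask].astype(np.float64)
        match = 2.0 * np.sum(a * b) / (np.sum(a * a) + np.sum(b * b) + 1e-12)
        if match >= threshold:
            # Similar regions: weighted averaging, weights from relative region energy (assumed).
            w = np.sum(a * a) / (np.sum(a * a) + np.sum(b * b) + 1e-12)
            fused[mask] = w * a + (1.0 - w) * b
        else:
            # Dissimilar regions: select the region with the larger energy (salience).
            fused[mask] = a if np.sum(a * a) >= np.sum(b * b) else b
    return fused
```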

10.
In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical transform-domain methods, the proposed method adopts an improved heuristic evolutionary algorithm in which the clonal selection algorithm incorporates Baldwinian learning. During image fusion, BCSA automatically adjusts the fusion coefficients of the different sub-bands decomposed by the CP according to the value of the fitness function, adaptively controls the optimal search direction of the coefficients, and accelerates the convergence of the algorithm. Finally, the fused images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in both visual effect and objective evaluation criteria, and the fused images are better suited to human visual or machine perception.

11.
To enhance images efficiently, a novel algorithm using multi-scale image features extracted by the top-hat transform is proposed in this paper. First, the multi-scale bright and dim regions are extracted through top-hat transforms using structuring elements of the same shape and increasing size. Then, two types of multi-scale image features, the bright and dim image regions at each scale and the image details between neighboring scales, are extracted and used to form the final bright and dim image regions. Finally, the image is enhanced by enlarging the contrast between the final extracted bright and dim image features. Experimental results on images from different applications verify that the proposed algorithm efficiently enhances the contrast and details of the image while producing few noise regions.
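For this entry, the sketch below uses OpenCV's standard white/black top-hat with structuring elements of increasing size, keeps the per-scale responses together with the differences between neighboring scales, and stretches the contrast by adding the bright features and subtracting the dim ones. The scale set and the maximum-based combination are assumptions.

```python
# Sketch: contrast enhancement from multi-scale top-hat features (illustrative).
import cv2
import numpy as np

def multiscale_tophat_enhance(img, sizes=(3, 7, 11, 15)):
    img = img.astype(np.float32)
    brights, darks = [], []
    for s in sizes:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        brights.append(cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se))    # bright regions per scale
        darks.append(cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, se))    # dim regions per scale
    # Per-scale regions plus the details that appear between neighboring scales.
    bright = np.maximum.reduce(brights) + np.maximum.reduce(
        [np.maximum(b2 - b1, 0) for b1, b2 in zip(brights, brights[1:])])
    dark = np.maximum.reduce(darks) + np.maximum.reduce(
        [np.maximum(d2 - d1, 0) for d1, d2 in zip(darks, darks[1:])])
    return np.clip(img + bright - dark, 0, 255).astype(np.uint8)
```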

12.
Research on image fusion based on the second-generation curvelet transform (cited 34 times: 0 self-citations, 34 by others)
李晖晖, 郭雷, 刘航. 光学学报 (Acta Optica Sinica), 2006, 26(5): 57-662
As a new multiscale analysis tool, the curvelet transform is better suited than wavelets to representing curved or linear edge features in two-dimensional images, and it offers higher approximation accuracy and better sparse representation. Introducing the curvelet transform into image fusion allows the features of the source images to be extracted more effectively and provides more information for the fused image, and the second-generation curvelet theory makes the transform easier to understand and implement. An image fusion method based on the second-generation curvelet transform is therefore proposed: the images are first transformed with the curvelet transform, the curvelet coefficients at corresponding scales are then fused according to the fusion rules, and the fused result is finally obtained by reconstruction. Experiments on multi-focus images were evaluated objectively with the mean square error, the deviation index, and the correlation coefficient, and compared with wavelet-based fusion. The results show that, except at a decomposition depth of 2, where the method is comparable to the wavelet approach, the proposed method achieves better fusion at all other decomposition depths.
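A second-generation curvelet implementation is not sketched here, but the per-scale fusion step this entry applies is transform-agnostic. The sketch below assumes hypothetical `fdct2`/`ifdct2` callables (placeholders, not a specific library's API) that return and accept a nested list of coefficient arrays, and fuses corresponding coefficients with averaging at the coarsest scale and absolute-maximum elsewhere; both rules are assumptions.

```python
# Sketch: transform-domain fusion rules, written against placeholder curvelet callables.
import numpy as np

def fuse_in_transform_domain(img_a, img_b, fdct2, ifdct2):
    """fdct2/ifdct2 are hypothetical: forward/inverse transforms exchanging a
    list of scales, each scale being a list of coefficient arrays."""
    coeffs_a, coeffs_b = fdct2(img_a), fdct2(img_b)
    fused = []
    for scale, (bands_a, bands_b) in enumerate(zip(coeffs_a, coeffs_b)):
        if scale == 0:
            # Coarsest scale: average the approximation coefficients (assumed rule).
            fused.append([(a + b) / 2.0 for a, b in zip(bands_a, bands_b)])
        else:
            # Finer scales: keep the coefficient with the larger magnitude.
            fused.append([np.where(np.abs(a) >= np.abs(b), a, b)
                          for a, b in zip(bands_a, bands_b)])
    return ifdct2(fused)
```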

13.
In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and the steerable pyramid is proposed. The registered images are first decomposed using the steerable pyramid. The EM algorithm is used to fuse the image components in the low-frequency band, while a selection method based on an informative-importance measure is applied to those in the high-frequency band. The final fused image is then computed by taking the inverse transform of the composite coefficient representation. Experimental results show that the proposed method outperforms conventional image fusion methods.

14.
Considering the characteristics of infrared and visible images, a fusion algorithm based on the wavelet packet transform is proposed. The source images are first decomposed by the wavelet packet transform into a low-frequency component and band-pass directional subband components; different fusion rules are applied to the different components to obtain the fused coefficients, and the fused image is then obtained by wavelet packet reconstruction. The method extracts detail information from the source images and achieves good fusion results.
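A minimal sketch of this pipeline with PyWavelets' 2-D wavelet packet interface: decompose both sources, fuse the terminal nodes (average for the pure approximation node, absolute-maximum for the band-pass nodes), and reconstruct. The wavelet, depth, and fusion rules are assumptions, as is the use of this particular interface.

```python
# Sketch: wavelet packet fusion of infrared and visible images (illustrative).
import numpy as np
import pywt

def fuse_wavelet_packets(img_a, img_b, wavelet='db2', level=2):
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    wp_a = pywt.WaveletPacket2D(data=a, wavelet=wavelet, mode='symmetric', maxlevel=level)
    wp_b = pywt.WaveletPacket2D(data=b, wavelet=wavelet, mode='symmetric', maxlevel=level)
    wp_f = pywt.WaveletPacket2D(data=None, wavelet=wavelet, mode='symmetric')
    for node in wp_a.get_level(level):
        ca, cb = node.data, wp_b[node.path].data
        if set(node.path) == {'a'}:
            # Pure approximation node: average the low-frequency coefficients (assumed rule).
            wp_f[node.path] = (ca + cb) / 2.0
        else:
            # Band-pass directional nodes: keep the coefficient with the larger magnitude.
            wp_f[node.path] = np.where(np.abs(ca) >= np.abs(cb), ca, cb)
    fused = wp_f.reconstruct(update=False)
    return fused[:a.shape[0], :a.shape[1]]       # crop any padding introduced by the transform
```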

15.
An image fusion algorithm based on region segmentation and the Contourlet transform (cited 12 times: 4 self-citations, 8 by others)
An image fusion algorithm based on region segmentation and the Contourlet transform is proposed. First, each source image is segmented into regions, and region information is measured and extracted using the concepts of region energy ratio and region clarity ratio. Then, each source image is decomposed by the multiscale nonsubsampled Contourlet transform; the high-frequency components are fused with an absolute-maximum operator, while the low-frequency components are fused with region-based rules and operators. Finally, the fused image is obtained by reconstruction. Fusion experiments on infrared and visible images were compared with pixel-based fusion using the à trous wavelet and Contourlet transforms. The results show that the fused image preserves the spectral information of the visible image and inherits the target information of the infrared image; its entropy is about 10% higher than that of the pixel-based methods, and its cross-entropy is only about 1% of theirs.

16.
Image fusion based on a morphological four-subband decomposition pyramid (cited 3 times: 0 self-citations, 3 by others)
赵鹏, 浦昭邦. 光学学报 (Acta Optica Sinica), 2007, 27(1): 40-44
A multiresolution image fusion method based on mathematical morphology filtering is proposed. Low-pass and high-pass filters are constructed from morphological opening and closing operations, and the source images are decomposed into a four-subband image pyramid and a four-subband directional contrast image pyramid. Image fusion is then performed using directional contrast and regional standard deviation to obtain a fused four-subband image pyramid, and the fused image is finally reconstructed from the subband images. Fusion experiments show that the method outperforms traditional morphological pyramid fusion, contrast pyramid fusion, and wavelet decomposition fusion.

17.
A fusion method for infrared and low-light-level images based on target recognition (cited 1 time: 1 self-citation, 0 by others)
To highlight moving targets in the fused image, an image fusion algorithm based on moving-target detection and recognition is proposed. Moving targets are first detected and extracted from the infrared image sequence while the infrared and low-light-level images are fused; the extracted infrared targets are then fused with the fused image in a second pass. Experimental results show that the resulting fused image not only retains the rich information of ordinary fusion algorithms but also provides a distinct indication of the infrared targets.
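The two-pass idea can be sketched as follows: detect moving targets in the infrared sequence (frame differencing stands in for the detector here), fuse the infrared and low-light images with an ordinary rule (a plain weighted average, as an assumption), and then paste the detected infrared target regions back into the fused result.

```python
# Sketch: target-highlighting fusion of infrared and low-light-level images (illustrative).
import cv2
import numpy as np

def fuse_with_target_highlight(ir_prev, ir_curr, lowlight, diff_thresh=25, ir_weight=0.5):
    ir_curr_f = ir_curr.astype(np.float32)
    lowlight_f = lowlight.astype(np.float32)
    # 1) Moving-target mask from frame differencing on the infrared sequence (assumed detector).
    diff = cv2.absdiff(ir_curr, ir_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    # 2) Ordinary first-pass fusion (weighted average stands in for any fusion rule).
    fused = ir_weight * ir_curr_f + (1.0 - ir_weight) * lowlight_f
    # 3) Second pass: re-insert the extracted infrared targets into the fused image.
    fused[mask > 0] = ir_curr_f[mask > 0]
    return np.clip(fused, 0, 255).astype(np.uint8)
```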

18.
Fusion of dual-color mid-wave infrared images based on the support value transform and top-hat decomposition (cited 1 time: 0 self-citations, 1 by others)
To address the limited contrast improvement and severe edge-region distortion that often occur when dual-color mid-wave infrared images are fused with multi-scale top-hat decomposition alone, a fusion method combining the support value transform with top-hat decomposition is proposed. The dual-color mid-wave images are first decomposed by the support value transform into low-frequency images and sequences of support value images; bright and dark information is then extracted from the final-level low-frequency images by multi-scale top-hat decomposition. The bright and dark information is fused separately with a maximum-gray-value rule, and the two fused feature images are enhanced by gray-value normalization and Gaussian filtering, respectively. The two low-frequency images are then fused with the enhanced bright and dark feature images, and the result serves as the new low-frequency image; together with the support value image sequence fused by the maximum-gray-value rule, it is passed through the inverse support value transform to obtain the final fused image. Compared with fusion using the support value transform alone or the multi-scale top-hat decomposition alone, the contrast of the fused image is improved by 11.69%, the distortion is reduced by 63.42%, and the local roughness is increased by 38.12%. Extracting bright and dark information from the low-frequency images, fusing and enhancing it separately, and then fusing it back into the low-frequency images effectively resolves the conflict between raising the contrast of the fused infrared image and reducing distortion in edge regions, providing a new way to improve fusion quality.

19.
To solve the fusion problem of multifocus images of the same scene, a novel algorithm based on focused-region detection and multiresolution analysis is proposed. To integrate the advantages of spatial-domain and transform-domain fusion methods, a focused-region detection technique and a new multiscale transform (MST) fusion method are used to guide pixel combination. First, an initial fused image is acquired with a novel multiresolution image fusion method. Pixels of the original images that are similar to the corresponding pixels of the initial fused image are considered to lie in sharply focused regions; in this way the initial focused regions are determined, and morphological opening and closing are employed for post-processing. Then the pixels within the focused regions of each source image are selected as pixels of the fused image, while the initial fused-image pixels located at the borders of the focused regions are retained as pixels of the final fused image. The fused image is thus obtained. The experimental results show that the proposed approach is effective and fuses multi-focus images better than some current methods.
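A compact sketch of this pipeline, with its simplifications marked: a wavelet-based absolute-maximum fusion stands in for the entry's multiresolution initial fusion, the focused mask is taken where a source pixel's neighborhood matches the initial fusion better than the other source's does, and morphological opening/closing cleans the mask before the pixels are copied. Window size, wavelet, depth, and structuring element are assumptions.

```python
# Sketch: multifocus fusion via focused-region detection (simplified, illustrative).
import cv2
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def _mst_initial_fusion(a, b, wavelet='db2', level=3):
    """Simple multiresolution fusion: average approximation, absolute-max details (assumed rules)."""
    ca, cb = pywt.wavedec2(a, wavelet, level=level), pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for ba, bb in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                           for da, db in zip(ba, bb)))
    rec = pywt.waverec2(fused, wavelet)
    return rec[:a.shape[0], :a.shape[1]]

def fuse_multifocus(img_a, img_b, window=7):
    a, b = img_a.astype(np.float64), img_b.astype(np.float64)
    initial = _mst_initial_fusion(a, b)           # stand-in for the paper's initial fusion
    # A source pixel whose neighborhood matches the initial fusion better is taken as focused.
    diff_a = uniform_filter(np.abs(a - initial), size=window)
    diff_b = uniform_filter(np.abs(b - initial), size=window)
    mask = (diff_a <= diff_b).astype(np.uint8)    # 1 where img_a appears in focus
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    mask = cv2.morphologyEx(cv2.morphologyEx(mask, cv2.MORPH_OPEN, se), cv2.MORPH_CLOSE, se)
    return np.clip(np.where(mask > 0, a, b), 0, 255).astype(np.uint8)
```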

20.
The high-frequency components in traditional multi-scale transform methods are approximately sparse and can represent different detail information. In the low-frequency component, however, very few coefficients lie near zero, so the low-frequency image information cannot be represented sparsely; this component contains the main energy of the image and depicts its profile, and fusing it directly is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component; on the other hand, the multi-scale transform can extract high-frequency detail information at multiple scales and from diverse directions. Combining the two methods yields more characteristics and more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, a product-of-characteristics method is used to integrate the detail information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared targets than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and effective, and its time consumption is significantly reduced.
