Remote Sensing Image Fusion Based on Optimized Dictionary Learning
Citation: Fan LIU, Xiaopeng PEI, Jing ZHANG, Zehua CHEN. Remote Sensing Image Fusion Based on Optimized Dictionary Learning[J]. Journal of Electronics & Information Technology, 2018, 40(12): 2804-2811.
Authors: Fan LIU, Xiaopeng PEI, Jing ZHANG, Zehua CHEN
Foundation Items: The National Natural Science Foundation of China (61703299, 61402319, 61403273); The Natural Science Foundation of Shanxi Province (201601D202044)
Abstract: To improve the fusion quality of panchromatic and multispectral images, this paper proposes a remote sensing image fusion method based on optimized dictionary learning. First, images from a standard image database are divided into blocks to serve as training samples, which are clustered with K-means; based on the clustering result, blocks that are numerous and highly similar are moderately pruned to reduce the number of training samples. A universal dictionary is then trained on the pruned samples, and similar dictionary atoms and rarely used dictionary atoms are marked. Next, the panchromatic image blocks that differ most from the original sparse model are normalized and used to replace the similar and rarely used atoms, yielding an adaptive dictionary. The adaptive dictionary is used to sparsely represent the intensity component obtained from the IHS transform of the multispectral image and the source panchromatic image; the modulus-maximum coefficient in the sparse coefficients of each image block is separated to obtain the maximum sparse coefficients, and the remaining coefficients are called the residual sparse coefficients. Different fusion rules are applied to the maximum and residual sparse coefficients to preserve more spectral information and spatial detail, and finally the inverse IHS transform is applied to obtain the fused image. Experimental results show that, compared with traditional methods, the fused images obtained by the proposed method have better subjective visual quality and superior objective evaluation metrics.
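The dictionary-optimization stage described above (patch clustering, pruning of over-represented blocks, marking of similar or rarely used atoms, and their replacement with normalized panchromatic blocks) could be prototyped roughly as follows. This is a minimal sketch, not the paper's implementation: scikit-learn's KMeans and DictionaryLearning stand in for the paper's training procedure, and the cluster count, keep ratio, similarity and usage thresholds are hypothetical choices.

```python
# Minimal sketch of the dictionary-optimization stage, assuming:
#  - `patches` / `pan_patches` are (n_samples, patch_dim) arrays of vectorized blocks,
#  - scikit-learn's DictionaryLearning stands in for the paper's training step,
#  - cluster count, thresholds and keep ratio are illustrative, not the paper's values.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import DictionaryLearning

def prune_training_patches(patches, n_clusters=64, keep_ratio=0.5, seed=0):
    """Cluster training blocks with K-means and thin out large clusters of
    highly similar blocks to reduce the number of training samples."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(patches)
    rng = np.random.default_rng(seed)
    kept = []
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        # Large clusters contain many near-duplicates: keep only a fraction of them.
        n_keep = max(1, int(len(idx) * keep_ratio)) if len(idx) > 50 else len(idx)
        kept.append(rng.choice(idx, size=n_keep, replace=False))
    return patches[np.concatenate(kept)]

def adaptive_dictionary(train_patches, pan_patches, n_atoms=256,
                        sim_thresh=0.95, use_thresh=2):
    """Train a universal dictionary, mark similar / rarely used atoms, and
    replace them with normalized PAN blocks that the dictionary represents worst."""
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm='omp',
                            transform_n_nonzero_coefs=8, random_state=0)
    codes = dl.fit_transform(train_patches)   # sparse codes of the training blocks
    D = dl.components_.copy()                 # atoms as rows (unit norm)

    # Rarely used atoms: low usage count across the training codes.
    usage = np.count_nonzero(codes, axis=0)
    # Similar atoms: large absolute inner product with some other atom.
    gram = np.abs(D @ D.T) - np.eye(n_atoms)
    to_replace = sorted(set(np.where(usage < use_thresh)[0]) |
                        set(np.where(gram.max(axis=1) > sim_thresh)[0]))

    # PAN blocks that differ most from their sparse reconstruction
    # (largest residual w.r.t. the current dictionary).
    pan_codes = dl.transform(pan_patches)
    resid = np.linalg.norm(pan_patches - pan_codes @ D, axis=1)
    candidates = pan_patches[np.argsort(resid)[::-1]]

    for i, atom_idx in enumerate(to_replace[:len(candidates)]):
        D[atom_idx] = candidates[i] / (np.linalg.norm(candidates[i]) + 1e-12)
    return D
```

Replacing the marked atoms with the panchromatic blocks that the universal dictionary reconstructs worst is what adapts the dictionary to the specific image pair being fused.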

Keywords: Remote sensing image fusion; K-means clustering; Adaptive dictionary; Sparse representation; Fusion rule
Received: 2018-03-21

Remote Sensing Image Fusion Based on Optimized Dictionary Learning
Fan LIU, Xiaopeng PEI, Jing ZHANG, Zehua CHEN. Remote Sensing Image Fusion Based on Optimized Dictionary Learning[J]. Journal of Electronics & Information Technology, 2018, 40(12): 2804-2811.
Authors: Fan LIU, Xiaopeng PEI, Jing ZHANG, Zehua CHEN
Affiliation: 1. College of Data Science, Taiyuan University of Technology, Taiyuan 030024, China; 2. College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030024, China; 3. College of Information and Computer, Taiyuan University of Technology, Taiyuan 030024, China
Abstract: In order to improve the fusion quality of panchromatic and multispectral images, a remote sensing image fusion method based on optimized dictionary learning is proposed. Firstly, K-means clustering is applied to image blocks from the image database, and blocks with high similarity are partly removed to improve training efficiency. A universal dictionary is then trained on the remaining blocks, and similar dictionary atoms and rarely used dictionary atoms are marked for further processing. Secondly, the similar and rarely used dictionary atoms are replaced by the panchromatic image blocks with the largest difference from the original sparse model to obtain an adaptive dictionary. Furthermore, the adaptive dictionary is used to sparsely represent the intensity component and the panchromatic image; the modulus-maximum coefficient in the sparse coefficients of each image block is separated to obtain the maximal sparse coefficients, and the remaining sparse coefficients are called residual sparse coefficients. Then, the two parts are fused with different fusion rules to preserve more spectral and spatial detail information. Finally, the inverse IHS transform is employed to obtain the fused image. Experiments demonstrate that the proposed method provides better spectral quality and superior spatial information in the fused image than its counterparts.
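The fusion stage (sparse coding of the intensity component and the panchromatic image, separation of each block's modulus-maximum coefficient, rule-based fusion, and the inverse IHS step) might look roughly like the sketch below. The abstract does not specify the two fusion rules, so absolute-maximum selection and simple averaging are used here as placeholders, a plain band-average intensity with additive injection stands in for the IHS forward/inverse transforms, and the patch size and sparsity level are assumed.

```python
# Minimal sketch of the fusion stage, assuming:
#  - `ms` is an H x W x 3 multispectral image and `pan` an H x W panchromatic image,
#  - a plain band average stands in for the IHS intensity (additive injection as inverse),
#  - the two fusion rules (absolute-maximum selection / averaging) are placeholders,
#    since the abstract does not state the paper's exact rules.
import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.feature_extraction.image import (extract_patches_2d,
                                               reconstruct_from_patches_2d)

def fuse_intensity(I, pan, D, patch_size=8, n_nonzero=8):
    """Fuse the intensity component and the PAN image in the sparse domain
    of the adaptive dictionary D (atoms as rows)."""
    p_i = extract_patches_2d(I, (patch_size, patch_size)).reshape(-1, patch_size ** 2)
    p_p = extract_patches_2d(pan, (patch_size, patch_size)).reshape(-1, patch_size ** 2)
    c_i = sparse_encode(p_i, D, algorithm='omp', n_nonzero_coefs=n_nonzero)
    c_p = sparse_encode(p_p, D, algorithm='omp', n_nonzero_coefs=n_nonzero)

    fused = np.zeros_like(c_i)
    for k in range(c_i.shape[0]):
        # Separate the modulus-maximum coefficient of each block...
        mi, mp = np.argmax(np.abs(c_i[k])), np.argmax(np.abs(c_p[k]))
        # ...fuse the maxima part by absolute-maximum selection (placeholder rule)...
        if abs(c_p[k, mp]) > abs(c_i[k, mi]):
            fused[k, mp] = c_p[k, mp]
        else:
            fused[k, mi] = c_i[k, mi]
        # ...and fuse the residual coefficients by simple averaging (placeholder rule).
        rest_i, rest_p = c_i[k].copy(), c_p[k].copy()
        rest_i[[mi, mp]] = 0.0
        rest_p[[mi, mp]] = 0.0
        fused[k] += 0.5 * (rest_i + rest_p)

    blocks = (fused @ D).reshape(-1, patch_size, patch_size)
    return reconstruct_from_patches_2d(blocks, I.shape)  # overlapping blocks are averaged

def ihs_fusion(ms, pan, D):
    """IHS-style pansharpening: fuse the intensity component in the sparse
    domain and inject the change back into every spectral band."""
    I = ms.mean(axis=2)                  # stand-in for the IHS intensity component
    I_new = fuse_intensity(I, pan, D)
    return ms + (I_new - I)[..., None]   # additive injection as the inverse step
```

Treating the maxima part and the residual part with separate rules is what lets the method keep strong spatial detail from the panchromatic image while retaining the spectral content carried by the intensity component.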
Keywords: Remote sensing image fusion; K-means clustering; Adaptive dictionary; Sparse representation; Fusion rule