A novel video fusion framework using surfacelet transform
Authors: Qiang Zhang, Long Wang, Zhaokun Ma, Huijuan Li
Affiliation: 1. Fundamental Science on Radioactive Geology and Exploration Technology Laboratory, East China Institute of Technology, Nanchang 330013, Jiangxi, China; 2. National Engineering Laboratory for Offshore Oil Exploration, China University of Petroleum, 102249 Beijing, China
Abstract: A novel video fusion framework based on the three-dimensional surfacelet transform (3D-ST) is proposed in this paper. Unlike traditional individual-frame-based video fusion methods, the proposed framework fuses the multi-frame images of the input videos as a whole with the 3D-ST, rather than frame by frame independently. Furthermore, two ST-based video fusion algorithms are proposed under this framework. In the first algorithm, no special treatment is applied to the temporal motion information in the input videos, and only a spatio-temporal region energy-based fusion rule is employed. In the second algorithm, a modified z-score-based motion detection is performed to distinguish the temporal motion information from the spatial geometry information, and a motion-based fusion rule is then presented. Experimental results demonstrate that, owing to the motion selectivity of the 3D-ST, existing static image fusion rules can be extended to video fusion under the proposed framework. Both proposed fusion algorithms significantly outperform some traditional individual-frame-based and motion-based methods in spatio-temporal information extraction as well as in temporal stability and consistency. In addition, the second proposed algorithm has high computational efficiency and can be applied to real-time video fusion.
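
The two fusion rules outlined in the abstract can be illustrated with a minimal sketch. It assumes the input videos have already been decomposed by some 3-D multiscale transform into coefficient arrays shaped (frames, rows, cols); the surfacelet transform itself is not shown, and the function names (region_energy_fuse, modified_zscore_motion_mask, motion_based_fuse) and parameter values are hypothetical illustrations of a spatio-temporal region-energy rule and a modified z-score motion detector, not the authors' reference implementation.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def region_energy(coeffs, size=(3, 3, 3)):
        # Local spatio-temporal energy: mean of squared coefficients over a
        # small (frames x rows x cols) neighbourhood.
        return uniform_filter(coeffs.astype(np.float64) ** 2, size=size)

    def region_energy_fuse(coeffs_a, coeffs_b, size=(3, 3, 3)):
        # Region-energy rule (first algorithm, as described in the abstract):
        # per coefficient, keep the input with the larger local region energy.
        keep_a = region_energy(coeffs_a, size) >= region_energy(coeffs_b, size)
        return np.where(keep_a, coeffs_a, coeffs_b)

    def modified_zscore_motion_mask(coeffs, threshold=3.5):
        # Motion detector (hypothetical form of the second algorithm's test):
        # flag coefficients that deviate strongly from the per-pixel temporal
        # median, using the MAD-based modified z-score 0.6745*(x - median)/MAD.
        median = np.median(coeffs, axis=0, keepdims=True)
        mad = np.median(np.abs(coeffs - median), axis=0, keepdims=True)
        z = 0.6745 * (coeffs - median) / (mad + 1e-12)
        return np.abs(z) > threshold

    def motion_based_fuse(coeffs_a, coeffs_b, threshold=3.5):
        # Motion-based rule (sketch): where either input shows temporal motion,
        # keep the coefficient with the larger magnitude; elsewhere fall back
        # to the region-energy rule.
        moving = (modified_zscore_motion_mask(coeffs_a, threshold)
                  | modified_zscore_motion_mask(coeffs_b, threshold))
        larger = np.abs(coeffs_a) >= np.abs(coeffs_b)
        fused = region_energy_fuse(coeffs_a, coeffs_b)
        return np.where(moving, np.where(larger, coeffs_a, coeffs_b), fused)

    # Toy usage: two random "subbands" of 16 frames of 64x64 coefficients.
    a = np.random.randn(16, 64, 64)
    b = np.random.randn(16, 64, 64)
    fused = motion_based_fuse(a, b)

In practice the same rules would be applied subband by subband to the actual 3D-ST coefficients, and the fused coefficients inverted back to a video; that transform step is outside this sketch.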
This article is indexed by databases including ScienceDirect.