Combining scene model and fusion for night video enhancement |
| |
Authors: | Jing Li, Tao Yang, Quan Pan, Yongmei Cheng |
| |
Affiliation: | 1. School of Telecommunications Engineering, Xidian University, Xi'an 710071, China; School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
2. School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China; School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
3. School of Automation, Northwestern Polytechnical University, Xi'an 710072, China |
| |
Abstract: | This paper presents a video context enhancement method for night surveillance. The basic idea is to extract and fuse the meaningful information of video sequences captured by a fixed camera under different illuminations. A unique characteristic of the algorithm is that it separates the image context into two classes and estimates them in different ways. One class contains the basic surrounding scene information and the scene model, which are obtained via background modeling and object tracking in the daytime video sequence. The other class is extracted from the nighttime video and includes frequently moving regions, high-illumination regions, and high-gradient regions; these three regions are segmented using the scene model and a pixel-wise difference method. A shift-invariant discrete-wavelet-based image fusion technique is then used to integrate all of this context information into the final result. Experimental results demonstrate that the proposed approach provides substantially more detail and meaningful information for nighttime video. |
| |
Keywords: | Night video enhancement; Image fusion; Background modeling; Object tracking |
Indexed in: | CNKI, VIP, Wanfang Data, SpringerLink, and other databases |
| |
Source journal: | Journal of Electronics (China) (《电子科学学刊(英文版)》) |
|
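The pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes an exponential running average as the background model, a fixed-threshold pixel-wise difference for region segmentation, and a one-level undecimated (shift-invariant) Haar transform with a max-absolute-coefficient fusion rule, all of which are common stand-ins for the components the abstract names.

```python
import numpy as np

def running_average_background(frames, alpha=0.05):
    """Background model via exponential running average (a common
    background-modeling choice; the paper's exact model is not given
    in the abstract)."""
    bg = frames[0].astype(np.float64)
    for f in frames[1:]:
        bg = (1 - alpha) * bg + alpha * f.astype(np.float64)
    return bg

def moving_region_mask(frame, background, thresh=25.0):
    """Pixel-wise difference segmentation: pixels that deviate from
    the background model beyond `thresh` are labeled as moving or
    high-activity regions."""
    return np.abs(frame.astype(np.float64) - background) > thresh

def undecimated_haar(img):
    """One-level shift-invariant (undecimated) Haar decomposition:
    no downsampling, so the transform commutes with translations."""
    low = 0.5 * (img + np.roll(img, 1, axis=0))
    high = 0.5 * (img - np.roll(img, 1, axis=0))
    return low, high

def fuse(day, night):
    """Fuse two registered images: average the lowpass bands, keep
    the larger-magnitude highpass coefficient (max-abs rule), then
    invert the one-level undecimated Haar transform above."""
    day = day.astype(np.float64)
    night = night.astype(np.float64)
    dl, dh = undecimated_haar(day)
    nl, nh = undecimated_haar(night)
    low = 0.5 * (dl + nl)
    high = np.where(np.abs(dh) >= np.abs(nh), dh, nh)
    return low + high  # low + high exactly inverts the decomposition
```

In this sketch, fusing an image with itself returns the image unchanged, since `low + high` reconstructs the input exactly; the multi-level decomposition and the region-specific fusion weights of the actual method are omitted for brevity.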