Multi-view video based multiple objects segmentation using graph cut and spatiotemporal projections
Authors: Qian Zhang, King Ngi Ngan
Abstract: In this paper, we present an automatic algorithm for segmenting multiple objects from multi-view video. The Initial Interested Objects (IIOs) are automatically extracted in the key view of the initial frame using a saliency model. Multiple-object segmentation is decomposed into several sub-segmentation problems, each solved by minimizing an energy function with a binary-label graph cut. In the proposed energy function, color and depth cues are integrated into the data term, which is further modified by a background penalty with occlusion reasoning. In the smoothness term, foreground contrast enhancement is introduced to strengthen the boundaries of moving objects while attenuating the background contrast. To segment the whole multi-view video, coarse predictions for the other views and for the successive frame are projected by pixel-based disparity compensation and motion compensation, respectively, which exploits the inherent spatiotemporal consistency. An uncertain band along the object boundary is shaped according to an activity measure and refined with graph cut, yielding a more accurate Interested Objects (IOs) layer across all views and frames. Experiments were conducted on a couple of multi-view videos with real and complex scenes, and the excellent subjective results demonstrate the robustness and efficiency of the proposed algorithm.
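To make the energy-minimization step concrete, the binary-label formulation described above has the standard form E(L) = Σ_p D_p(l_p) + λ Σ_{(p,q)∈N} V_{p,q}(l_p, l_q), with a data term D fusing color and depth cues and a contrast-sensitive smoothness term V. Below is a minimal sketch of such a graph-cut step, assuming the PyMaxflow library and hypothetical precomputed per-pixel negative log-likelihood maps; the paper's actual terms (occlusion-reasoned background penalty, foreground contrast enhancement) are richer than this simple fusion and Potts model.

```python
# Illustrative sketch only: function name, weights, and likelihood inputs are
# assumptions, not the paper's exact formulation.
import numpy as np
import maxflow  # PyMaxflow: pip install PyMaxflow


def graph_cut_segment(color_nll_fg, color_nll_bg,
                      depth_nll_fg, depth_nll_bg,
                      image, w_depth=0.5, lam=10.0, beta=5.0):
    """Binary foreground/background labeling for one Interested Object.

    color_nll_* / depth_nll_* : HxW negative log-likelihoods under the
        foreground / background color and depth models (assumed precomputed).
    image : HxWx3 float array used for the contrast-sensitive smoothness term.
    """
    h, w = image.shape[:2]

    # Data term: simple weighted fusion of color and depth cues.
    cost_fg = color_nll_fg + w_depth * depth_nll_fg   # cost of labeling FG
    cost_bg = color_nll_bg + w_depth * depth_nll_bg   # cost of labeling BG

    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes((h, w))

    # t-links (Boykov-Jolly style): source side = foreground, so the capacity
    # from the source carries the BG cost and the capacity to the sink the FG cost.
    g.add_grid_tedges(nodeids, cost_bg, cost_fg)

    # Smoothness term: contrast-sensitive Potts weights over 4-neighbors.
    def edge_weight(p, q):
        diff = image[p] - image[q]
        return lam * np.exp(-beta * float(np.dot(diff, diff)))

    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                wgt = edge_weight((y, x), (y, x + 1))
                g.add_edge(int(nodeids[y, x]), int(nodeids[y, x + 1]), wgt, wgt)
            if y + 1 < h:
                wgt = edge_weight((y, x), (y + 1, x))
                g.add_edge(int(nodeids[y, x]), int(nodeids[y + 1, x]), wgt, wgt)

    g.maxflow()
    # get_grid_segments is True on the sink (background) side of the cut,
    # so invert to obtain the foreground mask.
    return ~g.get_grid_segments(nodeids)
```

In the multi-view, multi-frame setting, such a solve would be run per object and per view, with the coarse disparity- and motion-compensated projections restricting refinement to the uncertain band along the object boundary rather than the full frame.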
This article is indexed in ScienceDirect and other databases.