Virtual-Real Fusion System Integrated with Multiple Videos
Citation: PAN Cheng-Wei, ZHANG Jian-Guo, WANG Shao-Rong, WANG Guo-Ping. Virtual-Real Fusion System Integrated with Multiple Videos [J]. Journal of Software, 2016, 27(S2): 197-206.
Authors: PAN Cheng-Wei  ZHANG Jian-Guo  WANG Shao-Rong  WANG Guo-Ping
Affiliation: School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China; Beijing Engineering Research Center for Virtual Simulation and Visualization, Beijing 100871, China
Funding: National Natural Science Foundation of China (61232014, 61421062, 61472010, 61121002); National Key Technology R&D Program of China (2015BAK01B06); National Basic Research Program of China (973 Program) (2015CB3518806)
Abstract: This paper proposes a method for constructing a virtual-real fusion visualization system that integrates multiple videos, fusing images and videos captured in the real world into a virtual scene. The textures and dynamic content of the videos enrich the virtual scene and improve its realism, yielding an augmented virtual environment. Images captured by unmanned aerial vehicles are used to reconstruct the 3D virtual scene, and video frames are registered to it by matching image feature points. The frames are then projected onto the virtual scene using projective texture mapping. Because moving objects in the videos lack corresponding 3D models in the virtual environment, projecting them directly produces distortions when the viewpoint changes. The system therefore first detects and tracks these objects, and then offers several display modes to mitigate the distortion. Fusion of multiple videos with overlapping regions in the virtual environment is also handled. Experimental results show that the constructed virtual-real fusion environment is highly useful.
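As a minimal illustration of the projective texture mapping step described in the abstract — a sketch, not the paper's actual implementation — the following code projects a 3D scene point through a registered camera matrix P = K[R|t] to obtain texture coordinates. The intrinsics, pose, and point values are hypothetical.

```python
# Sketch of projective texture mapping: once a video frame is registered,
# each 3D surface point X maps to a pixel (u, v) of the frame, which is
# then used as a texture coordinate. All numeric values are illustrative.

def mat_vec(m, v):
    """Multiply a 3x4 matrix by a homogeneous 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(3)]

def project(K, R, t, X):
    """Project world point X = (x, y, z) to pixel (u, v) via u ~ K[R|t]X."""
    # Build the 3x4 extrinsic matrix [R | t], then P = K [R | t].
    Rt = [[R[i][0], R[i][1], R[i][2], t[i]] for i in range(3)]
    P = [[sum(K[i][k] * Rt[k][j] for k in range(3)) for j in range(4)]
         for i in range(3)]
    x, y, w = mat_vec(P, [X[0], X[1], X[2], 1.0])
    return (x / w, y / w)  # perspective divide yields texture coordinates

# Identity pose and a simple pinhole camera: focal length 500,
# principal point (320, 240).
K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
u, v = project(K, R, t, (0.0, 0.0, 2.0))  # point on the optical axis
```

A point on the optical axis lands on the principal point (320, 240), regardless of its depth; in a real renderer this computation is done per fragment on the GPU (e.g. with `textureProj` in GLSL).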

Keywords: 3D reconstruction  augmented virtual reality  projective texture mapping  background modeling  motion tracking
Received: 2016-05-10
Revised: 2016-09-07
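The moving-object detection mentioned in the abstract rests on background modeling. As a minimal, hedged sketch — a simple running-average model standing in for whatever method the paper actually uses — a background frame can be blended over time and moving pixels flagged where the current frame deviates from it. The tiny synthetic grayscale frames below are invented for illustration.

```python
# Sketch of background modeling for moving-object detection:
# maintain a running-average background and threshold the per-pixel
# difference with the current frame to get a motion mask.

def update_background(bg, frame, alpha=0.1):
    """Blend the new frame into the background: bg = (1 - a)*bg + a*frame."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

def moving_mask(bg, frame, thresh=30):
    """Mark pixels whose deviation from the background exceeds thresh."""
    return [[abs(f - b) > thresh for b, f in zip(br, fr)]
            for br, fr in zip(bg, frame)]

# A static 3x3 background of intensity 10, then a frame in which the
# centre pixel jumps to 200 (a "moving object" entering the view).
bg = [[10.0] * 3 for _ in range(3)]
frame = [[10.0] * 3 for _ in range(3)]
frame[1][1] = 200.0

mask = moving_mask(bg, frame)        # only the centre pixel is flagged
bg = update_background(bg, frame)    # background slowly absorbs the change
```

In practice a robust per-pixel model (e.g. a Gaussian mixture, as in OpenCV's `BackgroundSubtractorMOG2`) would replace the plain running average, but the detect-then-track structure is the same.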

