Similar Documents
19 similar documents found (search time: 968 ms)
1.
For virtual-real fusion systems based on projective texture mapping, a technique for real-time registration of PTZ camera video with a 3D model is proposed. Sub-images captured at several specific poses of the PTZ camera are composited into a panoramic image, which is searched for the best-matching image; the live video frame is perspective-warped using SURF-based image registration, and the 3D projection information of the best-matching image is then used to project the live video frame accurately onto the 3D model. Experimental results show that the algorithm is highly accurate and well suited to 3D registration of PTZ camera video in virtual-real fusion systems.
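The match-and-warp step described here can be sketched with standard feature matching plus RANSAC homography estimation in OpenCV; ORB stands in for SURF below, since SURF is patented and lives in opencv-contrib (all names are illustrative, not from the paper):

```python
import cv2
import numpy as np

def register_frame(frame, best_match_img):
    """Estimate the perspective transform taking a live PTZ frame onto
    the best-matching panorama sub-image, then warp the frame."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame, None)
    k2, d2 = orb.detectAndCompute(best_match_img, None)

    # Hamming-distance brute-force matching with cross-check for stability.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences before fitting the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = best_match_img.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```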

2.
To find transition paths between different videos and improve the viewer's sense of orientation, an automatic path-planning method for virtual-real fusion scenes is proposed. First, a viewpoint evaluation method is introduced that accounts for the image-quality degradation caused by image distortion in video projection. Next, a viewpoint sampling scheme and a scene roadmap are designed, with the edge weights of the roadmap defined from the viewpoint evaluation results. After the visiting order of the videos is determined, the viewpoint path with the minimum total edge weight is computed. The proposed viewpoint evaluation and path generation methods were validated on four video surveillance systems, demonstrating that they improve the user's virtual observation experience.
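Once the roadmap edges carry viewpoint-quality weights, the minimum-cost viewpoint path is a shortest-path query; a minimal hand-rolled Dijkstra sketch over an adjacency-dict graph (the graph itself is a placeholder, not the paper's data):

```python
import heapq

def min_cost_path(graph, start, goal):
    """Dijkstra over a roadmap given as {node: [(neighbor, weight), ...]}."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```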

3.
A Panoramic Representation Method for Dynamic Scenes
杜威, 李华. 《计算机学报》, 2002, 25(9): 968-975
To address the inability of panoramas to represent dynamic scenes, a panoramic representation method for dynamic scenes is proposed that combines video textures with panoramas to construct a dynamic panorama. The system first stitches a series of images taken from a fixed point into a panorama, then films the periodically or randomly moving objects in the scene and extracts video textures; finally, the video textures are aligned and blended with the panorama to generate the dynamic panorama. A dynamic panorama retains the full-view roaming capability of a static panorama while giving the scene dynamic features, greatly enhancing the realism of roaming.
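The static stitching stage described here (fixed-viewpoint images composited into a panorama) maps directly onto OpenCV's high-level stitching API; a minimal sketch, with placeholder file names:

```python
import cv2

# Images shot from a fixed point while the camera pans (placeholder paths).
images = [cv2.imread(f"view_{i:02d}.jpg") for i in range(8)]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed:", status)
```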

4.
Photorealistic rendering of textiles is a key technique in research on virtual garment and fitting systems. Combining image- and graphics-based methods, a new photorealistic textile rendering algorithm is proposed. For real objects in the scene, illumination parameters are extracted from the original photograph and fused with a new texture map; using color-constancy theory and a surface reflection model, a near-photographic display effect is achieved. Based on the same theory, the appearance of the textile when some of its reflection properties change can also be simulated. For virtual objects absent from the original scene, computer-generated 3D objects are superimposed on images captured from the real world; the illumination parameters extracted from the real scene are applied, and the lighting of the 3D objects is recomputed with a corresponding illumination model, yielding a convincing virtual-real fusion effect.
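The abstract does not name its reflection model; as an illustration of the "recompute the lighting of the inserted 3D object" step, here is a minimal Lambert-plus-Phong shading sketch under one directional light (all names and parameters are illustrative):

```python
import numpy as np

def shade(normals, light_dir, view_dir, albedo, ks=0.3, shininess=32, ambient=0.1):
    """Per-pixel Lambert + Phong shading.
    normals: (H, W, 3) unit normals; light_dir/view_dir: unit 3-vectors."""
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)          # diffuse term
    refl = 2.0 * n_dot_l[..., None] * normals - light_dir      # reflected light
    spec = np.clip(refl @ view_dir, 0.0, None) ** shininess    # specular lobe
    return ambient * albedo + albedo * n_dot_l[..., None] + ks * spec[..., None]
```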

5.
Current artistic rendering algorithms introduce strong randomness into the resulting images and therefore cannot be applied directly to stereoscopic image rendering. This paper presents a general framework for rendering artistically styled stereoscopic images of virtual 3D scenes. First, for each object an artistic-style image covering the binocular field of view is rendered, while the texture coordinates of the model vertices are recorded during projection. The artistic image is then mapped onto the object surface to produce a stylized 3D model, which is finally projected to the left- and right-eye cameras to generate the stereo pair. To create the artistic texture image used for mapping, a method is proposed that reverses the pixel occlusion relationship in perspective projection to obtain an image of the model covering the binocular field of view. The proposed method not only guarantees consistency between the two eyes' images but also generalizes well, being applicable to all kinds of existing artistic rendering algorithms.

6.
Unity3D simulates the real environment so that users can perceive real-world objects in a virtual environment and observe events that have occurred in the real scene. A Shader is in fact a small program that combines an input Mesh with input textures or colors in a specified way and produces the output. In a virtual environment, anything the human mind conceives can be created, and the desired result can be reached through repeated experimentation. Unlike video or images, users can not only watch but also interact with the computer: dragging virtual objects to new positions with the mouse, moving them in different directions with the keyboard, or grasping them with a data glove. Users can interact freely with the virtual environment.

7.
To address the problem that current video-fusion-based augmented reality systems cannot provide users with comprehensive geographic information about points of interest and their surroundings, a system fusing 3DGIS with multiple videos is developed. The framework of the system is introduced; virtual-real fusion based on projective texture mapping and 3D reconstruction techniques are studied; and multiple video streams are fused with the 3D scene model, geographic information, and the various attribute information of ground objects.
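Projective texture mapping, the fusion technique named here, assigns each model vertex a texture coordinate by projecting it through the calibrated video camera. A minimal numpy sketch (the 3x4 projection matrix P is assumed to come from calibration):

```python
import numpy as np

def video_uv(vertices, P, width, height):
    """Project 3D vertices (N, 3) through a 3x4 camera matrix P and
    return normalized texture coordinates into the video frame."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # (N, 4)
    pix = (P @ homo.T).T                                       # (N, 3)
    pix = pix[:, :2] / pix[:, 2:3]                             # perspective divide
    uv = pix / np.array([width, height])                       # to [0, 1] range
    # Vertices projecting outside [0, 1]^2 are not covered by this camera.
    return uv
```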

8.
The goal of augmented reality is to superimpose computer-generated virtual objects onto real scenes. Achieving good virtual-real fusion requires estimating the scene illumination. For scenes with specular highlights, the different reflection components in the scene are exploited for effective illumination estimation. First, the image is decomposed into reflection components via a pixel-clustering-based image decomposition, yielding a diffuse map and a specular map; the diffuse map is further decomposed into intrinsic images, giving an albedo map and a shading map. The illumination of the input image is then computed from the decomposition results together with the scene depth. Finally, the virtual objects are rendered with a global illumination model, producing lighting effects in which the virtual and real scenes are highly fused.
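The paper's diffuse/specular split uses pixel clustering; as a far cruder illustrative baseline, the classic specular-free-image trick under the dichromatic reflection model subtracts the per-pixel channel minimum, assuming a roughly white illuminant:

```python
import numpy as np

def rough_diffuse_specular(img):
    """Very rough diffuse/specular split for a float RGB image in [0, 1],
    assuming an approximately white illuminant (dichromatic model)."""
    spec = img.min(axis=2, keepdims=True)   # channel-wise minimum per pixel
    diffuse = img - spec                    # pseudo specular-free image
    return diffuse, np.repeat(spec, 3, axis=2)
```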

9.
To address the difficulty of collision interaction between virtual and real objects in traditional augmented reality systems, a method is proposed that segments the scene in a depth image and builds proxy geometry from the segmentation results to enable virtual-real collision interaction. A depth sensor such as Kinect captures color and depth images of the current real scene; the dominant planar regions are identified by normal-based clustering of the depth image followed by plane fitting; the remaining clustered point-cloud regions are merged to obtain the other main object regions in the scene. A virtual plane is constructed as the proxy geometry for each identified dominant plane, and bounding boxes serve as the proxy geometry for the segmented objects. Superimposing these proxies on the real objects and assigning them physical properties makes it possible to simulate collision interaction between virtual and real objects. Experimental results show that the method can effectively segment simple scenes and thus enable virtual-real interaction.
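The plane-fitting step is commonly implemented with RANSAC over the clustered depth points; a minimal sketch on an (N, 3) point cloud (iteration count and inlier threshold are illustrative):

```python
import numpy as np

def ransac_plane(points, iters=500, thresh=0.01, rng=np.random.default_rng(0)):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with RANSAC."""
    best_inliers, best_model = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers
```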

10.
Mixed reality systems provide virtual-real fusion scenes in which virtual information is superimposed on the real environment in real time, with broad application prospects in education and training, cultural heritage preservation, military simulation, equipment manufacturing, surgery, and exhibitions. A mixed reality system first builds a virtual camera model from calibration data, then renders virtual content in real time according to the head-tracking results and the virtual camera pose and superimposes it on the real environment; users perceive depth through the graphical cues and virtual-object features rendered in the fused scene. However, the visual principles and perception theory guiding the rendering of virtual-real fusion scenes remain scarce, graphical cues cannot convey absolute depth, and the rendering dimensions and feature indicators of virtual objects are insufficient. This paper analyzes the visual principles relevant to rendering virtual-real fusion scenes and, from the perspective of user perception, surveys the drawing of graphical cues and the rendering of virtual objects in such scenes, concluding with an outlook on research trends and priorities in depth perception for virtual-real fusion scenes.

11.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a free-view video system prototype based on multiple sparse cameras that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (the virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.

12.
This paper presents an efficient image-based approach to navigating a scene based on only three wide-baseline uncalibrated images, without the explicit use of a 3D model. After automatically recovering corresponding points between each pair of images, an accurate trifocal plane is extracted from the trifocal tensor of these three images. Next, based on a small number of feature marks placed via a friendly GUI, correct dense disparity maps are obtained using our trinocular-stereo algorithm. Employing the barycentric warping scheme with the computed disparity, we can generate an arbitrary novel view within the triangle spanned by the three camera centers. Furthermore, after self-calibration of the cameras, 3D objects can be correctly augmented into the virtual environment synthesized by the tri-view morphing algorithm. Three applications of the tri-view morphing algorithm are demonstrated. The first is 4D video synthesis, which can be used to fill in the gap between a few sparsely located video cameras and synthetically generate a video from a virtual moving camera; this synthetic camera can be used to view the dynamic scene from a novel view instead of the original static camera views. The second application is multiple view morphing, where we can seamlessly fly through the scene over a 2D space constructed by more than three cameras. The last is dynamic scene synthesis using three still images, where several rigid objects may move in any orientation or direction; after segmenting the three reference frames into several layers, novel views of the dynamic scene can be generated by applying our algorithm. Finally, experiments illustrate that a series of photo-realistic virtual views can be generated to fly through a virtual environment covered by several static cameras.
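The barycentric warping step can be sketched compactly: a virtual camera inside the triangle of the three camera centers is assigned weights (w1, w2, w3), and corresponding pixel positions from the three reference views are blended with those same weights (correspondences assumed given by the dense disparity maps; illustrative only):

```python
import numpy as np

def blend_views(p1, p2, p3, w):
    """Warp corresponding pixel positions (each (N, 2)) from three reference
    views into the virtual view with barycentric weights w = (w1, w2, w3)."""
    w = np.asarray(w, float)
    w = w / w.sum()                     # barycentric weights sum to 1
    return w[0] * p1 + w[1] * p2 + w[2] * p3
```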

13.
Achieving convincing visual consistency between virtual objects and a real scene mainly relies on the lighting effects of the virtual-real composite scene. The problem becomes more challenging when lighting virtual objects in a single real image. Recently, scene understanding from a single image has made great progress. The estimated geometry, semantic labels, and intrinsic components provide mostly coarse information and are not accurate enough to re-render the whole scene. However, carefully integrating this estimated coarse information can lead to an estimate of the illumination parameters of the real scene. We present a novel method that uses the coarse information estimated by current scene understanding technology to estimate the parameters of a ray-based illumination model for lighting virtual objects in a real scene. Our key idea is to estimate the illumination via a sparse set of small 3D surfaces using normal and semantic constraints. The coarse shading image obtained by intrinsic image decomposition is taken as the irradiance of the selected small surfaces. The virtual objects are then illuminated by the estimated illumination parameters. Experimental results show that our method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance, illumination acquisition equipment, or imaging information of the image.
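The paper fits a ray-based illumination model; as a simplified illustration of the underlying idea (solve for lighting parameters from sparse surface normals and their shading values by least squares), here is a sketch that swaps in a first-order spherical-harmonics model instead:

```python
import numpy as np

def sh_basis(n):
    """First-order spherical harmonics basis (4 terms) for unit normals (N, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([np.ones_like(x), x, y, z], axis=1)

def estimate_lighting(normals, irradiance):
    """Least-squares fit of SH lighting coefficients from sparse surfaces:
    normals (N, 3) and their shading/irradiance values (N,)."""
    A = sh_basis(normals)
    coeffs, *_ = np.linalg.lstsq(A, irradiance, rcond=None)
    return coeffs  # relight virtual objects via sh_basis(n) @ coeffs
```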

14.
The StOMP algorithm is well suited to large-scale underdetermined applications in sparse vector estimation. It reduces computational complexity and has some attractive asymptotic statistical properties. However, its estimation speed comes at the cost of accuracy. This paper suggests an improvement on the StOMP algorithm that is more efficient at finding a sparse solution to large-scale underdetermined problems. Compared with StOMP, the modified algorithm not only estimates the parameters of the distribution of matched-filter coefficients more accurately, but also improves the estimation accuracy of the sparse vector itself. A theoretical success boundary, based on a large-system limit, is provided for approximate recovery of the sparse vector by the modified algorithm, which validates that it is more efficient than StOMP. Computations with simulated data show that, without a significant increase in computation time, the proposed algorithm greatly improves estimation accuracy.
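For reference, the baseline StOMP iteration being improved on looks roughly like this (a plain-StOMP sketch, not the paper's modified algorithm; tau and the stage count are illustrative):

```python
import numpy as np

def stomp(A, y, n_stages=10, tau=2.5):
    """Plain StOMP sketch: stagewise thresholding of matched-filter
    coefficients, then least squares on the accumulated support."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_stages):
        c = A.T @ r                          # matched filter
        sigma = np.linalg.norm(r) / np.sqrt(m)
        new = np.abs(c) > tau * sigma        # stagewise threshold
        if not new.any():
            break
        support |= new
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = x_s
        r = y - A @ x                        # update residual
    return x
```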

15.
盛斌, 吴恩华. 《软件学报》, 2008, 19(7): 1806-1816
This paper first derives and summarizes the transformation rules of the pixel depth field under 3D image transformations, and proposes a pixel visibility determination method based on the depth field and the epipolar principle. Building on this theory, an image-based modeling and rendering (IBMR) technique called virtual plane mapping is proposed, which can render the scene from any viewpoint in image space. During rendering, several virtual planes are first constructed in the scene according to the viewing direction; the pixels of the source depth image are transferred onto the virtual planes; through an intermediate transformation of the pixels on each virtual plane, the plane is converted into a planar texture; the virtual planes are then stitched together, so that the view is synthesized by planar texture mapping. The new method can also quickly obtain a panorama for the current viewpoint inside the depth image, enabling real-time viewpoint roaming. It offers a large viewpoint motion space and small storage requirements, can exploit the texture-mapping capability of graphics hardware, and reproduces 3D surface relief details and parallax effects, overcoming the limitations of previous similar algorithms.
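The depth-field transformation rules derived here govern the classic 3D warp at the core of any depth-image-based rendering method: back-project each source pixel with its depth, then reproject into the target view. A minimal pinhole sketch (K_src, K_dst, R, t are assumed calibration inputs):

```python
import numpy as np

def warp_pixel(u, v, depth, K_src, K_dst, R, t):
    """Back-project pixel (u, v) with depth from the source camera and
    reproject into the target camera (R, t map source to target frame)."""
    ray = np.linalg.inv(K_src) @ np.array([u, v, 1.0])
    X_src = depth * ray                  # 3D point in source camera frame
    X_dst = R @ X_src + t                # transform into target frame
    p = K_dst @ X_dst
    return p[:2] / p[2], X_dst[2]        # target pixel and its new depth
```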

16.
When constructing network simulation scenarios, complex physical terminals are difficult to simulate virtually and must instead be connected into the simulation, but existing access methods are complicated to deploy, suffer from network performance bottlenecks, and do not scale to large network scenarios. To solve these problems, an SDN (software-defined network) based construction method for virtual-real fused network simulation is proposed: an SDN controller combined with a flow-table construction algorithm manages the links and data connectivity between the virtual and physical networks. A prototype system for virtual-real fused network simulation was designed and implemented, in which the SDN controller lets virtual instances in the cloud platform and physical terminals outside the cloud form a single network, building virtual-real fused simulation scenarios. Experiments with the prototype show that the SDN-based approach enables efficient, large-scale networking of virtual instances and physical terminals with good network performance.
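The controller-plus-flow-table mechanism can be illustrated with a toy match-action lookup (illustrative only; not the prototype's flow-table construction algorithm, and all addresses and action names are made up):

```python
import ipaddress

# Toy flow table: match fields -> forwarding action (illustrative only).
flow_table = [
    ({"dst_ip": "10.0.1.0/24"}, "forward:tunnel_to_cloud"),
    ({"dst_ip": "192.168.0.0/16"}, "forward:physical_port_1"),
]

def lookup(packet):
    """Return the action of the first flow entry whose match covers the packet."""
    for match, action in flow_table:
        net = ipaddress.ip_network(match["dst_ip"])
        if ipaddress.ip_address(packet["dst_ip"]) in net:
            return action
    return "send_to_controller"          # table miss -> ask the SDN controller

print(lookup({"dst_ip": "10.0.1.7"}))    # forward:tunnel_to_cloud
```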

17.
Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see-through" monocular vision system. Tracking the objects in the scene amounts to calculating the pose between the camera and the objects; virtual objects can then be projected into the scene using this pose. In this paper, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives, including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences, including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
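The M-estimator-via-IRLS ingredient can be sketched generically: reweight residuals with a robust influence function and re-solve the weighted least-squares problem. A sketch with Tukey's biweight (illustrative; not the paper's full virtual visual servoing control law):

```python
import numpy as np

def tukey_weights(r, c=4.685):
    """Tukey biweight: down-weights large residuals, zeroes gross outliers."""
    s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale (MAD)
    u = r / (c * s)
    w = (1 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0
    return w

def irls(A, b, n_iter=10):
    """Solve A x ~ b robustly by iteratively reweighted least squares."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    for _ in range(n_iter):
        w = tukey_weights(b - A @ x)
        x, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * b, rcond=None)
    return x
```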

18.
In this paper, we describe a reconstruction method for multiple motion scenes, i.e., scenes containing multiple moving objects, from uncalibrated views. Assuming that the objects move with constant velocities, the method recovers the scene structure, the trajectories of the moving objects, the camera motion, and the camera intrinsic parameters (except skews) simultaneously. We focus on the case where the cameras have unknown and varying focal lengths while the other intrinsic parameters are known. The number of moving objects is detected automatically, without prior motion segmentation. The method is based on a unified geometrical representation of the static scene and the moving objects. It first performs a projective reconstruction using a bilinear factorization algorithm and then converts the projective solution to a Euclidean one by enforcing metric constraints. Experimental results on synthetic and real images are presented.
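One classic ingredient of such methods is the rank constraint on the measurement matrix; a sketch of the SVD truncation step for a Sturm-Triggs-style projective factorization of a static scene (the paper's bilinear algorithm additionally models the moving objects and estimates the projective depths):

```python
import numpy as np

def rank4_factorize(W_scaled):
    """Given a depth-scaled (3F, P) measurement matrix of F frames x P points,
    recover motion and structure factors via rank-4 SVD truncation."""
    U, s, Vt = np.linalg.svd(W_scaled, full_matrices=False)
    k = 4                                           # projective reconstruction rank
    cameras = U[:, :k] * np.sqrt(s[:k])             # (3F, 4) motion factor
    structure = np.sqrt(s[:k])[:, None] * Vt[:k]    # (4, P) shape factor
    return cameras, structure                       # defined up to a 4x4 homography
```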

19.

Unmanned underwater exploration in unconstrained environments is a challenging and non-trivial problem. Manual analysis of the large volume of images/videos captured by underwater stations/vehicles is a major bottleneck for the underwater research community, and an automated system for analyzing these videos is urgently needed for exploring the underwater space. In this paper, we present a method for extracting the shapes of objects present in unconstrained underwater environments. The proposed method extracts object shapes using saliency-gradient-based morphological active contour models. The uniqueness of the method is that the stopping condition for the active contour models is derived from the combination of the saliency gradient with the gradient of the scene. As a result, the proposed method is able to work in highly dynamic and unconstrained underwater environments. The results show that the proposed method extracts the shapes of man-made as well as natural objects under these environmental conditions, can detect the shapes of multiple objects present in an underwater scene, and succeeds in extracting the shapes of occluded objects. The results show that the proposed saliency-gradient-based morphological GAC extracts a minimum of 63% and an average of 90% of the objects with a misclassification rate of 4%, whereas the saliency-gradient-based morphological ACWE extracts a minimum of 62% and an average of 85% of the objects with a misclassification rate of 4%.
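scikit-image ships a morphological geodesic active contour that can stand in for the segmentation core; a sketch with a plain image-gradient stopping function, where the paper's saliency gradient would be fused into gimage (the file path is a placeholder, and the num_iter keyword is named iterations in older scikit-image versions):

```python
import numpy as np
from skimage import io, color
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

img = color.rgb2gray(io.imread("underwater_frame.png"))   # placeholder path

# Edge-stopping function from the plain image gradient; the paper combines
# this with a saliency gradient, which would be fused into `gimage` here.
gimage = inverse_gaussian_gradient(img)

init = np.zeros(img.shape, dtype=np.int8)
init[10:-10, 10:-10] = 1                # initial level set: full-frame box

seg = morphological_geodesic_active_contour(gimage, num_iter=200,
                                            init_level_set=init,
                                            smoothing=1, balloon=-1)
```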

