Similar Documents
20 similar documents found (search time: 171 ms)
1.
This work studies a technique for seamlessly compositing live footage of a program host with a computer-generated virtual environment in a virtual studio. It proposes an algorithm that fits a closed B-spline curve through equally spaced sample points on the video object's segmentation edge, using them as knot points, and builds a realistic planar-mesh rendering of the video object; the algorithm is applied to building realistic renderings of the host's image in the virtual studio. Experiments with live camera input of a single host verify the feasibility of the algorithm. The results show that it is effective and adaptive for simple 3D reconstruction of video objects based on fitting the segmentation edge.
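As a rough illustration of the curve-fitting step described above (not the paper's code), the sketch below fits a closed B-spline through equally spaced samples of a segmented contour using SciPy; the contour array, sample count, and smoothing value are assumptions for the example.

```python
# Minimal sketch: closed B-spline through equally spaced contour samples.
import numpy as np
from scipy import interpolate

def closed_bspline_from_contour(contour_xy, n_samples=64, smoothing=0.0):
    """contour_xy: (N, 2) ordered points along the segmentation edge."""
    # Take equally spaced sample points along the contour as knot data.
    idx = np.linspace(0, len(contour_xy) - 1, n_samples, endpoint=False).astype(int)
    samples = contour_xy[idx]
    # per=1 asks splprep for a periodic (closed) spline through the samples.
    tck, _ = interpolate.splprep([samples[:, 0], samples[:, 1]],
                                 s=smoothing, per=1)
    # Evaluate a dense closed curve to bound the planar mesh.
    u = np.linspace(0.0, 1.0, 400)
    x, y = interpolate.splev(u, tck)
    return np.column_stack([x, y])
```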

2.
Surface Morphology Reconstruction of Wrinkled Fabrics Based on Shape from Shading
Shape from shading is an important research topic in computer vision; applying it to 3D surface reconstruction of fabrics lays the groundwork for objective grading of fabric wrinkling. A new shape-from-shading reconstruction algorithm is proposed, and its implementation steps and computation are described. The algorithm is first iterated and validated on synthetic images, yielding fairly accurate reconstructions, and is then applied to real templates with high reconstruction accuracy. The algorithm can therefore be used for surface reconstruction of real fabrics, and feature values can be extracted from the resulting 3D fabric profile data. Experiments show that these features characterize the degree of wrinkling from different aspects and are essentially linearly correlated with it.
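The sketch below is only a toy illustration of the quantity that shape-from-shading schemes iterate on: the Lambertian reflectance map R(p, q) and the brightness error I − R. The light direction and the current height estimate are assumptions; the paper's actual iteration and fabric-specific steps are not reproduced.

```python
# Toy sketch: Lambertian reflectance map and the brightness error that
# shape-from-shading methods drive toward zero.
import numpy as np

def brightness_error(image, height, light_dir=(0.0, 0.0, 1.0)):
    """image: grayscale in [0, 1]; height: current surface estimate Z(x, y)."""
    L = np.asarray(light_dir, dtype=float)
    L = L / np.linalg.norm(L)
    # Surface gradients p = dZ/dx, q = dZ/dy give the normal (-p, -q, 1).
    q, p = np.gradient(height)          # np.gradient returns d/drow, d/dcol
    norm = np.sqrt(p**2 + q**2 + 1.0)
    # Lambertian reflectance: cosine between surface normal and light direction.
    R = (-p * L[0] - q * L[1] + L[2]) / norm
    R = np.clip(R, 0.0, 1.0)
    return image - R                     # the error an SFS iteration would reduce
```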

3.
Interactive Dynamic Volume Rendering and Its Acceleration Algorithm
Volume rendering is an emerging technique for visualizing sampled 3D data fields, with very wide application in medical imaging and scientific visualization. Because of the large amount of 3D data, however, its use is often limited by huge computational cost, so much research has focused on acceleration algorithms for static volume rendering and on balancing display speed against image quality for 3D volume data in medical image visualization. This paper proposes an interactive dynamic volume rendering algorithm that renders dynamically from arbitrary viewing distances and view directions and, based on an analysis of its complexity, a new acceleration algorithm that makes the dynamic rendering process almost real-time. Tests show that the algorithm is 4-5 times faster than the standard algorithm.
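Since the abstract does not spell out its specific speed-up, the sketch below shows only the generic front-to-back compositing with early ray termination that ray-casting volume renderers commonly build on; the per-ray RGBA samples are assumed inputs.

```python
# Generic sketch: front-to-back compositing along one ray with early termination.
import numpy as np

def composite_ray(samples_rgba, opacity_cutoff=0.99):
    """samples_rgba: (n, 4) colour + alpha sampled front to back along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for r, g, b, a in samples_rgba:
        weight = (1.0 - alpha) * a          # contribution of this sample
        color += weight * np.array([r, g, b])
        alpha += weight
        if alpha >= opacity_cutoff:          # early ray termination
            break                            # remaining samples are skipped
    return color, alpha
```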

4.
陈昊升  张格  叶阳东 《软件学报》2016,27(10):2661-2675
Addressing indoor/outdoor follow-up environment perception in fast 3D modeling, this paper proposes a multi-granularity follow-up environment perception algorithm based on optical images. The algorithm generates multi-granularity point-cloud models that fit the real 3D environment from several kinds of optical images, then compresses and uniformly represents the generated multi-granularity 3D models with a probabilistic octree. At each time node along the camera trajectory, the probabilistic octree representations of the multi-granularity point-cloud models are dynamically fused by Kalman filtering, finally producing a single temporally fused probabilistic octree 3D model (TFPOM) that dynamically fits the real environment at arbitrary granularity with little noise influence. Combined with pruning and merging strategies, the algorithm meets the environment-modeling requirements of multi-granularity fusion and representation, effectively compresses the storage of the environment model, achieves robust follow-up environment perception, and facilitates applications such as model-based visual navigation and augmented reality. Experiments show that on low-compute platforms containing multiple heterogeneous optical image sensors, typified by wearable devices, the algorithm obtains in real time a multi-granularity TFPOM that adequately fits the real dynamic environment, and visual navigation based on this model has small trajectory error.
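As a much-simplified stand-in for the probabilistic-octree idea (not the paper's TFPOM or its Kalman-filter fusion), the sketch below keys voxels by integer coordinates and updates a log-odds occupancy per voxel as point clouds arrive; the sensor-model constants and resolution are assumptions.

```python
# Simplified sketch: probabilistic occupancy map with log-odds Bayesian updates.
import numpy as np
from collections import defaultdict

L_HIT = 0.85                         # assumed log-odds increment for an occupied hit
L_MIN, L_MAX = -2.0, 3.5             # clamping keeps cells updatable later

def integrate_points(logodds_map, points, resolution=0.05):
    """points: (N, 3) world-frame points from one frame's point cloud."""
    keys = np.floor(points / resolution).astype(int)
    for key in map(tuple, keys):
        # Occupied-cell update; a full system would also trace and lower free space.
        logodds_map[key] = float(np.clip(logodds_map[key] + L_HIT, L_MIN, L_MAX))
    return logodds_map

occupancy = defaultdict(float)        # 0.0 log-odds == probability 0.5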

5.
A New Algorithm for Generating Cylindrical Panoramas and Its Implementation
Using image-based rendering (IBR) to build realistic virtual environments is a hot topic in current computer graphics research and a key technique in virtual reality. It replaces 3D-geometry-based modeling and rendering with simple image synthesis, speeding up display and providing an efficient way to build specific 3D scenes. Based on this approach, we propose a new algorithm for generating cylindrical panoramas: it achieves seamless stitching by quickly locating the matching region between adjacent images and thereby builds the cylindrical panorama. The algorithm is fast and efficient and produces realistic panoramas. The paper ends with several panorama examples generated with this algorithm.
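A hedged sketch of the two building blocks named in the abstract, cylindrical warping and overlap matching, is given below; the focal length, the brute-force offset search range, and the use of OpenCV's remap are assumptions, not the paper's implementation.

```python
# Sketch: warp images onto a cylinder, then score candidate overlap widths.
import numpy as np
import cv2

def warp_to_cylinder(img, focal):
    h, w = img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = (xs - w / 2) / focal                     # angle around the cylinder
    src_x = focal * np.tan(theta) + w / 2            # inverse map back to the source
    src_y = (ys - h / 2) / np.cos(theta) + h / 2
    return cv2.remap(img, src_x.astype(np.float32),
                     src_y.astype(np.float32), cv2.INTER_LINEAR)

def best_overlap(left, right, min_ov=40, max_ov=200):
    """Return the overlap width giving the smallest mean pixel difference."""
    scores = {ov: np.mean(np.abs(left[:, -ov:].astype(float) -
                                 right[:, :ov].astype(float)))
              for ov in range(min_ov, max_ov)}
    return min(scores, key=scores.get)
```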

6.
To investigate algorithms suitable for generating large-scale 3D terrain in virtual reality, this article analyzes the principles and characteristics of the uniform-grid algorithm and the ROAM algorithm, and implements terrain simulation and rendering with both algorithms on the Molehill rendering engine. The experiments show that, because ROAM dynamically computes the model's level of detail from the viewpoint position and so reduces the number of polygons rendered per frame, it improves processing efficiency for large-scale terrain data and suits the virtual modeling needs of large-scale 3D terrain scenes.
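The sketch below is not ROAM's split/merge machinery, only the view-dependent idea the article credits it with: picking a per-patch level of detail from the distance to the viewpoint so that far patches contribute fewer triangles; the error constants are assumptions.

```python
# Sketch: choose the coarsest terrain-patch LOD whose screen-space error is acceptable.
import numpy as np

def patch_lod(patch_center, eye, base_error=1.0, max_level=6, threshold=0.01):
    """Return a detail level in [0, max_level]; higher means more triangles."""
    distance = max(np.linalg.norm(np.asarray(patch_center, float) -
                                  np.asarray(eye, float)), 1e-6)
    for level in range(max_level + 1):
        # Geometric error halves with each refinement; screen error scales with 1/distance.
        if (base_error / 2 ** level) / distance <= threshold:
            return level
    return max_level
```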

7.
By analyzing computational virtual drug screening techniques, the Dock scoring-function framework, and the PocketV.2 scoring method, and by studying the Dock scoring functions and the PocketV.2 method together with their source code, a pharmacophore-based computational virtual drug screening method is proposed. Using a point-pattern matching algorithm over intermolecular distances and properties that simulates the real ligand-receptor interaction, pharmacophore-based virtual screening is implemented. The algorithm was run and tested on a high-performance computing cluster; the results show strong reliability and high accuracy.
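As a minimal, hypothetical illustration of distance-based point-pattern matching (the Dock/PocketV.2 scoring and feature typing are not reproduced), the sketch below accepts a ligand if its feature points can be permuted so that every pairwise distance matches the pharmacophore within a tolerance.

```python
# Sketch: brute-force point-pattern matching on pairwise distance matrices.
import numpy as np
from itertools import permutations

def matches_pharmacophore(ligand_pts, pharma_pts, tol=0.5):
    """Both arguments are (k, 3) coordinate arrays with the same small k."""
    target = np.linalg.norm(pharma_pts[:, None] - pharma_pts[None, :], axis=-1)
    for perm in permutations(range(len(ligand_pts))):
        cand = ligand_pts[list(perm)]
        dists = np.linalg.norm(cand[:, None] - cand[None, :], axis=-1)
        if np.all(np.abs(dists - target) <= tol):   # all pair distances agree
            return True
    return False
```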

8.
A photorealistic single-image virtual try-on technique for head accessories such as glasses and hats is proposed; its keys are 3D registration of the virtual accessory and compositing of the virtual and real images. First, an algorithm is proposed that combines facial landmark detection with rigid pose estimation to solve for the 3D registration of the face in a single image. Then, a method is described that uses pixel color blending and depth-buffer testing to handle occlusion relations and model materials in virtual-real compositing. The 3D registration algorithm is quantitatively evaluated on the AFLW (Annotated Facial Landmarks in the Wild) face database, and the results show that its accuracy meets the requirements of virtual try-on. Experiments under large pose changes and partial occlusion show that the proposed technique is fast and accurate, with a natural and realistic try-on effect.
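A hedged sketch of the registration step, recovering head pose from detected facial landmarks with a PnP solver, follows; the generic 3D face-model points, the focal-length guess, and the zero-distortion assumption are all assumptions of this example, and the paper's own detector and refinement are not reproduced.

```python
# Sketch: head pose from facial landmarks via a PnP solver.
import numpy as np
import cv2

def estimate_head_pose(landmarks_2d, model_points_3d, image_size):
    """landmarks_2d: (N, 2) detected points; model_points_3d: (N, 3) generic face model."""
    h, w = image_size
    focal = w  # crude focal-length guess; a calibrated camera would be better
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype=float)
    dist_coeffs = np.zeros(4)  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points_3d.astype(float),
                                  landmarks_2d.astype(float),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return ok, rvec, tvec  # rotation (Rodrigues vector) and translation
```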

9.
Application of Augmented Reality Technology in Virtual Studio Systems
A virtual studio combines virtual reality with video compositing; its set is a computer-generated 3D scene. Because demands on the realism and complexity of virtual scenes keep growing without bound, real-time display of the scene becomes very difficult, and building the virtual space with image-based rendering alleviates this problem. In a virtual studio, actors need to interact with moving 3D virtual objects, and augmented reality techniques make it possible to fuse the 3D virtual objects with the image-based virtual scene.

10.
Modeling a 3D environment from live images of real scenes is currently an internationally active research topic. Based on a reasonable motion model, this paper proposes and implements a two-step method for building a panoramic 3D environment model from an image sequence captured under camera motion that includes jitter. First, motion filtering and motion decomposition yield a motion-stabilized image sequence; then, feature-extraction-free algorithms for accurate spatio-temporal texture orientation estimation, depth boundary determination, and occlusion recovery build a realistic 3D environment model of the whole natural scene. Two representations of the 3D panoramic imagery are proposed, a non-array depth-layered region representation and an array-style depth-layered backdrop representation, usable for extracting natural landmarks for global robot localization and for image synthesis in virtual reproduction of real environments. The work generalizes and combines epipolar-plane-image and panoramic-image methods and relaxes the motion requirements, so the method can be applied to bumpy outdoor road environments. Compared with existing motion-layering methods, it avoids the local-minimum problem of their iterative procedures and offers high computational and storage efficiency, strong adaptability, and good robustness.

11.
Calibration-free augmented reality in perspective
This paper deals with video-based augmented reality and proposes an algorithm for augmenting a real video sequence with views of graphics objects, without metric calibration of the video camera, by representing the motion of the video camera in projective space. A virtual camera, by which views of graphics objects are generated, is attached to a real camera by specifying image locations of the world coordinate system of the virtual world. The virtual camera is decomposed into calibration and motion components in order to make full use of graphics tools. The projective motion of the real camera recovered from image matches has the function of transferring the virtual camera and makes the virtual camera move according to the motion of the real camera. The virtual camera also follows the change of the internal parameters of the real camera. This paper shows the theoretical and experimental results of our application of nonmetric vision to augmented reality.
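The calibration/motion split mentioned above is, in textbook form, an RQ factorisation of the projection matrix; the sketch below shows that standard construction (not the paper's code), assuming a 3x4 matrix P = K[R | t] is available.

```python
# Sketch: decompose a 3x4 projection matrix into intrinsics K and motion (R, t).
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """P: 3x4 camera projection matrix with P = K [R | t]."""
    K, R = rq(P[:, :3])                        # upper-triangular K times rotation R
    # Force a positive diagonal in K so the factorisation is unique.
    signs = np.diag(np.sign(np.diag(K)))
    K, R = K @ signs, signs @ R
    t = np.linalg.solve(K, P[:, 3])            # P[:, 3] = K t
    return K / K[2, 2], R, t
```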

12.
A Panorama-Based Virtual Environment Walkthrough System
There are two approaches to realizing virtual reality. Traditionally, 3D graphics methods are used for modeling and rendering; this requires tedious modeling work and expensive dedicated graphics hardware, and 3D models struggle to represent natural scenery realistically. Image-based rendering is a newer way to build virtual reality systems that overcomes these drawbacks and has been increasingly widely used in recent years. The article discusses the implementation of an image-based virtual environment walkthrough system, analyzes the model of such systems, and introduces key techniques in the implementation, including camera calibration, image stitching, and real-time image transformation.

13.
In this paper, we propose to study the integration of a new source of a priori information, which is the virtual 3D city model. We study this integration for two tasks: vehicle geo-localization and obstacle detection. A virtual 3D city model is a realistic representation of the environment in which a vehicle evolves. It is a database of geographical and textured 3D data. We describe an ego-localization method that combines measurements from a GPS (Global Positioning System) receiver, odometers, a gyrometer, a video camera and a virtual 3D city model. GPS is often considered the main sensor for vehicle localization, but in urban areas GPS is imprecise or even unavailable. GPS data are therefore fused with odometer and gyrometer measurements using an Unscented Kalman Filter (UKF). However, during long GPS outages, localization with only odometers and a gyrometer drifts. Thus, we propose a new observation of the location of the vehicle, based on matching the current image acquired by an on-board camera against the virtual 3D city model of the environment. We also propose an obstacle detection method based on comparing the image acquired by the on-board camera with the image extracted from the 3D model, using the following principle: the image acquired by the on-board camera contains the possible dynamic obstacles, whereas they are absent from the 3D model. The two proposed concepts are tested on real data.
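As a deliberately simplified sketch of the fusion idea (the paper uses an Unscented Kalman Filter; this is a plain linear Kalman filter on a 2D position state), odometry drives the prediction and GPS fixes correct it when available, which is why drift grows during outages. The noise matrices are assumptions.

```python
# Simplified sketch: odometry prediction + GPS correction with a linear Kalman filter.
import numpy as np

def predict(x, P, odo_delta, Q):
    """x: (2,) position estimate, P: covariance, odo_delta: motion from odometry."""
    x = x + odo_delta                  # dead-reckoning prediction
    P = P + Q                          # uncertainty grows every step
    return x, P

def gps_update(x, P, z, R):
    """z: (2,) GPS position fix with measurement covariance R."""
    S = P + R                          # innovation covariance (H = identity here)
    K = P @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - x)
    P = (np.eye(2) - K) @ P
    return x, P
```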

14.
Illumination is one of the key components in the creation of realistic renderings of scenes containing virtual objects. In this paper, we present a set of novel algorithms and data structures for visualization, processing and rendering with real world lighting conditions captured using High Dynamic Range (HDR) video. The presented algorithms enable rapid construction of general and editable representations of the lighting environment, as well as extraction and fitting of sampled reflectance to parametric BRDF models. For efficient representation and rendering of the sampled lighting environment function, we consider an adaptive (2D/4D) data structure for storage of light field data on proxy geometry describing the scene. To demonstrate the usefulness of the algorithms, they are presented in the context of a fully integrated framework for spatially varying image based lighting. We show reconstructions of example scenes and resulting production quality renderings of virtual furniture with spatially varying real world illumination including occlusions.

15.
Liu  Feng  Chen  Zhigang  Wang  Jie 《Multimedia Tools and Applications》2019,78(4):4527-4544

Traditional image object classification and detection algorithms and strategies cannot cope with the demands of video image acquisition and processing. Deep learning deliberately simulates the hierarchical structure of the human brain and establishes a mapping from low-level signals to high-level semantics, achieving hierarchical feature representations of data. Deep learning has powerful visual information processing ability, which has made it a leading technology and an active research topic, both domestically and internationally, for dealing with this challenge. To address the problem of locating targets in space in video surveillance systems, and the time this consumes, in this paper we propose an algorithm based on RNN-LSTM deep learning. At the same time, exploiting the consistency between OpenGL perspective imaging and photogrammetry, we use 3D scene simulation imaging: relying on the correspondence between video images and simulated images, we locate the target object. In the 3D virtual scene, we set up a virtual camera to simulate the imaging process of the actual camera, substitute the pixel coordinates of the surveillance target in the video image into the simulated image, and then invert the spatial coordinates of the target through the inverse process of the virtual imaging. The experimental results show that the detection of target objects has high accuracy, which is an important reference for outdoor target localization from video surveillance images.

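A minimal sketch of the "inverse of the virtual imaging" step follows: back-project a pixel through the camera and intersect the viewing ray with a ground plane to recover world coordinates. The intrinsics K, rotation R, camera centre C, and the flat-ground assumption are all assumptions of this example, not the paper's implementation.

```python
# Sketch: invert the imaging of a pixel by intersecting its ray with the ground plane.
import numpy as np

def pixel_to_ground(u, v, K, R, C, ground_z=0.0):
    """Return the 3D point where the ray through pixel (u, v) hits z = ground_z."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
    ray_world = R.T @ ray_cam                             # rotate into the world frame
    if abs(ray_world[2]) < 1e-9:
        raise ValueError("ray is parallel to the ground plane")
    s = (ground_z - C[2]) / ray_world[2]                  # ray parameter at the plane
    return C + s * ray_world
```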

16.
Registering a virtual scene with a real scene captured by a video camera has a number of applications including visually guided robotic navigation, surveillance, military training and operation. The fundamental problem involves several challenging research issues including finding corresponding points between the virtual and the real scene and camera calibration. This paper presents our research in defining and mapping a set of reliable image features for registering the two imageries, extracting and selecting reliable control points for the construction of intrinsic and extrinsic camera parameters. A number of innovative algorithms are presented followed by extensive experimental analysis. An application of registering virtual database image with video image is presented. The algorithms we developed for calculating and matching linear structured features and selecting reliable control points are applicable to image registration beyond virtual and real imageries.

17.
HVS: A Virtual Reality System Based on Real-Scene Images
Most virtual reality systems are based on computer graphics: a 3D geometric model of the virtual scene is first built from polygons, and the computer then renders in real time the virtual scene seen from the user's viewpoint and viewing direction. HVS takes a different approach: the computer automatically stitches, warps, and organizes many discrete real-scene images or continuous video to generate the virtual scene. Such a virtual scene has photographic visual quality and is what we call a virtual real-scene space. The virtual real-scene space gives the user walkthrough capabilities such as moving forward and backward, looking up and down, 360-degree viewing, and looking at objects from near and far, and running it does not require high-performance graphics hardware.

18.
Modelling virtual cities dedicated to behavioural animation
In order to populate virtual cities, it is necessary to specify the behaviour of dynamic entities such as pedestrians or car drivers. Since a complete mental model based on vision and image processing cannot be constructed in real time using purely geometrical information, higher levels of information are needed in a model of the virtual environment. For example, the autonomous actors of a virtual world would exploit the knowledge of the environment topology to navigate through it. In this article, we present a model of virtual urban environments using structures and information suitable for behavioural animations. Thanks to this knowledge, autonomous virtual actors can behave like pedestrians or car drivers in a complex city environment. A city modeler has been designed, using this model of urban environment, and enables complex urban environments for behavioural animation to be automatically produced.

19.
Automatic 3D animation generation techniques are becoming increasingly popular in different areas related to computer graphics such as video games and animated movies. They help non-professionals automate the filmmaking process with little or no intervention from animators and computer graphics programmers. Based on specified cinematographic principles and filming rules, they plan the sequence of virtual cameras that best renders a 3D scene. In this paper, we present an approach for automatic movie generation using linear temporal logic to express these filming and cinematography rules. We consider the filming of a 3D scene as a sequence of shots satisfying given filming rules, conveying constraints on the desirable configuration (position, orientation, and zoom) of virtual cameras. The selection of camera configurations at different points in time is understood as a camera plan, which is computed using a temporal-logic based planning system (TLPlan) to obtain a 3D movie. The camera planner is used within an automated planning application for generating 3D task demonstrations involving a teleoperated robot arm on the International Space Station (ISS). A typical task demonstration involves moving the robot arm from one configuration to another. The main challenge is to automatically plan the configurations of virtual cameras to film the arm in a manner that conveys the best awareness of the robot trajectory to the user. The robot trajectory is generated using a path-planner. The camera planner is then invoked to find a sequence of configurations of virtual cameras to film the trajectory.

20.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple sparse camera based free view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow objects selected by the user. Our method combines a simple geometric model of the scene composed of planes (virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets, to generate novel perspective-corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent texture-mapped billboards are used to render the moving objects at their correct locations, and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.
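As a small sketch of the billboard step described above (not the prototype's renderer), the function below builds a camera-facing quad at a tracked object's position onto which a view-dependent texture could be mapped; the sizes and the world-up axis are assumptions.

```python
# Sketch: corners of a camera-facing billboard quad at a tracked object's position.
import numpy as np

def billboard_corners(center, cam_pos, width, height, world_up=(0.0, 0.0, 1.0)):
    """Return the 4 corner points of a quad centred at `center` facing the camera."""
    normal = np.asarray(cam_pos, float) - np.asarray(center, float)
    normal /= np.linalg.norm(normal)
    right = np.cross(world_up, normal)        # degenerate if the camera is straight above
    right /= np.linalg.norm(right)
    up = np.cross(normal, right)              # completes the orthonormal frame
    c = np.asarray(center, float)
    half_r, half_u = 0.5 * width * right, 0.5 * height * up
    return [c - half_r - half_u, c + half_r - half_u,
            c + half_r + half_u, c - half_r + half_u]
```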
