Similar Documents
 Found 19 similar documents (search time: 375 ms)
1.
IBR (Image-Based Rendering) is an important area of computer graphics, and a large number of techniques have been proposed to reproduce previously rendered or real images, for example by warping input images to interpolate between them; such techniques rely on scene depth information or on correlations across multiple images. Light field rendering requires neither depth information nor image correspondence: a camera records what the scene looks like from every position and viewing direction, and at render time a new image is produced for each viewpoint simply by compositing the relevant captured images. In this paper we study a software solution that applies light field rendering to virtual exhibition.
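The paper itself gives no code, but the ray-interpolation step at the heart of light field rendering is easy to sketch. Below is a minimal illustration, assuming a two-plane parameterized light field stored as a 4-D NumPy array L(u, v, s, t); the function name and array layout are our own conventions, not the paper's:

```python
import numpy as np

def render_ray(lf, u, v, s, t):
    """Sample one ray from a two-plane light field L(u, v, s, t)
    by bilinear interpolation over the camera plane (u, v).

    lf:   4-D array of shape (U, V, S, T) holding ray radiance,
          (u, v) indexing the camera plane, (s, t) the image plane.
    u, v: fractional camera-plane coordinates of the desired ray.
    s, t: integer image-plane coordinates.
    """
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1 = min(u0 + 1, lf.shape[0] - 1)
    v1 = min(v0 + 1, lf.shape[1] - 1)
    du, dv = u - u0, v - v0
    # Blend the four nearest captured rays; no depth or correspondence needed.
    return ((1 - du) * (1 - dv) * lf[u0, v0, s, t]
            + (1 - du) * dv       * lf[u0, v1, s, t]
            + du       * (1 - dv) * lf[u1, v0, s, t]
            + du       * dv       * lf[u1, v1, s, t])
```

A full novel view is obtained by evaluating this interpolation for the ray through every output pixel.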

2.
IBR (Image-Based Rendering) is an important area of computer graphics, and a large number of techniques have been proposed to reproduce previously rendered or real images, for example by warping input images to interpolate between them; such techniques rely on scene depth information or on correlations across multiple images. Light field rendering requires neither depth information nor image correspondence: a camera records what the scene looks like from every position and viewing direction, and at render time a new image is produced for each viewpoint simply by compositing the relevant captured images. In this paper we study a software solution that applies light field rendering to virtual exhibition.

3.
We propose a video rendering algorithm that mixes light-field-based and geometry-based rendering. The algorithm computes the starting frame with light field rendering and updates the foreground from the previous frame at the new viewpoint; Gaussian mixture background modeling is combined with scene geometry computation to obtain the foreground region of the current frame at the new viewpoint, avoiding repeated rendering of the slowly changing background that occupies most of the image and thereby improving video rendering efficiency. To eliminate accumulated error, a "starting frame + subsequent frames" loop is adopted; within each loop the distribution of foreground points in the scene is collected and used to adaptively partition the scene layers for the next loop. Experimental results show that the algorithm is efficient and produces images of good quality.
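The Gaussian-mixture background step in this abstract maps onto standard tooling. Below is a minimal sketch using OpenCV's MOG2 background subtractor; the file name "scene.mp4" and all parameter values are illustrative placeholders, and the paper's coupling of the foreground mask with scene geometry is not reproduced here:

```python
import cv2

# Placeholder frame source; any iterable of BGR frames would do.
cap = cv2.VideoCapture("scene.mp4")

# Per-pixel mixture-of-Gaussians background model, updated online;
# it separates the slowly changing background from moving foreground.
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                              detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_model.apply(frame)                 # 255 where foreground
    # Only these foreground pixels would be re-rendered for the new view;
    # the background can be reused from the cached starting frame.
    fg_only = cv2.bitwise_and(frame, frame, mask=fg_mask)
cap.release()
```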

4.
A light field camera uses a microlens array to measure the radiance and direction of all rays in a scene; because it captures the four-dimensional light field of the scene, it greatly extends the capability of current commercial cameras. By processing the recorded light field, such cameras can refocus the scene after capture and recover 3D information. Combining a light field camera with the focal stack transform, this paper introduces a new super-resolution focal stack and proposes a technique that simultaneously estimates a depth map and an all-in-focus super-resolution image of the scene. Experimental results show that good refocused images and depth maps are obtained.
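As a rough illustration of the depth-from-focal-stack idea (not the paper's super-resolution focal stack transform), the sketch below picks, per pixel, the focal slice with the strongest local focus response; the focus measure and window size are our assumptions:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def depth_from_focal_stack(stack, window=9):
    """Depth-from-focus over a focal stack.

    stack: (N, H, W) grayscale slices refocused at N known depths.
    Returns per-pixel indices of the sharpest slice and an all-in-focus
    composite.  A simple focus measure (locally averaged squared
    Laplacian) stands in for the paper's focal-stack transform.
    """
    sharp = np.stack([uniform_filter(laplace(s.astype(float)) ** 2, window)
                      for s in stack])
    depth_idx = np.argmax(sharp, axis=0)                    # (H, W) labels
    all_in_focus = np.take_along_axis(stack, depth_idx[None], axis=0)[0]
    return depth_idx, all_in_focus
```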

5.
Objective: Traditional refocusing algorithms based on superimposing sub-views suffer from severe aliasing, while refocusing methods based on light field reconstruction are computationally expensive and hard to accelerate. We therefore design and implement a novel, efficient, end-to-end light field refocusing algorithm based on a conditional generative adversarial network. Method: Taking a light field image as input, a disparity map is computed and the required circle-of-confusion (COC) image is derived from it; the central sub-view of the light field is then defocus-rendered according to the COC image, producing a refocused image whose focal plane and depth of field correspond to the COC image. Results: Evaluated against related algorithms on a proposed synthetic dataset and a real dataset, the algorithm generates high-quality refocused images. Quantitative analysis with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) shows an average PSNR gain of 1.82 dB and an average SSIM gain of 0.02 over the traditional refocusing algorithm, and gains of 7.92 dB PSNR and 0.08 SSIM over an algorithm that likewise uses COC images together with anisotropic filtering. Conclusion: The algorithm generates a disparity map of the input light field according to the refocusing and depth-of-field requirements and derives the corresponding COC image; the proposed conditional GAN then defocus-renders the central sub-view according to different COC images to obtain the corresponding refocused images. Compared with earlier algorithms, it eliminates aliasing, improves the defocus quality, and significantly reduces the computational cost.
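The COC image that conditions the network can be approximated from the disparity map with a simple linear, thin-lens-style model. The sketch below is an assumed illustrative formula, not the formula used in the paper:

```python
import numpy as np

def coc_map(disparity, focus_disparity, aperture=1.0, max_coc=21.0):
    """Per-pixel circle-of-confusion (COC) diameter from a disparity map.

    Pixels whose disparity matches the chosen focal plane get COC ~ 0;
    blur grows with distance from that plane, scaled by a virtual
    aperture.  The linear model and the clamp value are illustrative
    assumptions only.
    """
    coc = aperture * np.abs(disparity - focus_disparity)
    return np.clip(coc, 0.0, max_coc)
```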

6.
Existing light field acquisition systems under-sample angular information, and the resulting aliasing degrades light field image quality. This paper analyses a spatial-domain model of light field aliasing within a 2D light field framework and proposes an aliasing detection method based on varying the discrete aperture sampling density. The method detects aliasing by computing the coefficient of variation of the point sets imaged under random masked apertures (RMA); its distinguishing feature is that no prior knowledge of scene depth or texture is required. Using a planar camera array as the acquisition platform, the method is validated on several real light field datasets, and the aliasing is corrected on the basis of the detection results.
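The coefficient-of-variation test at the core of the detection method is straightforward to sketch. In the snippet below, samples holds the radiance values that one scene point produces under different random aperture masks; the threshold is an assumed value, not one reported by the authors:

```python
import numpy as np

def aliasing_score(samples):
    """Coefficient of variation (std / mean) of the radiance values a
    scene point images to under different random masked apertures.

    An alias-free point images consistently across apertures (low CV);
    an aliased point scatters (high CV).
    """
    m = samples.mean()
    return samples.std() / (abs(m) + 1e-8)   # guard against zero mean

def is_aliased(samples, threshold=0.15):     # illustrative threshold
    return aliasing_score(samples) > threshold
```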

7.
Boundary Light Fields
We propose a new image-based rendering method, the boundary light field. The method builds on the 3D plenoptic function and combines it with scene geometry, overcoming several shortcomings of existing IBR walkthrough systems: an adaptive plenoptic sampling scheme organizes the sampled data according to scene complexity or user requirements, reducing the volume of scene data; the participation of scene geometry corrects large depth distortions; and the new organization of the sampled data removes restrictions on the walkthrough range. The method can be applied effectively in walkthrough systems for virtual or real scenes.

8.
To address aliasing in existing light field images, a multi-scale anti-aliasing light field rendering method is proposed. The cause of aliasing induced by angular down-sampling is first analysed, a mathematical description of the relation between angular sampling rate and image aliasing is given, and the factors affecting aliasing are identified. On this basis, multi-scale light field image gradient fusion is used to reduce aliasing, with the advantage that no prior scene depth is required. Anti-aliased rendering experiments on both synthetic and real light field datasets show that the method outperforms classical light field rendering and produces results close to those of depth-assisted rendering.
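One way to picture multi-scale gradient fusion is a per-pixel selection among renderings produced at several angular scales, guided by local gradient magnitude. The selection rule below is a stand-in of our own; the paper's actual fusion rule is not reproduced:

```python
import numpy as np

def fuse_multiscale(renders):
    """Fuse light field renderings produced at several angular scales.

    renders: list of (H, W) images of the same view; each angular scale
    aliases differently.  Per pixel, keep the candidate whose local
    gradient magnitude is smallest (an assumed, illustrative criterion).
    """
    stack = np.stack(renders)                       # (N, H, W)
    gy, gx = np.gradient(stack.astype(float), axis=(1, 2))
    grad_mag = np.hypot(gx, gy)
    best = np.argmin(grad_mag, axis=0)              # least-aliased scale
    return np.take_along_axis(stack, best[None], axis=0)[0]
```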

9.
《计算机科学与探索》2019,(11):1911-1924
Image retargeting is a basic problem in image editing that restructures image content according to the requirements of a specific application. Retargeting of light field images beyond angular and spatial super-resolution has not yet been studied. This paper proposes a baseline editing method for light field images. The algorithm has three main steps: first, the light field image is calibrated to obtain camera parameters, and a disparity map is estimated for every sub-view; then, the light field is retargeted according to the baseline editing requirements, i.e., every sub-view is projected to the corresponding sub-view of the target light field; finally, a deep learning model optimizes the target light field, inpainting the hole regions caused by disocclusion during direct retargeting. Experimental results show that the algorithm achieves baseline-edited light field retargeting and produces high-quality target light fields. It can support a range of light field editing applications, including light field stitching, copying objects between images, and stereoscopic display of light fields.
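The projection step of baseline editing can be sketched as a disparity-driven forward warp of each sub-view. The simplified horizontal-parallax model and all names below are our assumptions, not the paper's formulation:

```python
import numpy as np

def reproject_subview(img, disparity, baseline_scale):
    """Forward-warp one sub-view to its position in the target light field.

    Under a horizontal-parallax model, scaling the baseline by s shifts
    each pixel by (s - 1) * disparity, where `disparity` is that
    sub-view's per-pixel disparity (an assumed, simplified model).
    Disoccluded pixels stay 0 and would be inpainted by the paper's
    deep-learning stage.
    """
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    shift = (baseline_scale - 1.0) * disparity          # (H, W)
    xs = np.clip(np.round(np.arange(w)[None, :] + shift).astype(int), 0, w - 1)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    out[ys, xs] = img            # splat; later writes win on collisions
    return out
```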

10.
Global illumination rendering is increasingly common in virtual reality applications, but the high time cost of high-resolution sampling severely degrades the user experience. To address this, a two-stage convolutional neural network model is proposed that denoises low-sample-rate rendering results in real time and yields higher-quality rendered images. The model has two stages. To overcome the temporal instability of existing denoisers on rendered sequences, the first stage uses a recurrent convolutional network with multi-layer skip connections that processes renderings sequence by sequence, ensuring temporally stable results; to remedy the residual artifacts of temporal denoising, the second stage cascades several image-denoising convolutional layers to refine the output. To speed up training and further improve denoising, low-sample-rate auxiliary buffers such as albedo, normal, depth, and shadow maps are used as additional inputs. The model combines the strengths of existing image and video denoisers. Denoising experiments on five custom scenes show good temporal stability and denoising quality, with clearly fewer specular artifacts than the mainstream OptiX denoiser; in structural similarity (SSIM) between the denoised result and the target image, the model improves on the OptiX denoiser by 5.8%, 12.2%, 1.5%, 4.7%, and 1.8% on the five scenes.
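A compact PyTorch sketch of the two-stage idea follows; channel counts, depths, and the residual output are illustrative assumptions, and the paper's skip-connection layout and training details are not reproduced:

```python
import torch
import torch.nn as nn

class TwoStageDenoiser(nn.Module):
    """Sketch: a recurrent conv stage for temporal stability followed by
    a plain conv refinement stage.  Architecture is illustrative only."""

    def __init__(self, aux_channels=10):  # albedo(3)+normal(3)+depth(1)+shadow(3)
        super().__init__()
        in_ch = 3 + aux_channels + 16      # noisy RGB + aux buffers + hidden state
        self.recurrent = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1),
        )
        self.to_rgb = nn.Conv2d(16, 3, 3, padding=1)
        self.refine = nn.Sequential(        # second stage: spatial cleanup
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, noisy, aux, hidden):
        hidden = self.recurrent(torch.cat([noisy, aux, hidden], dim=1))
        rgb = self.to_rgb(hidden)
        return noisy + self.refine(rgb), hidden   # residual output + new state
```

The hidden state would start as zeros of shape (B, 16, H, W) and be carried across frames; carrying it is what gives the first stage its temporal stability.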

11.
In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space–time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease of visual artifacts and a high resulting rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.

12.
The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or utilizes image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well-studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography, by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time varying plenoptic function to different known and new camera designs, we can establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera for which the structure from motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full field of view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene which leads to fast and robust algorithms for 3D Photography. In between are multiple view cameras with a large field of view which we have built, as well as omni-directional sensors.

13.
Depth and visual hulls are useful for quick reconstruction and rendering of a 3D object based on a number of reference views. However, for many scenes, especially multi-object, these hulls may contain significant artifacts known as phantom geometry. In depth hulls the phantom geometry appears behind the scene objects in regions occluded from all the reference views. In visual hulls the phantom geometry may also appear in front of the objects because there is not enough information to unambiguously imply the object positions. In this work we identify which parts of the depth and visual hull might constitute phantom geometry. We define the notion of reduced depth hull and reduced visual hull as the parts of the corresponding hull that are phantom-free. We analyze the role of the depth information in identification of the phantom geometry. Based on this, we provide an algorithm for rendering the reduced depth hull at interactive frame-rates and suggest an approach for rendering the reduced visual hull. The rendering algorithms take advantage of modern GPU programming techniques. Our techniques bypass explicit reconstruction of the hulls, rendering the reduced depth or visual hull directly from the reference views.

14.
Indirect illumination involving visually rich participating media such as turbulent smoke and loud explosions contributes significantly to the appearance of other objects in a rendered scene. However, previous real-time techniques have focused only on the appearance of the media directly visible from the viewer; appearances that can be seen indirectly, over reflective surfaces, have attracted little attention. In this paper, we present a real-time rendering technique for such indirect views involving participating media. To achieve real-time performance when computing indirect views, we leverage layered polygonal area lights (LPALs), obtained by slicing the media into multiple flat layers. Using this representation, the radiance entering each surface point from each slice of the volume is evaluated analytically, enabling instant calculation. The analytic solution can be derived for standard bidirectional reflectance distribution functions (BRDFs) based on microfacet theory. Accordingly, our method is sufficiently robust to work on surfaces with arbitrary shapes and roughness values. In addition, we propose a quadrature method for more accurate rendering of scenes with dense volumes, and a transformation of the domain of volumes that simplifies the calculation and implementation of the proposed method. By taking advantage of these computation techniques, the proposed method achieves real-time rendering of indirect illumination for emissive volumes.

15.
Objective: A light field camera records the spatial and angular information of a scene in a single exposure, yielding multi-view images and refocused images, which gives it unique advantages for depth estimation. Occlusion is one of the difficult problems in light field depth estimation: existing methods either ignore occlusion or consider only single occlusion, and they fail for scene points with multiple occluders. Targeting the occlusion problem, we propose an occlusion-robust light field depth estimation algorithm within a multi-view stereo matching framework. Method: Refocused images are first obtained with a digital refocusing algorithm, the occlusion types of the scene are defined, and correlation cost volumes are constructed. The best cost volume is then selected adaptively by the minimum-cost principle, and a local depth map is solved. Finally, a Markov random field combines the cost volume with a smoothness constraint, and a globally optimized depth map is obtained via graph cuts and weighted median filtering, improving depth estimation accuracy. Results: Experiments on the HCI synthetic dataset and the Stanford Lytro Illum real-scene dataset cover both local and global depth estimation. Compared with other state-of-the-art methods, the proposed method performs better on occluded scenes, reducing the mean squared error by about 26.8% on average. Conclusion: The method handles various occlusion situations effectively, preserves depth-map edges better, produces more accurate depth estimates, and runs faster. It is, however, designed for Lambertian planar scenes and has limitations on non-Lambertian scenes containing specular highlights.
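The correlation cost volume construction can be sketched as variance-across-views after disparity-dependent refocusing; the occlusion-adaptive view selection that makes the paper's method robust is omitted in this sketch, and all names are our own:

```python
import numpy as np

def correlation_cost_volume(views, offsets, disparities):
    """Photo-consistency cost volume for light field depth estimation.

    views:   (N, H, W) sub-aperture images.
    offsets: (N, 2) angular position (du, dv) of each view relative to
             the central view.
    For each candidate disparity d, every view is shifted by d * offset
    (digital refocusing); the variance across views is the matching
    cost, low where the disparity hypothesis is correct.
    """
    n, h, w = views.shape
    cost = np.empty((len(disparities), h, w))
    for k, d in enumerate(disparities):
        warped = np.empty_like(views)
        for i, (du, dv) in enumerate(offsets):
            warped[i] = np.roll(views[i], (int(round(d * dv)),
                                           int(round(d * du))), axis=(0, 1))
        cost[k] = warped.var(axis=0)
    return cost   # local depth = disparities[argmin over axis 0]
```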

16.
The advancements in three-dimensional (3D) display technology have led to wide interest in light-field displays. However, the need to capture a large number of object views simultaneously has kept content generation for light-field displays a bottleneck. In this paper, we propose a method for light-field content generation based on a plane-depth-fused sweep volume (PDFSV), focusing on handling wide-baseline views and exhibiting scene generalization when the camera array remains unchanged. Specifically, the proposed PDFSV exploits the prior depth of the images captured by a 4 × 4 spherical camera array to represent the 3D information of scenes. Two optimized sequential convolutional neural networks (CNNs) then perform implicit depth modeling and final color calculation, respectively. In this way, the prior depth facilitates the synthesis of regions with complex textures in the target view. We have produced a Wide-baseline Multi-view Image Set (WMIS), whose field of view (FOV) reaches 54° and which is publicly available. In our experiments, we use only the 4 vertex views as input. Results demonstrate that the proposed approach can synthesize high-quality views at arbitrary positions between sparse views, outperforming existing neural-radiance-fields-based (NeRF-based) methods. Finally, we conduct autostereoscopic display experiments, achieving satisfactory results.
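The sweep-volume construction underlying a plane-depth-fused representation can be sketched with plane-induced homographies. The snippet below is a generic plane sweep, not PDFSV itself; matrix names and conventions are our assumptions:

```python
import numpy as np
import cv2

def plane_sweep_volume(img, K_src, K_ref, R, t, depths):
    """Warp a source view onto fronto-parallel planes of the reference
    camera, one slice per candidate depth.

    K_src, K_ref: 3x3 intrinsics; R (3x3), t (3,) take reference-frame
    points into the source frame.
    """
    n = np.array([[0.0, 0.0, 1.0]])        # plane normal in reference frame
    slices = []
    for d in depths:
        # Plane-induced homography for the plane n.X = d:
        #   H = K_src (R + t n^T / d) K_ref^{-1}, mapping ref px -> src px,
        # so we warp with WARP_INVERSE_MAP to fill the reference grid.
        H = K_src @ (R + (t.reshape(3, 1) @ n) / d) @ np.linalg.inv(K_ref)
        slices.append(cv2.warpPerspective(
            img, H, (img.shape[1], img.shape[0]),
            flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP))
    return np.stack(slices)                # (D, H, W[, C])
```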

17.
Recently, various techniques of shape reconstruction using cast shadows have been proposed. These techniques have the advantage that they can be applied to various scenes, including outdoor scenes, without using special devices. Previously proposed techniques usually require calibration of camera parameters and light source positions, and such calibration processes limit the range of application of these techniques. In this paper, we propose a method to reconstruct 3D scenes even when the camera parameters or light source positions are unknown. The technique first recovers the shape with 4-DOF indeterminacy using coplanarities obtained by cast shadows of straight edges or visible planes in a scene, and then upgrades the shape using metric constraints obtained from the geometrical constraints in the scene. In order to circumvent the need for calibrations and special devices, we propose both linear and nonlinear methods in this paper. Experiments using simulated and real images verified the effectiveness of this technique.

18.
19.
This paper presents a novel method for virtual view synthesis that allows viewers to virtually fly through real soccer scenes, which are captured by multiple cameras in a stadium. The proposed method generates images of arbitrary viewpoints by view interpolation of real camera images near the chosen viewpoints. In this method, cameras do not need to be strongly calibrated since projective geometry between cameras is employed for the interpolation. For avoiding the complex and unreliable process of 3-D recovery, object scenes are segmented into several regions according to the geometric property of the scene. Dense correspondence between real views, which is necessary for intermediate view generation, is automatically obtained by applying projective geometry to each region. By superimposing intermediate images for all regions, virtual views for the entire soccer scene are generated. The efforts for camera calibration are reduced and correspondence matching requires no manual operation; hence, the proposed method can be easily applied to dynamic events in a large space. An application for fly-through observations of soccer match replays is introduced along with the algorithm of view synthesis and experimental results. This is a new approach for providing arbitrary views of an entire dynamic event.
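For intuition, the intermediate-view generation can be approximated by a dense-correspondence morph between two real views. The backward-warp blend below is a stand-in for the paper's region-wise projective interpolation; flow_ab is an assumed precomputed correspondence field (float32, shape (H, W, 2)):

```python
import numpy as np
import cv2

def intermediate_view(img_a, img_b, flow_ab, alpha):
    """Blend a view at position alpha in [0, 1] between views A and B,
    given dense correspondence flow_ab from A to B.  Treats the flow as
    locally constant: sample A alpha of the way along the flow and B
    the remaining (1 - alpha), then cross-fade.
    """
    h, w = img_a.shape[:2]
    grid = np.dstack(np.meshgrid(np.arange(w), np.arange(h))).astype(np.float32)
    map_a = grid + alpha * flow_ab           # where to sample A
    map_b = grid - (1 - alpha) * flow_ab     # where to sample B
    warp_a = cv2.remap(img_a, map_a[..., 0], map_a[..., 1], cv2.INTER_LINEAR)
    warp_b = cv2.remap(img_b, map_b[..., 0], map_b[..., 1], cv2.INTER_LINEAR)
    return cv2.addWeighted(warp_a, 1 - alpha, warp_b, alpha, 0)
```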
