Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
A fast hidden-surface-removal rendering algorithm combining a hierarchical occlusion image cache is proposed. The algorithm first exploits spatial coherence to perform fast, conservative occlusion culling of the scene; potentially visible near- and mid-range objects are rendered geometrically, while potentially visible distant objects are rendered with an accelerated hybrid image/geometry method. Experiments show that, by fully exploiting spatial coherence and image-simplification techniques, the algorithm performs well and is suitable for the fast rendering of scenes of varying complexity.

2.
An output-sensitive visibility algorithm is one whose runtime is proportional to the number of visible graphic primitives in a scene model, not to the total number of primitives, which can be much greater. The known practical output-sensitive visibility algorithms are suitable only for static scenes, because they include a heavy preprocessing stage that constructs a spatial data structure which relies on the model objects' positions. Any changes to the scene geometry might cause significant modifications to this data structure. We show how these algorithms may be adapted to dynamic scenes. Two main ideas are used. First, update the spatial data structure to reflect the dynamic objects' current positions, and make this update efficient by restricting it to a small part of the data structure. Second, use temporal bounding volumes (TBVs) to avoid having to consider every dynamic object in each frame. The combination of these techniques yields efficient, output-sensitive visibility algorithms for scenes with multiple dynamic objects. The performance of our methods is shown to be significantly better than that of previous output-sensitive algorithms intended for static scenes. TBVs can be adapted to applications where no prior knowledge of the objects' trajectories is available, such as virtual reality (VR) and simulations. Furthermore, they save updates of the scene model itself, not just of the auxiliary data structure used by the visibility algorithm. They can therefore be used to greatly reduce the communications overhead in client-server VR systems, as well as in general distributed virtual environments.

3.
李明, 鹿朋, 朱龙, 朱美强, 邹亮. 《控制与决策》 (Control and Decision), 2023, 38(10): 2867-2874
To address the poor performance of current grasp-detection models on densely occluded objects and the heavy workload of manual data annotation, an improved scheme is proposed that performs object detection and grasp detection as separate steps based on RGB-D image fusion. The new scheme allows a grasp-detection model trained on single-object images to be applied directly to densely occluded multi-object scenes. First, considering the multi-scale nature of graspable objects in densely occluded scenes, a sub-stage path aggregation (SPA) multi-scale feature-fusion module is proposed to enrich the high-level semantic features of SPA-YOLO-Fusion, an object-detection model with RGB-D feature-level fusion, so that the detector can localize all graspable objects. Second, the GR-ConvNet grasp-detection model with RGB-D pixel-level fusion estimates each object's grasp point, and a background-filling image-preprocessing algorithm is proposed to reduce the mutual interference of densely occluded objects. Finally, a robotic arm grasps the target points. On the LineMOD dataset, the mAP of SPA-YOLO-Fusion is 10% and 7% higher than that of YOLOv3-tiny and YOLOv4-tiny, respectively. Images collected from real scenes were used to build the YODO_Grasp grasp-detection dataset; tests on it show that adding the background-filling preprocessing algorithm improves the grasp-detection accuracy of GR-ConvNet by 23% over the original model.
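The background-filling preprocessing step can be sketched as masking out everything except the target object; this is an illustrative reconstruction (function and variable names are not from the paper), not the authors' exact algorithm:

```python
def fill_background(image, mask, fill_value=0):
    """Keep only the pixels belonging to the target object's mask and
    overwrite everything else with a constant background value, so that
    neighbouring occluding objects cannot interfere with grasp detection."""
    return [[px if m else fill_value
             for px, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

# Tiny 2x2 grayscale example; 1 marks target-object pixels.
image = [[10, 20], [30, 40]]
mask = [[1, 0], [0, 1]]
filled = fill_background(image, mask)
```

In a real pipeline the mask would come from the object-detection stage and the filled crop would be fed to the grasp-detection network.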

4.
Dynamic scene occlusion culling   (cited by 3: 0 self-citations, 3 by others)
Large, complex 3D scenes are best rendered in an output-sensitive way, i.e., in time largely independent of the entire scene model's complexity. Occlusion culling is one of the key techniques for output-sensitive rendering. We generalize existing occlusion culling algorithms, intended for static scenes, to handle dynamic scenes having numerous moving objects. The data structure used by an occlusion culling method is updated to reflect the objects' possible positions. To avoid updating the structure for every dynamic object at each frame, a temporal bounding volume (TBV) is created for each occluded dynamic object, using some known constraints on the object's motion. The TBV is inserted into the structure instead of the object. Subsequently, the object is ignored as long as the TBV is occluded and guaranteed to contain the object. The generalized algorithms' rendering time is linearly affected only by the scene's visible parts, not by hidden parts or by occluded dynamic objects. Our techniques also save communications in distributed graphics systems, e.g., multiuser virtual environments, by eliminating update messages for hidden dynamic objects. We demonstrate the adaptation of two occlusion culling algorithms to dynamic scenes: hierarchical Z-buffering and BSP-tree projection.
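The TBV idea can be sketched for the simplest motion constraint, a known speed bound: grow the object's bounding sphere by the farthest distance it can travel during the validity period. This is a minimal illustration (the names are hypothetical, and the paper supports other bounding-volume shapes and constraints):

```python
import math

def temporal_bounding_sphere(center, radius, v_max, dt):
    """Sphere guaranteed to contain the object for the next dt seconds,
    given a bound v_max on the object's speed."""
    return center, radius + v_max * dt

def contains(sphere_center, sphere_radius, point):
    """True if `point` lies inside the (grown) bounding sphere."""
    return math.dist(sphere_center, point) <= sphere_radius

# An object at the origin with bounding radius 1, moving at most 2 units/s,
# cannot leave a sphere of radius 1 + 2 * 0.5 = 2 during the next 0.5 s.
center, r = temporal_bounding_sphere((0.0, 0.0, 0.0), 1.0, 2.0, 0.5)
```

As long as this grown sphere is occluded, the occlusion culling structure does not need per-frame updates for the object.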

5.
The intention of the strategy proposed in this paper is to solve the object retrieval problem in highly complex scenes using 3D information. In the worst case, the complexity of the scene includes several objects with irregular or free-form shapes, viewed from any direction, which are self-occluded or partially occluded by other objects with which they are in contact, and whose appearance is uniform in intensity/color. This paper introduces and analyzes a new 3D recognition/pose strategy based on DGI (Depth Gradient Images) models. After comparing it with current representative techniques, we can affirm that DGI has very interesting prospects. The DGI representation synthesizes both surface and contour information, thus avoiding restrictions concerning the layout and visibility of the objects in the scene. This paper first explains the key concepts of the DGI representation and shows the main properties of this method in comparison to a set of known techniques. The performance of this strategy in real scenes is then reported. Details are also presented of a wide set of experimental tests, including results under occlusion, performance with injected noise, and experiments with highly complex cluttered scenes.
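The abstract does not detail how a DGI model is built; as a generic illustration only, the sketch below computes the gradient magnitude of a depth map by central differences, which conveys the notion of "depth gradient" but is not the authors' representation:

```python
def depth_gradient_image(depth):
    """Central-difference gradient magnitude of a depth map (interior pixels).
    Edge pixels are left at 0 for simplicity."""
    h, w = len(depth), len(depth[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            gy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            g[y][x] = (gx * gx + gy * gy) ** 0.5
    return g

# A depth ramp along x has constant interior gradient magnitude 1.
ramp = [[float(x) for x in range(4)] for _ in range(4)]
grad = depth_gradient_image(ramp)
```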

6.
In scientific illustrations and visualization, cutaway views are often employed as an effective technique for occlusion management in densely packed scenes. We propose a novel method for authoring cutaway illustrations of mesoscopic biological models. In contrast to the existing cutaway algorithms, we take advantage of the specific nature of the biological models. These models consist of thousands of instances with a comparably smaller number of different types. Our method constitutes a two-stage process. In the first step, clipping objects are placed in the scene, creating a cutaway visualization of the model. During this process, a hierarchical list of stacked bars informs the user about the instance visibility distribution of each individual molecular type in the scene. In the second step, the visibility of each molecular type is fine-tuned through these bars, which at this point act as interactive visibility equalizers. An evaluation of our technique with domain experts confirmed that our equalizer-based approach for visibility specification is valuable and effective for both scientific and educational purposes.
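The quantity the stacked bars display, the per-type fraction of instances still visible after clipping, can be sketched as a simple aggregation (a toy reconstruction with hypothetical type names, not the paper's implementation):

```python
def visibility_per_type(instances):
    """instances: iterable of (molecule_type, is_visible) pairs.
    Returns, for each molecular type, the fraction of its instances that
    remain visible after cutaway clipping."""
    total, visible = {}, {}
    for mtype, vis in instances:
        total[mtype] = total.get(mtype, 0) + 1
        visible[mtype] = visible.get(mtype, 0) + (1 if vis else 0)
    return {t: visible[t] / total[t] for t in total}

# Toy scene: three lipid instances (one visible) and one visible RNA instance.
scene = [("lipid", True), ("lipid", False), ("lipid", False), ("rna", True)]
fractions = visibility_per_type(scene)
```

Dragging an equalizer bar would then amount to selecting which instances of a type to clip until its fraction matches the requested value.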

7.
Many 3D scenes (e.g. generated from CAD data) are composed of a multitude of objects that are nested in each other. A showroom, for instance, may contain multiple cars, and every car has a gearbox with many gearwheels located inside. Because the objects occlude each other, only a few are visible from outside. We present a new technique, Spherical Visibility Sampling (SVS), for real-time 3D rendering of such, possibly highly complex, scenes. SVS exploits the occlusion and annotates hierarchically structured objects with directional visibility information in a preprocessing step. For different directions, the directional visibility encodes which objects of a scene's region are visible from outside the region's enclosing bounding sphere. Since there is no need to store a separate view-space subdivision as in most techniques based on preprocessed visibility, a small memory footprint is achieved. Using the directional visibility information for an interactive walkthrough, the potentially visible objects can be retrieved very efficiently without the need for further visibility tests. Our evaluation shows that SVS allows complex 3D scenes to be preprocessed quickly and visualized in real time (e.g. a Power Plant model and five animated Boeing 777 models with billions of triangles). Because SVS does not require hardware support for occlusion culling during rendering, it is even applicable for rendering large scenes on mobile devices.
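A runtime lookup against precomputed directional visibility might look like the following toy sketch, where the table maps sampled directions to sets of visible objects (the object names and table layout are invented for illustration; the paper's encoding is hierarchical and more compact):

```python
def query_visible(view_dir, directional_visibility):
    """Return the visible-object set of the precomputed sample direction
    most aligned (largest dot product) with the current view direction."""
    best = max(directional_visibility,
               key=lambda d: sum(a * b for a, b in zip(d, view_dir)))
    return directional_visibility[best]

# Hypothetical table for one region (a car): looking from +x vs. -x.
dv = {
    (1.0, 0.0, 0.0): {"engine", "body"},
    (-1.0, 0.0, 0.0): {"trunk", "body"},
}
front_set = query_visible((0.9, 0.1, 0.0), dv)
```

Because the set is read directly from the annotation, no per-frame occlusion queries are needed.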

8.
Objective: Grasp-pose detection in cluttered scenes is a fundamental skill for intelligent robots. Despite progress in six-degree-of-freedom grasp learning, previous methods ignored object-size differences during sampling and learning, leading to poor grasping performance on small objects. Method: An object-mask-assisted sampling method is proposed that samples the same number of points on every object to balance the grasp distribution, solving the problem of unevenly distributed sampling points. In addition, a multi-scale learning strategy is adopted: multi-scale cylinder grouping on partial object point clouds improves local geometric representation, addressing the difficulty of learning grasp parameters caused by object-scale differences. An end-to-end grasp network is designed that embeds the proposed sampling and learning methods and effectively improves grasp-detection performance. Results: Evaluated on the large-scale benchmark dataset GraspNet-1Billion, the method achieves the best performance among the compared methods, improving the grasp metrics on small objects by 7% on average; extensive real-robot experiments also show that the method generalizes well to unknown objects. Conclusion: Focusing on grasping small objects, this paper proposes a mask-assisted sampling method embedded in an end-to-end learning network and introduces a multi-scale grouping learning strategy to improve local geometric representation, effectively improving grasp quality on small objects while outperforming the compared methods in the grasp evaluation on all objects.
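The mask-assisted balanced sampling can be sketched as drawing the same number of points from every object's mask so small objects are not under-represented; this is a simplified stand-in (names and the fixed per-object budget are assumptions, not the paper's exact scheme):

```python
import random

def mask_balanced_sample(points, labels, per_object, seed=0):
    """Sample `per_object` points from every object id in `labels`,
    balancing the point distribution across large and small objects."""
    rng = random.Random(seed)
    by_obj = {}
    for p, l in zip(points, labels):
        by_obj.setdefault(l, []).append(p)
    sample = []
    for obj in sorted(by_obj):
        pts = by_obj[obj]
        sample.extend(rng.sample(pts, min(per_object, len(pts))))
    return sample

# Object 0 has 8 points, object 1 only 2; both contribute 2 samples.
points = [(float(i), 0.0, 0.0) for i in range(10)]
labels = [0] * 8 + [1] * 2
balanced = mask_balanced_sample(points, labels, per_object=2)
```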

9.
Multimedia Tools and Applications - Rainy weather greatly affects the visibility of salient objects and scenes in the captured images and videos. The object/scene visibility varies with the type of...  相似文献   

10.
This paper presents a computational model to recover the most likely interpretation of the 3D scene structure from a planar image, where some objects may occlude others. The estimated scene interpretation is obtained by integrating some global and local cues and provides both the complete disoccluded objects that form the scene and their ordering according to depth. Our method first computes several distal scenes which are compatible with the proximal planar image. To compute these different hypothesized scenes, we propose a perceptually inspired object disocclusion method, which works by minimizing Euler's elastica as well as by incorporating the relatability of partially occluded contours and the convexity of the disoccluded objects. Then, to estimate the preferred scene, we rely on a Bayesian model and define probabilities taking into account the global complexity of the objects in the hypothesized scenes as well as the effort of bringing these objects into their relative positions in the planar image, which is also measured by an Euler's elastica-based quantity. The model is illustrated with numerical experiments on both synthetic and real images, showing the ability of our model to reconstruct the occluded objects and the preferred perceptual order among them. We also present results on images of the Berkeley dataset with provided figure-ground ground-truth labeling.
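For reference, Euler's elastica functional, in its standard form (the paper's exact weighting and boundary terms may differ), penalizes both the length and the squared curvature of a completed contour $\gamma$:

```latex
E[\gamma] \;=\; \int_{\gamma} \left( \alpha + \beta\,\kappa^{2} \right) \mathrm{d}s
```

where $s$ is arclength, $\kappa$ is the curvature, and $\alpha, \beta \ge 0$ weight length against bending. Minimizing $E$ favors short, smooth completions of the occluded contours.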

11.
12.
13.
The efficient evaluation of visibility in a three-dimensional scene is a longstanding problem in computer graphics. Visibility evaluations come in many different forms: figuring out what object is visible in a pixel, determining whether a point is visible to a light source, or evaluating the mutual visibility between two surface points. This paper provides a new, experimental view on visibility, based on a probabilistic evaluation of the visibility function. Instead of checking visibility against all possibly intervening geometry, the visibility between two points is evaluated by testing only a random subset of objects. The result is not a Boolean value that is either 0 or 1, but a numerical value that can even be negative. Because we use the visibility evaluation as part of the integrand in illumination computations, the probabilistic evaluation of visibility becomes part of the Monte Carlo procedure of estimating the illumination integral, and results in an unbiased computation of illumination values in the scene. Moreover, the number of intersection tests for any given ray is decreased, since only a random selection of geometric primitives is tested. Although probabilistic visibility is a new, experimental idea, we present a practical algorithm for direct illumination that uses the probabilistic nature of visibility evaluations.
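To make the "numerical, possibly negative visibility" idea concrete, here is a toy estimator under a deliberately simplified model: visibility is taken to be 1 minus the number of blockers, and only a random subset of objects is intersection-tested, with the hit count scaled up. This is not the paper's estimator, only a sketch of the flavor (scaled subset tests, non-Boolean values, unbiasedness in expectation under the stated model):

```python
import random

def probabilistic_visibility(blockers, n_objects, m, seed=0):
    """Test only m of n_objects primitives against the segment; scale the
    hit count by n/m. Unbiased for 1 - len(blockers) under this toy model,
    and single evaluations can be negative."""
    rng = random.Random(seed)
    tested = rng.sample(range(n_objects), m)
    hits = sum(1 for obj in tested if obj in blockers)
    return 1.0 - (n_objects / m) * hits

# Averaging over many random subsets approaches the expectation (0 here,
# since object 0 blocks the segment and 1 - 1 = 0).
mean = sum(probabilistic_visibility({0}, 10, 5, seed=s) for s in range(400)) / 400
```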

14.
In this paper, we investigate the problem of determining the regions of a 3D scene that are visible from some given viewpoints when obstacles are present. We assume that the obstacles are composed of opaque objects with closed surfaces. The problem is formulated in an implicit framework where the obstacles are represented by a level-set function. The visible and invisible regions of the given viewpoints are determined through an efficient implicit ray-tracing technique. As an extension of our approach, we apply the multiview visibility estimation to an image-based modeling technique. The unknown scene geometry and multiview visibility information are incorporated into a variational energy functional. By minimizing the energy functional, the true scene geometry as well as the accurate visibility information of the multiple views can be recovered from a number of scene images. This makes it feasible to handle the visibility problem of multiple views with our approach when the true scene geometry is unknown.
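The implicit visibility test can be sketched as marching along the segment from the viewpoint to a query point and checking the sign of the obstacle level-set function (a minimal fixed-step sketch with an invented sphere obstacle, not the paper's efficient ray-tracing scheme):

```python
def visible(viewpoint, target, phi, steps=200):
    """The target is visible from the viewpoint iff the obstacle level set
    phi stays non-negative (outside all obstacles) along the segment."""
    for i in range(1, steps):
        t = i / steps
        p = tuple(a + t * (b - a) for a, b in zip(viewpoint, target))
        if phi(p) < 0.0:
            return False
    return True

# Obstacle: unit sphere at the origin (phi < 0 inside, > 0 outside).
phi = lambda p: (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 - 1.0
```

A segment through the sphere is reported as blocked, while one passing beside it is reported as visible.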

15.
In applications of augmented reality like virtual-studio TV production, multisite video conference applications using a virtual meeting room, and synthetic/natural hybrid coding according to the new ISO/MPEG-4 standard, a synthetic scene is mixed into a natural scene to generate a synthetic/natural hybrid image sequence. For realism, the illumination in both scenes should be identical. In this paper, the illumination of the natural scene is estimated automatically and applied to the synthetic scene. The natural scenes are restricted to scenes with nonoccluding, simple, moving, mainly rigid objects. For illumination estimation, these natural objects are automatically segmented in the natural image sequence and three-dimensionally (3-D) modeled using ellipsoid-like models. The 3-D shape, 3-D motion, and the displaced frame difference between two succeeding images are evaluated to estimate three illumination parameters. The parameters describe a distant point light source and ambient light. Using the estimated illumination parameters, the synthetic scene is rendered and mixed into the natural image sequence. Experimental results with a moving virtual object mixed into real video telephone sequences show that the virtual object appears natural, having the same shading and shadows as the real objects. Furthermore, the shading and shadows allow the viewer to understand the motion trajectory of the objects much better.
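Under the stated illumination model (ambient light plus one distant point light), Lambertian shading is linear in the two intensity parameters, so they can be fitted by simple linear regression once the light direction is fixed. The sketch below assumes a known light direction (the paper estimates it as well) and invented function names:

```python
def shade(normal, light_dir, ambient, intensity):
    """Lambertian shading: ambient plus one distant point light."""
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return ambient + intensity * ndotl

def estimate_ambient_and_intensity(normals, light_dir, observed):
    """Closed-form least-squares fit of (ambient, intensity): simple linear
    regression of observed shading against clamped n.l values."""
    xs = [max(0.0, sum(n * l for n, l in zip(nrm, light_dir))) for nrm in normals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(observed) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, observed))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Synthesize observations with ambient 0.2 and light intensity 0.7,
# then recover the two parameters from shading alone.
light = (0.0, 0.0, 1.0)
normals = [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (0.6, 0.0, 0.8)]
observed = [shade(n, light, 0.2, 0.7) for n in normals]
ambient_est, intensity_est = estimate_ambient_and_intensity(normals, light, observed)
```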

16.
To accelerate the rendering of large-scale virtual scenes, a rendering method based on an object-oriented octree is adopted. The method combines object-oriented techniques with the traditional octree: the virtual scene is partitioned and managed with an object-oriented octree; the smallest components of each object's structure tree serve as the minimal storage units, and object information is stored in leaf nodes, reducing the tree's storage size and processing time and lowering the algorithm's computational load. On top of the object-oriented octree, a model occlusion-culling algorithm culls occluded models within the view volume, reducing the number of objects actually rendered and increasing the rendering rate. Rendering experiments on an aircraft virtual-maintenance scene demonstrate the effectiveness of the method.
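A minimal point-storing octree with leaf-only object storage and a pruned box query can be sketched as follows (an illustrative data structure, not the paper's object-oriented variant, which stores structure-tree components):

```python
class OctreeNode:
    """Minimal octree: leaves store the objects, interior nodes only route.
    A node covers the cube [center - half, center + half] on each axis."""

    def __init__(self, center, half, depth=0, max_depth=3):
        self.center, self.half = center, half
        self.depth, self.max_depth = depth, max_depth
        self.objects = []      # populated only at maximum depth (leaves)
        self.children = None

    def insert(self, pos, obj):
        if self.depth == self.max_depth:
            self.objects.append((pos, obj))
            return
        if self.children is None:
            self.children = {}
        octant = tuple(int(p >= c) for p, c in zip(pos, self.center))
        if octant not in self.children:
            child_center = tuple(c + (o - 0.5) * self.half
                                 for c, o in zip(self.center, octant))
            self.children[octant] = OctreeNode(child_center, self.half / 2,
                                               self.depth + 1, self.max_depth)
        self.children[octant].insert(pos, obj)

    def query_box(self, lo, hi):
        """Collect objects inside the axis-aligned box [lo, hi], pruning
        child cells that do not overlap it."""
        out = []
        if self.children is None:
            out.extend(o for p, o in self.objects
                       if all(l <= x <= h for x, l, h in zip(p, lo, hi)))
        else:
            for child in self.children.values():
                if all(c - child.half <= h and c + child.half >= l
                       for c, l, h in zip(child.center, lo, hi)):
                    out.extend(child.query_box(lo, hi))
        return out

root = OctreeNode((0.0, 0.0, 0.0), 8.0)
root.insert((1.0, 1.0, 1.0), "a")
root.insert((-5.0, 2.0, 0.0), "b")
```

The same pruning applies when the query region is a view frustum instead of a box, which is how the culling step skips whole subtrees.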

17.
3D object pose estimation for robotic grasping and manipulation is a crucial task in the manufacturing industry. In cluttered and occluded scenes, 6D pose estimation of low-textured or textureless industrial objects is a challenging problem due to the lack of color information. Thus, point clouds, which are hardly affected by lighting conditions, are gaining popularity as an alternative input for pose estimation. This article proposes deep learning-based pose estimation using point clouds as input, consisting of instance segmentation and instance point-cloud pose estimation. The instance segmentation divides the scene point cloud into multiple instance point clouds, and each instance's pose is accurately predicted by fusing the depth and normal feature maps. To reduce the time spent on dataset acquisition and annotation, a physics-based simulation engine is constructed to generate a synthetic dataset. Finally, several experiments are conducted on public, synthetic, and real datasets to verify the effectiveness of the pose estimation network. The experimental results show that the point-cloud-based pose estimation network can effectively and robustly predict the poses of objects in cluttered and occluded scenes.

18.
Efficient rendering of complex scenes is an important topic in computer graphics research. Most current work focuses on acceleration methods that reduce a scene's geometric complexity, such as scene organization and visibility culling, while the management of objects' multiple attributes and the corresponding rendering optimizations have received less attention. Besides basic geometry, objects in complex scenes carry rendering attributes such as lighting, materials, and textures; frequent switching among these attributes severely hurts rendering efficiency and thus real-time performance. This paper therefore proposes a rendering method for multi-attribute objects based on a state-transition optimization strategy: object attributes are managed through defined render states, the transition relations among render states are represented as a weighted directed graph, and an optimization algorithm finds an optimized sequence of state transitions. Experimental results show that, for complex scenes with multi-attribute objects, the method effectively reduces state-transition overhead and improves real-time rendering efficiency.
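The cost being minimized can be illustrated by counting state switches across a sequence of draw calls. The sketch below uses a much simpler stand-in than the paper's method, plain grouping of identical states, whereas the paper models transition costs as a weighted directed graph and searches for an optimal order; state names are invented:

```python
def count_state_changes(draw_calls):
    """Count GPU state switches when calls are issued in the given order.
    Each call is (render_state, payload); the state bundles shader,
    material, texture, etc."""
    changes, current = 0, object()   # sentinel: differs from every state
    for state, _ in draw_calls:
        if state != current:
            changes += 1
            current = state
    return changes

def group_by_state(draw_calls):
    """Batch calls sharing a render state so each state is bound once."""
    return sorted(draw_calls, key=lambda call: repr(call[0]))

# Alternating materials force a switch on every call until regrouped.
calls = [("matA", 1), ("matB", 2), ("matA", 3), ("matB", 4)]
```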

19.
In urban design, estimating solar exposure on complex city models is crucial, but existing solutions typically focus on simplified building models and are too demanding in terms of memory and computational time. In this paper, we propose an interactive technique that estimates solar exposure on detailed urban scenes. Given a directional exposure map computed over a given time period, we estimate the sky visibility factor that serves to evaluate the final exposure at each visible point. This is done using a screen-space method based on a two-scale approach, which is geometry independent and has low storage costs. Our method performs at interactive rates and is designer-oriented. The proposed technique is relevant in architecture and sustainable building design as it provides tools to estimate the energy performance of buildings as well as weathering effects in urban environments.
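The final evaluation step, combining the precomputed directional exposure map with per-point sky visibility, can be sketched as a masked sum over sky directions (a toy with an invented exposure map and visibility predicate; the paper evaluates visibility in screen space):

```python
def solar_exposure(exposure_map, is_visible):
    """Final exposure at a point: sum the precomputed directional exposure
    over only the sky directions that are unoccluded at that point."""
    return sum(e for d, e in exposure_map.items() if is_visible(d))

# Hypothetical map: energy received over the period from three sky directions.
exposure_map = {
    (0.0, 0.0, 1.0): 5.0,
    (1.0, 0.0, 1.0): 3.0,
    (-1.0, 0.0, 1.0): 2.0,
}
# A wall to the west blocks every direction with negative x.
open_to_east = lambda d: d[0] >= 0.0
exposure = solar_exposure(exposure_map, open_to_east)
```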

20.
The raycasting volume-rendering algorithm is widely used to visualize volume data because of its high image quality. However, when vector data represented by lines and surfaces and volume data represented by a 3D raster are rendered into the same scene, the difference in rendering methods causes inconsistent spatial occlusion between the vector and raster data. Building on a GPU implementation of raycasting, the vector and raster data are rendered in sequence: FBO off-screen rendering writes the vector data into a depth-buffer texture, and vector-raster color fusion is handled uniformly during volume-rendering sampling and compositing. Experimental results show that the algorithm correctly handles the mixed rendering of vector and raster data when the vector data are rendered in opaque mode.
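The fusion step amounts to comparing, along each ray, the sample depth against the vector geometry's depth from the depth texture, and compositing the opaque vector color at the correct position. A CPU-side sketch of the front-to-back compositing for one ray (illustrative only; on the GPU this runs in the raycasting shader):

```python
def composite_ray(volume_samples, vector_depth, vector_color):
    """Front-to-back compositing along one ray. volume_samples is a list of
    (depth, color, alpha) sorted by depth. The opaque vector primitive,
    read from the depth texture at vector_depth, is blended in at the
    matching position so vector/volume occlusion stays consistent."""
    color, alpha = [0.0, 0.0, 0.0], 0.0

    def blend(c, a):
        nonlocal alpha
        w = (1.0 - alpha) * a          # remaining transmittance times opacity
        for i in range(3):
            color[i] += w * c[i]
        alpha += w

    inserted = False
    for d, c, a in volume_samples:
        if not inserted and vector_depth is not None and d >= vector_depth:
            blend(vector_color, 1.0)   # opaque vector geometry
            inserted = True
        blend(c, a)
    if not inserted and vector_depth is not None:
        blend(vector_color, 1.0)       # vector lies behind all samples
    return tuple(color), alpha

# Red sample in front of the vector line, green sample behind it: the
# opaque blue line absorbs everything past depth 0.5.
color, alpha = composite_ray(
    [(0.2, (1.0, 0.0, 0.0), 0.5), (0.8, (0.0, 1.0, 0.0), 0.5)],
    vector_depth=0.5, vector_color=(0.0, 0.0, 1.0))
```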


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)    京ICP备09084417号-23
