Similar Documents
A total of 20 similar documents were retrieved (search time: 171 ms).
1.
Optimized registration of multi-view scattered point clouds from a time-of-flight 3D camera   Cited by: 1 (self-citations: 1, others: 0)
To address the low accuracy of multi-view scattered point cloud registration when reconstructing the complete 3D surface of an object with a time-of-flight (TOF) 3D camera, an optimized registration method is proposed. The method constructs an objective function and minimizes it jointly with the transformation matrices between adjacent point clouds, directly obtaining the absolute transformation matrix from a point cloud at an arbitrary position to the coordinate frame of the reference point cloud, thereby avoiding the error accumulation caused by sequentially registering consecutive clouds. Experiments on different objects show that the method improves multi-view registration accuracy while maintaining registration speed, produces good reconstructed point cloud models, and facilitates subsequent 3D surface mesh reconstruction.

2.
To address the low accuracy of multi-view scattered point cloud registration when reconstructing the complete 3D surface of an object with a time-of-flight (TOF) 3D camera, this paper proposes an optimized multi-view registration method. The method constructs an objective function and minimizes it jointly with the transformation matrices of adjacent point clouds, directly obtaining the absolute transformation matrix from a point cloud at an arbitrary position to the reference coordinate frame, thereby avoiding the error accumulation caused by chaining the pairwise transformation matrices of adjacent clouds. Experimental results show that the method improves multi-view registration accuracy and enhances the reconstructed point cloud model, making it practical for 3D surface reconstruction.
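The benefit of absolute (rather than chained pairwise) transforms can be illustrated with a toy sketch. The numbers are hypothetical: we assume 30 views and a fixed 0.01 rad error in every pairwise rotation estimate, so the comparison only shows how error composes along a chain, not the paper's actual objective-function solver.

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def angle_from_identity(R):
    """Rotation angle of R, i.e. its deviation from the identity."""
    return float(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))

n_views, bias = 30, 0.01  # hypothetical: 30 views, 0.01 rad error per pairwise estimate

# Ground truth: every view is already in the reference frame (identity transforms).
# Chained registration: the 30 noisy pairwise estimates compose view after view.
chained = np.eye(3)
for _ in range(n_views):
    chained = chained @ rot_z(bias)

# Direct absolute registration to the reference frame: one estimate, one error.
direct = rot_z(bias)

print(angle_from_identity(chained))  # accumulated error: n_views * bias = 0.3 rad
print(angle_from_identity(direct))   # single-step error: 0.01 rad
```

With a systematic per-pair error, the chained result drifts linearly in the number of views, which is exactly what a direct absolute-transform formulation avoids.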

3.
Because point cloud registration for 3D reconstruction is easily affected by environmental noise, point cloud exposure, illumination, and occlusion, and because the traditional ICP registration algorithm has low accuracy and long runtime, a point cloud registration method based on adaptive Levenberg-Marquardt iteration is proposed. First, the initial point cloud is denoised with a combination of statistical filtering and voxel grid filtering. The filtered cloud is then partitioned into layers, and points lying outside the layers are discarded to improve the accuracy of subsequent registration. To avoid the heavy computation of traditional point cloud feature descriptors, a smoothness parameter is used to extract features, improving registration efficiency. Finally, point-to-line and point-to-plane constraints between frames are built from these features, and an improved Levenberg-Marquardt method completes the registration, yielding a good 3D reconstruction model. Experiments show that the proposed method suits both indoor and outdoor scene reconstruction, adapts well to different environments, and substantially improves registration accuracy and efficiency.
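The denoising step, statistical filtering combined with voxel-grid filtering, can be sketched in a few lines of NumPy. This is a generic textbook version, not the paper's implementation; the neighborhood size `k`, the `std_ratio` cutoff, and the voxel size are illustrative assumptions.

```python
import numpy as np

def voxel_downsample(points, voxel):
    """Average all points falling in the same voxel (one output point per occupied voxel)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

def statistical_filter(points, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbors is an outlier."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    knn = np.sqrt(np.sort(d2, axis=1)[:, :k])   # k nearest-neighbor distances per point
    mean_d = knn.mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, (500, 3))           # dense cluster in the unit cube
cloud = np.vstack([cloud, [[10.0, 10.0, 10.0]]])  # one far-away outlier
clean = statistical_filter(cloud)                 # outlier removed
sparse = voxel_downsample(clean, voxel=0.25)      # at most 4^3 = 64 surviving points
print(len(cloud), len(clean), len(sparse))
```

The O(N²) distance matrix is only acceptable for this toy size; a real pipeline would use a kd-tree for the neighbor search.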

4.
To address the sparse point clouds and unstable visual quality produced by traditional stereo-image 3D reconstruction, a dense 3D reconstruction method based on smartphone stereo images is proposed. The method adds a dense matching step to the traditional pipeline: the phone camera is calibrated and the images rectified, initial corresponding points are extracted from the stereo pair, and mismatches are removed with RANSAC to obtain high-precision correspondences as matching primitives. Dense matching combined with triangulation then solves for a dense 3D point cloud. Experiments show that the method produces denser point clouds than traditional reconstruction, clearly reveals object texture, gives good visual results, and adapts well to different scenes.
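The triangulation step that turns matched image points into 3D points is classical linear (DLT) triangulation; a minimal sketch under assumed camera parameters follows. The intrinsics `K`, the 0.2 m baseline, and the test point are made up for illustration; in the paper, smartphone calibration would supply the real values.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two pinhole projections.

    P1, P2: 3x4 projection matrices; x1, x2: matching pixel coordinates (u, v).
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Hypothetical stereo pair: identical intrinsics, second camera shifted along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 0.2 m baseline

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.3, -0.1, 2.5])
X_hat = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_hat)  # recovers (0.3, -0.1, 2.5) on this noise-free example
```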

5.
Cao Yu, Chen Xiuhong. Computer Engineering (《计算机工程》), 2012, 38(5): 224-226, 229
For self-calibrated 3D reconstruction from a single image, a silhouette-based reconstruction method is proposed. The target object is photographed with an angled plane-mirror rig, and edge tracking is applied to the resulting image to obtain closed contour curves. Vanishing points are computed from two pairs of silhouettes satisfying the mirror-imaging principle, camera parameters are recovered from the constraints between these vanishing points, and on this basis the 3D model of the object is reconstructed. Experiments show that the method can quickly reconstruct realistic 3D models.

6.
An automatic image-based 3D reconstruction system for large indoor scenes   Cited by: 3 (self-citations: 1, others: 2)
Because indoor scenes are highly structured (parallelism, orthogonality, collinearity, coplanarity), even small errors lead to visible artifacts in automatic image-based indoor reconstruction, and automatic systems achieving high fidelity for indoor scenes are rarely reported in the literature. Motivated by the concrete needs of 3D crime-scene reconstruction, this paper reports an automatic image-based indoor reconstruction system covering calibration of the image acquisition platform, matching and reconstruction of feature points and feature lines, and fusion of reconstructions from multiple views. The system has the following characteristics: 1) the reconstruction is fully automatic, requiring no user interaction; 2) automatic line matching and reconstruction exploit scene depth and structure information, markedly improving matching accuracy and the quality of reconstructed 3D lines; 3) the global optimization of the result fuses feature points and feature lines. Extensive experiments show that the system is practical and produces good reconstructions.

7.
For the point cloud registration problem in 3D reconstruction, an automatic registration algorithm based on point cloud features is proposed. Multi-view depth images of an object are captured with a Microsoft Kinect sensor, and the target region is extracted and converted into 3D point clouds. The clouds are filtered and Fast Point Feature Histogram (FPFH) features are estimated; a bidirectional fast approximate nearest-neighbor search yields an initial correspondence set, and random sample consensus (RANSAC) determines the final correspondences. An initial transformation matrix is then computed by singular value decomposition, and the iterative closest point (ICP) algorithm refines this initial registration. Experiments show that the method maintains registration quality while reducing computational complexity, with good operability and robustness.
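The SVD step mentioned in the abstract is the standard closed-form (Kabsch-style) rigid alignment of corresponding point sets; a minimal sketch on synthetic correspondences, with a made-up rotation and translation:

```python
import numpy as np

def rigid_from_correspondences(P, Q):
    """Least-squares R, t with R @ P[i] + t ≈ Q[i], via SVD of the cross-covariance."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(2)
P = rng.normal(size=(40, 3))
theta = 0.4                            # hypothetical ground-truth motion
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true              # perfectly corresponding target cloud

R, t = rigid_from_correspondences(P, Q)
print(np.abs(R - R_true).max(), np.abs(t - t_true).max())  # both ~0
```

In the paper's pipeline this closed-form solve would run on the RANSAC-filtered FPFH correspondences to initialize ICP.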

8.
This paper proposes a face texture synthesis and 3D reconstruction algorithm based on a single face image combined with a standard skin tone. First, facial landmarks are extracted with the ASM algorithm, and edit propagation based on locally linear embedding performs color transfer so that the skin tone of the input face matches the standard skin tone of the 3D face model. The facial-feature regions of the image are then blended into the standard skin-tone map by Poisson fusion; to handle eyebrow occlusion, eyebrows are restored using facial symmetry or an eyebrow template. For partially occluded eyebrows in particular, a combination of the Li model and corner detection reconstructs the eyebrow contour, yielding the final face texture map. Finally, texture mapping projects this texture onto the 3D face model, producing a good personalized 3D face reconstruction. Experiments show that the algorithm handles face images taken against varied complex backgrounds and lighting conditions, runs fast, and is suitable for real-time face reconstruction products.

9.
Objective: 3D reconstruction of real objects is a long-standing research topic in computer graphics and machine vision. For reconstructing, from RGBD data, objects that rotate at non-uniform speed and through non-fixed angles, a registration method that builds the 3D model using a turntable is proposed. Method: First, depth and color data of the target on the turntable are captured with a Kinect, aligned and fused, and background noise and unwanted outer points are removed with a bounding-box algorithm, yielding colored point clouds. Point clouds of a calibration object at different angles are used to locate the turntable's central axis, giving the relative pose between the Kinect and the turntable. Feature points are then extracted from the target cloud by curvature and matched to corresponding points in the adjacent cloud. For feature extraction, a kd-tree finds the k nearest neighbors of each point, a surface is fitted to them, and the Gaussian curvature is computed; the n points with the largest absolute Gaussian curvature serve as feature points, where n depends on the cloud's size, density, and complexity and need only capture the rough outline or surface features of the object. For correspondence selection, Euclidean distance poorly reflects how point pairs correspond under rotation; in practice, overlapping clouds or large separations produce many false correspondences. Since the target only rotates about the turntable axis during scanning, using the minimum arc distance to find correspondences effectively reduces false pairs. Bisection iteration then finds the optimal rotation angle about the central axis that minimizes the matching error. Finally, clouds captured at arbitrary angles are registered into a common coordinate frame and the model is reconstructed. Result: Comparative experiments on efficiency and registration quality were run on the Stanford point cloud datasets and self-collected data. On the Stanford datasets (75,000 points on average), compared with traditional ICP and an improved ICP, iterations decrease on average by 86.5% and 57.5%, runtime by 87% and 60.75%, and the sum of squared Euclidean errors by 70% and 22%. On the self-collected data (57,000 points on average), iterations decrease by 94% and 75%, runtime by 92% and 69%, and the sum of squared Euclidean errors by 61.5% and 30.6%. The method therefore registers point clouds more efficiently with smaller error, and preserves texture detail better than KinectFusion. Conclusion: The proposed registration algorithm based on turntable calibration uses bisection iteration to reduce algorithmic complexity effectively. Comparisons with classical and improved ICP demonstrate its effectiveness, and comparisons with other methods on textured point clouds confirm its advantage. The method builds 3D models of objects rotating at non-uniform speed and non-fixed angles with a single Kinect, and is convenient and practical for simple, fast 3D reconstruction.
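The bisection search for the rotation angle about the turntable axis can be sketched as follows. This toy version assumes a z-aligned axis, noise-free clouds with known row-wise correspondence, and a bracket containing a single minimum; it bisects on a finite-difference derivative of the alignment error, whereas the paper additionally uses arc-distance correspondences and a calibrated axis.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def registration_error(theta, P, Q):
    """Sum of squared distances after rotating P by theta about the (assumed) z axis."""
    return float(np.sum((P @ rot_z(theta).T - Q) ** 2))

def bisect_angle(P, Q, lo=0.0, hi=np.pi, iters=60, h=1e-6):
    """Bisect on the finite-difference derivative of the error to locate its minimum.

    The bracket [lo, hi] is assumed to contain exactly one minimum.
    """
    df = lambda th: registration_error(th + h, P, Q) - registration_error(th - h, P, Q)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if df(mid) > 0.0:   # error increasing: optimum lies to the left
            hi = mid
        else:               # error decreasing: optimum lies to the right
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
P = rng.normal(size=(100, 3))
theta_true = 0.9                    # unknown turntable rotation between two scans
Q = P @ rot_z(theta_true).T

theta_hat = bisect_angle(P, Q)
print(theta_hat)  # ≈ 0.9
```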

10.
Point cloud stitching is widely used in 3D object reconstruction. Because scanning devices are affected by illumination, occlusion, and object size, a single viewpoint cannot capture the complete point cloud of an object. Since the iterative closest point (ICP) algorithm is strongly affected by the initial pose of the clouds and lacks robustness, a stitching algorithm for multi-view point cloud data based on an improved ICP is proposed. When selecting candidate points, the algorithm combines the coordinate axes with a threshold that constrains the candidate search range, then finds the point set with the smallest Euclidean distances and stitches the clouds with ICP. Experiments show clear advantages over traditional ICP in iteration time and stitching accuracy.
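A minimal ICP loop with the kind of per-axis threshold gate described above might look like this. It is a sketch, not the authors' code: the nearest-neighbor search is brute force, the `gate` value of 0.5 is hypothetical, and the synthetic grid cloud with a small known misalignment is chosen so convergence is guaranteed.

```python
import numpy as np

def best_rigid(P, Q):
    """Closed-form (SVD) best-fit rotation and translation mapping P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # reject reflections
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=30, gate=0.5):
    """Minimal ICP: gate candidate matches per axis, take nearest point, refit, repeat."""
    cur = src.copy()
    for _ in range(iters):
        diff = np.abs(cur[:, None, :] - dst[None, :, :])
        d2 = np.sum(diff ** 2, axis=-1)
        d2[np.any(diff > gate, axis=-1)] = np.inf   # per-axis threshold gate
        nn = np.argmin(d2, axis=1)
        ok = np.isfinite(d2[np.arange(len(cur)), nn])
        if not np.any(ok):
            break
        R, t = best_rigid(cur[ok], dst[nn[ok]])
        cur = cur @ R.T + t
    return cur

# Synthetic target: a 4x4x4 grid, so the nearest neighbor is unambiguous.
axis = np.array([-0.75, -0.25, 0.25, 0.75])
dst = np.stack(np.meshgrid(axis, axis, axis), -1).reshape(-1, 3)
theta = 0.05
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ R0.T + np.array([0.05, -0.03, 0.02])   # small known misalignment

aligned = icp(src, dst)
print(np.abs(aligned - dst).max())  # near-zero residual after convergence
```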

11.
This paper presents a new robust image-based modeling system for creating high-quality 3D models of complex objects from a sequence of unconstrained photographs. The images can be acquired by a video camera or hand-held digital camera without the need of camera calibration. In contrast to previous methods, we integrate correspondence-based and silhouette-based approaches, which significantly enhances the reconstruction of objects with few visual features (e.g., uni-colored objects) and improves surface smoothness. Our solution uses a mesh segmentation and charting approach in order to create a low-distortion mesh parameterization suitable for objects of arbitrary genus. A high-quality texture is produced by first parameterizing the reconstructed objects using a segmentation and charting approach, projecting suitable sections of input images onto the model, and combining them using a graph-cut technique. Holes in the texture due to surface patches without projecting input images are filled using a novel exemplar-based inpainting method which exploits appearance space attributes to improve patch search, and blends patches using Poisson-guided interpolation. We analyzed the effect of different algorithm parameters, and compared our system with a laser scanning-based reconstruction and existing commercial systems. Our results indicate that our system is robust, superior to other image-based modeling techniques, and can achieve a reconstruction quality visually not discernible from that of a laser scanner.

12.
We propose a technique for 3-D modeling of objects in remote dynamic situations employing mobile stereo cameras. Since the proposed technique allows independent movement of the cameras employed, 3-D modeling under various environments, such as remote places, can be realized. Our technique has an advantage over others in that camera calibration is not a prerequisite to 3-D modeling before taking images. A 3-D modeling system is described that captures images of remote objects, transfers the images by analog airwaves, and recovers the 3-D shape of the object from the images. It is expected that this technique will improve the efficiency of image information transfer. In the experiment performed, a human walking in a remote place was successfully modeled in 3-D in a laboratory by image transfer. This work was presented in part at the 11th International Symposium on Artificial Life and Robotics, Oita, Japan, January 23-25, 2006.

13.
In this paper, we present a method to model hyperelasticity that is well suited for representing the nonlinearity of real-world objects, as well as for estimating it from deformation examples. Previous approaches suffer several limitations, such as lack of integrability of elastic forces, failure to enforce energy convexity, lack of robustness of parameter estimation, or difficulty modeling cross-modal effects. Our method avoids these problems by relying on a general energy-based definition of elastic properties. The accuracy of the resulting elastic model is maximized by defining an additive model of separable energy terms, which allow progressive parameter estimation. In addition, our method supports efficient modeling of extreme nonlinearities thanks to energy-limiting constraints. We combine our energy-based model with an optimization method to estimate model parameters from force-deformation examples, and we show successful modeling of diverse deformable objects, including cloth, human finger skin, and internal human anatomy in a medical imaging application.

14.
This paper presents an algorithm to model volumetric data and another for non-rigid registration of such models using spheres formulated in the geometric algebra framework. The proposed modeling algorithm, as opposed to the Union of Spheres method, reduces the number of entities (spheres) used to model 3D data. Our proposal is based on the marching cubes idea, using spheres, however, whereas the Union of Spheres uses Delaunay tetrahedrization. The non-rigid registration is accomplished in a deterministic annealing scheme. At the preprocessing stage we segment the objects of interest with a segmentation method based on texture information, embedded in a region-growing scheme. As our final application, we present a scheme for surgical object tracking, again using geometric algebra techniques.

15.
Two recent advances—the use of functionally gradient materials in parts and layered manufacturing technology—have brought to the forefront the need for design and fabrication methodologies for heterogeneous objects. However, current solid modeling systems, a core component of computer-aided design and fabrication tools, are typically purely geometry based, and only after the modeling of product geometry, can a part's non-geometric attributes such as material composition be modeled. This sequential order of modeling leads to unnecessary operations and over-segmented 3D regions during heterogeneous object modeling processes.

To enable efficient design of heterogeneous objects, we propose a novel method, direct face neighborhood operation. This approach combines the geometry and material decisions into a common computational framework, as opposed to the separate and sequential operations in existing modeling systems. We present theories and algorithms for direct face neighborhood alteration, which enables direct alteration of face neighborhoods before 3D regions are formed. This alteration is based on set membership classification (SMC) and region material semantics. The SMC is computationally enhanced by exploiting topological characteristics of heterogeneous objects. After the SMC, boundary evaluation is performed according to the altered face neighborhood. In comparison with other solid modeling methods, the direct face neighborhood alteration method is computationally effective, allows direct B-Rep operations, and is efficient for persistent region naming. A prototype system has been implemented to validate the method, and some examples are presented.


16.
In this paper, we discuss the issue of camera parameter estimation (intrinsic and extrinsic parameters), along with estimation of the geo-location of the camera by using only the shadow trajectories. By observing stationary objects over a period of time, it is shown that only six points on the trajectories formed by tracking the shadows of the objects are sufficient to estimate the horizon line of the ground plane. This line is used along with the extracted vertical vanishing point to calibrate the stationary camera. The method requires as few as two shadow casting objects in the scene and a set of six or more points on the shadow trajectories of these objects. Once camera intrinsic parameters are recovered, we present a novel application where one can accurately determine the geo-location of the camera up to a longitude ambiguity using only three points from these shadow trajectories without using any GPS or other special instruments. We consider possible cases where this ambiguity can also be removed if additional information is available. Our method does not require any knowledge of the date or the time when the images are taken, and recovers the date of acquisition directly from the images. We demonstrate the accuracy of our technique for both steps of calibration and geo-temporal localization using synthetic and real data.

17.
Graphical Models, 2001, 63(1): 1-20
Superquadrics are a family of parametric shapes which can model a diverse set of objects. They have received significant attention because of their compact representation and robust methods for recovery of 3D models. However, their assumption of intrinsic symmetry fails in modeling numerous real-world examples such as the human body, animals, and other naturally occurring objects. In this paper, we present a novel approach, called the extended superquadric, to extend the representation power of superquadrics with exponent functions. An extended superquadric model can be deformed in any direction because it extends the exponents of superquadrics from constants to functions of the latitude and longitude angles in the spherical coordinate system. Thus, extended superquadrics can model more complex shapes than superquadrics. It also maintains many desired properties of superquadrics, such as compactness, controllability, and intuitive meaning, which are all advantageous for shape modeling, recognition, and reconstruction. In this paper, besides the use of extended superquadrics for modeling, we also discuss the recovery of extended superquadrics from 3D information (reconstruction). Experiments on both realistic modeling and extended superquadric fitting are presented. Our results are very encouraging and indicate that the use of extended superquadrics has potential benefits for the generation of synthetic images for computer graphics, and that the extended superquadric is also a promising paradigm for shape representation and recovery in computer vision.
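The extension from constant exponents to exponent functions of latitude and longitude can be sketched directly from the standard superquadric parameterization. The particular `eps1_fn`/`eps2_fn` below are arbitrary illustrations, not fits to any data:

```python
import numpy as np

def spow(base, p):
    """Signed power sign(base) * |base|**p, used throughout the superquadric equations."""
    return np.sign(base) * np.abs(base) ** p

def extended_superquadric(a, eps1_fn, eps2_fn, n=40):
    """Sample an extended superquadric whose exponents vary with latitude/longitude."""
    eta = np.linspace(-np.pi / 2, np.pi / 2, n)   # latitude
    omega = np.linspace(-np.pi, np.pi, n)         # longitude
    eta, omega = np.meshgrid(eta, omega)
    e1, e2 = eps1_fn(eta), eps2_fn(omega)         # exponents as angle functions
    x = a[0] * spow(np.cos(eta), e1) * spow(np.cos(omega), e2)
    y = a[1] * spow(np.cos(eta), e1) * spow(np.sin(omega), e2)
    z = a[2] * spow(np.sin(eta), e1)
    return np.stack([x, y, z], axis=-1)

# Constant exponents of 1.0 would reduce this to an ordinary ellipsoid; letting
# eps2 vary with longitude makes one side boxier than the other (asymmetric).
pts = extended_superquadric(
    a=(1.0, 1.0, 1.0),
    eps1_fn=lambda eta: np.ones_like(eta),
    eps2_fn=lambda om: 0.6 + 0.4 * (om / np.pi),  # ranges 0.2 .. 1.0 around the equator
)
print(pts.shape)  # (40, 40, 3) surface samples
```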

18.
This project aims to develop a three-dimensional (3D) model reconstruction system using images acquired from a mobile camera. It consists of four major steps: camera calibration, volumetric model reconstruction, surface modeling, and texture mapping. A novel online scale factor estimation is developed to enhance the accuracy of the coplanar camera calibration. For the volumetric modeling, voting-based shape-from-silhouette first generates a coarse model, which is then refined by a photo-consistency check using a novel 3D voxel mask. Our scheme can handle concave surfaces in a sophisticated way. Finally, the surface model is formed with the original images mapped onto it. 3D models of some test objects are presented.
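The carving idea behind shape-from-silhouette can be sketched with a toy orthographic setup: a voxel survives only if it projects inside every binary silhouette. The two-view rig, image resolution, and ball-shaped object are assumptions for illustration; the paper uses calibrated real cameras, a voting scheme, and a later photo-consistency refinement.

```python
import numpy as np

def carve(grid_pts, silhouettes, project_fns):
    """Keep only the voxels whose projection lands inside every binary silhouette."""
    keep = np.ones(len(grid_pts), dtype=bool)
    for sil, proj in zip(silhouettes, project_fns):
        uv = proj(grid_pts)                 # (N, 2) integer pixel coordinates
        keep &= sil[uv[:, 1], uv[:, 0]]
    return grid_pts[keep]

# Toy setup: a 32^3 voxel grid in [-1,1]^3 and two orthographic views of a
# radius-0.5 ball centered at the origin.
n, res = 32, 64
axis = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1).reshape(-1, 3)

def make_sil():
    u = np.linspace(-1.0, 1.0, res)
    uu, vv = np.meshgrid(u, u)
    return (uu ** 2 + vv ** 2) <= 0.25      # circular silhouette of the ball

def ortho(dims):
    # orthographic projection: drop one axis, map [-1,1] to pixel indices
    return lambda p: np.clip(((p[:, dims] + 1.0) / 2.0 * (res - 1)).astype(int),
                             0, res - 1)

sils = [make_sil(), make_sil()]
projs = [ortho([0, 1]), ortho([1, 2])]      # front view (x,y) and side view (y,z)
model = carve(grid, sils, projs)
print(len(model), len(grid))                # carved subset of the full grid
```

Two views leave the classic bicylinder-shaped hull; more silhouettes tighten it toward the true shape.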

19.
Motivated by the need for correct and robust 3D models of neuronal processes, we present a method for reconstruction of spatially realistic and topologically correct models from planar cross sections of multiple objects. Previous work in 3D reconstruction from serial contours has focused on reconstructing one object at a time, potentially producing inter-object intersections between slices. We have developed a robust algorithm that removes these intersections using a geometric approach. Our method not only removes intersections but can guarantee a given minimum separation distance between objects. This paper describes the algorithm for geometric adjustment, proves correctness, and presents several results of our high-fidelity modeling.

20.
Putting Objects in Perspective   Cited by: 2 (self-citations: 0, others: 2)
Image understanding requires not only individually estimating elements of the visual world but also capturing the interplay among them. In this paper, we provide a framework for placing local object detection in the context of the overall 3D scene by modeling the interdependence of objects, surface orientations, and camera viewpoint. Most object detection methods consider all scales and locations in the image as equally likely. We show that with probabilistic estimates of 3D geometry, both in terms of surfaces and world coordinates, we can put objects into perspective and model the scale and location variance in the image. Our approach reflects the cyclical nature of the problem by allowing probabilistic object hypotheses to refine geometry and vice versa. Our framework allows painless substitution of almost any object detector and is easily extended to include other aspects of image understanding. Our results confirm the benefits of our integrated approach.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
