Article Search
27 results found (search time: 0 ms)
1.
Recovering articulated shape and motion, especially human body motion, from video is a challenging problem with a wide range of applications in medical studies, sports analysis, animation, and more. Previous work on articulated motion recovery generally requires prior knowledge of the kinematic chain and usually does not address recovery of the articulated shape. The non-rigidity of some articulated parts, e.g. the non-rigid facial motion accompanying human body motion, is completely ignored. We propose a factorization-based approach that recovers the shape, motion, and kinematic chain of an articulated object with non-rigid parts directly from video sequences under a unified framework. The proposed approach is based on modeling articulated non-rigid motion as a set of intersecting motion subspaces. A motion subspace is the linear subspace spanned by the trajectories of an object; it can model either rigid or non-rigid motion. The intersection of the motion subspaces of two linked parts models the motion of an articulated joint or axis. Our approach consists of algorithms for motion segmentation, kinematic chain building, and shape recovery. It handles outliers and can be automated. We test our approach in synthetic and real experiments and demonstrate how to recover an articulated structure with non-rigid parts from a single-view camera without prior knowledge of its kinematic chain.
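The motion-subspace idea can be illustrated concretely (a minimal sketch with made-up data, not the authors' algorithm): under an affine camera, the 2D trajectories of points on a single rigid part stack into a 2F × P matrix that factors as a 2F × 4 motion matrix times a 4 × P shape matrix, so its rank is at most four. This low rank is what makes subspace-based segmentation and factorization possible.

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 20, 30                        # frames, points (made-up sizes)
X = rng.standard_normal((3, P))      # random rigid 3D shape

rows = []
for f in range(F):
    a = 0.1 * f                      # hypothetical camera motion
    cz, sz = np.cos(a), np.sin(a)
    cx, sx = np.cos(0.05 * f), np.sin(0.05 * f)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    M = (Rx @ Rz)[:2, :]             # affine (orthographic) camera
    t = np.array([[0.01 * f], [0.02 * f]])
    rows.append(M @ X + t)           # 2 x P tracked image points
W = np.vstack(rows)                  # 2F x P trajectory matrix

s = np.linalg.svd(W, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print(rank)                          # at most 4: 3 shape dims + translation
```

The intersection of two such subspaces (for linked parts) has lower dimension, which is how the joint or axis is detected.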
2.
Detailed Real-Time Urban 3D Reconstruction from Video   (total citations: 2; self-citations: 0; citations by others: 2)
This paper presents a system for automatic, geo-registered, real-time 3D reconstruction of urban scenes from video. The system collects video streams together with GPS and inertial measurements in order to place the reconstructed models in geo-registered coordinates. It is designed from state-of-the-art real-time modules for all processing steps and employs commodity graphics hardware and standard CPUs to achieve real-time performance. We present the main considerations in designing the system and the steps of the processing pipeline. Our system extends existing algorithms to meet the robustness and variability necessary to operate outside the lab. To account for the large dynamic range of outdoor video, the processing pipeline estimates global camera gain changes in the feature-tracking stage and efficiently compensates for them in stereo estimation without impacting real-time performance. The accuracy required by many applications is achieved with a two-step stereo reconstruction process that exploits redundancy across frames. We show results on real video sequences comprising hundreds of thousands of frames.
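The gain-compensation idea admits a tiny illustration (a sketch with simulated intensities; the paper's actual estimator is not reproduced here): model a global exposure change as a multiplicative gain g between corresponding patch intensities in consecutive frames, which has a closed-form least-squares estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
I1 = rng.uniform(50, 200, size=1000)        # tracked-patch intensities, frame t
true_gain = 1.25                            # simulated exposure change
I2 = true_gain * I1 + rng.normal(0, 1.0, size=I1.shape)  # noisy frame t+1

# closed-form least-squares gain: argmin_g ||I2 - g*I1||^2
g = float(I1 @ I2 / (I1 @ I1))
print(round(g, 3))                          # close to 1.25
```

Dividing one frame's intensities by the estimated gain before stereo matching keeps photometric matching costs consistent across exposure changes.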
3.
Feature tracking and matching in video using programmable graphics hardware   (total citations: 2; self-citations: 0; citations by others: 2)
This paper describes novel implementations of the KLT feature-tracking and SIFT feature-extraction algorithms that run on the graphics processing unit (GPU) and are suitable for video analysis in real-time vision systems. Significant acceleration over standard CPU implementations is obtained by exploiting the parallelism of modern programmable graphics hardware, while the CPU is freed up to run other computations in parallel. Our GPU-based KLT implementation tracks about a thousand features in real time at 30 Hz on 1,024 × 768 video, a 20-fold improvement over the CPU. The GPU-based SIFT implementation extracts about 800 features from 640 × 480 video at 10 Hz, approximately 10 times faster than an optimized CPU implementation.
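Independent of the GPU mapping, the core of one KLT update is solving a 2 × 2 linear system built from image gradients. A minimal single-step sketch on a synthetic, analytically shiftable image (all values made up; real trackers iterate this over pyramids and per-feature windows):

```python
import numpy as np

# analytic test pattern so we can shift it exactly (hypothetical image)
def img(x, y):
    return np.sin(0.10 * x) * np.cos(0.13 * y) + 0.5 * np.sin(0.07 * x + 0.05 * y)

ys, xs = np.mgrid[0:64, 0:64].astype(float)
d_true = np.array([0.3, -0.2])              # (dx, dy) sub-pixel motion
I = img(xs, ys)
J = img(xs + d_true[0], ys + d_true[1])     # next frame: J(x) = I(x + d)

# one Lucas-Kanade step: solve sum(g g^T) d = sum(g (J - I))
Ix = np.gradient(I, axis=1)
Iy = np.gradient(I, axis=0)
A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
              [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
b = np.array([np.sum(Ix * (J - I)), np.sum(Iy * (J - I))])
d = np.linalg.solve(A, b)
print(np.round(d, 2))                       # recovers roughly (0.3, -0.2)
```

On the GPU, the gradient products and sums map naturally to per-pixel shader passes followed by a reduction, which is where the reported speedup comes from.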
4.
We describe a novel quadrotor Micro Air Vehicle (MAV) system designed to run computer vision algorithms within the flight control loop. The main contribution is a MAV system that runs both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefit of IMU-vision synchronization for egomotion estimation in additional experiments, where we use the synchronized measurements for pose estimation with the 2pt+gravity formulation of the PnP problem.
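Why a gravity prior helps can be sketched as follows (a hedged illustration, not the paper's closed-form 2pt solver: it uses three points and a brute-force grid over yaw to sidestep the minimal case's ambiguities): with roll and pitch fixed by the IMU's gravity estimate, only the yaw angle and the translation remain unknown, and for any fixed yaw the translation appears linearly in the projection equations.

```python
import numpy as np

def Rz(th):                      # rotation about the gravity-aligned axis (yaw)
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# three known 3D points and a made-up ground-truth pose
X = np.array([[1.0, -0.5, 4.0], [-0.8, 0.7, 5.0], [0.3, 1.2, 4.5]])
th_true, t_true = 0.4, np.array([0.2, -0.1, 0.3])
Xc = (Rz(th_true) @ X.T).T + t_true
u = Xc[:, :2] / Xc[:, 2:3]       # noise-free normalized image coordinates

best = (np.inf, None, None)
for th in np.linspace(-np.pi, np.pi, 3601):   # brute-force yaw grid
    A, b = [], []
    for Xi, ui in zip(X, u):
        y = Rz(th) @ Xi
        # (y + t)_x - u_x (y + t)_z = 0, and likewise for the y coordinate
        A.append([1.0, 0.0, -ui[0]]); b.append(ui[0] * y[2] - y[0])
        A.append([0.0, 1.0, -ui[1]]); b.append(ui[1] * y[2] - y[1])
    A, b = np.array(A), np.array(b)
    t = np.linalg.lstsq(A, b, rcond=None)[0]  # translation is linear
    r = np.linalg.norm(A @ t - b)             # algebraic residual
    if r < best[0]:
        best = (r, th, t)

_, th_hat, t_hat = best
print(round(float(th_hat), 2), np.round(t_hat, 2))
```

A closed-form minimal solver replaces the grid search in practice; the point of the sketch is only that gravity reduces the rotational unknowns from three to one.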
5.
International Journal of Computer Vision - This work presents and evaluates a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach...
6.
In this paper, we present an algorithm to probabilistically estimate object shapes in a 3D dynamic scene using silhouette information derived from multiple geometrically calibrated video camcorders. The scene is represented by a 3D volume, and every object in the scene is associated with a distinctive label that represents its occupancy at every voxel location. The label links together automatically learned view-specific appearance models of the respective object, so as to avoid photometric calibration of the cameras. Generative probabilistic sensor models are derived by analyzing the dependencies between the sensor observations and object labels. Bayesian reasoning is then applied to achieve reconstruction that is robust to real-world challenges such as lighting variations and changing backgrounds. Our main contribution is to explicitly model the visual occlusion process and show that: (1) static objects (such as trees or lamp posts), as parts of the pre-learned background model, can be automatically recovered as a byproduct of the inference; and (2) ambiguities due to inter-occlusion between multiple dynamic objects can be alleviated, and the final reconstruction quality is drastically improved. Several indoor and outdoor real-world datasets are evaluated to verify our framework.
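The silhouette-based reasoning reduces, in its simplest deterministic form, to visual-hull carving (a toy sketch with a synthetic ball and orthographic views; the paper's probabilistic occlusion model is far richer): a voxel is kept only if it projects inside the silhouette in every view, so the carved hull always contains the true object.

```python
import numpy as np

# synthetic scene: a ball in a voxel grid stands in for the object
N, r = 32, 10.0
z, y, x = np.mgrid[0:N, 0:N, 0:N].astype(float) - (N - 1) / 2
obj = x**2 + y**2 + z**2 <= r**2

# binary silhouettes from three orthographic views (projections along axes)
sils = [obj.any(axis=a) for a in range(3)]

# carve: a voxel survives only if it projects inside every silhouette
hull = sils[0][None, :, :] & sils[1][:, None, :] & sils[2][:, :, None]
print(int(obj.sum()), int(hull.sum()))  # hull is a superset of the object
```

With only three views the hull overestimates the ball (the classic intersection-of-cylinders shape); adding views, and the paper's per-voxel probabilistic occupancy with explicit occlusion modeling, tightens the estimate.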
9.
Stratified self-calibration with the modulus constraint   (total citations: 10; self-citations: 0; citations by others: 10)
In computer vision, and especially for 3D reconstruction, one of the key issues is retrieving the calibration parameters of the camera. These are needed to obtain metric information about the scene from its images. Often these parameters are obtained through cumbersome calibration procedures, but explicit calibration of the camera can be avoided: self-calibration is based on finding the set of calibration parameters that satisfy some constraints (e.g., constant calibration parameters). Several techniques have been proposed, but it often proved difficult to reach a metric calibration in one step. Therefore, this paper proposes a stratified approach that goes from projective through affine to metric. The key concept is the modulus constraint, which allows retrieval of the affine calibration for constant intrinsic parameters and is also suited for use in conjunction with scene knowledge. In addition, once the affine calibration is known, the constraint can also be used to cope with a changing focal length.
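The constraint itself can be sketched briefly (a reconstruction from the standard formulation, not quoted from the paper). For constant intrinsics K, the infinite homography between two views is conjugate to a rotation R, so its eigenvalues must all have the same modulus; imposing this on its characteristic polynomial yields one polynomial equation per view pair:

```latex
\[
H_{\infty} \sim K R K^{-1}
\quad\Longrightarrow\quad
\{\lambda_1, \lambda_2, \lambda_3\} = \{\, l,\; l e^{i\theta},\; l e^{-i\theta} \,\}.
\]
Writing the characteristic polynomial as \(\lambda^3 - a\lambda^2 + b\lambda - c\),
\[
a = l\,(1 + 2\cos\theta), \qquad
b = l^2 (1 + 2\cos\theta) = l\,a, \qquad
c = l^3,
\]
so eliminating \(l\) via \(b = l\,a\) and \(c = l^3\) gives the modulus constraint
\[
a^3 c = b^3 .
\]
```

Since a, b, c are polynomial in the unknown plane at infinity, each view pair contributes one such equation, which is what allows the affine stratum to be recovered before upgrading to metric.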
10.
A stereo-vision system for support of planetary surface exploration   (total citations: 2; self-citations: 0; citations by others: 2)
In this paper, we present a system developed for the European Space Agency (ESA) to support planetary exploration. The system sent to the planetary surface consists of a rover and a lander; the lander carries a stereo head mounted on a pan-tilt mechanism. This vision system is used both for modeling the terrain and for localizing the rover, and both tasks are necessary for the rover's navigation. Because of the stress that occurs during flight, the stereo-vision system must be recalibrated once it is deployed on the planet. Practical limitations make it unfeasible to use a known calibration pattern for this purpose; therefore, a new calibration procedure had to be developed that works on images of the planetary environment itself. This automatic procedure recovers the relative orientation of the cameras and the pan and tilt axes, as well as the exterior orientation of all the images. The same images are subsequently used to reconstruct the 3D structure of the terrain: a dense stereo-matching algorithm computes a disparity map after rectification, and all the disparity maps are then merged into a single digital terrain model by a simple and elegant procedure proposed in the paper. The fact that the same images serve both calibration and 3D reconstruction is important, since the communication bandwidth is in general very limited. In addition to navigation and path planning, the 3D terrain model is also used for virtual-reality simulations of the mission, in which the model is texture-mapped with the original images. The system has been implemented, and the first tests on the ESA planetary terrain testbed were successful.
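The disparity-map step can be illustrated with a toy block matcher on a single rectified scanline (a sketch with synthetic data; the paper's dense matcher is more sophisticated): for each pixel, slide a small window over a range of candidate disparities and keep the offset with the smallest sum of absolute differences (SAD).

```python
import numpy as np

rng = np.random.default_rng(3)
W, half, d_true = 200, 5, 7          # scanline width, window radius, true shift
left = rng.uniform(0, 255, W)        # random-texture rectified left scanline
right = np.roll(left, -d_true)       # right view: left[x + d_true] lands at x

disp = np.zeros(W, int)
for x in range(half, W - half - 16):
    win = right[x - half: x + half + 1]
    # evaluate 16 candidate disparities, keep the cheapest (SAD cost)
    costs = [np.abs(left[x + d - half: x + d + half + 1] - win).sum()
             for d in range(16)]
    disp[x] = int(np.argmin(costs))

valid = disp[half: W - half - 16]
print(int(np.all(valid == d_true)))  # prints 1: d_true recovered everywhere
```

After rectification the search is one-dimensional like this, which is what makes dense matching tractable; merging many such per-image disparity maps then yields the digital terrain model.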
Copyright © Beijing Qinyun Technology Development Co., Ltd.