1.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot, as well as of any independent motion present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects in the scene. In this paper, we introduce the analysis of the intrinsic features of omnidirectional motion fields, in combination with gyroscopic information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.
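For a purely translating camera, the singular point of the flow field (the focus of expansion) can be recovered by linear least squares, since every flow vector is radial from it. A minimal Python sketch on synthetic flow (the point layout and flow magnitudes are illustrative assumptions, not data from the paper):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares estimate of the focus of expansion (FOE).

    For a purely translating camera, each flow vector v at image point p
    is radial from the FOE, i.e. (p - foe) x v = 0, so each flow vector
    yields one linear equation: vy*fx - vx*fy = px*vy - py*vx.
    """
    points = np.asarray(points, float)
    flows = np.asarray(flows, float)
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = points[:, 0] * flows[:, 1] - points[:, 1] * flows[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from (3, 2):
pts = np.array([[0., 0.], [10., 0.], [0., 10.], [7., 9.]])
flow = 0.1 * (pts - np.array([3., 2.]))
print(focus_of_expansion(pts, flow))  # ≈ [3. 2.]
```

With real flow the same system is solved robustly (e.g. with RANSAC), since independently moving objects violate the radial-flow assumption.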
2.
Automatic high-resolution optoelectronic photogrammetric 3D surface geometry acquisition system
A fast, high-resolution, automatic, non-contact 3D surface geometry measuring system using a photogrammetric optoelectronic
technique based on lateral-photoeffect diode detectors has been developed. Designed for the acquisition of surface geometries
such as machined surfaces, biological surfaces, and deformed parts, the system can be used in design, manufacturing, inspection,
and range finding. A laser beam is focused and scanned onto the surface of the object to be measured. Two cameras in stereo
positions capture the reflected light from the surface at 10 kHz. Photogrammetric triangulation quickly transforms the pair
of 2D signals created by the camera detectors into 3D coordinates of the light spot. Because only one small spot on the object
is illuminated at a time, the stereo correspondence problem is solved in real time. The resolution is determined by a 12-bit
A/D converter and can be improved up to 25 600 by oversampling. The irregular 3D data can be regularized for use with image-based algorithms.
Received: 8 October 1996 / Accepted: 3 February 1997
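The photogrammetric triangulation step, turning a pair of 2D detector readings into a 3D spot position, can be sketched with the standard linear (DLT) method. The projection matrices and spot position below are hypothetical; the system's calibrated stereo geometry would replace them:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one light-spot position from two views.

    P1, P2 : 3x4 projection matrices of the stereo cameras.
    x1, x2 : 2D spot positions measured on the two detectors.
    Returns the 3D point minimizing the algebraic reprojection error.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # homogeneous solution (null space of A)
    return X[:3] / X[3]

# Two hypothetical cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
X = np.array([0.5, 0.2, 4.0])          # ground-truth spot position
x1 = X[:2] / X[2]                      # noise-free projection in camera 1
x2 = (X - [1, 0, 0])[:2] / X[2]        # noise-free projection in camera 2
print(triangulate(P1, P2, x1, x2))     # ≈ [0.5 0.2 4. ]
```

Because only one spot is lit at a time, the two detector readings always correspond, so this triangulation can run at the full 10 kHz sample rate.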
3.
In this paper, we address the analysis of 3D shape and shape change in non-rigid biological objects imaged via a stereo light
microscope. We propose an integrated approach for the reconstruction of 3D structure and the motion analysis for images in
which only a few informative features are available. The key components of this framework are: 1) image registration using
a correlation-based approach, 2) region-of-interest extraction using motion-based segmentation, and 3) stereo and motion analysis
using a cooperative spatial and temporal matching process. We describe these three stages of processing and illustrate the
efficacy of the proposed approach using real images of a live frog's ventricle. The reconstructed dynamic 3D structure of
the ventricle is demonstrated in our experimental results, and it agrees qualitatively with the observed images of the ventricle.
4.
5.
A model-based approach to reconstruction of 3D human arm motion from a monocular image sequence taken under orthographic
projection is presented. The reconstruction is divided into two stages. First, a 2D shape model is used to track the arm silhouettes
and second-order curves are used to model the arm based on an iteratively reweighted least-squares method. As a result, 2D
stick figures are extracted. In the second stage, the stick figures are backprojected into the scene. 3D postures are reconstructed
using the constraints of a 3D kinematic model of the human arm. The motion of the arm is then derived as a transition between
the arm postures. Applications of these results are foreseen in the analysis of human motion patterns.
Received: 26 January 1996 / Accepted: 17 July 1997
6.
Guillaume Caron El Mustapha Mouaddib Eric Marchand 《Robotics and Autonomous Systems》2012,60(8):1056-1068
The current work addresses the problem of 3D model tracking in the context of monocular and stereo omnidirectional vision in order to estimate the camera pose. To this end, we track 3D objects modeled by line segments, because the straight-line feature is often used to model the environment. Indeed, we are interested in mobile robot navigation using omnidirectional vision in structured environments. In the case of omnidirectional vision, 3D straight lines are projected as conics in omnidirectional images. Under certain conditions, these conics may have singularities. In this paper, we present two contributions. We first propose a new spherical formulation of pose estimation that removes these singularities, using an object model composed of lines. The theoretical formulation and the validation on synthetic images show that the new formulation clearly outperforms the former image-plane one. The second contribution is the extension of the spherical representation to the stereovision case. We consider in the paper a sensor which combines a camera and four mirrors. Results in various situations show robustness to illumination changes and local mistracking. As a final result, the proposed new stereo spherical formulation allows us to localize a robot online, both indoors and outdoors, where the classical formulation fails.
7.
Xue-Nan Cui Young-Geun Kim Hakil Kim 《International Journal of Control, Automation and Systems》2009,7(5):788-798
This paper proposes a method of detecting movable paths during visual navigation for a robot operating in an unknown structured
environment. The proposed approach detects and segments the floor by computing plane normals from motion fields in image sequences.
A floor is a useful object for mobile robots in structured environments, because it presents traversable paths if existing
static or dynamic objects are removed effectively. In spite of this advantage, it cannot be easily detected from a 2D image.
In this paper, some geometric features observed in the scene and assumptions about images are exploited so that a plane normal
can be employed as an effective clue to separate the floor from the scene. In order to use the plane normal, two methods are
proposed and integrated with a designed iterative refinement process. Then, the floor can be accurately detected even when
mismatched point correspondences are obtained. The results of preliminary experiments on real data demonstrate the effectiveness
of the proposed methods.
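A common way to obtain the plane normal used for this kind of floor segmentation is a least-squares fit to reconstructed 3D points: the normal is the right singular vector of the centred point matrix with the smallest singular value. A short sketch on synthetic points (not the paper's exact estimator, which works from motion fields):

```python
import numpy as np

def plane_normal(points_3d):
    """Least-squares plane normal of a 3D point cloud.

    The normal is the right singular vector associated with the
    smallest singular value of the centred point matrix.
    """
    pts = np.asarray(points_3d, float)
    centred = pts - pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(centred)
    n = Vt[-1]
    return n / np.linalg.norm(n)

# Points on the plane z = 0 (a level floor): the normal is ±(0, 0, 1).
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-1, 1, (50, 2)), np.zeros(50)])
print(plane_normal(floor))
```

Comparing each local patch's fitted normal against the dominant floor normal then separates traversable floor from obstacles.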
8.
Wearable Visual Robots
Research work reported in the literature in wearable visual computing has used exclusively static (or non-active) cameras,
making the imagery and image measurements dependent on the wearer’s posture and motions. It is assumed that the camera is
pointing in a good direction to view relevant parts of the scene at best by virtue of being mounted on the wearer’s head,
or at worst wholly by chance. Even when pointing in roughly the correct direction, any visual processing relying on feature
correspondence from a passive camera is made more difficult by the large, uncontrolled inter-image movements which occur when
the wearer moves, or even breathes. This paper presents a wearable active visual sensor which is able to achieve a level of
decoupling of camera movement from the wearer’s posture and motions by a combination of inertial and visual sensor feedback
and active control. The issues of sensor placement, robot kinematics and their relation to wearability are discussed. The
performance of the prototype robot is evaluated for some essential visual tasks. The paper also discusses potential applications
for this kind of wearable robot.
9.
Henrik I. Christensen Niels O. Kirkeby Steen Kristensen Lars Knudsen Erik Granum 《Robotics and Autonomous Systems》1994,12(3-4):199-207
For navigation in a partially known environment it is possible to provide a model that may be used for guidance in navigation and as a basis for selective sensing. In this paper a navigation system for an autonomous mobile robot is presented. Both navigation and sensing are built around a graphics model, which enables prediction of the expected scene content. The model is used directly for prediction of line segments which, through matching, allow estimation of position and orientation. In addition, the model is used as a basis for a hierarchical stereo matching that enables dynamic updating of the model with unmodelled objects in the environment. For short-term path planning a set of reactive behaviours is used. The reactive behaviours include use of inverse perspective mapping for generation of occupancy grids, a sonar system and simple gaze holding for monitoring of dynamic obstacles. The full system and its component processes are described and initial experiments with the system are briefly outlined.
10.
In this paper, we discuss an appearance-matching approach to the difficult problem of interpreting color scenes containing
occluded objects. We have explored the use of an iterative, coarse-to-fine sum-squared-error method that uses information
from hypothesized occlusion events to perform run-time modification of scene-to-template similarity measures. These adjustments
are performed by using a binary mask to adaptively exclude regions of the template image from the squared-error computation.
At each iteration higher resolution scene data as well as information derived from the occluding interactions between multiple
object hypotheses are used to adjust these masks. We present results which demonstrate that such a technique is reasonably
robust over a large database of color test scenes containing objects at a variety of scales, and tolerates minor 3D object
rotations and global illumination variations.
Received: 21 November 1996 / Accepted: 14 October 1997
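The core of the occlusion-aware matching step, a sum-squared-error measure with a binary mask that excludes hypothesised occluded pixels from the computation, can be sketched in a few lines (the tiny template and scene values are illustrative):

```python
import numpy as np

def masked_ssd(scene_patch, template, mask):
    """Sum-squared error between a scene patch and a template,
    with a binary mask adaptively excluding regions of the template
    (e.g. pixels hypothesised to be occluded by another object)."""
    diff = (scene_patch - template) * mask
    return float(np.sum(diff ** 2))

template = np.array([[1., 2.], [3., 4.]])
scene = np.array([[1., 2.], [9., 4.]])   # pixel (1,0) covered by an occluder
full = np.ones((2, 2))                   # no occlusion hypothesis
mask = np.array([[1., 1.], [0., 1.]])    # exclude the occluded pixel

print(masked_ssd(scene, template, full))  # 36.0 - occlusion dominates the error
print(masked_ssd(scene, template, mask))  # 0.0  - occluded pixel ignored
```

In the paper's coarse-to-fine scheme, the masks are re-derived at each iteration from the current set of competing object hypotheses.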
11.
Taking projective geometry and the principles of binocular stereo vision as its theoretical basis, this work studies 3D reconstruction techniques for mobile robots and builds a fairly accurate model of the scene in the regions of interest that the robot passes through while roaming. A fast modelling method for the robot is designed, and the iterative closest point (ICP) algorithm is used to fuse multiple local 3D scene models. In addition, grid-projection theory is applied to update the global 3D scene model. The 3D scene reconstructed with the grid model is rich in environmental information and precisely described, and can be applied to mobile robot navigation.
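The ICP-based fusion of local models alternates nearest-neighbour matching with a closed-form rigid alignment. A minimal sketch, using the Kabsch algorithm for the alignment step (the point cloud and rotation are synthetic; a real system would also need outlier rejection and an initial guess from odometry):

```python
import numpy as np

def kabsch(src, dst):
    """Optimal rotation/translation aligning src to dst (Kabsch algorithm),
    the closed-form alignment step inside each ICP iteration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Minimal ICP: match each source point to its nearest destination
    point, solve the rigid alignment, and repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]             # brute-force NN
        R, t = kabsch(cur, matched)
        cur = cur @ R.T + t
    return cur

# A local scan rotated 10 degrees about z: ICP should snap it back.
rng = np.random.default_rng(1)
model = rng.uniform(-1, 1, (30, 3))
a = np.radians(10)
Rz = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0, 0, 1]])
scan = model @ Rz.T
aligned = icp(scan, model)
print(np.abs(aligned - model).max())   # small residual
```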
12.
Samia Boukir Patrick Bouthemy François Chaumette Didier Juvin 《Machine Vision and Applications》1998,10(5-6):321-330
This paper presents a local approach for matching contour segments in an image sequence. This study has been primarily motivated
by work concerned with the recovery of 3D structure using active vision. The method to recover the 3D structure of the scene
requires tracking contour segments in an image sequence in real time. Here, we propose an original and robust approach that
is ideally suited for this problem. It is also of more general interest and can be used in any context requiring matching
of line boundaries over time. This method only involves local modeling and computation of moving edges dealing “virtually”
with a contour segment primitive representation. Such an approach brings robustness to contour segmentation instability and
to occlusion, and easiness for implementation. Parallelism has also been investigated using an SIMD-based real-time image-processing
system. This method has been validated with experiments on several real-image sequences. Our results show quite satisfactory
performance and the algorithm runs in a few milliseconds.
Received: 11 December 1996 / Accepted: 8 August 1997
13.
Yi-Ping Hung Chu-Song Chen Kuan-Chung Hung Yong-Sheng Chen Chiou-Shann Fuh 《Machine Vision and Applications》1998,10(5-6):280-291
This paper presents a new multi-pass hierarchical stereo-matching approach for generation of digital terrain models (DTMs)
from two overlapping aerial images. Our method consists of multiple passes which compute stereo matches with a coarse-to-fine
and sparse-to-dense paradigm. An image pyramid is generated and used in the hierarchical stereo matching. Within each pass,
the DTM is refined by using the image pyramid from the coarse to the fine level. At the coarsest level of the first pass,
a global stereo-matching technique, the intra-/inter-scanline matching method, is used to generate a good initial DTM for
the subsequent stereo matching. Thereafter, hierarchical block matching is applied to image locations where features are detected
to refine the DTM incrementally. In the first pass, only the feature points near salient edge segments are considered in block
matching. In the second pass, all the feature points are considered, and the DTM obtained from the first pass is used as the
initial condition for local searching. For the passes after the second pass, 3D interactive manual editing can be incorporated
into the automatic DTM refinement process whenever necessary. Experimental results have shown that our method can successfully
provide accurate DTMs from aerial images. The success of our approach and system has also been demonstrated with flight simulation software.
Received: 4 November 1996 / Accepted: 20 October 1997
14.
A system to navigate a robot into a ship structure
Markus Vincze Minu Ayromlou Carlos Beltran Antonios Gasteratos Simon Hoffgaard Ole Madsen Wolfgang Ponweiser Michael Zillich 《Machine Vision and Applications》2003,14(1):15-25
Abstract. A prototype system has been built to navigate a walking robot into a ship structure. The 8-legged robot is equipped with an active stereo head. From the CAD model of the ship, good viewpoints are selected such that the head can look at locations with sufficient edge features, which are extracted automatically for each view. The pose of the robot is estimated from the features detected by two vision approaches. One approach searches stereo images for junctions and measures their 3-D positions. The other method uses monocular images and tracks 2-D edge features. Robust tracking is achieved with a method of edge projected integration of cues (EPIC). Two inclinometres are used to stabilise the head while the robot moves. The results of the final demonstration, navigating the robot with centimetre accuracy, are given.
15.
Using vanishing points for camera calibration and coarse 3D reconstruction from a single image
In this paper, we show how to calibrate a camera and to recover the geometry and the photometry (textures) of objects from a single image. The aim of this work is to make it possible to walk through and augment reality in a 3D model reconstructed from a single image. The calibration step does not need any calibration target and makes only four assumptions: (1) the single image contains at least two vanishing points, (2) the length (in 3D space) of one line segment in the image (for determining the translation vector) is known, (3) the principal point is the center of the image, and (4) the aspect ratio is fixed by the user. Each vanishing point is determined from a set of parallel lines. These vanishing points help determine a 3D world coordinate system Ro. After having computed the focal length, the rotation matrix and the translation vector are evaluated in turn to describe the rigid motion between Ro and the camera coordinate system Rc. Next, the reconstruction step consists in placing, rotating, scaling, and translating a rectangular 3D box so that it fits as well as possible with the potential objects within the scene as seen through the single image. Each face of the rectangular box is assigned a texture that may contain holes due to invisible parts of certain objects. We show how the textures are extracted and how these holes are located and filled. Our method has been applied to various real images (pictures scanned from books, photographs) and synthetic images.
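Under assumptions (3) and (4) above (principal point at the image centre; here square pixels are also assumed), the focal length follows directly from two vanishing points of orthogonal scene directions: the back-projected rays (v - c, f) must be orthogonal, which gives f² = -(v1 - c)·(v2 - c). A small sketch with synthetic vanishing points:

```python
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """Focal length from two vanishing points of orthogonal directions,
    assuming the principal point is the image centre and square pixels.

    The back-projected rays (v - c, f) must be orthogonal, so
    f^2 = -(v1 - c) . (v2 - c).
    """
    c = np.asarray(principal_point, float)
    d = np.dot(np.asarray(v1, float) - c, np.asarray(v2, float) - c)
    if d >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-d))

# Synthetic check: f = 800, centre (320, 240), orthogonal scene
# directions (1, 0, 1) and (-1, 0, 1) vanish at c ± (f, 0).
f_est = focal_from_vanishing_points([1120, 240], [-480, 240], [320, 240])
print(f_est)  # 800.0
```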
16.
In this paper, we present a method called MODEEP (Motion-based Object DEtection and Estimation of Pose) to detect independently
moving objects (IMOs) in forward-looking infrared (FLIR) image sequences taken from an airborne, moving platform. Ego-motion
effects are removed through a robust multi-scale affine image registration process. Thereafter, areas with residual motion
indicate potential object activity. These areas are detected, refined and selected using a Bayesian classifier. The resulting
regions are clustered into pairs such that each pair represents one object's front and rear end. Using motion and scene knowledge,
we estimate object pose and establish a region of interest (ROI) for each pair. Edge elements within each ROI are used to
segment the convex cover containing the IMO. We show detailed results on real, complex, cluttered and noisy sequences. Moreover,
we outline the integration of our fast and robust system into a comprehensive automatic target recognition (ATR) and action
classification system.
17.
This paper describes a laser-based computer vision system used for automatic fruit recognition. It is based on an infrared
laser range-finder sensor that provides range and reflectance images and is designed to detect spherical objects in non-structured
environments. Image analysis algorithms integrate both range and reflectance information to generate four characteristic primitives
which give evidence of the existence of spherical objects. The output of this vision system includes 3D position, radius and
surface reflectivity of each spherical object. It has been applied to the AGRIBOT orange harvesting robot, where it has achieved good fruit detection rates with few false detections.
18.
Thad Starner Bastian Leibe David Minnen Tracy Westyn Amy Hurst Justin Weeks 《Machine Vision and Applications》2003,14(1):59-71
Abstract. The Perceptive Workbench endeavors to create a spontaneous and unimpeded interface between the physical and virtual worlds.
Its vision-based methods for interaction constitute an alternative to wired input devices and tethered tracking. Objects are
recognized and tracked when placed on the display surface. By using multiple infrared light sources, the object's 3-D shape
can be captured and inserted into the virtual interface. This ability permits spontaneity, since either preloaded objects
or those objects selected at run-time by the user can become physical icons. Integrated into the same vision-based interface
is the ability to identify 3-D hand position, pointing direction, and sweeping arm gestures. Such gestures can enhance selection,
manipulation, and navigation tasks. The Perceptive Workbench has been used for a variety of applications, including augmented
reality gaming and terrain navigation. This paper focuses on the techniques used in implementing the Perceptive Workbench
and the system's performance.
19.
In order to get useful information from various kinds of information sources, we first apply a searching process with query
statements to retrieve candidate data objects (called a hunting process in this paper) and then apply a browsing process to
check the properties of each object in detail by visualizing candidates. In traditional information retrieval systems, the
hunting process determines the quality of the result, since there are only a few candidates left for the browsing process.
In order to retrieve data from widely distributed digital libraries, the browsing process becomes very important, since the
properties of data sources are not known in advance. After getting data from various information sources, a user checks the
properties of data in detail using the browsing process. The result can be used to improve the hunting process or for selecting
more appropriate visualization parameters. Visualizing relationships among data is very important, but will become too
time-consuming if the amount of data in the candidate set is large, for example, over one hundred objects. One of the important
problems in handling information retrieval from a digital library is to create efficient and powerful visualization mechanisms
for the browsing process. One promising way to solve the visualization problem is to map each candidate data object into a
location in three-dimensional (3D) space using a proper distance definition. In this paper, we will introduce the functions
and organization of a system having a browsing navigator to achieve an efficient browsing process in 3D information search
space. This browsing navigator has the following major functions:
1. Selection of the features which determine the distance used for visualization, in order to generate a uniform distribution of candidate data objects in the resulting space.
2. Calculation of the location of the data objects in 2D space using the selected features.
3. Construction of a 3D browsing space by combining 2D spaces, in order to find the required data objects easily.
4. Generation of oblique views of the 3D browsing space and data objects, reducing the overlap of data objects in order to make navigation easy for the user in 3D space.
Examples of this browsing navigator applied to book data are shown.
Received: 15 December 1997 / Revised: June 1999
20.
Vision-based 3-D trajectory tracking for unknown environments
This paper describes a vision-based system for 3-D localization of a mobile robot in a natural environment. The system includes a mountable head with three on-board charge-coupled device cameras that can be installed on the robot. The main emphasis of this paper is on the ability to estimate the motion of the robot independently from any prior scene knowledge, landmark, or extra sensory devices. Distinctive scene features are identified using a novel algorithm, and their 3-D locations are estimated with high accuracy by a stereo algorithm. Using new two-stage feature tracking and iterative motion estimation in a symbiotic manner, precise motion vectors are obtained. The 3-D positions of scene features and the robot are refined by a Kalman filtering approach with a complete error-propagation modeling scheme. Experimental results show that good tracking and localization can be achieved using the proposed vision system.
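The Kalman-filter refinement of landmark positions described above can be illustrated with the simplest case: a static 3D landmark observed directly (measurement matrix H = I). The measurement noise and prior below are illustrative, not the paper's error-propagation model:

```python
import numpy as np

def kalman_update(x, P, z, R):
    """One Kalman update for a static 3D landmark observed directly
    (measurement model H = I): fuses a new stereo measurement z with
    covariance R into the state estimate x with covariance P."""
    S = P + R                      # innovation covariance
    K = P @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ (z - x)        # corrected state
    P_new = (np.eye(len(x)) - K) @ P
    return x_new, P_new

# A landmark at (1, 2, 5) observed with noisy stereo measurements:
rng = np.random.default_rng(2)
truth = np.array([1.0, 2.0, 5.0])
x, P = np.zeros(3), np.eye(3) * 100.0   # vague prior
R = np.eye(3) * 0.01                    # measurement variance (std 0.1)
for _ in range(25):
    z = truth + rng.normal(0, 0.1, 3)
    x, P = kalman_update(x, P, z, R)
print(x)   # close to [1, 2, 5]
```

Each update shrinks the covariance P, so repeated observations from the moving robot steadily refine the landmark estimate.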