Similar Documents
20 similar documents found.
1.
This paper proposes a fast image-sequence-based navigation approach for a flat route represented by sparse waypoints. Instead of purely optimizing the length of the path, it aims to speed up navigation by lengthening the distance between consecutive waypoints. When local visual homing with variable velocity is applied for robot navigation between two waypoints, the robot's speed changes according to the distance between them. Because a long distance implies a large scale difference between the robot's view and the waypoint image, a log-polar transform is introduced to find correspondences between images and infer a coarse motion vector. To maintain navigation accuracy, our prior work on local visual homing with SIFT feature matching is adopted when the robot is relatively close to the waypoint. Experiments validate the proposed approach on a multiple-waypoint route. Compared to prior work on visual homing with SIFT feature matching, the proposed approach requires fewer waypoints and improves navigation speed without compromising accuracy.
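The abstract does not detail the log-polar step; as a minimal sketch of why such a resampling helps with scale differences (the function name, grid sizes, and nearest-neighbour lookup are our own assumptions, not the paper's implementation), an image patch can be remapped so that uniform scaling about the center becomes an approximate shift along the log-radius axis:

```python
import math

def log_polar(img, n_r=32, n_theta=32):
    """Resample a square grayscale image (list of rows) onto a log-polar
    grid by nearest-neighbour lookup; a uniform scaling of the input
    becomes (approximately) a shift along the log-radius axis."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    out = []
    for i in range(n_r):
        # log-spaced radii from ~1 up to r_max
        r = math.exp(math.log(r_max) * (i + 1) / n_r)
        row = []
        for j in range(n_theta):
            t = 2.0 * math.pi * j / n_theta
            y = min(max(int(round(cy + r * math.sin(t))), 0), h - 1)
            x = min(max(int(round(cx + r * math.cos(t))), 0), w - 1)
            row.append(img[y][x])
        out.append(row)
    return out
```

A scale change between the robot's view and the waypoint image then appears as a 1-D translation in this representation, which is much cheaper to search than a 2-D scale-space match.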

2.

The robustness of a visual servoing task depends mainly on the reliability of the visual selections captured from a sensor at each robot position. The task function can be described as a regulation, through the control law, of the camera velocities. In this paper we propose a new approach that does not depend on matching and tracking results. We replace the classical minimization cost with a new function based on probability distributions and the Bhattacharyya distance. To guarantee more robustness, the information related to the observed images is expressed as a combination of orientation selections. The new visual selections are computed from the distribution of Histogram of Oriented Gradients (HOG) bins; to each bin we assign a random variable representing gradient vectors in a particular direction. The new entries are not used to establish equations of visual motion but are inserted directly into the control loop. A new formulation of the interaction matrix is presented, based on the optical flow constraint and an interpolation function, which leads to more efficient control behaviour and greater positioning accuracy. Experiments demonstrate the robustness of the proposed approach with respect to varying workspace conditions.
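As a small illustration of the distance underlying the proposed cost (a pure-Python sketch; the function name and normalisation are ours, and the paper embeds this in a full control loop), the Bhattacharyya distance between two normalised orientation histograms is:

```python
import math

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two orientation histograms
    (e.g. HOG bin counts); 0 for identical distributions, large for
    non-overlapping ones."""
    sp, sq = sum(p) + eps, sum(q) + eps
    # Bhattacharyya coefficient of the normalised histograms
    bc = sum(math.sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
    return -math.log(max(bc, eps))
```

Minimising such a distance between the current and desired HOG distributions avoids explicit feature matching and tracking, which is the point of the approach described above.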


3.
We present a fast and efficient homing algorithm based on Fourier transformed panoramic images. By continuously comparing Fourier coefficients calculated from the current view with coefficients representing the goal location, a mobile robot is able to find its way back to known locations. No prior knowledge about the orientation with respect to the goal location is required, since the Fourier phase is used for a fast sub-pixel orientation estimation. We present homing runs performed by an autonomous mobile robot in an office environment. In a more comprehensive investigation the algorithm is tested on an image database recorded by a small mobile robot in a toy house arena. Catchment areas for the proposed algorithm are calculated and compared to results of a homing scheme described in [M. Franz, B. Schölkopf, H. Mallot, H. Bülthoff, Where did I take that snapshot? Scene based homing by image matching, Biological Cybernetics 79 (1998) 191–202] and a simple homing strategy using neighbouring views. The results show that a small number of coefficients is sufficient to achieve a good homing performance. Also, a coarse-to-fine homing strategy is proposed in order to achieve both a large catchment area and a high homing accuracy: the number of Fourier coefficients used is increased during the homing run.
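A minimal sketch of the phase-based orientation estimate (pure Python; the function name and the restriction to the first harmonic are our simplifications): a circular shift of a 1-D panoramic row appears as a phase offset in its Fourier transform, so the shift can be read off the phase of the first harmonic:

```python
import cmath
import math

def estimate_rotation(current, goal):
    """Estimate the circular shift (in pixels) between two 1-D panoramic
    image rows from the phase of their first Fourier harmonic."""
    n = len(current)

    def harmonic1(signal):
        # first DFT coefficient (k = 1)
        return sum(v * cmath.exp(-2j * math.pi * m / n)
                   for m, v in enumerate(signal))

    # a shift s makes phase(goal) - phase(current) = 2*pi*s/n
    dphi = cmath.phase(harmonic1(goal)) - cmath.phase(harmonic1(current))
    return (dphi * n / (2 * math.pi)) % n
```

Using more harmonics (as the coarse-to-fine strategy above does) trades computation for accuracy and robustness.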

4.
Many insects and animals exploit navigation systems of their own to find their way in space, and biologically inspired methods have been introduced for landmark-based navigation of mobile robots. These methods determine the movement direction from a snapshot image taken at home and another snapshot taken at the current position. In this paper, we suggest a new landmark-based matching method for robotic homing navigation that first computes the distance to each landmark from ego-motion and estimates the landmark arrangement in the snapshot image. Landmark vectors are then used to localize the robotic agent in the environment and to choose the appropriate direction to return home. As a result, this method has a higher success rate for returning home from an arbitrary position than conventional image-matching algorithms.
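The paper's method is more involved, but the core landmark-vector idea can be sketched in a few lines (the names and the 2-D toy setup are ours): if each landmark's position is known relative to both the current pose and the home pose, the difference of the two vectors is the same for every landmark and equals the homing vector, so averaging over landmarks makes the estimate robust to per-landmark noise:

```python
def homing_vector(current_lms, home_lms):
    """Average landmark-vector homing: for landmark i seen as vector c_i
    from the current pose and h_i from home, c_i - h_i = home - current
    for every i, so the mean difference is the homing vector."""
    n = len(current_lms)
    return tuple(
        sum(c[k] - h[k] for c, h in zip(current_lms, home_lms)) / n
        for k in range(2)
    )
```

With noisy per-landmark distance estimates (as obtained from ego-motion above), the averaging step suppresses independent errors.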

5.
The control of a robot system using camera information is a challenging task under unpredictable conditions, such as feature point mismatches and changing scene illumination. This paper presents a solution for the visual control of a nonholonomic mobile robot in demanding real-world circumstances based on machine learning techniques. A novel intelligent approach for mobile robots using neural networks (NNs), a learning from demonstration (LfD) framework, and the epipolar geometry between two views is proposed and evaluated in a series of experiments. A direct mapping from the image space to the actuator command is conducted in two phases. In an offline phase, the NN–LfD approach is employed to relate the feature position in the image plane with the angular velocity for lateral motion correction. The online phase uses a switching vision-based scheme between an epipole-based linear velocity controller and the NN–LfD-based angular velocity controller, where the selection depends on the feature distance from a pre-defined area of interest in the image. In total, 18 architectures and 6 learning algorithms are tested in order to find an optimal solution for robot control. The best training outcome for each learning algorithm is then employed in real time in order to determine the optimal NN configuration for robot orientation correction. Experiments conducted on a nonholonomic mobile robot in a structured indoor environment confirm excellent performance with respect to system robustness and positioning accuracy at the desired location.

6.
We introduce a novel method for visual homing. Using this method a robot can be sent to desired positions and orientations in 3D space specified by single images taken from those positions. Our method is based on recovering the epipolar geometry relating the current image taken by the robot and the target image. Using the epipolar geometry, most of the parameters which specify the differences in position and orientation of the camera between the two images are recovered. However, since not all of the parameters can be recovered from two images, we have developed specific methods to bypass these missing parameters and resolve the ambiguities that exist. We present two homing algorithms for two standard projection models, weak and full perspective. Our method determines the path of the robot online; the starting position of the robot is largely unconstrained, and a 3D model of the environment is not required. The method is almost entirely memoryless, in the sense that at every step the path to the target position is determined independently of the previous path taken by the robot. Because of this property the robot may be able, while moving toward the target, to perform auxiliary tasks or to avoid obstacles, without impairing its ability to eventually reach the target position. We have performed simulations and real experiments which demonstrate the robustness of the method and show that the algorithms always converge to the target pose.

7.
The paper describes a visual method for the navigation of autonomous floor-cleaning robots. The method constructs a topological map with metrical information, in which place nodes are characterized by panoramic images and by particle clouds representing position estimates. The current image and position estimate of the robot are related to the landmark images and position estimates stored in the map nodes through a holistic visual homing method which provides bearing and orientation estimates. Based on these estimates, the position estimate of the robot is updated by a particle filter. The robot's position estimates are used to guide the robot along parallel, meandering lanes and are also assigned to newly created map nodes which later serve as landmarks. Computer simulations and robot experiments confirm that the robot position estimate obtained by this method is sufficiently accurate to keep the robot on parallel lanes, even in the presence of large random and systematic odometry errors. This ensures an efficient cleaning behavior with almost complete coverage of a rectangular area and only little repeated coverage. Furthermore, the topological-metrical map can be used to completely cover rooms or apartments with multiple meandering parts.

8.
A Set Theoretic Approach to Dynamic Robot Localization and Mapping
This paper addresses the localization and mapping problem for a robot moving through a (possibly) unknown environment in which indistinguishable landmarks can be detected. A set-theoretic approach to the problem is presented. Computationally efficient algorithms for measurement-to-feature matching and for the estimation of landmark positions and of robot location and heading are derived, in terms of uncertainty regions, under the hypothesis that the errors affecting all sensor measurements are unknown but bounded. The proposed technique is validated in both simulation and experimental setups.
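A toy sketch of the unknown-but-bounded idea (pure Python; the names and the axis-aligned box shape are our simplification — the paper's uncertainty regions need not be boxes): each bounded-error measurement of a known landmark constrains the robot to a region, and intersecting the regions from several measurements shrinks the position estimate:

```python
def box_from_measurement(landmark, meas, bound):
    """Axis-aligned box of robot positions consistent with a bounded-error
    relative measurement `meas` of a known `landmark`:
    robot = landmark - meas + error, with |error| <= bound componentwise."""
    lo = tuple(l - m - bound for l, m in zip(landmark, meas))
    hi = tuple(l - m + bound for l, m in zip(landmark, meas))
    return lo, hi

def intersect_boxes(b1, b2):
    """Intersect two boxes; None means the measurements are inconsistent
    with the assumed error bound."""
    lo = tuple(map(max, b1[0], b2[0]))
    hi = tuple(map(min, b1[1], b2[1]))
    return (lo, hi) if all(a <= b for a, b in zip(lo, hi)) else None
```

No probabilistic error model is needed: the estimate is the set of all positions consistent with every measurement, which is guaranteed to contain the true position as long as the error bounds hold.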

9.
To address the complexity and poor generality of marking, extracting, and matching geometric image features in traditional visual servoing methods, this paper proposes an image-moment-based four-degree-of-freedom (4-DOF) visual servoing method for robots. First, a nonlinear incremental mapping between image moments and robot pose is established for an eye-in-hand system, providing the theoretical basis for robot visual servo control using image moments. Then, without calibrating the camera or the hand-eye relationship, an image-moment-based visual servo control scheme is designed by exploiting the nonlinear mapping capability of a back-propagation (BP) neural network. Finally, the trained neural network is used for visual servo tracking control. Experimental results show that the algorithm achieves a tracking accuracy of 0.5 mm in position and 0.5° in orientation, verifying its effectiveness and good servoing performance.

10.
A new docking method for mobile robots is proposed. The method uses a pre-captured reference image to define the robot's desired docking state (desired position and orientation). The scale-invariant feature transform (SIFT) and a bidirectional best-bin-first (BBF) feature matching algorithm are used to match the current image at the docking station against the reference image to obtain visual feedback. An epipole-based servoing strategy aligns the robot with the reference image, a centroid tracking method keeps the target from leaving the field of view, the RANSAC algorithm solves for the affine transformation between the current and reference images, and a final-stage control strategy achieves precise docking. The method requires neither an environment model nor artificial markers. Experimental results in an indoor environment confirm its effectiveness.

11.
The subreflector adjustment mechanism of the 65-m radio telescope at the Shanghai Astronomical Observatory is a Stewart-type parallel robot. To promptly detect accuracy degradation of this parallel robot caused by mechanical wear or accumulated error, inclinometers are used to measure the attitude of its moving platform; the root-mean-square error of the platform attitude is computed and compared with the design specification, so that the operator can decide whether maintenance or a homing (return-to-zero) operation is needed. To improve the robot's maintainability, two homing schemes are designed, one based on photoelectric sensors and one based on magnetic scales (magnetostrictive displacement sensors). The principles of both schemes, and of using the homing operation to check the accuracy of the photoelectric sensors and magnetic scales, are analyzed. The situations in which the parallel robot requires homing are summarized, and corresponding homing control strategies are given. Experiments show that the proposed homing control strategy is an effective solution to the homing problem of this parallel robot.

12.
Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans
A mobile robot exploring an unknown environment has no absolute frame of reference for its position other than the features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is difficult to accurately detect and localize landmarks (such as corners and occlusions) in the environment from the range scans. In this paper, we develop two new iterative algorithms that register a range scan to a previous scan so as to compute relative robot positions in an unknown environment, avoiding the above problems. The first algorithm matches data points with tangent directions in the two scans and minimizes a distance function in order to solve for the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares problem to compute the relative pose of the two scans. Our methods work in curved environments and can handle partial occlusions by rejecting outliers.
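The closed-form core of the second algorithm's point-to-point least-squares step can be sketched as follows (pure Python; the function name and interface are ours, and a real scan matcher must first establish the correspondences and reject outliers before this step):

```python
import math

def register_2d(p, q):
    """Closed-form least-squares rigid transform (theta, t) aligning
    corresponding point sets p -> q, minimising
    sum ||R(theta) p_i + t - q_i||^2 (2-D analogue of the Kabsch step)."""
    n = len(p)
    pcx, pcy = sum(x for x, _ in p) / n, sum(y for _, y in p) / n
    qcx, qcy = sum(x for x, _ in q) / n, sum(y for _, y in q) / n
    s_cos = s_sin = 0.0
    for (px, py), (qx, qy) in zip(p, q):
        ax, ay = px - pcx, py - pcy    # centred source point
        bx, by = qx - qcx, qy - qcy    # centred target point
        s_cos += ax * bx + ay * by     # dot term
        s_sin += ax * by - ay * bx     # cross term
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    t = (qcx - (c * pcx - s * pcy), qcy - (s * pcx + c * pcy))
    return theta, t
```

Iterating correspondence search and this closed-form solve is the familiar ICP pattern; the outlier rejection mentioned above is what makes it usable under partial occlusion.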

13.
This paper presents a visual homing method for a robot moving on the ground plane. The approach employs a set of omnidirectional images acquired previously at different locations (including the goal position) in the environment, and the current image taken by the robot. As a contribution, we present a method to obtain the relative angles between all these locations, using the computation of the 1D trifocal tensor between views and an indirect angle estimation procedure. The tensor is particularly well suited for planar motion and provides important robustness properties to our technique. Another contribution of our paper is a new control law that uses the available angles, with no range information involved, to drive the robot to the goal. Therefore, our method takes advantage of the strengths of omnidirectional vision, which provides a wide field of view and very precise angular information. We present a formal proof of the stability of the proposed control law. The performance of our approach is illustrated through simulations and different sets of experiments with real images.

14.
Most of the key problems in visual servo control of robots are related to analyzing the performance of the system in the presence of measurement and modeling errors. In this paper, the development and performance evaluation of a novel intelligent visual servo controller for a robot manipulator using neural-network reinforcement learning is presented. By incorporating machine learning techniques into the vision-based control scheme, the robot is enabled to improve its performance online and to adapt to changing conditions in the environment. Two different temporal-difference algorithms (Q-learning and SARSA) coupled with neural networks are developed and tested in different visual control scenarios. A database of representative learning samples is employed to speed up the convergence of the neural network and real-time learning of robot behavior. Moreover, the visual servoing task is divided into two steps in order to ensure the visibility of the features: in the first step, a centering behavior of the robot is conducted using the neural-network reinforcement learning controller, while the second step switches control between traditional image-based visual servoing and the neural-network reinforcement learning controller to enable an approaching behavior of the manipulator. The correction in robot motion is achieved by defining areas of interest for the image features independently in both control steps. Various simulations demonstrate the robustness of the developed system against calibration error, modeling error, and image noise. In addition, a comparison with traditional image-based visual servoing is presented. Real-world experiments on a robot manipulator with a low-cost vision system demonstrate the effectiveness of the proposed approach.

15.
This work considers the problem of maximum utilization of a set of mobile robots with limited sensor range and limited travel distance. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently; solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system, with the origin fixed at one of the robots and the orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target, multiple-agent scenario using a matching algorithm. Two separate cases, with one hundred agents in each, were analyzed using this method. We have found these mobile robot problems to be a very interesting application of optimal assignment algorithms, and we expect this to be a fruitful area for future research.
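The min-max ("bottleneck") objective above differs from the usual min-sum assignment. One compact way to solve it (a pure-Python sketch with names of our own choosing, not necessarily the paper's algorithm) is to binary-search the bottleneck value over the distinct robot-slot distances and check feasibility with an augmenting-path bipartite matching:

```python
def bottleneck_assignment(dist):
    """Assign n robots to n grid slots so that the maximum distance any
    single robot travels is minimised. dist[r][s] is the distance from
    robot r to slot s; returns the optimal bottleneck value."""
    n = len(dist)

    def feasible(limit):
        # Kuhn's augmenting-path matching using only edges <= limit.
        match = [-1] * n                      # slot -> robot

        def augment(r, seen):
            for s in range(n):
                if dist[r][s] <= limit and not seen[s]:
                    seen[s] = True
                    if match[s] < 0 or augment(match[s], seen):
                        match[s] = r
                        return True
            return False

        return all(augment(r, [False] * n) for r in range(n))

    cands = sorted({d for row in dist for d in row})
    lo, hi = 0, len(cands) - 1
    while lo < hi:                            # binary search the threshold
        mid = (lo + hi) // 2
        if feasible(cands[mid]):
            hi = mid
        else:
            lo = mid + 1
    return cands[lo]
```

Feasibility is monotone in the threshold, which is what makes the binary search valid; with a fast matching routine this scales comfortably to the hundred-robot cases described above.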

16.
Visual homing is the ability of an agent to return to a goal position by comparing the currently viewed image with an image captured at the goal, known as the snapshot image. In this paper we present additional mathematical justification and experimental results for the visual homing algorithm first presented in Churchill and Vardy (2008). This algorithm, known as Homing in Scale Space, is far less constrained than existing methods in that it can infer the direction of translation without any estimation of the direction of rotation. Thus, it does not require the current and snapshot images to be captured from the same orientation (a limitation of some existing methods). The algorithm is novel in its use of the scale change of SIFT features as an indication of the change in the feature's distance from the robot. We present results on a variety of image databases and on live robot trials.

17.
This paper addresses the control problem for cooperative visual servoing manipulators on a strongly connected graph with communication delays, where uncertain robot dynamics and kinematics, an uncalibrated camera model, and actuator constraints are considered simultaneously. An adaptive cooperative image-based approach is established to overcome the control difficulty arising from the nonlinear coupling between the visual model and the robot agents. To estimate the coupled camera-robot parameters, a novel adaptive strategy is developed; its superiority mainly lies in the containment of both the individual image-space errors and the synchronization errors among the networked robots, so the cooperative performance is significantly strengthened. Moreover, the proposed cooperative controller with a Nussbaum-type gain is implemented to both globally stabilize the closed-loop systems and realize the synchronization control objective in the presence of unknown and time-varying actuator constraints. Finally, simulations are carried out to validate the developed approach.

18.
Robot Homing by Exploiting Panoramic Vision
We propose a novel, vision-based method for robot homing, the problem of computing a route so that a robot can return to its initial home position after the execution of an arbitrary prior path. The method assumes that the robot tracks visual features in panoramic views of the environment that it acquires as it moves. By exploiting only angular information regarding the tracked features, a local control strategy moves the robot between two positions, provided that there are at least three features that can be matched in the panoramas acquired at these positions. The strategy is successful when certain geometric constraints on the configuration of the two positions relative to the features are fulfilled. In order to achieve long-range homing, the feature trajectories are organized in a visual memory during the execution of the prior path. When homing is initiated, the robot selects Milestone Positions (MPs) on the prior path by exploiting information in its visual memory. The MP selection process aims at picking positions that guarantee the success of the local control strategy between two consecutive MPs. The sequential visit of successive MPs successfully guides the robot even if the visual context in the home position is radically different from the visual context at the position where homing was initiated. Experimental results from a prototype implementation of the method demonstrate that homing can be achieved with high accuracy, independent of the distance traveled by the robot. The contribution of this work is that it shows how a complex navigational task such as homing can be accomplished efficiently, robustly and in real time by exploiting primitive visual cues. Such cues carry implicit information regarding the 3D structure of the environment; thus, the computation of explicit range information and the existence of a geometric map are not required.

19.
This paper proposes an adaptive robust fuzzy control scheme for path tracking of a wheeled mobile robot with uncertainties. The robot dynamics, including the actuator dynamics, are considered in this work. The presented controller is composed of a fuzzy basis function network (FBFN) to approximate an unknown nonlinear function of the robot's complete dynamics, an adaptive robust input to overcome the uncertainties, and a stabilizing control input. The stability and the convergence of the tracking errors are guaranteed using Lyapunov stability theory. In the controller design, the different parameters of the two actuator models in the dynamic equation are taken into account. The proposed control scheme does not require accurate values of the actuator parameters or the robot parameters. The validity and robustness of the proposed control scheme are demonstrated through computer simulations. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.

20.
This work addresses the problem of controlling robot motion and force in frictional contacts under environmental errors, particularly orientation errors that distort the desired control targets and control subspaces. The proposed method uses online estimates of the surface normal (tangent) direction to dynamically modify the control target and the control-space decomposition. It is proved that these estimates converge to the actual values even though the elasticity and friction parameters are unknown. The proposed control solution is demonstrated through simulation examples in three-dimensional robot motion tasks involving contact with both planar and curved surfaces.
