Similar Literature
20 similar records found (search time: 15 ms)
1.
Owing to upcoming applications in the field of service robotics, mobile robots are currently receiving increasing attention in industry and the scientific community. Applications in service robotics demand a high degree of system autonomy, which robots without learning capabilities will not be able to meet. Learning is required in the context of action models and appropriate perception procedures. In both areas, flexible adaptivity is difficult to achieve, especially when high-bandwidth sensors (e.g., video cameras), which are needed in the envisioned unstructured worlds, are used. This paper proposes a new methodology for image-based navigation using a self-organized visual representation of the environment. Self-organization leads to internal representations which can be used by the robot but are not transparent to the user. It is shown how this conceptual gap can be bridged.

2.
This article deals with the development of learning methods for an intelligent control system for an autonomous mobile robot. On the basis of visual servoing, an approach to learning the skill of tracking colored guidelines is proposed. This approach utilizes a robust and adaptive image processing method to acquire features of the colored guidelines and convert them into the controller input. The supervised learning procedure and the neural network controller are discussed, and the methods of obtaining the learning data and training the neural network are described. Experimental results are presented at the end of the article. This work was presented, in part, at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, Japan, January 15–17, 2001.

3.
4.
When navigating in an unknown environment for the first time, a natural behavior consists in memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled mobile robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot's navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric cameras (including conventional cameras). It has been validated on two architectures: in the first, the algorithms were implemented on dedicated hardware and the robot was equipped with a standard perspective camera; in the second, they were implemented on a standard PC and an omnidirectional camera was used.
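The geometric core of such homography-based control can be sketched in a few lines: a planar homography maps image points between the current view and a key image, and the resulting feature offset drives a simple steering law. The matrix, point coordinates, and gain below are illustrative values, not the paper's calibration; a real system would estimate H from matched features.

```python
def apply_homography(H, p):
    # Map an image point p = (x, y) through a 3x3 planar homography H
    # (nested lists, row-major), with projective normalization.
    x, y = p
    hx = H[0][0] * x + H[0][1] * y + H[0][2]
    hy = H[1][0] * x + H[1][1] * y + H[1][2]
    hw = H[2][0] * x + H[2][1] * y + H[2][2]
    return (hx / hw, hy / hw)

def steering_from_offset(x_current, x_key, gain=0.01):
    # Proportional steering toward the feature's position in the key image;
    # the gain is a hypothetical tuning value.
    return gain * (x_key - x_current)

# A pure horizontal image shift expressed as a homography:
H_shift = [[1, 0, 25], [0, 1, 0], [0, 0, 1]]
x_cur, y_cur = 160.0, 120.0
x_key, y_key = apply_homography(H_shift, (x_cur, y_cur))
omega = steering_from_offset(x_cur, x_key)
```

The regulation of "successive homographies" in the abstract amounts to repeating this map/steer step for each key image along the visual route.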

5.
This paper proposes a Distributed Vision System (DVS), an intelligent infrastructure for mobile robots consisting of many vision agents (VAs) embedded in an environment. The system monitors the environment from various viewing points with the VAs, maintains dynamic environment models, and provides various information to robots. Based on this concept, we have developed a prototype of the DVS which consists of sixteen vision agents and simultaneously navigates two robots in a model town.

6.
Realizing steady and reliable navigation is a prerequisite for a mobile robot, but this ability is often weakened by unavoidable slip or irreparable sensor drift errors in long-distance navigation. Although perceptual landmarks offer a solution to such problems, landmarks are occasionally missed at specific spots when the robot moves at different speeds, especially at higher speeds; if the landmarks are placed at irregular intervals, or if the illumination conditions are poor, they are even easier to miss. In order to detect and extract artificial landmarks robustly under multiple illumination conditions, some low-level but robust image processing techniques were implemented. The moving speed and self-location were controlled by a visual servo control method. When the robot suddenly misses a landmark while moving, it finds it again in a short time based on its intelligence and the inertia of the previous search motion. These methods were verified by the reliable vision-based indoor navigation of an A-life mobile robot. This work was presented in part at the 8th International Symposium on Artificial Life and Robotics, Oita, Japan, January 24–26, 2003.

7.
Visual Navigation for Mobile Robots: A Survey
Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring, and risky-situation detection. This survey presents the work, from the 1990s to the present, that constitutes a broad advance in visual navigation techniques for land, aerial, and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation is in turn subdivided into metric map-based navigation and topological map-based navigation. Our outline of mapless navigation includes reactive techniques based on the extraction of qualitative characteristics, appearance-based localization, optical flow, feature tracking, plane ground detection/tracking, and so on. The recent concept of visual sonar is also reviewed. This work is partially supported by DPI 2005-09001-C03-02 and FEDER funding.

8.
Most localization algorithms are either range-based or vision-based, but the use of only one type of sensor often cannot ensure successful localization. This paper proposes a particle-filter-based localization method that combines the range information obtained from a low-cost IR scanner with the SIFT-based visual information obtained from a monocular camera to robustly estimate the robot pose. The rough estimation of the robot pose by the range sensor can be compensated by the visual information given by the camera, and the slow visual object recognition can be overcome by the frequent updates of the range information. Although the bandwidths of the two sensors are different, they can be synchronized by using the encoder information of the mobile robot. Therefore, all data from both sensors are used to estimate the robot pose without time delay, and the samples used for estimating the robot pose converge faster than those from either range-based or vision-based localization alone. This paper also suggests a method for evaluating the state of localization based on the normalized probability of a vision sensor model. Various experiments show that the proposed algorithm can reliably estimate the robot pose in various indoor environments and can recover the robot pose upon incorrect localization. Recommended by Editorial Board member Sooyong Lee under the direction of Editor Hyun Seok Yang. This research was conducted by the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Knowledge Economy of Korea. Yong-Ju Lee received the B.S. degree in Mechanical Engineering from Korea University in 2004. He is now a Ph.D. student in Mechanical Engineering at Korea University. His research interests include mobile robotics. Byung-Doo Yim received the B.S. degree in Control and Instrumentation Engineering from Seoul National University of Technology in 2005 and the M.S. degree in Mechatronics Engineering from Korea University in 2007. His research interests include mobile robotics. Jae-Bok Song received the B.S. and M.S. degrees in Mechanical Engineering from Seoul National University in 1983 and 1985, respectively, and the Ph.D. degree in Mechanical Engineering from MIT in 1992. He is currently a Professor of Mechanical Engineering at Korea University, where he has also been the Director of the Intelligent Robotics Laboratory since 1993. His current research interests lie mainly in mobile robotics, safe robot arms, and the design/control of intelligent robotic systems.
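The multiplicative fusion of the two sensor likelihoods inside a particle filter can be sketched in one dimension. This is a minimal illustration, not the paper's implementation: the noise levels, particle count, and weighted-mean estimator (in place of full resampling) are all assumptions.

```python
import math
import random

def gaussian(x, mu, sigma):
    # Unnormalized Gaussian likelihood.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def pf_update(particles, odom, range_z, vision_z, sigma_r=0.5, sigma_v=0.2):
    # Predict: shift every particle by the odometry increment plus noise.
    moved = [p + odom + random.gauss(0.0, 0.05) for p in particles]
    # Correct: weight by both sensor likelihoods, fused multiplicatively,
    # so a precise (vision) cue sharpens a rough (range) one.
    weights = [gaussian(p, range_z, sigma_r) * gaussian(p, vision_z, sigma_v)
               for p in moved]
    total = sum(weights) or 1.0
    # Weighted-mean pose estimate; a full filter would also resample here.
    estimate = sum(p * w for p, w in zip(moved, weights)) / total
    return moved, estimate

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
_, est = pf_update(particles, odom=0.0, range_z=4.0, vision_z=4.2)
```

With a rough range reading at 4.0 m and a sharper visual cue at 4.2 m, the estimate settles between the two, closer to the more precise sensor.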

9.
Based on the kinematic model of a mobile service robot and the pinhole camera imaging principle, the transformation of the robot's velocity vector from visual space to task space is derived. Using visual servo control combined with the backstepping design approach, a globally asymptotically stable trajectory-tracking controller is designed, and its stability is analyzed with a Lyapunov function. Simulation results verify the effectiveness and correctness of the designed controller.

10.
A vision-based navigation system is presented for determining a mobile robot's position and orientation using panoramic imagery. Omnidirectional sensors are useful in obtaining a 360° field of view, permitting various objects in the vicinity of a robot to be imaged simultaneously. Recognizing landmarks in a panoramic image from an a priori model of distinct features in an environment allows a robot's location information to be updated. A system is shown for tracking vertex and line features with omnidirectional cameras constructed with catadioptric (containing both mirrors and lenses) optics. With the aid of the panoramic Hough transform, line features can be tracked without restricting the mirror geometry to satisfy the single-viewpoint criterion. This allows rectangular scene features to be used as landmarks. Two paradigms for localization are explored, with experiments conducted on synthetic and real images. A working implementation on a mobile robot is also shown.

11.
Mobile robot programming using natural language
How will naive users program domestic robots? This paper describes the design of a practical system that uses natural language to teach a vision-based robot how to navigate in a miniature town. To enable unconstrained speech the robot is provided with a set of primitive procedures derived from a corpus of route instructions. When the user refers to a route that is not known to the robot, the system will learn it by combining primitives as instructed by the user. This paper describes the components of the Instruction-Based Learning architecture and discusses issues of knowledge representation, the selection of primitives and the conversion of natural language into robot-understandable procedures.
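The combine-primitives-on-demand idea can be sketched as a toy compiler from instruction words to stored procedure sequences. The primitive names, keyword matching, and route phrasing below are entirely hypothetical; the paper derives its primitive set from a corpus and handles far richer language.

```python
# Hypothetical primitive vocabulary (the paper's actual set is
# derived from a corpus of route instructions).
PRIMITIVES = {
    "left": "turn_left",
    "right": "turn_right",
    "straight": "go_straight",
    "forward": "go_straight",
}

def compile_route(instruction, known_routes):
    # A known route name expands to its stored primitive sequence;
    # otherwise individual words are matched against primitives.
    if instruction in known_routes:
        return list(known_routes[instruction])
    return [PRIMITIVES[w] for w in instruction.split() if w in PRIMITIVES]

def teach_route(name, instruction, known_routes):
    # Learning step: store the compiled sequence under the new route name,
    # so the user's explanation becomes a reusable procedure.
    known_routes[name] = compile_route(instruction, known_routes)

routes = {}
teach_route("to the post office", "go straight then turn left", routes)
plan = compile_route("to the post office", routes)
```

Once taught, "to the post office" expands directly to its primitive sequence without re-parsing the original explanation.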

12.
The paper describes a visual method for the navigation of autonomous floor-cleaning robots. The method constructs a topological map with metrical information where place nodes are characterized by panoramic images and by particle clouds representing position estimates. Current image and position estimate of the robot are interrelated to landmark images and position estimates stored in the map nodes through a holistic visual homing method which provides bearing and orientation estimates. Based on these estimates, a position estimate of the robot is updated by a particle filter. The robot’s position estimates are used to guide the robot along parallel, meandering lanes and are also assigned to newly created map nodes which later serve as landmarks. Computer simulations and robot experiments confirm that the robot position estimate obtained by this method is sufficiently accurate to keep the robot on parallel lanes, even in the presence of large random and systematic odometry errors. This ensures an efficient cleaning behavior with almost complete coverage of a rectangular area and only small repeated coverage. Furthermore, the topological-metrical map can be used to completely cover rooms or apartments by multiple meander parts.

13.
This paper proposes a new technique for vision-based robot navigation. The basic framework is to localise the robot by comparing images taken at its current location with reference images stored in its memory. In this work, the only sensor mounted on the robot is an omnidirectional camera. The Fourier components of the omnidirectional image provide a signature for the views acquired by the robot and can be used to simplify the solution to the robot navigation problem. The proposed system can calculate the robot position with variable accuracy (‘hierarchical localisation’) saving computational time when the robot does not need a precise localisation (e.g. when it is travelling through a clear space). In addition, the system is able to self-organise its visual memory of the environment. The self-organisation of visual memory is essential to realise a fully autonomous robot that is able to navigate in an unexplored environment. Experimental evidence of the robustness of this system is given in unmodified office environments.
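The appeal of a Fourier signature is that DFT magnitudes of a panoramic pixel ring are invariant to cyclic shifts, i.e. to the robot rotating in place. A minimal sketch (the ring values and number of coefficients are illustrative; the paper computes signatures from real omnidirectional images):

```python
import math

def fourier_signature(ring, k_max=4):
    # Magnitudes of the first k_max DFT coefficients of a 1-D panoramic
    # ring of intensities. Magnitudes are unchanged by cyclic shifts of
    # the ring, so the signature is rotation-invariant.
    n = len(ring)
    sig = []
    for k in range(k_max):
        re = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(ring))
        im = -sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(ring))
        sig.append(math.hypot(re, im))
    return sig

# The same view seen from two different robot headings:
ring = [10, 40, 90, 40, 10, 5, 0, 5]
rotated = ring[3:] + ring[:3]
sig_a = fourier_signature(ring)
sig_b = fourier_signature(rotated)
```

Comparing a handful of such magnitudes is far cheaper than comparing whole images, which is what makes the coarse level of hierarchical localisation fast.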

14.
This paper establishes control strategies for wheeled mobile robots which are subject to nonholonomic constraints. The mobile robot model includes the kinematic and dynamic equations of motion and the actuator dynamics. Using the notion of a virtual vehicle and the concept of flatness, and applying the backstepping methodology, the paper proposes control schemes for trajectory tracking for the considered augmented model of the mobile robot. The resulting controller ensures exponential convergence to a desired trajectory. Applications of the tracking controller to convoy-like vehicles governed by the augmented models are considered as well. Simulation results and laboratory experiments are presented.
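The kinematic layer of such a nonholonomic model is the standard unicycle: the robot can only move along its heading, never sideways. A minimal simulation sketch (the explicit-Euler integration and the chosen speeds are illustrative, not the paper's controller):

```python
import math

def unicycle_step(x, y, theta, v, omega, dt):
    # Kinematic model of a nonholonomic wheeled robot:
    #   x' = v cos(theta),  y' = v sin(theta),  theta' = omega
    # integrated with one explicit Euler step.
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive straight along +x for one second at 1 m/s.
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = unicycle_step(*pose, v=1.0, omega=0.0, dt=0.01)
```

A backstepping design of the kind the abstract describes would close the loop by computing v and omega from the tracking error at each step; here they are held constant for clarity.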

15.
This paper presents a way of implementing a model-based predictive controller (MBPC) for mobile robot navigation when unexpected static obstacles are present in the robot environment. The method uses a nonlinear model of mobile robot dynamics, and thus allows an accurate prediction of the future trajectories. An ultrasonic ranging system has been used for obstacle detection. A multilayer perceptron is used to implement the MBPC, allowing real-time implementation and also eliminating the need for high-level data sensor processing. The perceptron has been trained in a supervised manner to reproduce the MBPC behaviour. Experimental results obtained when applying the neural-network controller to a TRC Labmate mobile robot are given in the paper.

16.
This article deals with handling unknown factors, such as external disturbances and unknown dynamics, in mobile robot control. We propose a radial-basis function (RBF) network-based controller to compensate for them. The stability of the proposed controller is proven using a Lyapunov function. To show the effectiveness of the proposed controller, several simulation results are presented. Through the simulations, we show that the proposed controller can overcome the modelling uncertainty and the disturbances. The proposed RBF controller also outperforms previous work in terms of computation time, which is a crucial factor for real-time applications. This work was presented in part at the 8th International Symposium on Artificial Life and Robotics, Oita, Japan, January 24–26, 2003.
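The forward pass of such a Gaussian RBF compensator is just a weighted sum of radial bumps. The centers, widths, and weights below are hypothetical values for illustration; in the article the weights would be adapted online under the Lyapunov-based update law.

```python
import math

def rbf_output(x, centers, widths, weights):
    # Gaussian radial-basis-function network: a weighted sum of bumps,
    # used here as a compensation term added to a nominal control law.
    return sum(w * math.exp(-(((x - c) / s) ** 2))
               for c, s, w in zip(centers, widths, weights))

# Hypothetical tiny network approximating a disturbance over [-1, 1].
centers = [-1.0, 0.0, 1.0]
widths  = [0.5, 0.5, 0.5]
weights = [0.2, -0.4, 0.2]
u_comp = rbf_output(0.0, centers, widths, weights)
```

Because each output is a handful of exponentials, evaluation cost is tiny and fixed, which is the computation-time advantage the abstract highlights.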

17.
Navigation control of an indoor wheeled autonomous mobile robot
This paper introduces CASIA-I, an indoor mobile robot. Its locomotion mechanism is described in detail, and based on this mechanism the robot's kinematic equations and a navigation control algorithm are given. Software simulations and physical experiments were carried out using this algorithm; in both settings, the robot was able to avoid obstacles in real time while moving toward its goal. The simulations and experiments show that the mobile platform is highly reliable and that the navigation control algorithm is effective.

18.
In this paper, an evolutionary approach to the mobile robot path planning problem is proposed. The approach combines the artificial bee colony algorithm, used as a local search procedure, with an evolutionary programming algorithm that refines the feasible paths found by the local procedures. The proposed method is compared to a classical probabilistic roadmap method (PRM) with respect to planning performance on a set of benchmark problems, and it exhibits better performance. Criteria used to measure planning effectiveness include path length, smoothness of the planned paths, computation time, and the success rate in planning. Experiments demonstrating the statistical significance of the improvements achieved by the proposed method are also presented.
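Two of the effectiveness criteria named above are easy to make concrete. A sketch of path length and one common smoothness measure (sum of absolute heading changes); the exact smoothness definition used in the paper may differ:

```python
import math

def path_length(path):
    # Total Euclidean length of a polyline path [(x, y), ...].
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def path_smoothness(path):
    # Sum of absolute heading changes at interior waypoints, in radians;
    # smaller means smoother. One of several common smoothness measures.
    total = 0.0
    for a, b, c in zip(path, path[1:], path[2:]):
        h1 = math.atan2(b[1] - a[1], b[0] - a[0])
        h2 = math.atan2(c[1] - b[1], c[0] - b[0])
        diff = (h2 - h1 + math.pi) % (2 * math.pi) - math.pi
        total += abs(diff)
    return total

zigzag   = [(0, 0), (1, 1), (2, 0), (3, 1)]
straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
```

The zigzag path between the same endpoints is both longer and less smooth than the straight one, which is exactly the trade-off such criteria are meant to expose.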

19.
We propose a sensor-fusion technique in which the data sets from previous moments are properly transformed and fused into the current data sets to allow accurate measurements, such as the distance to an obstacle or the location of the service robot itself. In conventional fusion schemes, measurements depend only on the current data sets; as a result, more sensors are required to measure a certain physical parameter or to improve the accuracy of a measurement. In this approach, instead of adding more sensors to the system, the temporal sequences of the data sets are stored and utilized to improve the measurements. The theoretical basis is illustrated by examples, and the effectiveness is demonstrated through simulations. Finally, the new space and time sensor fusion (STSF) scheme is applied to the control of a mobile robot in unstructured and structured environments. This work was presented in part at the 8th International Symposium on Artificial Life and Robotics, Oita, Japan, January 24–26, 2003.
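The "transform then fuse" step can be sketched for a planar range measurement: a point observed in the previous robot frame is re-expressed in the current frame using the robot's motion, then combined with the fresh reading. The motion values, measurement, and simple averaging below are illustrative assumptions, not the STSF formulation.

```python
import math

def to_current_frame(p_old, dx, dy, dtheta):
    # Re-express a point measured in the previous robot frame in the
    # current frame, given the robot's motion (dx, dy, dtheta) between
    # the two measurements (planar rigid-body transform).
    x, y = p_old[0] - dx, p_old[1] - dy
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * x - s * y, s * x + c * y)

# Obstacle seen 2 m ahead; the robot then advances 0.5 m straight.
p_prev = (2.0, 0.0)
p_pred = to_current_frame(p_prev, dx=0.5, dy=0.0, dtheta=0.0)
p_now  = (1.52, 0.0)  # current (noisy) measurement
# Fuse the motion-compensated old reading with the new one
# (equal weighting here; a real scheme would weight by uncertainty).
fused = tuple((a + b) / 2 for a, b in zip(p_pred, p_now))
```

The transformed old reading acts like a free extra sensor sample, which is the article's point about not having to add physical sensors.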

20.
Visual tracking of a moving target using an active-contour-based SSD algorithm
This paper presents a new image-based visual tracking scheme for a mobile robot to trace a moving target using a single camera mounted on the robot. To accurately estimate the position of the target in the next image, the scheme decomposes the effect of the camera motion on the velocity vector of the target in the image frame. Based on the estimated velocity of the target and the image Jacobian, the control inputs of the mobile robot are determined in such a way that the target appears inside the central area of the image frame. Since the shape of the target in the image frame varies due to rotation and translation of the target, a new shape-adaptive sum-of-squared-differences (SSD) algorithm is proposed, which uses the extended snake algorithm to extract the contour of the target and updates the template in every step of the matching process. The proposed scheme has been implemented using a Nomad Scout Robot II. The experimental results show that the scheme follows the target within a negligible error range, even when the target is temporarily lost.
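The core of SSD matching is a sliding-window search for the offset minimizing the squared difference to a template. A minimal 1-D sketch (the signal values are illustrative; the paper's shape-adaptive variant additionally deforms the template using the contour extracted by the snake):

```python
def ssd(a, b):
    # Sum of squared differences between two equal-length patches.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_match(signal, template):
    # Exhaustive SSD search: slide the template over the signal and
    # return the offset with the smallest dissimilarity score.
    scores = [ssd(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return min(range(len(scores)), key=scores.__getitem__)

signal   = [0, 0, 3, 7, 9, 7, 3, 0, 0, 0]
template = [3, 7, 9, 7, 3]
offset = best_match(signal, template)
```

Updating the template after every successful match, as the abstract describes, lets the tracker follow a target whose appearance changes gradually from frame to frame.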
