Similar articles
20 similar articles found (search time: 15 ms)
1.
2.
Q. Lin, C. Kuo. Virtual Reality, 1998, 3(4): 267-277
Efficient teleoperation of an underwater robot requires clear 3D visual information about the robot's spatial location and its surrounding environment. However, the performance of existing telepresence systems is far from satisfactory. In this paper, we present our virtual telepresence system for assisting teleoperation of an underwater robot. This virtual-environment-based telepresence system transforms robot sensor data into 3D synthetic visual information of the workplace based on its geometrical model, giving operators a full perception of the robot's spatial location. In addition, we propose a robot safety domain to overcome the robot's location offset in the virtual environment caused by sensor errors. The software design of the system and how the safety domain overcomes robot location offset in the virtual environment are examined, and experimental tests and an analysis of their results are presented.

3.
Adaptive mapping and navigation by teams of simple robots
We present a technique for mapping an unknown environment and navigating through it using a team of simple robots. Minimal assumptions are made about the abilities of the robots on a team. We assume only that robots can explore the environment using a random walk, detect the goal location, and communicate among themselves by transmitting a single small integer over a limited distance and in a direct line of sight; additionally, one designated robot, the navigator, can track toward a team member when it is nearby and in a direct line of sight. We do not assume that robots can determine their absolute (x, y) positions in the environment to be mapped, determine their positions relative to other team members, or sense anything other than the goal location and the transmissions of their teammates. In spite of these restrictive assumptions, we show that for moderate-sized teams in complex environments the time needed to construct a map and then navigate to a goal location can be competitive with the time needed to navigate to the goal along an optimal path formed with perfect knowledge of the environment. In other words, collective mapping enables navigation in an unmapped environment with only modest overhead. This basic result holds over a wide range of assumptions about robot reliability, sensor range, and tracking ability.

We then describe an extended mapping algorithm that allows an existing map to be efficiently corrected when a goal location changes. We show that a robot team using the algorithm is adaptive, in the sense that its performance improves over time whenever navigation goals follow certain regular patterns.
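The single-integer, limited-range communication described above can be sketched as a hop-count relay: each team member rebroadcasts one small integer (its distance, in hops, from the robot that senses the goal), and the navigator descends that gradient. The positions, communication radius, and chain layout below are invented for illustration; the paper's system additionally uses random walks and line-of-sight constraints that this toy omits.

```python
import math

robots = [(0, 0), (2, 1), (4, 2), (6, 2), (8, 3)]   # hypothetical team positions
goal = (8, 3)                                        # goal sensed by the last robot
RADIUS = 2.5                                         # communication range

def neighbors(i):
    xi, yi = robots[i]
    return [j for j, (xj, yj) in enumerate(robots)
            if j != i and math.hypot(xi - xj, yi - yj) <= RADIUS]

# Flood the hop count outward from the robot at the goal: each robot
# transmits a single small integer, min(neighbor hops) + 1.
hops = {i: (0 if robots[i] == goal else None) for i in range(len(robots))}
changed = True
while changed:
    changed = False
    for i in range(len(robots)):
        vals = [hops[j] for j in neighbors(i) if hops[j] is not None]
        best = min(vals) + 1 if vals else None
        if best is not None and (hops[i] is None or best < hops[i]):
            hops[i] = best
            changed = True

# The navigator starts near robot 0 and repeatedly tracks toward the
# in-range teammate with the smallest hop count until it reaches the goal.
path = [0]
while hops[path[-1]] != 0:
    path.append(min(neighbors(path[-1]), key=lambda j: hops[j]))
print(path)   # → [0, 1, 2, 3, 4]
```

On this chain layout each robot can only hear its immediate neighbors, so the hop counts settle to 4, 3, 2, 1, 0 and the navigator's path visits the robots in order.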


4.
The development of robots that learn from experience remains a central challenge in artificial intelligence. This paper describes a robot learning method that enables a mobile robot to simultaneously acquire the ability to avoid objects, follow walls, seek goals, and control its velocity as a result of interacting with the environment without human assistance. The robot acquires these behaviors by learning how fast it should move along predefined trajectories with respect to the current state of the input vector. This enables the robot to perform object avoidance, wall following, and goal seeking by choosing to follow fast trajectories near the forward direction, the closest object, or the goal location, respectively. Learning trajectory velocities can be done relatively quickly because the required knowledge can be obtained from the robot's interactions with the environment without incurring the credit assignment problem. We provide experimental results verifying the method with a mobile robot that simultaneously acquires all three behaviors.

5.
In this article, we present a cognitive system based on artificial curiosity for high-level knowledge acquisition from visual patterns. Curiosity (perceptual and epistemic) is realized by combining perceptual saliency detection with machine-learning-based approaches. Learning is accomplished by autonomous observation of visual patterns and by interaction with an expert (a human tutor) possessing semantic knowledge about the detected visual patterns. Experimental results validating the system were obtained with a humanoid robot visually acquiring knowledge about its surrounding environment while interacting with a human tutor. We show that our cognitive system allows the humanoid robot to discover the surrounding world in which it operates, to learn new knowledge about it, and to describe it using human-like (natural) utterances.

6.
This paper describes the navigation system used by J Edgar, a small vision-guided robot that roams the corridors at the University of Melbourne. Given a model of the environment, J Edgar is able to find its way to any given location, even when its initial location is completely unknown. To do this, the robot makes use of a probabilistic localisation technique, in which the robot maintains a probability distribution over the space of all possible robot locations. Given this distribution, the robot then applies a particular navigation strategy, whose aim is to ensure that the robot not only reaches the goal, but knows that it has reached the goal.
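The probabilistic localisation idea — maintaining a distribution over all possible robot locations — is, at its core, a discrete Bayes (histogram) filter. A minimal sketch, with an invented five-cell circular corridor and made-up sensor probabilities rather than J Edgar's actual models:

```python
world = ['door', 'wall', 'door', 'wall', 'wall']   # hypothetical corridor ring
belief = [1 / len(world)] * len(world)             # location fully unknown

def move(belief):
    # Shift belief one cell forward (circular corridor, perfect odometry).
    return belief[-1:] + belief[:-1]

def sense(belief, observation, p_hit=0.8, p_miss=0.2):
    # Reweight each cell by how well it explains the observation, then normalize.
    weighted = [b * (p_hit if world[i] == observation else p_miss)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

# A robot starting at cell 2 sees: door, (move), wall, (move), wall.
# That sequence is only consistent with ending at cell 4.
belief = sense(belief, 'door')
belief = move(belief)
belief = sense(belief, 'wall')
belief = move(belief)
belief = sense(belief, 'wall')
estimate = max(range(len(belief)), key=lambda i: belief[i])
print(estimate)   # → 4
```

Once the probability mass concentrates on one cell, the robot not only is at the goal but, as the abstract puts it, knows that it has reached it.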

7.
He, Yanlin; Zhu, Lianqing; Sun, Guangkai; Qiao, Junfei. Microsystem Technologies, 2019, 25(2): 573-585

To support the localization requirements of our spherical underwater robots, such as multi-robot cooperation and intelligent biological surveillance, a cooperative localization system for multiple robots was designed and implemented in this study. Given the restrictions imposed by the underwater environment and the small size of the spherical robot, the system fuses information from a time-of-flight camera and a microelectromechanical systems (MEMS) inertial sensor using coordinate-normalization transfer models. To overcome the short localization distance and the limited range of the camera's fixed field of view underwater, the MEMS inertial sensor provides the robot's attitude, extending the range of underwater visual positioning; positioning information is transmitted through normalization to absolute coordinates, which increases the positioning distance and enables localization across the multi-robot system. To cope with environmental disturbances in practical underwater scenarios, a Kalman filter model is used to minimize the systematic positioning error. Following theoretical analysis and calculation, we describe underwater experiments evaluating the performance of cooperative localization. The experimental results confirm the validity of the proposed multi-robot cooperative localization system, whose localization distance exceeds that of the visual positioning system we developed previously.

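The abstract mentions a Kalman filter for suppressing positioning error but gives no model details; the following one-dimensional sketch shows only the generic predict/update cycle with illustrative noise variances, not the authors' implementation.

```python
def kalman_step(x, p, z, q=0.01, r=0.5):
    """One predict/update cycle for a static-position model.

    x, p : prior state estimate and its variance
    z    : new (noisy) position measurement
    q, r : assumed process and measurement noise variances (made up here)
    """
    p = p + q                  # predict: uncertainty grows over time
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # update: move estimate toward the measurement
    p = (1 - k) * p            # posterior variance shrinks
    return x, p

x, p = 0.0, 1.0                                # vague initial guess
for z in [1.2, 0.8, 1.1, 0.9, 1.0]:            # noisy sightings of a robot at 1.0
    x, p = kalman_step(x, p, z)
print(round(x, 2), round(p, 2))
```

After a handful of noisy camera fixes the estimate settles near the true position with much lower variance than any single measurement, which is the role the filter plays in the cooperative localization pipeline.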

8.
Indoor environments can typically be divided into places with different functionalities like corridors, rooms or doorways. The ability to learn such semantic categories from sensor data enables a mobile robot to extend the representation of the environment facilitating interaction with humans. As an example, natural language terms like “corridor” or “room” can be used to communicate the position of the robot in a map in a more intuitive way. In this work, we first propose an approach based on supervised learning to classify the pose of a mobile robot into semantic classes. Our method uses AdaBoost to boost simple features extracted from sensor range data into a strong classifier. We present two main applications of this approach. Firstly, we show how our approach can be utilized by a moving robot for an online classification of the poses traversed along its path using a hidden Markov model. In this case we additionally use as features objects extracted from images. Secondly, we introduce an approach to learn topological maps from geometric maps by applying our semantic classification procedure in combination with a probabilistic relaxation method. Alternatively, we apply associative Markov networks to classify geometric maps and compare the results with a relaxation approach. Experimental results obtained in simulation and with real robots demonstrate the effectiveness of our approach in various indoor environments.
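The core mechanism — boosting simple threshold features over range data into a strong classifier — can be illustrated with a hand-rolled AdaBoost over a single invented feature (say, the mean laser range at a pose); the real system uses many geometric features computed from full scans.

```python
import math

X = [0.5, 0.8, 1.0, 1.2, 3.0, 3.5, 4.0, 4.5]   # hypothetical mean range per pose
y = [-1, -1, -1, -1, 1, 1, 1, 1]               # -1 = room, +1 = corridor

def train(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                          # uniform sample weights
    ensemble = []                              # (alpha, threshold, sign) stumps
    for _ in range(rounds):
        # Pick the stump h(x) = sign * (+1 if x > t else -1) with least weighted error.
        best = None
        for t in X:
            for sign in (1, -1):
                err = sum(wi for wi, xi, yi in zip(w, X, y)
                          if sign * (1 if xi > t else -1) != yi)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = max(err, 1e-10)                  # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, sign))
        # Reweight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * yi * sign * (1 if xi > t else -1))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * s * (1 if x > t else -1) for a, t, s in ensemble)
    return 1 if score > 0 else -1

model = train(X, y)
print([predict(model, x) for x in [0.7, 4.2]])   # → [-1, 1]
```

Short narrow ranges vote "room", long open ranges vote "corridor"; the boosted weighted vote is what the paper then feeds into the hidden Markov model for online pose classification.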

9.
This paper presents a technical approach to robot learning of motor skills which combines active intrinsically motivated learning with imitation learning. Our algorithmic architecture, called SGIM-D, allows efficient learning of high-dimensional continuous sensorimotor inverse models in robots, and in particular learns distributions of parameterised motor policies that solve a corresponding distribution of parameterised goals/tasks. This is made possible by the technical integration of imitation learning techniques within an algorithm for learning inverse models that relies on active goal babbling. After reviewing social learning and intrinsic motivation approaches to action learning, we describe the general framework of our algorithm, before detailing its architecture. In an experiment where a robot arm has to learn to use a flexible fishing line, we illustrate that SGIM-D efficiently combines the advantages of social learning and intrinsic motivation and benefits from human demonstration properties to learn how to produce varied outcomes in the environment, while developing more precise control policies in large spaces.

10.
A visual navigation method based on goal-directed behavior and spatial topological memory
To address navigation in visually rich environments containing dynamic factors, and inspired by landmark-based spatial memory, we propose a visual navigation method that simultaneously learns goal-directed behavior and memorizes the spatial structure of the environment. First, to learn a control policy directly from raw input, deep reinforcement learning is used as the basic navigation framework, with collision prediction added as an auxiliary task for the model. Then, as the agent learns to navigate, a temporal-correlation network removes redundant observations and identifies navigation nodes, incrementally describing the environment's structure through episodic memory. Finally, the spatial topological map is integrated into the model as a path-planning module and combined with the action network to obtain a more general navigation method. Experiments were conducted in the 3D simulation environment DMLab. The results show that the proposed method learns goal-directed behavior from visual input, exhibits more efficient learning and navigation policies in all test environments, and reduces the amount of data needed to build the map. In environments containing dynamic blockages, the model can replan paths dynamically on the topological map, guiding detour behavior to complete the navigation task and demonstrating good adaptability to the environment.

11.
In this paper, an experimental study of a navigation system that allows a mobile robot to travel in an environment about which it has no prior knowledge is described. Data from multiple ultrasonic range sensors are fused into a representation called Heuristic Asymmetric Mapping to deal with uncertainties in the raw sensory data caused mainly by the transducer's beam-opening angle and specular reflections. It features a fast data-refresh rate to handle a dynamic environment. The potential-field method is used for on-line path planning based on the constructed grid-type sonar map, so the mobile robot can learn to find a safe path according to its self-built sonar map. To solve the problem of local minima in the conventional potential-field method, a new type of potential function is formulated; it is simple and fast to execute, borrowing the concept of distance-transform path-finding algorithms. The developed navigation system has been tested on our experimental mobile robot to demonstrate its possible application in practical situations. Several interesting simulation and experimental results are presented. This work was supported in part by the National Science Council of Taiwan, ROC under grant NSC-82-0422-E-009-321.
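The distance-transform idea that avoids local minima can be sketched on a small grid: breadth-first distances from the goal form a potential whose only minimum is the goal itself, so greedy descent always arrives. The map, goal, and robot cell below are illustrative, not from the paper.

```python
from collections import deque

grid = ["....#",
        "..#.#",
        "..#..",
        "....."]          # '#' = obstacle cell from a (hypothetical) sonar map
goal = (0, 0)

rows, cols = len(grid), len(grid[0])
dist = {goal: 0}
queue = deque([goal])
while queue:                      # BFS = distance transform from the goal
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if (0 <= nr < rows and 0 <= nc < cols
                and grid[nr][nc] != '#' and (nr, nc) not in dist):
            dist[(nr, nc)] = dist[(r, c)] + 1
            queue.append((nr, nc))

# Greedy descent of the transform from the robot's cell: since every free
# cell (except the goal) has a neighbor with a strictly smaller value,
# there is no local minimum to get stuck in.
pos, path = (2, 4), [(2, 4)]
while pos != goal:
    pos = min(((pos[0] + dr, pos[1] + dc)
               for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if (pos[0] + dr, pos[1] + dc) in dist),
              key=dist.get)
    path.append(pos)
print(len(path) - 1)   # steps to the goal → 6
```

The greedy path length equals the BFS distance of the start cell, which is exactly the optimality property that a naive attractive/repulsive potential field cannot guarantee.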

12.
We use dynamical neural networks based on the neural field formalism for the control of a mobile robot. The robot navigates in an open environment and is able to plan a path for reaching a particular goal. We describe how this dynamical approach may be used by a high-level system (planning) to control a low-level behavior (the speed of the robot). We also give results on controlling the orientation of a camera and of the robot body.
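The neural field formalism the abstract refers to can be sketched with a one-dimensional Amari-style field: local excitation plus global inhibition lets a single activity peak form at a stimulated direction, and the peak's position can then drive, say, a heading or camera-orientation command. All parameters below are illustrative, not the authors' model.

```python
import numpy as np

N = 60                                   # discretized field positions (directions)
x = np.arange(N)
u = np.full(N, -1.0)                     # field activity, starting at rest
h = -1.0                                 # resting level
tau, dt = 10.0, 1.0

def kernel(d, a=2.0, sigma=3.0, inhib=0.4):
    # Local excitation minus constant global inhibition.
    return a * np.exp(-d**2 / (2 * sigma**2)) - inhib

W = kernel(x[:, None] - x[None, :])      # interaction matrix
stimulus = 2.0 * np.exp(-(x - 40)**2 / 8.0)   # sensory input centered at 40

for _ in range(200):                     # Euler-integrate the field dynamics
    f = (u > 0).astype(float)            # step-function firing rate
    u += dt / tau * (-u + h + stimulus + W @ f / N)

print(int(np.argmax(u)))                 # peak position → 40
```

The field relaxes to a self-stabilized peak at the stimulated location; a controller can read the peak position out continuously, which is what makes the approach attractive for smoothly coupling planning to low-level motor variables.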

13.
We present a novel method for a robot to interactively learn, while executing, a joint human–robot task. We consider collaborative tasks realized by a team of a human operator and a robot helper that adapts to the human’s task execution preferences. Different human operators can have different abilities, experiences, and personal preferences so that a particular allocation of activities in the team is preferred over another. Our main goal is to have the robot learn the task and the preferences of the user to provide a more efficient and acceptable joint task execution. We cast concurrent multi-agent collaboration as a semi-Markov decision process and show how to model the team behavior and learn the expected robot behavior. We further propose an interactive learning framework and we evaluate it both in simulation and on a real robotic setup to show the system can effectively learn and adapt to human expectations.

14.
15.
This paper consists of two complementary parts, each describing a component of an intelligent robot control system that enables the system to interact with its environment via visual information. The first part presents the extraction of data from the visual information, a well-known machine vision problem solved here with a passive stereo method. The second part explains the visual representation of the data, a visualization problem addressed here with a virtual reality system. Using detailed VR graphic models, efficient off-line robot programming and simulation become available.

16.
To address the problem of estimating the effects of unknown tools, we propose a novel concept of tool representation based on the functional features of the tool. We argue that functional features remain distinctive and invariant across different tools used for performing similar tasks. Such a representation can be used to estimate the effects of unknown tools that share similar functional features. To learn to use tools to physically alter the environment, a robot should be able to reason about its capability to act, the representation of available tools, and the effects of manipulating tools. To enable a robot to perform such reasoning, we present a novel approach, called Tool Affordances, to learn bi-directional causal relationships between actions, functional features, and the effects of tools. A Bayesian network is used to model tool affordances because of its capability to model probabilistic dependencies between data. To evaluate the learnt tool affordances, we conducted an inference test in which a robot inferred suitable functional features to realize certain effects (including novel effects) from a given action. The results show that the generalization of functional features enables the robot to estimate the effects of unknown tools that have similar functional features. We validate the accuracy of the estimation by error analysis.
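The bi-directional inference the abstract describes — asking which functional feature realizes a desired effect under a given action — amounts to inverting a learnt conditional model with Bayes' rule. The actions, features, and probability tables below are invented, not learnt from robot trials.

```python
# P(effect | action, feature): in the paper this would be estimated from
# the robot's own manipulation experiments; here the numbers are made up.
p_effect = {
    ('pull', 'hook'): {'closer': 0.9, 'farther': 0.1},
    ('pull', 'flat'): {'closer': 0.3, 'farther': 0.7},
    ('push', 'hook'): {'closer': 0.2, 'farther': 0.8},
    ('push', 'flat'): {'closer': 0.1, 'farther': 0.9},
}
p_feature = {'hook': 0.5, 'flat': 0.5}     # uniform prior over features

def infer_feature(action, desired_effect):
    # P(feature | action, effect) ∝ P(effect | action, feature) * P(feature)
    scores = {f: p_effect[(action, f)][desired_effect] * p_feature[f]
              for f in p_feature}
    total = sum(scores.values())
    return {f: s / total for f, s in scores.items()}

posterior = infer_feature('pull', 'closer')
print(max(posterior, key=posterior.get))   # → hook
```

Because the posterior is over functional features rather than specific tools, any unknown tool exhibiting a hook-like feature inherits the same predicted effect, which is the generalization the paper evaluates.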

17.
18.
Teleoperation allows humans to reach environments that would otherwise be too difficult or dangerous. The distance between the human operator and remote robot introduces a number of issues that can negatively impact system performance including degraded and delayed information exchange between the robot and human. Some operation scenarios and environments can tolerate these degraded conditions, while others cannot. However, little work has been done to investigate how factors such as communication delay, automation, and environment characteristics interact to affect teleoperation system performance. This paper presents results from a user study analyzing the effects of teleoperation factors including communication delay, autonomous assistance, and environment layout on user performance. A mobile robot driving task is considered in which subjects drive a robot to a goal location around obstacles as quickly (minimize time) and safely (avoid collisions) as possible. An environment difficulty index (ID) is defined in the paper and is shown to be able to predict the average time it takes for the human to drive the robot to a goal location with different obstacle configurations. The ID is also shown to predict the path chosen by the human better than travel time along that path.
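The paper defines its own environment difficulty index; since the abstract does not reproduce the formula, the sketch below uses the classic Fitts's-law analogue (Shannon form) often adopted for steering and driving tasks, ID = log2(D/W + 1), with path length D and obstacle clearance W — both numbers here are hypothetical.

```python
import math

def difficulty_index(path_length, clearance):
    # Fitts's-law-style index: longer paths through narrower gaps are harder.
    return math.log2(path_length / clearance + 1)

# A longer route through tighter clearances gets a higher index, and such
# an index is what lets average drive time be predicted per obstacle layout.
easy = difficulty_index(path_length=5.0, clearance=1.0)
hard = difficulty_index(path_length=12.0, clearance=0.4)
print(round(easy, 2), round(hard, 2))
```

A linear regression of measured completion times against such an index is the standard way this kind of predictive claim is validated.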

19.
20.
We present a new approach to visual feedback control using image-based visual servoing with stereo vision. In order to control the position and orientation of a robot with respect to an object, a new technique is proposed using binocular stereo vision. The stereo vision enables us to calculate an exact image Jacobian not only around a desired location, but also at other locations. The suggested technique can guide a robot manipulator to the desired location without requiring a priori knowledge such as the relative distance to the desired location or a model of the object, even if the initial positioning error is large. We describe a model of stereo vision and how to generate feedback commands. The performance of the proposed visual servoing system is illustrated by simulation and experimental results, and compared with the conventional method for an assembly robot. This work was presented in part at the Fourth International Symposium on Artificial Life and Robotics, Oita, Japan, January 19–22, 1999.
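The generic image-based visual servoing step underlying this kind of system stacks the (here stereo) feature error e and commands a velocity v = -λ J⁺ e through the pseudoinverse of the image Jacobian J. The Jacobian entries, gain, and simulated linear camera model below are invented for illustration; in the paper J is computed exactly from the stereo geometry rather than assumed constant.

```python
import numpy as np

J = np.array([[2.0, 0.0, 0.5],      # d(image features)/d(camera velocity):
              [0.0, 2.0, 0.3],      # made-up numbers for a 4-feature,
              [1.8, 0.1, 0.6],      # 3-DOF example (left + right views)
              [0.0, 1.9, 0.4]])
lam = 0.5                            # servo gain

true_offset = np.array([0.5, -0.4, 0.3])   # pose error (unknown to the servo)
e = J @ true_offset                        # resulting image-feature error

for _ in range(20):
    v = -lam * np.linalg.pinv(J) @ e       # velocity command from feature error
    e = e + J @ v                          # simulated (locally linear) feature change

print(float(np.linalg.norm(e)) < 1e-3)     # → True
```

With a locally valid Jacobian the feature error contracts geometrically (by a factor 1 − λ per step here), which is why the image error, not the Cartesian pose, is the quantity being regulated.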


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23

京公网安备 11010802026262号