Similar Literature
20 similar documents found (search time: 31 ms)
1.
《Advanced Robotics》2013,27(1):83-99
Reinforcement learning can serve as an adaptive and flexible control method for autonomous systems. It requires no a priori knowledge; behaviors that accomplish given tasks are obtained automatically by repeated trial and error. However, the learning cost grows exponentially with the complexity of the system, so application to complex systems, such as highly redundant robots and multi-agent systems, is very difficult. Previous work in this field was restricted to simple robots and small multi-agent systems, and because such simple systems have little redundancy, the effectiveness of reinforcement learning was limited. In our earlier work we addressed these problems and proposed a new reinforcement learning algorithm, 'Q-learning with dynamic structuring of exploration space based on GA' (QDSEGA). Its effectiveness for redundant robots was demonstrated on a 12-legged robot and a 50-link manipulator. However, that work was restricted to redundant robots, and QDSEGA could not be applied to multiple mobile robots. In this paper, we extend QDSEGA by combining it with rule-based distributed control and propose a hybrid autonomous control method for multiple mobile robots. To demonstrate the effectiveness of the proposed method, we simulate a transportation task carried out by 10 mobile robots; effective behaviors are obtained as a result.
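As a rough illustration of the tabular Q-learning core that QDSEGA builds on (the GA-based structuring of the exploration space is not shown), a minimal update step might look like the sketch below. All names and the restricted `actions` argument are illustrative assumptions, not taken from the paper:

```python
from collections import defaultdict

def make_q_table():
    # Q[state][action] -> estimated return; missing entries default to 0.0
    return defaultdict(lambda: defaultdict(float))

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning backup over a restricted action set.

    The idea sketched here is that something like QDSEGA's GA would keep
    `actions` to a small, evolving subset of the full redundant action
    space, so the max below stays cheap.
    """
    best_next = max(Q[s_next][a2] for a2 in actions)
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]
```

The update is the standard temporal-difference rule; only the action set over which the max is taken would be managed by the evolutionary layer.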

2.
Currently, when path planning is used in SLAM it benefits SLAM only, with no reciprocal benefit for path planning. Furthermore, SLAM algorithms are generally implemented and modified for individual heterogeneous robotic platforms without any autonomous means of sharing navigation information. This limits the ability of robot platforms to share navigation information and can force heterogeneous platforms to generate separate maps of the same environment. This paper introduces Learned Action SLAM (LA-SLAM), which for the first time autonomously combines path planning with SLAM so that heterogeneous robots can share learnt knowledge through Learning Classifier Systems (LCS). This contrasts with Active SLAM, where path planning is used to benefit SLAM only. Results from testing LA-SLAM on real-world robots show promise for teams of robots with various sensor morphologies, implications for scaling to related domains, and the ability to share maps from less capable to more advanced robots.

3.
《Advanced Robotics》2013,27(1-2):93-117
Emergencies in industrial warehouses are a major concern for fire-fighters. The large dimensions, together with dense smoke that drastically reduces visibility, present major challenges. The GUARDIANS robot swarm is designed to assist fire-fighters in searching a large warehouse. In this paper we discuss the technology developed for a swarm of robots assisting fire-fighters. We explain the swarming algorithms that allow the robots to react to and follow humans without requiring communication. Next we discuss the wireless communication system, a so-called mobile ad-hoc network, which also provides the means to locate the robots and humans; the robot swarm is thus able to provide guidance information to the humans. Together with the fire-fighters we explored how the robot swarm should feed information back to them, and we have designed and experimented with interfaces for presenting swarm-based information to human beings.

4.
戴丽珍  杨刚  阮晓钢 《自动化学报》2014,40(9):1951-1957
Taking the autonomous balance-learning control of a two-wheeled robot as the research object, and noting that traditional control methods cannot reproduce the gradual learning process seen in humans and animals, an autonomous operant conditioning automaton (AOCA) model is established based on Skinner's theory of operant conditioning. A bionic learning algorithm based on the AOCA is designed, and simulation experiments on robot posture-balance learning are carried out. The experimental results show that the AOCA-based bionic learning method effectively achieves autonomous balance-learning control of the robot; the balancing ability of the robot system forms gradually and in a self-organized manner during the learning process, and continues to develop and improve.

5.
Humans can learn a language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form symbol systems and acquire semiotic skills through autonomous mental development. Recently, many studies have been conducted on constructing robotic systems and machine learning methods that can learn a language through embodied multimodal interaction with their environment and with other systems. Understanding human social interactions and developing a robot that can communicate smoothly with human users over the long term require an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually alter a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER represents a constructive approach towards a symbol emergence system, which is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e. humans and developmental robots. Specifically, we describe some state-of-the-art research topics in SER, such as multimodal categorization, word discovery, and double articulation analysis. These enable robots to discover words and their embodied meanings from raw sensory-motor information, including visual, haptic, auditory, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions for research in SER.
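Multimodal categorization, as described above, groups raw multimodal observations into emergent categories without supervision. A deliberately tiny sketch of that idea, using plain k-means over vectors that stand in for concatenated modality features (the actual SER work uses far richer probabilistic models; every name here is illustrative):

```python
def tiny_kmeans(vectors, k, iters=10):
    """Very small batch k-means, standing in for unsupervised multimodal
    categorization: each vector concatenates features from several
    modalities, and the clusters play the role of emergent categories."""
    centers = list(vectors[:k])  # deterministic init: first k points
    for _ in range(iters):
        # assign each vector to its nearest center (squared Euclidean)
        groups = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(v, centers[i])))
            groups[j].append(v)
        # recompute centers; keep the old center if a cluster empties
        centers = [tuple(sum(dim) / len(g) for dim in zip(*g)) if g
                   else centers[j]
                   for j, g in enumerate(groups)]
    return centers
```

The point of the sketch is only the loop structure: assign, re-estimate, repeat, with no labels anywhere.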

6.
Deploying autonomous robot teams instead of humans in hazardous search and rescue missions could provide immeasurable benefits. In such operations, rescue workers often face environments where information about the physical conditions is impossible to obtain, which not only hampers the efficiency and effectiveness of the effort but also places the rescuers in life-threatening situations. These risks motivate the use of robot search teams in place of humans. This article presents the design and implementation of controllers that provide robots with appropriate behavior. The effective use of genetic algorithms to evolve controllers for teams of homogeneous autonomous robots performing area coverage in search and rescue missions is described, along with a robotic simulation program that was designed and developed for this purpose. The main objective of this study was to contribute to efforts to implement real-world robotic solutions for search and rescue missions.

7.
Mobile robots must cope with uncertainty from many sources along the path from interpreting raw sensor inputs, through behavior selection, to execution of the resulting primitive actions. This article identifies several such sources and introduces methods for (i) reducing uncertainty and (ii) making decisions in the face of uncertainty. We present a complete vision-based robotic system that includes several algorithms for learning models that are useful and necessary for planning, and then place particular emphasis on the planning and decision-making capabilities of the robot. Specifically, we present models for autonomous color calibration, autonomous sensor and actuator modeling, and an adaptation of particle filtering for improved localization on legged robots. These contributions enable effective planning under uncertainty for robots engaged in goal-oriented behavior within a dynamic, collaborative, and adversarial environment. Each of our algorithms is fully implemented and tested on a commercial off-the-shelf vision-based quadruped robot.
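One predict-weight-resample cycle of a particle filter, reduced to a scalar position state for clarity, might look like the sketch below. The paper's adaptation for legged robots is considerably more involved; the state, noise models, and names here are illustrative assumptions:

```python
import math
import random

def pf_step(particles, control, measurement,
            motion_noise=0.1, sense_noise=0.5):
    """One predict-weight-resample cycle of a toy 1-D particle filter.

    particles: list of scalar position hypotheses
    control: commanded displacement this step
    measurement: (noisy) observed position
    """
    # Predict: apply the control with additive Gaussian motion noise
    moved = [p + control + random.gauss(0.0, motion_noise)
             for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle
    weights = [math.exp(-((measurement - p) ** 2) / (2 * sense_noise ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights
    return random.choices(moved, weights=weights, k=len(moved))
```

After a few such cycles the particle cloud concentrates around positions consistent with both the motion commands and the measurements, which is the behavior the localization module relies on.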

8.
A simulation system for multiple autonomous mobile robots based on KQML
刘淑华  田彦涛 《机器人》2005,27(4):350-353
A simulation system for multiple autonomous mobile robots in a grid environment was developed in Java. Multiple autonomous mobile robots are simulated, communicating via the KQML language; each robot's autonomy is reflected mainly in autonomously perceiving the environment and autonomously performing path planning, task execution, and safe navigation. The simulation system is independent of platform, map, algorithm, and robot configuration, providing a useful reference platform for research on multi-robot autonomous systems.

9.
This article describes the simulation of distributed autonomous robots for search and rescue operations. The simulation system is used to run experiments with various control strategies and team organizations, evaluating their comparative performance. The objective of the robot team, once deployed in an environment (floor plan) with multiple rooms, is to cover as many rooms as possible. The simulated robots can navigate through the environment and communicate using simple messages. The simulator maintains the world, provides each robot with sensory information, and carries out the robots' actions. It tracks the rooms visited by the robots and the elapsed time in order to evaluate team performance. The robot teams are composed of homogeneous robots, i.e., identical control strategies generate the behavior of each robot in the team. The ability to deploy autonomous robots, as opposed to humans, in hazardous search and rescue missions could provide immeasurable benefits.

10.
With advances in technology, robots have gradually replaced humans in various roles. Enabling robots to handle multiple situations simultaneously and to perform different actions depending on the situation has become a critical topic. Training a robot to perform a single designated action is currently considered easy; however, when a robot must act in different environments, both resetting and retraining are required, which is time-consuming and inefficient. Allowing robots to identify their environment autonomously can therefore significantly reduce the time consumed, and employing machine learning algorithms to achieve such autonomous robot learning has become a research trend. In this study, a proximal policy optimization (PPO) algorithm was used to allow a robot to train itself and select an optimal gait pattern to reach its destination. Multiple basic gait patterns were selected, and information-maximizing generative adversarial nets (InfoGAN) were used to generate gait patterns so that the robot could choose from numerous gaits while walking. The experimental results indicated that, after self-learning, the robot successfully made different choices depending on the situation, verifying the approach's feasibility.
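The conservative policy updates that make PPO's self-training stable come from its clipped surrogate objective. A minimal per-sample version of that objective (the full algorithm also needs a policy network, advantage estimation, and minibatch optimization, none of which is shown; the function name is illustrative):

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate objective for one (state, action) sample.

    ratio = pi_new(a|s) / pi_old(a|s).  Clipping the ratio to
    [1 - eps, 1 + eps] prevents any single update from moving the
    policy too far, which is what keeps self-training stable.
    Returned as a loss (negated), as an optimizer would minimize it.
    """
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # PPO maximizes the minimum of the unclipped and clipped terms
    return -min(ratio * advantage, clipped * advantage)
```

With `ratio = 1` (no policy change) the loss reduces to `-advantage`; large ratios stop contributing extra gradient once they exceed the clip range.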

11.
Navigation is a basic skill for autonomous robots. In recent years human–robot interaction has become an important research field spanning all robot capabilities, including perception, reasoning, learning, manipulation, and navigation. For navigation, the presence of humans requires novel approaches that take into account the constraints of human comfort as well as social rules. Beyond these constraints, putting robots among humans opens new interaction possibilities, also for navigation tasks such as robot guides. This paper surveys existing approaches to human-aware navigation and offers a general classification scheme for the presented methods.

12.
This paper describes an autonomous free-floating robot system designed to investigate the behavior of free-floating robots involved in capturing satellites in space. The robot serves as a test bed for algorithms developed for the efficient, autonomous, and optimal capture of objects in space. It is a completely autonomous system running under a real-time operating system, equipped with two three-degree-of-freedom arms, a three-axis thruster system, and a fast communications module. The robot works in conjunction with a host computer, which processes the capture algorithms while the robot implements the results in real time.

13.
The rapid advances in artificial intelligence and robotics will have a profound impact on society, as they will interfere with people and their interactions. Intelligent autonomous robots, whether humanoid/anthropomorphic or not, will have a physical presence, make autonomous decisions, and interact with all stakeholders in society in as yet unforeseen ways. Symbiosis with such sophisticated robots may lead to a fundamental civilizational shift with far-reaching effects, as philosophical, legal, and societal questions about the consciousness, citizenship, rights, and legal personhood of robots are raised. The aim of this work is to understand the broad scope of potential issues pertaining to law and society by investigating the interplay of law, robots, and society from different angles: legal, social, economic, gender, and ethical perspectives. The results make it evident that in an era of symbiosis with intelligent autonomous robots, neither legal systems nor society is prepared for their prevalence. It is therefore time to start a multi-disciplinary stakeholder discussion and derive the necessary policies, frameworks, and roadmaps for the most pressing issues.

14.
Recent developments in sensor technology have made it feasible to use mobile robots in several fields, but robots still lack the ability to sense the environment accurately. A major challenge to the widespread deployment of mobile robots is the ability to function autonomously: learning useful models of environmental features, recognizing environmental changes, and adapting the learned models in response to such changes. This article focuses on such learning and adaptation in the context of color segmentation on mobile robots in the presence of illumination changes. The main contribution is a survey of vision algorithms that are potentially applicable to color-based mobile robot vision. We therefore examine algorithms for color segmentation, color learning, and illumination invariance on mobile robot platforms, including approaches that tackle only the underlying vision problems. Furthermore, we investigate how the inter-dependencies between these modules and high-level action planning can be exploited to achieve autonomous learning and adaptation. The goal is to determine the suitability of state-of-the-art vision algorithms for mobile robot domains, and to identify the challenges that must still be addressed before mobile robots can learn and adapt color models so as to operate autonomously in natural conditions.
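The simplest color-segmentation schemes the survey covers label each pixel against learned per-class color models. A toy box-threshold version of that idea (real systems use learned color maps in a suitable color space and update the model under illumination change; the labels, channel values, and tolerances below are invented for illustration):

```python
def classify_pixel(pixel, color_models):
    """Assign a pixel (one value per channel) to the first matching
    learned color class, or "unknown" if no class matches.

    color_models: {label: (per-channel means, per-channel tolerances)}.
    A simple box model standing in for the learned color maps; adapting
    to illumination change would amount to updating means/tolerances.
    """
    for label, (means, tols) in color_models.items():
        if all(abs(c - m) <= t for c, m, t in zip(pixel, means, tols)):
            return label
    return "unknown"
```

Usage: with a model like `{"ball_orange": ((240, 120, 30), (30, 40, 30))}`, a pixel near that RGB mean is labeled `ball_orange` and everything else falls through to `unknown`.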

15.
Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular view on industrial logistics applications. In the robot-to-human direction, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) so that humans can intuitively understand the robot's intention and feel safe in its vicinity. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye-gaze patterns of humans interacting with the forklift and carried out stimulated recall interviews (SRI) in order to identify desirable features for the projection of robot intentions. In the human-to-robot direction, we argue that robots in human co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from a person's trajectory and head pose. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. We investigate the possibility of implicit human-to-robot intention transfer solely from eye-gaze data and evaluate how the observed gaze patterns of the participants relate to their navigation decisions, again analyzing trajectories and gaze patterns of humans interacting with the autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed at the side of the robot they ultimately decided to pass by. We discuss the implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.

16.
In this paper a new intelligent robot control scheme is presented that enables cooperative work between humans and robots through direct contact interaction in a partially known environment. Because of its high flexibility and adaptability, human–robot cooperation is expected to find a wide range of applications in uncertain environments, not only in future construction and manufacturing industries but also in the service sector. A multi-agent control architecture provides an appropriate framework for the flexibility of the human–robot team. Robots are considered intelligent autonomous assistants of humans that can interact on both a symbolic and a physical level. This interaction is achieved through the exchange of information between humans and robots, the interpretation of the transmitted information, the coordination of activities, and cooperation between independent system components. Equipped with sensing modalities for perceiving the environment, the robot system KAMRO (Karlsruhe Autonomous Mobile Robot) is introduced to demonstrate the principles of cooperation between humans and robot agents. Experiments were conducted to prove the effectiveness of our concept.

17.
Mobile robots can accomplish high-risk tasks without exposing humans to danger: robots go where humans fear to tread. Until completely autonomous robots are fully deployed, remote operators will be required to fulfill the desired missions. Remotely controlling a robot requires that the operator receive information about the robot's surroundings as well as its location in the scenario. Based on a set of experiments conducted with users, we evaluate the performance of operators when provided with a hand-held interface or a desktop-based interface. Results show how performance depends on the task asked of the operator and the scenario in which the robot is moving. The conclusions show that the operator's intra-scenario mobility when carrying a hand-held device can counterbalance the limitations of the device. By contrast, the experiments show that if the operator cannot move inside the scenario, performance is significantly better with a desktop-based interface. These results set the basis for a control-transfer policy in missions involving a team of operators, some equipped with hand-held devices and others working remotely with desktop-based computers.

18.
In this paper, we propose fuzzy logic-based cooperative reinforcement learning for sharing knowledge among autonomous robots. The ultimate goal is to entice bio-insects towards desired goal areas using artificial robots without any human aid. To achieve this goal, we identified an interaction mechanism using a specific odor source and performed simulations and experiments [1]. For efficient learning without human aid, we employ cooperative reinforcement learning in a multi-agent domain. Additionally, we design a fuzzy logic-based expertise measurement system to enhance the learning ability. This structure enables the artificial robots to share knowledge while evaluating and measuring the performance of each robot. The performance of the proposed learning algorithms is evaluated through numerous experiments.
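One plausible shape for expertise-weighted knowledge sharing is merging per-robot Q-tables with each robot's contribution scaled by its expertise. The sketch below is an assumption-laden stand-in for the paper's method: the fuzzy expertise measurement is replaced by crisp weights already in [0, 1], and all names are invented:

```python
def share_q(q_tables, expertise):
    """Merge per-robot Q-tables into one shared table, weighting each
    robot's estimates by its expertise score.

    q_tables:  list of {state_action_key: q_value} dicts, one per robot
    expertise: list of crisp weights in [0, 1], one per robot (a fuzzy
               inference system would produce these in the real setup)
    """
    total = sum(expertise) or 1.0
    keys = set().union(*q_tables) if q_tables else set()
    return {k: sum(w * q.get(k, 0.0)
                   for w, q in zip(expertise, q_tables)) / total
            for k in keys}
```

With equal weights this reduces to plain averaging; raising one robot's weight pulls the shared estimates toward the more expert robot, which is the intended effect of measuring expertise at all.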

19.
In this paper, we tackle the problem of multimodal learning for autonomous robots. Autonomous robots interacting with humans in an evolving environment need the ability to acquire knowledge from their multiple perceptual channels in an unsupervised way. Most approaches in the literature rely on engineered methods to process each perceptual modality. In contrast, robots should be able to acquire their own features from raw sensors, leveraging the information elicited by interaction with their environment: learning from sensorimotor experience is a more efficient strategy in a life-long perspective. To this end, we propose an architecture based on deep networks, which the humanoid robot iCub uses to learn a task from multiple perceptual modalities (proprioception, vision, audition). By structuring high-dimensional multimodal information into a set of distinct sub-manifolds in a fully unsupervised way, it performs a substantial dimensionality reduction, providing both a symbolic representation of the data and fine discrimination between similar stimuli. Moreover, the proposed network exploits multimodal correlations to improve the representation of each individual modality.

20.
This article presents a framework for the operation and coordination of multiple miniature robots. Simple teleoperation can be useful in many situations, but the operator's attention must be completely dedicated to controlling the robot, which may be difficult when the task requires multiple robots. The article introduces a layered system developed to facilitate multimodal control. This system includes user interfaces (UIs) for teleoperation clients and robust sensor-interpretation algorithms for autonomous control clients. A distributed software control architecture dynamically coordinates hardware resources and shares them among the various clients, allowing simultaneous control of multiple robots.
