Similar Documents
20 similar documents retrieved.
1.
Advanced Robotics, 2013, 27(12): 1351-1367
Robot imitation is a useful and promising alternative to robot programming. It involves two crucial issues: first, how a robot can imitate a human whose physical structure and properties differ greatly from its own; second, how the robot can generate various motions from a finite set of programmable patterns (generalization). This paper describes a novel approach to robot imitation based on the robot's own physical experiences. We considered the target task of moving an object on a table. For imitation, we focused on an active sensing process in which the robot acquires the relation between the object's motion and its own arm motion. For generalization, we applied the RNNPB (recurrent neural network with parametric bias) model to enable recognition and generation of imitation motions. The robot associates the observed object motion presented by a human operator with the arm motion that reproduces it. Experimental results demonstrated the generalization capability of our method, which enables the robot to imitate not only motions it has experienced but also unknown motions, through nonlinear combinations of the experienced motions.
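As a concrete illustration of the generalization mechanism, the sketch below shows an RNNPB-style network in PyTorch: a small learnable parametric-bias (PB) vector accompanies each training sequence, shared weights capture the common dynamics, and recognition or generation amounts to optimizing or fixing the PB. All sizes and names are illustrative assumptions, not the authors' implementation.

```python
# Minimal RNNPB-style sketch (illustrative, not the authors' code).
import torch
import torch.nn as nn

class RNNPB(nn.Module):
    def __init__(self, obs_dim=6, pb_dim=2, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(obs_dim + pb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, obs_dim)

    def forward(self, x, pb):
        # x: (batch, T, obs_dim); pb: (batch, pb_dim), held constant over time
        pb_seq = pb.unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.rnn(torch.cat([x, pb_seq], dim=-1))
        return self.out(h)                  # one-step prediction of the next observation

model = RNNPB()
pb = nn.Parameter(torch.zeros(1, 2))        # one PB vector per training sequence
opt = torch.optim.Adam(list(model.parameters()) + [pb], lr=1e-3)
seq = torch.randn(1, 50, 6)                 # stand-in for logged arm/object motion
for _ in range(200):
    loss = nn.functional.mse_loss(model(seq[:, :-1], pb), seq[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()
# Recognition of a new motion: freeze the weights and optimize only a fresh PB vector;
# generation: fix or interpolate PB values and roll the network forward in closed loop.
```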

2.
Imitation has been receiving increasing attention, not simply as a means of generating new motions but also for the emergence of communication. This paper proposes a system through which a humanoid acquires new motions by learning the interaction rules with a human partner, based on the assumption of a mirror system. First, the humanoid learns the correspondence between its own posture and the partner's posture using ISOMAP embeddings, under the assumption that the human partner imitates the robot's motions. Based on this correspondence, the robot can easily map the partner's observed gestures onto its own motions. The correspondence then enables the robot to acquire new motion primitives for the interaction. Furthermore, through this process, the humanoid learns an interaction rule that controls gesture turn-taking. Preliminary results and future issues are presented.
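A minimal sketch of the posture-correspondence idea, assuming paired robot/partner postures collected while the partner imitates the robot; the embedding dimensions, data shapes and nearest-neighbour lookup are our illustrative choices, not the paper's code.

```python
# Posture correspondence on low-dimensional ISOMAP embeddings (illustrative sketch).
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import NearestNeighbors

robot_postures = np.random.rand(300, 20)    # stand-in: logged robot joint angles
partner_postures = np.random.rand(300, 15)  # stand-in: observed partner pose features

iso_p = Isomap(n_components=2).fit(partner_postures)
z_p = iso_p.embedding_                      # paired by time (partner imitates robot)
nn_p = NearestNeighbors(n_neighbors=1).fit(z_p)

def partner_to_robot(new_partner_pose):
    """Map a newly observed partner posture onto a robot posture."""
    z = iso_p.transform(new_partner_pose.reshape(1, -1))
    _, idx = nn_p.kneighbors(z)             # closest previously seen partner posture
    return robot_postures[idx[0, 0]]        # its paired robot posture

print(partner_to_robot(np.random.rand(15)))
```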

3.
In this paper, we propose a novel approach to intuitive and natural physical human–robot interaction in cooperative tasks. Through initial learning by demonstration, robot behavior naturally evolves into a cooperative task in which the human co-worker is allowed to modify both the spatial course of motion and the speed of execution at any stage. The main feature of the proposed adaptation scheme is that the robot adjusts its stiffness in path operational space, defined with a Frenet–Serret frame. Furthermore, the required dynamic capabilities of the robot are obtained by decoupling the robot dynamics in the operational space attached to the desired trajectory. Speed-scaled dynamic motion primitives are applied for the underlying task representation. This combination allows a human co-worker in a cooperative task to be less precise in the parts of the task that require high precision, as the precision aspect is learned and provided by the robot. The user can also freely change the speed and/or the trajectory simply by applying force to the robot. The proposed scheme was experimentally validated on three illustrative tasks. The first task demonstrates a novel two-stage learning by demonstration in which the spatial part of the trajectory is demonstrated independently of the velocity part. The second task shows how parts of the trajectory can be rapidly and significantly changed within one execution. The final experiment shows two Kuka LWR-4 robots in a bimanual setting cooperating with a human while carrying an object.
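To make the speed-scaling idea concrete, here is a compact sketch of a discrete dynamic motion primitive in the standard Ijspeert-style formulation, where the time constant tau is modulated online (here by a stand-in interaction-force signal); the gains, basis widths and force-to-tau mapping are illustrative assumptions, not the paper's scheme.

```python
# Speed-scaled discrete DMP sketch: larger tau slows both the dynamics and the phase.
import numpy as np

class SpeedScaledDMP:
    def __init__(self, w, g, y0, alpha=25.0, beta=6.25, ax=1.0):
        self.w, self.g, self.y0 = w, g, y0      # forcing weights, goal, start
        self.alpha, self.beta, self.ax = alpha, beta, ax
        self.y, self.z, self.x = y0, 0.0, 1.0   # position, scaled velocity, phase

    def step(self, dt, tau):
        c = np.linspace(0, 1, len(self.w))      # basis centres in phase
        psi = np.exp(-50.0 * (self.x - c) ** 2)
        f = (psi @ self.w) / psi.sum() * self.x * (self.g - self.y0)
        # tau * zdot = alpha*(beta*(g - y) - z) + f ;  tau * ydot = z
        self.z += dt / tau * (self.alpha * (self.beta * (self.g - self.y) - self.z) + f)
        self.y += dt / tau * self.z
        self.x += dt / tau * (-self.ax * self.x)   # phase slows when tau grows
        return self.y

dmp = SpeedScaledDMP(w=np.random.randn(20), g=1.0, y0=0.0)
for t in range(1000):
    human_force = 0.0                           # stand-in for the sensed interaction force
    tau = 1.0 + 0.5 * abs(human_force)          # larger force -> slower execution
    y = dmp.step(dt=0.002, tau=tau)
```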

4.
This paper describes an approach to estimating the progress of a task executed by a humanoid robot and to synthesizing motion based on the current progress so that the robot can achieve the task. The robot observes a human performing whole-body motion for a specific task and encodes these motions into a hidden Markov model (HMM). The current observation is compared with the motion generated by the HMM, so the task progress can be estimated while the robot performs the motion. The robot then uses the estimate of the task progress to generate a motion appropriate to the current situation via a feedback rule. We constructed a bilateral remote-control system with the humanoid robot HRP-4 and the Novint Falcon haptic device, and used it to make the humanoid robot push a button. Ten trial motions of pushing the button were recorded as training data. We tested the proposed approach on the autonomous execution of the pushing motion by the humanoid robot and confirmed the effectiveness of our task-progress feedback method.
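A minimal sketch of the progress-estimation step, assuming a left-to-right HMM over the demonstrated motion: the normalized forward recursion yields a filtered state posterior, whose expected state index serves as a task-progress estimate in [0, 1]. The transition matrix and observation log-likelihoods below are stand-ins, not the authors' trained model.

```python
# Task progress from an HMM's filtered state posterior (illustrative sketch).
import numpy as np

def forward_step(alpha, A, loglik_t):
    """One normalized forward-recursion step; alpha is the previous state posterior."""
    a = (alpha @ A) * np.exp(loglik_t)
    return a / a.sum()

n_states = 10                                   # left-to-right states along the motion
A = np.eye(n_states) * 0.8 + np.eye(n_states, k=1) * 0.2   # stay or advance
A[-1, -1] = 1.0                                 # absorbing final state
alpha = np.zeros(n_states); alpha[0] = 1.0      # task starts in the first state

for obs_loglik in np.random.randn(100, n_states):   # stand-in per-state log-likelihoods
    alpha = forward_step(alpha, A, obs_loglik)
    progress = alpha @ np.arange(n_states) / (n_states - 1)   # expected phase in [0, 1]
```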

5.
Controlling someone's attention can be defined as shifting his/her attention from its current direction to another. To shift someone's attention, gaining attention and meeting gaze are the two most important prerequisites. If a robot wants to communicate with a particular person, it should turn its gaze to him/her to establish eye contact. However, making eye contact is not an easy task for the robot, because a turning action alone may not be effective in all situations, especially when the robot and the human are not facing each other or the human is intensely focused on his/her task. Therefore, the robot should perform actions that attract the target person and make him/her respond so that their gazes can meet. In this paper, we present a robot that can attract a target person's attention by moving its head, make eye contact by displaying gaze awareness through eye blinks, and direct the person's attention by repeatedly turning its eyes and head from the person to the target object. Experiments with 20 human participants confirm the effectiveness of these robot actions for controlling human attention.
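The three-phase protocol (attract, establish eye contact, redirect attention) can be summarized as a simple behavior loop; the sketch below is illustrative only, with stand-in classes replacing real perception and actuation.

```python
# Illustrative attention-control loop; all classes and methods are stand-ins.
import random

class Person:
    def is_looking_at(self, thing):             # stand-in for gaze estimation
        return random.random() < 0.3

class Robot:
    def turn_head_toward(self, who): print("head ->", who)
    def blink_eyes(self):            print("blink (gaze awareness)")
    def gaze_shift(self, a, b):      print(f"eyes/head: {a} -> {b}")

robot, person = Robot(), Person()
robot.turn_head_toward("person")                # phase 1: attract attention
while not person.is_looking_at("robot"):        # wait until the person looks back
    robot.turn_head_toward("person")
robot.blink_eyes()                              # phase 2: establish eye contact
while not person.is_looking_at("target"):       # phase 3: repeated turns toward object
    robot.gaze_shift("person", "target")
```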

6.
7.
Safety, legibility and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular emphasis on industrial logistics applications. In the robot-to-human direction, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in its vicinity. We conducted experiments with an autonomous forklift that projects various patterns onto the shared floor space to convey its navigation intentions. We analyzed the trajectories and eye-gaze patterns of humans interacting with the forklift and carried out stimulated recall interviews (SRI) to identify desirable features for the projection of robot intentions. In the human-to-robot direction, we argue that robots in human-co-habited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from a person's trajectory and head pose. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. We investigate the possibility of implicit human-to-robot intention transfer solely from eye-gaze data and evaluate how the observed eye-gaze patterns of the participants relate to their navigation decisions. We again analyzed the trajectories and eye-gaze patterns of humans interacting with the autonomous forklift, looking for clues that could reveal directional intent. Our analysis shows that people primarily gazed at the side of the robot on which they ultimately decided to pass. We discuss the implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.
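In the spirit of the reported finding, a trivially simple analysis would compare fixation counts on either side of the robot's centre line and predict the passing side from the majority; the data layout and threshold below are our assumptions, not the study's analysis pipeline.

```python
# Predict a pedestrian's passing side from gaze fixations (illustrative sketch).
import numpy as np

def predict_pass_side(fixations_x, robot_centre_x):
    """fixations_x: horizontal fixation coordinates while approaching the robot."""
    left = np.sum(np.asarray(fixations_x) < robot_centre_x)
    right = len(fixations_x) - left
    return "left" if left > right else "right"

print(predict_pass_side([0.2, 0.3, 0.1, 0.6], robot_centre_x=0.5))  # -> "left"
```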

8.
Our goal is to enable robots to produce motion that is suitable for human–robot collaboration and co-existence. Most motion in robotics is purely functional, ideal when the robot is performing a task in isolation. In collaboration, however, the robot’s motion has an observer, watching and interpreting the motion. In this work, we move beyond functional motion, and introduce the notion of an observer into motion planning, so that robots can generate motion that is mindful of how it will be interpreted by a human collaborator. We formalize predictability and legibility as properties of motion that naturally arise from the inferences in opposing directions that the observer makes, drawing on action interpretation theory in psychology. We propose models for these inferences based on the principle of rational action, and derive constrained functional trajectory optimization techniques for planning motion that is predictable or legible. Finally, we present experiments that test our work on novice users, and discuss the remaining challenges in enabling robots to generate such motion online in complex situations.
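Below is a sketch of the goal-inference model that legibility builds on, under the principle of rational action: the observer scores each candidate goal by how efficient the partial trajectory looks for it. The straight-line cost proxy and the normalization are our simplifications of the general formulation, not the authors' exact model.

```python
# Observer's goal inference from a partial trajectory (illustrative sketch).
import numpy as np

def cost(a, b):                       # proxy cost: straight-line distance
    return np.linalg.norm(np.asarray(a) - np.asarray(b))

def goal_posterior(start, current, path_cost_so_far, goals):
    # P(G | path so far) ∝ exp(-(C_so_far + C_to_go)) / exp(-C_direct), per goal
    scores = np.array([np.exp(-(path_cost_so_far + cost(current, g)))
                       / np.exp(-cost(start, g)) for g in goals])
    return scores / scores.sum()

goals = [(1.0, 0.0), (1.0, 1.0)]
# A trajectory that exaggerates motion away from the competing goal is more legible:
p = goal_posterior(start=(0, 0), current=(0.6, -0.2), path_cost_so_far=0.65, goals=goals)
print(p)    # probability mass concentrates on goal (1, 0)
```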

9.
We present a novel method for a robot to interactively learn a joint human–robot task while executing it. We consider collaborative tasks carried out by a team of a human operator and a robot helper that adapts to the human's task-execution preferences. Different human operators have different abilities, experiences and personal preferences, so that one particular allocation of activities in the team is preferred over another. Our main goal is to have the robot learn the task and the preferences of the user in order to provide a more efficient and acceptable joint task execution. We cast concurrent multi-agent collaboration as a semi-Markov decision process and show how to model the team behavior and learn the expected robot behavior. We further propose an interactive learning framework and evaluate it both in simulation and on a real robotic setup to show that the system can effectively learn and adapt to human expectations.
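As a sketch of the semi-Markov formulation, the snippet below runs SMDP-style Q-learning in which each activity has a duration tau and is discounted by gamma**tau; the state/action spaces and the simulator are stand-ins, not the paper's task model.

```python
# SMDP Q-learning sketch: actions have durations, discounting uses gamma**tau.
import numpy as np
import random

gamma, alpha = 0.95, 0.1
Q = np.zeros((10, 4))                       # 10 task states x 4 activity allocations

def simulate(s, a):                         # stand-in for observed team execution
    tau = random.uniform(1.0, 3.0)          # activity duration in seconds
    return (s + 1) % 10, -tau, tau          # next state, reward (faster is better), tau

s = 0
for _ in range(5000):
    a = random.randrange(4) if random.random() < 0.1 else int(np.argmax(Q[s]))
    s2, r, tau = simulate(s, a)
    Q[s, a] += alpha * (r + gamma**tau * Q[s2].max() - Q[s, a])   # SMDP backup
    s = s2
```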

10.
Previously we presented a novel approach to programming a robot controller based on system identification and robot training techniques. The proposed method works in two stages: first, the programmer demonstrates the desired behaviour to the robot by driving it manually in the target environment. During this run, the sensory perception and the desired velocity commands of the robot are logged. Having thus obtained training data, we model the relationship between sensory readings and the motor commands of the robot using ARMAX/NARMAX models and system identification techniques. These produce linear or non-linear polynomials which can be formally analysed, as well as used in place of traditional robot control code.

In this paper we focus on how the mathematical analysis of NARMAX models can be used to understand the robot's control actions, to formulate hypotheses and to improve the robot's behaviour. One main objective of this approach is to avoid trial-and-error refinement of robot code. Instead, we seek a reliable design process in which program design decisions are based on the mathematical analysis of the model describing how the robot interacts with its environment to achieve the desired behaviour. We demonstrate this procedure through the analysis of a particular task in mobile robotics: door traversal.
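A minimal sketch of the identification stage, assuming logged sensor readings and velocity commands: lagged inputs and outputs are assembled into a regressor matrix and fitted with a degree-2 polynomial, yielding an explicit ARMAX/NARMAX-style expression. The lags, degree and data below are illustrative, not the authors' tooling.

```python
# Fit a polynomial NARMAX-style model from logged demonstration data (sketch).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

sensors = np.random.rand(500, 4)            # stand-in: logged range readings
command = np.random.rand(500)               # stand-in: logged turn-rate commands

lag = 2                                     # regressors: s(t-1..lag), u(t-1..lag)
X = np.hstack([sensors[lag - k: len(sensors) - k] for k in range(1, lag + 1)]
              + [command[lag - k: len(command) - k, None] for k in range(1, lag + 1)])
y = command[lag:]

poly = PolynomialFeatures(degree=2, include_bias=True)
model = LinearRegression().fit(poly.fit_transform(X), y)
# The fitted coefficients define an explicit polynomial in past sensor values and past
# commands, which can be printed, analysed, and executed in place of hand-written code.
```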

11.
A key challenge in human–robot shared workspaces is defining the decision criterion for selecting the next task so that collaboration is fluent, efficient and safe. When working with robots in an industrial environment, tasks may be subject to precedence constraints on their execution. A typical example of a precedence constraint in industry occurs at an assembly station when the human cannot perform a task before the robot has finished its own. This paper presents a methodology based on Maximum Entropy Inverse Optimal Control for identifying a probability distribution over the human's goals, packaged into a software tool for human–robot shared-workspace collaboration. The software analyzes the human's goal and the goal precedence constraints, and identifies the best robot goal along with the corresponding motion plan. The approach combines an algorithm for managing goal precedence constraints with a Partially Observable Markov Decision Process (POMDP) for selecting the next robot action. A comparison study with 15 participants was carried out at a real-world assembly station. The experiment focused on evaluating task fluency, task efficiency and human satisfaction. The presented model reduced robot idle time and increased human satisfaction.
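A compact sketch of the decision pipeline as we read it: a MaxEnt-style belief over the human's goal is updated from per-goal motion costs, goals whose precedence constraints are unmet are masked out, and the robot picks the goal least likely to conflict with the human. The full system uses a POMDP for action selection; the greedy choice and all quantities below are simplifying assumptions.

```python
# MaxEnt-style goal belief update plus precedence-constrained goal choice (sketch).
import numpy as np

def update_goal_belief(belief, step_costs):
    """MaxEnt observation update: goals the motion is efficient for gain probability."""
    belief = belief * np.exp(-np.asarray(step_costs))   # lower cost-to-goal => likelier
    return belief / belief.sum()

belief = np.ones(3) / 3                          # three candidate human goals
precedence_met = np.array([True, False, True])   # is robot goal i allowed yet?

for step_costs in [(0.2, 0.9, 0.8), (0.1, 1.0, 0.9)]:   # stand-in per-goal costs
    belief = update_goal_belief(belief, step_costs)

robot_scores = (1.0 - belief) * precedence_met   # avoid the human's likely goal
print("next robot goal:", int(np.argmax(robot_scores)))
```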

12.
This article considers the suitability of current robots designed to assist humans in their daily domestic tasks. With several million units sold worldwide, robotic vacuum cleaners are currently the figurehead of this field. As such, we use them to investigate the following key question: how does a service cleaning robot perform in a real household? One must consider not just how well a robot accomplishes its task, but also how well it integrates into the user's space and perception. We took a holistic approach to these topics by combining two studies in order to build common ground. In the first study, we analyzed a sample of seven robots to identify the influence of key technologies, such as the navigation system, on technical performance. In the second, we conducted an ethnographic study in nine households to identify users' needs. This combined approach enables us to recommend a number of concrete improvements aimed at fulfilling users' needs by leveraging current technologies.

13.
This paper proposes a novel simultaneous bipedal locomotion method using haptics for the remote operation of biped robots. Traditional biped walking methods generally require very high computational power and advanced controllers to perform the required task. In the proposed method, by contrast, a master exoskeleton attached to the operator's lower body provides the trajectory and haptic information used to generate the trajectory of the slave biped robot in real time. Lateral motion of the biped's center of mass is constrained in this experiment. We also assume that there is no communication delay between the two systems; delay effects are not discussed in this paper. Since direct motion transmission is used, the method is straightforward, and simultaneous walking can be realized with high performance. It requires neither an exact dynamic model of the biped nor a specific trajectory-planning method: the gait pattern of the biped is directly determined by that of the human. In addition, the operator can feel the remote environment through the exoskeleton robot. Results obtained from the experiments validate the proposed method.
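An illustrative master-slave control step for the scheme described above: the exoskeleton's measured leg trajectory is transmitted directly to the biped, and the biped's contact forces are reflected back as haptic feedback. The device interfaces and the force-feedback gain are stand-ins, not the authors' system.

```python
# Bilateral teleoperation loop sketch with stand-in device interfaces.
import numpy as np

def control_step(master, slave, kf=0.5):
    q_h = master.read_joint_angles()          # human gait sampled by the exoskeleton
    slave.command_joint_angles(q_h)           # direct motion transmission, no planner
    f = slave.read_contact_forces()           # remote environment interaction forces
    master.apply_feedback_torque(kf * f)      # operator feels the remote environment

class Stub:                                   # minimal stand-in devices for the sketch
    def read_joint_angles(self): return np.zeros(6)
    def command_joint_angles(self, q): pass
    def read_contact_forces(self): return np.zeros(6)
    def apply_feedback_torque(self, t): pass

master, slave = Stub(), Stub()
for _ in range(1000):                         # fixed-rate loop, e.g. 1 kHz
    control_step(master, slave)
```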

14.
The lack of a theory-based design methodology for mobile-robot control programs means that control programs have to be developed through an empirical trial-and-error process. This can be costly, time-consuming and error-prone.

In this paper we show how to develop a theory of robot–environment interaction that overcomes this problem. We show how to model a mobile robot's task (so-called "task identification") using non-linear polynomial models (NARMAX), which can subsequently be formally analysed using established mathematical methods. This provides an understanding of the underlying phenomena governing the robot's behaviour.

Apart from the paper's main objective of formally analysing robot–environment interaction, the task identification process has further benefits, such as fast and convenient cross-platform transfer of robot control programs ("Robot Java"), parsimonious task representations (reducing memory requirements) and very fast control-code execution times.
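To illustrate the kind of formal analysis a polynomial model permits, the sketch below treats an invented fitted polynomial symbolically: exact sensitivities and zero-crossings of the motor command can be computed in closed form, supporting hypothesis formation without running the robot. The model itself is made up for illustration.

```python
# Symbolic analysis of a (stand-in) fitted NARMAX polynomial.
import sympy as sp

s1, s2 = sp.symbols("s1 s2")                        # two sonar readings
u = 0.3 - 0.8*s1 + 0.5*s2 + 0.2*s1*s2 - 0.1*s1**2   # invented fitted polynomial

print(sp.diff(u, s1))               # exact sensitivity of the turn rate to sensor 1
print(sp.solve(sp.Eq(u, 0), s2))    # sensor combinations giving straight-line motion
# Such statements (stationary points, sign changes, dominant terms) support formal
# reasoning about the behaviour before any trial-and-error refinement.
```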

15.
Within mobile robotics, one of the most important relationships to consider when implementing robot control code is that between the robot's sensors and its motors. When implementing such a relationship, efficiency and reliability are of crucial importance. These often prove challenging due to the complex interaction between a robot and its environment, frequently resulting in a time-consuming iterative process in which control code is redeveloped and tested many times before an optimal controller is obtained. In this paper, we address this challenge with an alternative approach to control-code generation, which first identifies the desired robot behaviour and represents the sensor–motor task algorithmically through system identification using the NARMAX modelling methodology. The control code is generated by task demonstration: the sensory perceptions and velocities are logged, and the relationship between them is then modelled using system identification. This approach produces transparent control code in the form of non-linear polynomial equations that can be mathematically analysed to obtain formal statements about specific inputs and outputs. We demonstrate this approach to control-code generation and analyse its performance in dynamic environments.
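The resulting controller is transparent in the sense that execution reduces to evaluating an explicit polynomial in the current sensor readings; the coefficients and regressor terms below are invented for illustration.

```python
# Executing a transparent polynomial controller (illustrative sketch).
import numpy as np

coeffs = np.array([0.25, -0.6, 0.4, 0.15])         # stand-in identified coefficients

def control(sonar_left, sonar_right):
    # regressor terms of the identified model: [1, s_l, s_r, s_l*s_r]
    phi = np.array([1.0, sonar_left, sonar_right, sonar_left * sonar_right])
    return float(coeffs @ phi)                     # angular velocity command

print(control(0.8, 0.3))   # the commanded turn rate for these sensor readings
```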

16.
This paper attempts to discover the invariant features of a whole-body dynamic task under perturbations. Our hypothesis is that these features are useful both for executing and for recognizing the task, and that they have their origin in human embodiment.

For concreteness, we focus on a particular task, the "Roll-and-Rise" motion, and carry out a multi-faceted investigation. First, we analyze motion-capture data of human performances to expose the task's invariant features. Next, we use simulation data to show that these invariants emerge from the underlying physics of the task. The invariants are, in fact, useful for generating robot motion, which has been successfully realized on a real adult-size humanoid robot. The experimental data are analyzed to confirm the temporal localization of the invariant features. Lastly, we present a psychological experiment confirming that these timings are indeed the points at which human observers extract crucial information about the task.


17.
An approach to Programming by Demonstration (PbD) of grasping skills is introduced, in which a mobile service robot is taught by a human instructor how to grasp a specific object. In contrast to other approaches, the instructor demonstrates the grasping action several times to increase reconstruction performance. Only the robot's stereoscopic vision system is used to track the instructor's hand. The tracking algorithm is designed to require neither artificial markers nor data gloves, and to avoid fixed or hard-to-calibrate sensor installations, while remaining real-time capable on a mobile service robot with limited resources. Because the instructor's repeatability is limited, every demonstrated grasp is performed slightly differently. To compensate for these variations, and for tracking errors, a Self-Organizing Map (SOM) with a one-dimensional topology is used. The SOM generalizes over the differently demonstrated grasping actions and reconstructs the intended approach trajectory of the instructor's hand while grasping the object. The approach is implemented and evaluated on the service robot TASER using both synthetically generated and real-world data.
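A minimal sketch of the generalization step with a one-dimensional SOM: nodes ordered along a line are fitted to the pooled demonstrated hand positions, so the node chain approximates the intended approach trajectory. Data shapes and the training schedule are our assumptions, not the paper's implementation.

```python
# 1-D SOM over demonstrated hand trajectories (illustrative sketch).
import numpy as np

demos = np.random.rand(5, 80, 3).reshape(-1, 3)       # stand-in: 5 tracked hand paths (xyz)
nodes = np.linspace(0, 1, 30)[:, None] * np.ones(3)   # 30 nodes on a 1-D topology

for t in range(2000):
    x = demos[np.random.randint(len(demos))]
    winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
    lr = 0.5 * (1 - t / 2000)                         # decaying learning rate
    sigma = 5.0 * (1 - t / 2000) + 0.5                # decaying neighbourhood width
    h = np.exp(-((np.arange(len(nodes)) - winner) ** 2) / (2 * sigma**2))
    nodes += lr * h[:, None] * (x - nodes)            # pull winner and line neighbours

# 'nodes', read in index order, is the smoothed, generalized approach trajectory.
```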

18.
19.
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. Current robot-assisted surgical systems are limited by the rigidity of their tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm for performing surgical tasks. The flexibility of the robot allows the surgeon to move within organs, reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces that enable the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to higher-level imitation of the underlying intent extracted from the demonstrations. Focusing on this latter form, we study the problem of extracting an objective function that explains the demonstrations from an over-specified set of candidate reward functions, and of using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active in different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, in which the robot learns the relevance of candidate objective functions with respect to the current phase of the task or the encountered situation. The robot then exploits this information for skill refinement in the policy-parameter space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator.
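A simplified sketch of context-dependent reward-weighted refinement as we read it: rollouts around the current policy parameters are scored by a relevance-weighted blend of candidate rewards, and the parameters are updated by exponentiated reward weighting. All dimensions, rewards and the relevance vector are invented for illustration.

```python
# Reward-weighted policy refinement with context-dependent candidate objectives (sketch).
import numpy as np

K, n_params, n_rollouts = 3, 5, 20
theta = np.zeros(n_params)                       # current policy parameters

for _ in range(10):                              # iterative self-refinement
    samples = theta + 0.1 * np.random.randn(n_rollouts, n_params)
    R = np.random.rand(n_rollouts, K)            # stand-in candidate rewards per rollout
    relevance = np.array([0.7, 0.2, 0.1])        # learned relevance in this task phase
    scores = R @ relevance                       # blended objective for this context
    w = np.exp(10.0 * (scores - scores.max()))   # exponentiated, stable positive weights
    theta = (w[:, None] * samples).sum(0) / w.sum()   # reward-weighted update
```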

20.
A major goal of robotics research is to develop techniques that allow non-experts to teach robots dexterous skills. In this paper, we report our progress on a framework that exploits the human sensorimotor learning capability to address this aim. The idea is to place the human operator in the robot control loop, where he/she can intuitively control the robot and, through practice, learn to perform the target task with it. Subsequently, by analyzing the robot control obtained by the human, it is possible to design a controller that allows the robot to perform the task autonomously. First, we introduce this framework with the ball-swapping task, in which a robot hand must swap the positions of the balls without dropping them, and present new analyses investigating the intrinsic dimension of the ball-swapping skill obtained through this framework. Then, we present new experiments toward obtaining an autonomous grasp controller on an anthropomorphic robot. In these experiments, the operator directly controls the (simulated) robot using visual feedback to achieve robust grasping. The collected data are then analyzed to infer the grasping strategy discovered by the human operator. Finally, a method to generalize grasping actions using the collected data is presented, which allows the robot to autonomously generate grasping actions for different orientations of the target object.
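One of the analyses mentioned above, estimating the intrinsic dimension of the human's control signals, can be sketched with PCA: count the components needed to explain most of the variance. The 95% threshold and data shapes are our assumptions, not the paper's exact analysis.

```python
# PCA-based intrinsic-dimension estimate of logged control signals (sketch).
import numpy as np
from sklearn.decomposition import PCA

commands = np.random.rand(2000, 16)       # stand-in: logged 16-DOF hand commands
pca = PCA().fit(commands)
cum = np.cumsum(pca.explained_variance_ratio_)
intrinsic_dim = int(np.searchsorted(cum, 0.95) + 1)
print("components for 95% variance:", intrinsic_dim)
```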
