Similar Documents
20 similar documents retrieved (search time: 93 ms)
1.
2.
In this paper we present a remote meeting system with a tangible interface, in which a robot serves as a tangible avatar of the remote meeting partner. A critical issue in realizing such a system is how naturally and accurately the robot imitates human motions. We therefore propose a new method in which human arm motion is captured with a stereo vision system and transferred to the robotic avatar in real time. For markerless capture of 3D arm motions, we propose a metaball-based method designed to be robust and efficient: a modified iso-surface equation of the metaball overcomes local minima, and a downsizing method for the 3D point cloud improves time complexity. We implemented the new algorithm in our meeting system, where it runs at approximately 12–16 Hz, and its motion-capture accuracy is acceptable for robot motion generation.
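The abstract does not specify how the 3D point cloud is downsized; one plausible reading is a voxel-grid downsample that keeps a single centroid per occupied voxel before the metaball fitting. The function name, voxel size, and sample points below are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.05):
    """Snap each 3D point to a voxel and keep one centroid per voxel,
    cutting the point count before further processing."""
    buckets = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        buckets[key].append((x, y, z))
    # Centroid of each voxel's points
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in buckets.values()]

# Two nearby points collapse into one representative; the far point survives.
print(len(voxel_downsample([(0.01, 0.01, 0.01), (0.02, 0.0, 0.0), (1.0, 1.0, 1.0)])))  # → 2
```

Reducing the cloud this way trades spatial resolution for a roughly proportional drop in per-frame fitting cost, which is consistent with the real-time rates the abstract reports.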

3.
This research investigates a novel robot-programming approach that applies machine-vision techniques to generate a robot program automatically. The hand motions of a demonstrator are initially recorded as a long sequence of images using two CCD cameras. Machine-vision techniques are then used to recognize the hand motions in three-dimensional space, including open, closed, grasp, release and move. The individual hand feature and its corresponding hand position in each sample image are translated into the robot's manipulator-level instructions. Finally, the robot plays back the task using the automatically generated program.

A robot can imitate the hand motions demonstrated by a human master using the proposed machine-vision approach. Compared with the traditional lead-through and structural programming-language methods, the robot's user does not have to physically move the robot arm through the desired motion sequence or learn complicated robot-programming languages. The approach currently focuses on the classification of hand features and motions of a human arm and is therefore restricted to simple pick-and-place applications. Only one arm of the human master can be present in the image scene, and the master must not wear long-sleeved clothes during demonstration, to prevent false identification. Analysis and classification of hand motions in a long sequence of images are time-consuming, so the automatic robot programming currently developed is performed off-line.

4.
This paper introduces a novel neuro-dynamical model that accounts for possible mechanisms of action imitation and learning. Imitation learning is considered to require at least two classes of generalization. One is generalization over sensory–motor trajectory variances; the other operates at the cognitive level and concerns a more qualitative understanding of compositional actions, one's own and others', that does not necessarily depend on exact trajectories. This paper describes a possible model dealing with both classes of generalization by focusing on the problem of action compositionality. The model was evaluated in experiments using a small humanoid robot. The robot was trained with a set of different object-manipulation actions that can be decomposed into sequences of action primitives. The robot was then asked to imitate a novel compositional action, demonstrated by a human subject, that is composed of prior-learned action primitives. The results showed that the novel action can be successfully imitated by decomposing and recomposing it from the primitives, by means of a unified intentional representation hosted by mirror neurons, even though the trajectory-level appearance differs between observed and self-generated actions.

5.
Imitation has been receiving increasing attention, not simply as a way of generating new motions but also from the viewpoint of the emergence of communication. This paper proposes a system in which a humanoid obtains new motions by learning the interaction rules with a human partner, based on the assumption of a mirror system. First, the humanoid learns the correspondence between its own posture and the partner's using ISOMAPs, supposing that the human partner imitates the robot's motions. Based on this correspondence, the robot can easily transfer the partner's observed gestures to its own motion. This correspondence then enables the robot to acquire new motion primitives for the interaction. Furthermore, through this process the humanoid learns an interaction rule that controls gesture turn-taking. Preliminary results and future issues are given.

6.
A Survey of Robot Manipulation Skill Models    Cited by: 8 (self-citations: 3, citations by others: 5)
Qin Fangbo, Xu De. Acta Automatica Sinica (自动化学报), 2019, 45(8): 1401-1418
Robot skill learning lies at the intersection of artificial intelligence and robotics. Its goal is for a robot to obtain experience data through interaction with the environment and with users, to autonomously acquire and optimize skills from those data via learning from demonstration or reinforcement learning, and to apply them to subsequent related tasks. Skill learning makes robot task deployment more flexible, rapid and user-friendly, and gives robots the ability to improve themselves. The skill model is the foundation and prerequisite of skill learning, and it determines the upper bound of skill performance. Increasingly complex and diverse robot manipulation tasks pose many challenges for the design and implementation of manipulation skill models. This paper presents the concept and properties of manipulation skill models, describes four modes of skill representation, namely process, motion, policy and effect prediction, and summarizes their typical applications and future trends.

7.
Book reviews     
Ergonomics, 2012, 55(5): 531-542
Industrial robots often operate at high speed, with unpredictable motion patterns and erratic idle times. Serious injuries and deaths have occurred due to operator misperception of these robot design and performance characteristics. The main objective of the research project was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. Twenty-four subjects participated in the study. All subjects were chosen from local industries, and had had considerable exposure to robots and other automated equipment in their working experience. Experiment 1 investigated the maximum speed of robot arm motions that workers, who were experienced with operation of industrial robots, judged to be ‘safe’ for monitoring tasks. It was found that the selection of safe speed depends on the size of the robot and the speed with which the robot begins its operation. Speeds of less than 51 cm/s and 63 cm/s for large and small robots, respectively, were perceived as safe, i.e., ones that did not result in workers feeling uneasy or endangered when working in close proximity to the robot and monitoring its actions. Experiment 2 investigated the minimum value of robot idle time (inactivity) perceived by industrial workers as system malfunction, and an indication of the ‘safe-to-approach’ condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers to be a result of system malfunction. About 20% of the workers waited only 10 s or less before deciding that the robot had stopped because of system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for operational limitations of robot systems are discussed.

8.
In avatar-mediated telepresence systems, a similar environment is assumed for the involved spaces, so that the avatar in a remote space can imitate the user's motion with the proper semantic intention performed in the local space. For example, the user touching the desk should be reproduced by the avatar in the remote space to correctly convey the intended meaning. It is unlikely, however, that the two involved physical spaces are exactly the same in terms of the size of the room or the locations of the placed objects. Therefore, a naive mapping of the user's joint motion to the avatar will not create semantically correct motion of the avatar in relation to the remote environment. Existing studies have addressed the problem of retargeting human motions to an avatar for telepresence applications. Few studies, however, have focused on retargeting continuous full-body motions such as locomotion and object interaction motions in a unified manner. In this paper, we propose a novel motion adaptation method that generates the full-body motions of a human-like avatar on the fly in the remote space. The proposed method handles locomotion and object interaction motions, as well as smooth transitions between them according to given user actions, under the condition of a bijective environment mapping between morphologically similar spaces. Our experiments show the effectiveness of the proposed method in generating plausible and semantically correct full-body motions of an avatar in a room-scale space.

9.
We formulate the kinematic equations of motion of wheeled mobile robots incorporating conventional, omnidirectional, and ball wheels. We extend the kinematic modeling of stationary manipulators to accommodate such special characteristics of wheeled mobile robots as multiple closed-link chains, higher-pair contact points between a wheel and a surface, and unactuated and unsensed wheel degrees of freedom. We apply the Sheth-Uicker convention to assign coordinate axes and develop a matrix coordinate transformation algebra to derive the equations of motion. We introduce a wheel Jacobian matrix to relate the motions of each wheel to the motions of the robot. We then combine the individual wheel equations to obtain the composite robot equation of motion. We interpret the properties of the composite robot equation to characterize the mobility of a wheeled mobile robot according to a mobility characterization tree. Similarly, we apply actuation and sensing characterization trees to delineate the robot motions producible by the wheel actuators and discernible by the wheel sensors, respectively. We calculate the sensed forward and actuated inverse solutions and interpret the physical conditions which guarantee their existence. To illustrate the development, we formulate and interpret the kinematic equations of motion of Uranus, a wheeled mobile robot being constructed in the CMU Mobile Robot Laboratory.
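To make the wheel-Jacobian machinery concrete, here is a minimal sketch for the simplest case, a differential-drive robot, where the sensed forward and actuated inverse solutions exist in closed form. The wheel radius and track width are made-up values; the paper's general formulation also covers omnidirectional and ball wheels.

```python
# Each wheel equation relates wheel spin to the robot body twist (v, omega);
# stacking the per-wheel equations gives the composite robot equation,
# solved here in closed form for a two-wheel differential drive.

WHEEL_RADIUS = 0.1   # m (assumed)
TRACK_WIDTH = 0.5    # m, distance between the two wheels (assumed)

def actuated_inverse(v, omega):
    """Body twist -> left/right wheel angular speeds (rad/s)."""
    wl = (v - omega * TRACK_WIDTH / 2) / WHEEL_RADIUS
    wr = (v + omega * TRACK_WIDTH / 2) / WHEEL_RADIUS
    return wl, wr

def sensed_forward(wl, wr):
    """Left/right wheel angular speeds -> body twist (v, omega)."""
    v = WHEEL_RADIUS * (wl + wr) / 2
    omega = WHEEL_RADIUS * (wr - wl) / TRACK_WIDTH
    return v, omega

# Round trip: pure forward motion maps to equal wheel speeds and back.
print(sensed_forward(*actuated_inverse(1.0, 0.0)))  # (1.0, 0.0)
```

For robots with more wheels than body degrees of freedom, the stacked wheel equations become overdetermined and the sensed forward solution is a least-squares fit rather than an exact inverse.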

10.
Advanced Robotics, 2013, 27(15): 1687-1707
In robotics, recognition of human activity has been used extensively for robot task learning through imitation and demonstration. However, there has not been much work performed on modeling and recognition of activities that involve object manipulation and grasping. In this work, we deal with single arm/hand actions which are very similar to each other in terms of arm/hand motions. The approach is based on the hypothesis that actions can be represented as sequences of motion primitives. Given this, a set of five different manipulation actions of different levels of complexity are investigated. To model the process, we use a combination of discriminative support vector machines and generative hidden Markov models. The experimental evaluation, performed with 10 people, investigates both the definition and structure of primitive motions, as well as the validity of the modeling approach taken.

11.
The recent increase in technological maturity has empowered robots to assist humans and provide daily services. Voice command is a popular human–machine interface for communication. Unfortunately, deaf people cannot exchange information with robots through vocal modalities. To interact with deaf people effectively and intuitively, it is desirable that robots, especially humanoids, have manual communication skills, such as performing sign languages. Without ad hoc programming to generate a particular sign language motion, we present an imitation system that teaches a humanoid robot to perform sign languages by directly replicating observed demonstrations. The system symbolically encodes the information of human hand–arm motion from low-cost depth sensors as a skeleton-motion time series that serves to generate the initial robot movement by means of perception-to-action mapping. To tackle the body correspondence problem, a virtual impedance control approach is adopted to smoothly follow the initial movement while preventing potential risks due to the differences in physical properties between the human and the robot, such as joint limits and self-collision. In addition, the integration of a leg-joint stabilizer provides better balance of the whole robot. Finally, our developed humanoid robot, NINO, successfully learned by imitation from human demonstration to introduce itself using Taiwanese Sign Language.
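The virtual impedance control mentioned above is not detailed in the abstract; a minimal single-joint sketch, assuming a simple spring-damper law toward the captured reference with a joint-limit clamp, might look like this (the gains, limits, and function name are illustrative):

```python
# Assumed formulation: rather than copying the captured reference joint angle
# directly, drive the joint with a spring-damper toward the reference and
# clamp it to a valid range, which smooths the motion and respects joint limits.

def impedance_track(q_ref_traj, q0, k=40.0, d=12.0, dt=0.01, q_min=-1.5, q_max=1.5):
    q, dq, out = q0, 0.0, []
    for q_ref in q_ref_traj:
        ddq = k * (q_ref - q) - d * dq            # spring-damper acceleration
        dq += ddq * dt                            # semi-implicit Euler step
        q = min(max(q + dq * dt, q_min), q_max)   # joint-limit clamp
        out.append(q)
    return out

# Step toward a held reference angle of 1.0 rad from rest at 0.
traj = impedance_track([1.0] * 500, q0=0.0)
```

With a near-critically-damped gain pair like this, the joint settles on the reference without overshooting into the clamped limits; multi-joint versions would apply the same law per joint plus a self-collision check.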

12.
Humanoid robots need human-like motions and appearance in order to be well accepted by humans. Mimicking is a fast and user-friendly way to teach them human-like motions. However, direct assignment of observed human motions to the robot's joints is not possible due to their physical differences. This paper presents a real-time, inverse-kinematics-based human mimicking system that maps human upper-limb motions to the robot's joints safely and smoothly. It considers both main definitions of motion similarity: between end-effector motions and between angular configurations. A Microsoft Kinect sensor is used for natural perception of human motions. Additional constraints are proposed and solved in the projected null space of the Jacobian matrix. They consider not only the workspace and the valid motion ranges of the robot's joints, to avoid self-collisions, but also the similarity between the end-effector motions and the angular configurations, to bring highly human-like motions to the robot. The performance of the proposed human mimicking system is quantitatively and qualitatively assessed and compared with state-of-the-art methods in a human-robot interaction task using the Nao humanoid robot. The results confirm the applicability and ability of the proposed system to properly mimic various human motions.
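As a rough illustration of the null-space formulation described above (not the authors' implementation), the sketch below tracks an end-effector target with a damped Jacobian pseudoinverse on a redundant 3-link planar arm, and projects a posture-similarity term, pulling toward a hypothetical observed human configuration, into the Jacobian's null space so it does not disturb the end-effector task. All link lengths, gains, and postures are made-up values.

```python
import math

L_LINKS = [1.0, 1.0, 1.0]   # link lengths (assumed)

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    a, x, y = 0.0, 0.0, 0.0
    for qi, li in zip(q, L_LINKS):
        a += qi
        x += li * math.cos(a)
        y += li * math.sin(a)
    return x, y

def jacobian(q):
    """2x3 positional Jacobian of the planar arm."""
    cum, a = [], 0.0
    for qi in q:
        a += qi
        cum.append(a)
    J = [[0.0] * 3, [0.0] * 3]
    for j in range(3):
        for k in range(j, 3):
            J[0][j] -= L_LINKS[k] * math.sin(cum[k])
            J[1][j] += L_LINKS[k] * math.cos(cum[k])
    return J

def step(q, target, q_human, k_task=0.5, k_null=0.1, damp=1e-3):
    """One velocity-level IK step with a null-space posture term."""
    x, y = fk(q)
    e = [k_task * (target[0] - x), k_task * (target[1] - y)]
    J = jacobian(q)
    # Damped pseudoinverse Jp = J^T (J J^T + damp*I)^-1, 2x2 inverse in closed form.
    A = sum(v * v for v in J[0]) + damp
    B = sum(a * b for a, b in zip(J[0], J[1]))
    C = sum(v * v for v in J[1]) + damp
    det = A * C - B * B
    inv = [[C / det, -B / det], [-B / det, A / det]]
    Jp = [[J[0][i] * inv[0][m] + J[1][i] * inv[1][m] for m in range(2)] for i in range(3)]
    dq_task = [Jp[i][0] * e[0] + Jp[i][1] * e[1] for i in range(3)]
    # Posture-similarity term projected into the null space: (I - Jp J) z
    z = [k_null * (q_human[i] - q[i]) for i in range(3)]
    Jz = [sum(J[r][j] * z[j] for j in range(3)) for r in range(2)]
    dq_null = [z[i] - (Jp[i][0] * Jz[0] + Jp[i][1] * Jz[1]) for i in range(3)]
    return [q[i] + dq_task[i] + dq_null[i] for i in range(3)]

q = [0.3, 0.3, 0.3]
for _ in range(300):
    q = step(q, target=(1.5, 1.0), q_human=[0.8, 0.4, 0.2])
```

With these assumed gains the arm converges to the target while the remaining degree of freedom drifts toward the human-like posture; a real system would add joint-limit and self-collision terms to `z` as the abstract describes.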

13.
Advanced Robotics, 2013, 27(6): 621-636
This paper proposes a decentralized position/internal-force hybrid control approach for multiple robot manipulators to cooperatively manipulate an unknown dynamic object. In this approach, each autonomous robot has its own controller and uses its own sensor information to achieve fast cooperation. This eliminates much of the communication between robots and reduces computation. The influence of position and internal-force estimation errors on the overall control system is analyzed. A cooperative identification method by which the autonomous robots jointly identify the object's complex dynamics is presented. In addition, the trade-off between the unilateral force constraint and the robots' position response is studied. Experiments show the effectiveness of this control approach.

14.
Human–Robot Collaboration (HRC) is a term used to describe tasks in which robots and humans work together to achieve a goal. Unlike traditional industrial robots, collaborative robots need to be adaptive: able to alter their approach to better suit the situation and the needs of the human partner. As traditional programming techniques can struggle with the complexity required, an emerging approach is to learn a skill by observing human demonstration and imitating the motions, commonly known as Learning from Demonstration (LfD). In this work, we present an LfD methodology that combines an ensemble machine learning algorithm, the Random Forest (RF), with stochastic regression, using haptic information captured from human demonstration. The capabilities of the proposed method are evaluated using two collaborative tasks: co-manipulation of an object (where the human provides the guidance but the robot handles the object's weight) and collaborative assembly of simple interlocking parts. The proposed method is shown to be capable of imitation learning, interpreting human actions and producing equivalent robot motion across a diverse range of initial and final conditions. After verifying that ensemble machine learning can be utilised for real robotics problems, we propose a further extension, the Weighted Random Forest (WRF), which attaches a weight to each tree based on its performance. The WRF approach is shown to outperform RF in HRC tasks.
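The WRF extension weights each tree by its performance; a minimal sketch of that aggregation step, with made-up tree outputs and validation errors standing in for real regression trees, could look like this (the exact weighting scheme is an assumption, not taken from the paper):

```python
# Assumed weighting: each tree's weight is the inverse of its validation
# error, so accurate trees dominate the ensemble prediction.
def wrf_predict(tree_preds, val_errors, eps=1e-9):
    weights = [1.0 / (e + eps) for e in val_errors]
    return sum(w * p for w, p in zip(weights, tree_preds)) / sum(weights)

tree_preds = [2.0, 2.2, 3.0]   # outputs of three hypothetical trees for one query
val_errors = [0.1, 0.2, 1.0]   # their validation errors (made up)
print(wrf_predict(tree_preds, val_errors))   # ≈ 2.125, vs. the unweighted mean 2.4
```

A plain RF would average the three outputs equally; the weighted version pulls the prediction toward the trees that generalized best on held-out demonstrations.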

15.
W. Karwowski, M. Rahimi. Ergonomics, 1991, 34(5): 531-546
Industrial robots often operate at high speed, with unpredictable motion patterns and erratic idle times. Serious injuries and deaths have occurred due to operator misperception of these robot design and performance characteristics. The main objective of the research project was to study human perceptual aspects of hazardous robotics workstations. Two laboratory experiments were designed to investigate workers' perceptions of two industrial robots with different physical configurations and performance capabilities. Twenty-four subjects participated in the study. All subjects were chosen from local industries, and had had considerable exposure to robots and other automated equipment in their working experience. Experiment 1 investigated the maximum speed of robot arm motions that workers, who were experienced with operation of industrial robots, judged to be 'safe' for monitoring tasks. It was found that the selection of safe speed depends on the size of the robot and the speed with which the robot begins its operation. Speeds of less than 51 cm/s and 63 cm/s for large and small robots, respectively, were perceived as safe, i.e., ones that did not result in workers feeling uneasy or endangered when working in close proximity to the robot and monitoring its actions. Experiment 2 investigated the minimum value of robot idle time (inactivity) perceived by industrial workers as system malfunction, and an indication of the 'safe-to-approach' condition. It was found that idle times of 41 s and 28 s or less for the small and large robots, respectively, were perceived by workers to be a result of system malfunction. About 20% of the workers waited only 10 s or less before deciding that the robot had stopped because of system malfunction. The idle times were affected by the subjects' prior exposure to a simulated robot accident. Further interpretations of the results and suggestions for operational limitations of robot systems are discussed.

16.
In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. We have therefore developed the Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human–robot object transfer that explicitly takes into account the legibility, safety and physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration and “relative ease” of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing our gestures. We therefore selected three measurements based on the galvanic skin response, deltoid muscle activity and ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety and physical comfort. After each robot gesture, the participants were asked to rate it on a three-dimensional subjective scale. The subjective data were in favor of our reference motion, and the three motions elicited different physiological and ocular responses that could be used to partially discriminate between them.

17.
Robot navigation in the presence of humans raises new issues for motion planning and control when the humans must be taken explicitly into account. We claim that a human aware motion planner (HAMP) must not only provide safe robot paths, but also synthesize good, socially acceptable and legible paths. This paper focuses on a motion planner that takes explicitly into account its human partners by reasoning about their accessibility, their vision field and their preferences in terms of relative human-robot placement and motions in realistic environments. This planner is part of a human-aware motion and manipulation planning and control system that we aim to develop in order to achieve motion and manipulation tasks in the presence or in synergy with humans.

18.
Advanced Robotics, 2013, 27(13): 1583-1600
How robots can produce behaviors that humans recognize as corresponding to their own is a formidable issue, since robots and humans differ in body structure. As a simple case of this correspondence problem, this paper presents a robot that learns to vocalize vowels through interaction with its caregiver. Inspired by findings in developmental psychology, we focus on the role of maternal imitation (i.e., imitation of the robot's voice by the caregiver), since it can serve to teach the correspondence between sounds. Furthermore, we suppose that it causes unconscious anchoring, in which the caregiver unconsciously produces the imitated voice close to one of his/her own vowels, thereby helping to make the robot's utterances more vowel-like. We propose a method for vowel learning with an imitative caregiver, under the assumptions that the robot knows the desired categories of the caregiver's vowels and has a rough estimate of the mapping between the region of sounds that the caregiver can generate and the region that the robot can. Through experiments with a Japanese imitative caregiver, we show that a robot with such a caregiver succeeds in acquiring more vowel-like utterances than a robot without one, even when given different types of mapping functions.

19.
Using the probabilistic methods outlined in this paper, a robot can learn to recognize its own motor-controlled body parts, or their mirror reflections, without prior knowledge of their appearance. For each item in its visual field, the robot calculates the likelihoods of each of three dynamic Bayesian models, corresponding to the categories of “self”, “animate other”, or “inanimate”. Each model fully incorporates the object's entire motion history and the robot's whole motor history in constant update time, via the forward algorithm. The parameters for each model are learned in an unsupervised fashion as the robot experiments with its arm over a period of four minutes. The robot demonstrated robust recognition of its mirror image, while classifying the nearby experimenter as “animate other”, across 20 experiments. Adversarial experiments, in which a subject mirrored the robot's motion, showed that as long as the robot had seen the subject move for as little as 5 s before mirroring, the evidence was “remembered” across a full minute of mimicry.
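The constant-time update via the forward algorithm can be sketched with a toy two-state hidden Markov model; all probabilities below are illustrative, not the paper's learned dynamic Bayesian models:

```python
# Toy 2-state HMM ("moving" / "still") over 2 discrete observation symbols.
PI = [0.5, 0.5]                       # initial state distribution
TRANS = [[0.9, 0.1], [0.2, 0.8]]      # state transition probabilities
EMIT = [[0.8, 0.2], [0.3, 0.7]]       # P(observation | state)

def forward(obs_seq):
    """Likelihood of an observation sequence under the model. Each new
    observation is folded into the running belief vector alpha in constant
    time per step, so the full history never needs to be revisited."""
    alpha = [PI[j] * EMIT[j][obs_seq[0]] for j in range(2)]
    for obs in obs_seq[1:]:
        alpha = [sum(alpha[i] * TRANS[i][j] for i in range(2)) * EMIT[j][obs]
                 for j in range(2)]
    return sum(alpha)

print(forward([0, 0, 1]))  # likelihood of a short motion history
```

The robot's "self"/"animate other"/"inanimate" decision then reduces to running one such update per candidate model and comparing the accumulated likelihoods.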

20.
Imitation of human motion has been a promising technique for the development of robots. Techniques such as motion-capture systems and data-gloves are used for analyzing human motion. However, since those methods involve (a) environmental restrictions, such as the preparation of two or more cameras and strict control of brightness, and (b) physical restrictions, such as the wearing of markers and/or data-gloves, they are far removed from a method for recognizing human motion in natural conditions. In this article, we propose a method that produces three-dimensional CG (3DCG) by transforming a feature vector of human posture on a thermal image into a 3DCG model. The 3DCG models used as training data are made by manual model fitting. Human models synthesized by our method are then geometrically evaluated in CG space. The average position error is about 10 cm. Such a relatively small error might be acceptable in some cases, e.g., 3DCG animation generation and the imitation of human motion by a robot. Our method has neither physical nor environmental restrictions. The rotation angles at each joint obtained by our method can be used for the imitation of human posture by a robot.
