Similar Documents
20 similar documents found (search time: 547 ms)
3.
In this paper, we address the problem of robot navigation in environments with deformable objects. The aim is to include the costs of object deformations when planning the robot’s motions and trade them off against the travel costs. We present our recently developed robotic system that is able to acquire deformation models of real objects. The robot determines the elasticity parameters by physical interaction with the object and by establishing a relation between the applied forces and the resulting surface deformations. The learned deformation models can then be used to perform physically realistic finite element simulations. This allows the planner to evaluate robot trajectories and to predict the costs of object deformations. Since finite element simulations are time-consuming, we furthermore present an approach to approximate object-specific deformation cost functions by means of Gaussian process regression. We present two real-world applications of our motion planner for a wheeled robot and a manipulation robot. As we demonstrate in real-world experiments, our system is able to estimate appropriate deformation parameters of real objects that can be used to predict future deformations. We show that our deformation cost approximation improves the efficiency of the planner by several orders of magnitude.
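The core speed-up in the abstract above is replacing expensive finite element simulations with a learned surrogate. The following is a minimal sketch of that idea, assuming a toy one-dimensional "deformation cost" (a quadratic in penetration depth stands in for the FEM output) and hand-picked kernel hyperparameters; none of these are the authors' actual models or values.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(X_train, y_train, X_query, length_scale=0.3, noise=1e-6):
    """Standard GP posterior mean for noisy observations."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train, length_scale)
    return K_s @ np.linalg.solve(K, y_train)

# Hypothetical "expensive" deformation cost (stand-in for an FEM simulation):
# cost grows with how deep the trajectory presses into the object.
def fem_cost(depth):
    return depth ** 2

depths = np.linspace(0.0, 1.0, 20)                   # training inputs (penetration depth)
costs = fem_cost(depths)                             # expensive simulations, run offline
approx = gp_predict(depths, costs, np.array([0.5]))  # cheap query at plan time
print(float(approx[0]))                              # close to fem_cost(0.5) = 0.25
```

At planning time, each candidate trajectory only requires the cheap kernel evaluation rather than a full simulation, which is where the orders-of-magnitude speed-up comes from.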

4.
ROGUE is an architecture built on a real robot which provides algorithms for the integration of high-level planning, low-level robotic execution, and learning. ROGUE successfully addresses several of the challenges of a dynamic office gopher environment. This article presents the techniques for the integration of planning and execution. ROGUE uses and extends a classical planning algorithm to create plans for multiple interacting goals introduced by asynchronous user requests. ROGUE translates the planner's actions to robot execution actions and monitors real-world execution. ROGUE is currently implemented using the PRODIGY4.0 planner and the Xavier robot. This article describes how plans are created for multiple asynchronous goals, and how task priority and compatibility information are used to achieve appropriate, efficient execution. We describe how ROGUE communicates with the planner and the robot to interleave planning with execution so that the planner can replan for failed actions, identify the actual outcome of an action with multiple possible outcomes, and take opportunities from changes in the environment. ROGUE represents a successful integration of a classical artificial intelligence planner with a real mobile robot.
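The interleaving of planning and execution described above can be sketched as a simple loop: execute one planned action, monitor its real outcome, and replan when it fails. The toy planner, failure model, and room names below are hypothetical stand-ins, not PRODIGY4.0 or the Xavier robot.

```python
def plan(state, goal):
    """Trivial stand-in planner: list the rooms still to visit."""
    return [("goto", room) for room in goal if room not in state["visited"]]

def execute(action, state, failures):
    """Simulated execution: an action may fail once, then succeed on retry."""
    room = action[1]
    if failures.pop(room, False):        # simulated one-off execution failure
        return False
    state["visited"].add(room)
    return True

state = {"visited": set()}
goal = ["lab", "office", "kitchen"]
failures = {"office": True}              # first attempt at the office fails
log = []
while plan(state, goal):
    action = plan(state, goal)[0]        # replan from the observed state
    ok = execute(action, state, failures)
    log.append((action, ok))             # monitor the real outcome

print(log)
```

The key point mirrored from the abstract is that planning is re-invoked after every execution step, so a failed action is simply retried (or re-planned around) on the next iteration.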

5.
For a long time, robot assembly programming has been carried out in two environments: on-line and off-line. On-line robot programming uses the actual robot to experiment with performing a given task; off-line robot programming develops a robot program in either an autonomous system with a high-level task planner and simulation, or a 2D graphical user interface linked to other system components. This paper presents a whole-hand interface for more easily performing robotic assembly tasks in the virtual environment. The interface is composed of both static hand shapes (states) and continuous hand motions (modes). Hand shapes are recognized as discrete states that trigger the control signals and commands, and hand motions are mapped to the movements of a selected instance in real-time assembly. Hand postures are also used for specifying the alignment constraints and axis mapping of the hand-part coordinates. The basic virtual-hand functions for developing the robotic assembly program are constructed from these states and modes. The assembling motion of the object is guided by the user immersed in the environment along a path such that no collisions will occur. The fine motion controlling the contact and ending position/orientation is handled automatically by the system using prior knowledge of the parts and assembly reasoning. One assembly programming case using this interface is described in detail in the paper.
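The states-and-modes split above can be illustrated as a small interpreter: discrete hand shapes trigger commands, while continuous hand motion is streamed to the selected part only while a "move" mode is active. The shape names and command mapping below are hypothetical, not the paper's actual vocabulary.

```python
# Hypothetical mapping from recognized static hand shapes to discrete commands.
COMMANDS = {"point": "select_part", "fist": "enter_move_mode", "open": "release"}

def interpret(frames):
    """frames: (hand_shape, hand_displacement) samples from a glove/tracker."""
    mode, log = None, []
    for shape, motion in frames:
        if shape in COMMANDS:
            cmd = COMMANDS[shape]
            mode = "move" if cmd == "enter_move_mode" else None
            log.append(cmd)                          # discrete state -> command
        elif mode == "move":
            log.append(("translate_part", motion))   # continuous mode -> motion
    return log

frames = [("point", (0, 0)), ("fist", (0, 0)), (None, (1, 2)),
          (None, (0, 3)), ("open", (0, 0))]
print(interpret(frames))
```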

6.
Performing manipulation tasks interactively in real environments requires a high degree of accuracy and stability. At the same time, when one cannot assume a fully deterministic and static environment, one must endow the robot with the ability to react rapidly to sudden changes in the environment. These considerations make the task of reach and grasp difficult to deal with. We follow a Programming by Demonstration (PbD) approach to the problem and take inspiration from the way humans adapt their reach and grasp motions when perturbed. This is in sharp contrast to previous work in PbD that uses unperturbed motions for training the system and then applies perturbation solely during the testing phase. In this work, we record the kinematics of arm and fingers of human subjects during unperturbed and perturbed reach and grasp motions. In the perturbed demonstrations, the target’s location is changed suddenly after the onset of the motion. Data show a strong coupling between the hand transport and finger motions. We hypothesize that this coupling enables the subject to seamlessly and rapidly adapt the finger motion in coordination with the hand posture. To endow our robot with this competence, we develop a coupled dynamical system based controller, whereby two dynamical systems driving the hand and finger motions are coupled. This offers a compact encoding for reach-to-grasp motions that ensures fast adaptation with zero latency for re-planning. We show in simulation and on the real iCub robot that this coupling ensures smooth and “human-like” motions. We demonstrate the performance of our model under spatial, temporal and grasp type perturbations which show that reaching the target with coordinated hand–arm motion is necessary for the success of the task.
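A minimal sketch of the coupled-dynamical-systems idea from the abstract above: one first-order system drives the hand toward the target, and the finger-aperture system is slaved to the hand's remaining distance, so a sudden target jump re-coordinates both with no explicit re-planning step. The gains and the particular coupling law are illustrative assumptions, not the authors' learned model.

```python
import numpy as np

def simulate(target, t_perturb=None, new_target=None, steps=400, dt=0.01):
    hand = np.zeros(2)      # hand position in the plane
    aperture = 1.0          # normalized finger opening (1 = open, 0 = closed)
    for k in range(steps):
        if t_perturb is not None and k == t_perturb:
            target = new_target                    # sudden target change
        hand += dt * 4.0 * (target - hand)         # hand DS: converge to target
        dist = np.linalg.norm(target - hand)
        # finger DS coupled to the hand: stay open while far, close near target
        aperture += dt * 6.0 * (min(1.0, dist) - aperture)
    return hand, aperture

hand, aperture = simulate(np.array([0.5, 0.3]),
                          t_perturb=100, new_target=np.array([-0.4, 0.6]))
print(hand.round(3), round(aperture, 3))   # hand near new target, fingers closed
```

Because the aperture reads the hand state at every step, perturbing the target mid-motion automatically re-opens and re-times the grasp without any replanning latency, which is the property the abstract emphasizes.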

7.
A knowledge-based framework to support task-level programming and operational control of robots is described. Our basic intention is to enhance the intelligence of a robot control system so that it may carefully coordinate the interactions among discrete, asynchronous and concurrent events under the constraints of action precedence and resource allocation. We do this by integrating both off-line and on-line planning capabilities in a single framework. The off-line phase is equipped with appropriate languages for describing workbenches, specifying tasks, and soliciting knowledge from the user to support the execution of robot tasks. A static planner is included in this phase to conduct static planning, which develops local plans for various specific tasks. The on-line phase is designed as a dynamic control loop for the robot system. It employs a dynamic planner to tackle any contingent situations during the robot operations. It is responsible for developing proper working paths and motion plans to achieve the task goals within designated temporal and resource constraints. It is implemented in a distributed and cooperative blackboard system, which facilitates the integration of various types of knowledge. Finally, any failures from the on-line phase are fed back to the off-line phase. This forms the interaction between the off-line and on-line phases and introduces an extra closed loop that opportunistically tunes the dynamic planner to adapt to variations of the working environment over the long term.

8.
In this paper, we address the problem of humanoid locomotion guided from information of a monocular camera. The goal of the robot is to reach a desired location defined in terms of a target image, i.e., a positioning task. The proposed approach allows us to introduce a desired time to complete the positioning task, which is advantageous in contrast to the classical exponential convergence. In particular, finite-time convergence is achieved while generating smooth robot velocities and considering the omnidirectional walking capability of the robot. In addition, we propose a hierarchical task-based control scheme, which can simultaneously handle the visual positioning and the obstacle avoidance tasks without affecting the desired time of convergence. The controller is able to activate or inactivate the obstacle avoidance task without generating discontinuous velocity references while the humanoid is walking. Stability of the closed loop with the two task-based controllers is demonstrated theoretically, even during the transitions between the tasks. The proposed approach is generic in the sense that different visual control schemes are supported. We evaluate a homography-based visual servoing for position-based and image-based modalities, as well as for eye-in-hand and eye-to-hand configurations. The experimental evaluation is performed with the humanoid robot NAO.
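The "desired time to complete" idea above contrasts with the classical exponential law de/dt = -k·e, which only converges asymptotically. A standard way to illustrate finite-time convergence is a time-varying gain that drives the error to zero exactly at a chosen time T; the specific law below is a textbook illustration under that assumption, not the paper's humanoid controller.

```python
# Integrate de/dt = -e / (T - t), whose solution is e(t) = e(0) * (1 - t/T):
# the error reaches zero exactly at the prescribed time T.
T = 2.0                       # desired convergence time (s)
dt = 0.001
e = 1.0                       # initial visual positioning error
steps = int(T / dt) - 1       # stop one step short of t = T (gain blows up at T)
for k in range(steps):
    t = k * dt
    e += dt * (-e / (T - t))  # gain 1/(T - t) grows as t approaches T
print(e)                      # close to dt/T = 5e-4, per e(t) = e0 * (1 - t/T)
```

Compared with a constant-gain exponential law, the error here shrinks linearly in time and vanishes at T regardless of its initial size, which is what lets the robot commit to a completion time in advance.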

9.
In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed the Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human–robot object transfer that explicitly takes into account the legibility, the safety and the physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration and the “relative ease” of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable for comparing our gestures. In this perspective, we selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we defined three motions that combine different levels of legibility, safety and physical comfort values. After each robot gesture the participants were asked to rate it on a three-dimensional subjective scale. The subjective data turned out to favor our reference motion. Moreover, the three motions elicited different physiological and ocular responses that could be used to partially discriminate them.

10.
Previously we presented a novel approach to program a robot controller based on system identification and robot training techniques. The proposed method works in two stages: first, the programmer demonstrates the desired behaviour to the robot by driving it manually in the target environment. During this run, the sensory perception and the desired velocity commands of the robot are logged. Having thus obtained training data, we model the relationship between sensory readings and the motor commands of the robot using ARMAX/NARMAX models and system identification techniques. These produce linear or non-linear polynomials which can be formally analysed, as well as used in place of “traditional” robot control code. In this paper we focus our attention on how the mathematical analysis of NARMAX models can be used to understand the robot’s control actions, to formulate hypotheses and to improve the robot’s behaviour. One main objective behind this approach is to avoid trial-and-error refinement of robot code. Instead, we seek to obtain a reliable design process, where program design decisions are based on the mathematical analysis of the model describing how the robot interacts with its environment to achieve the desired behaviour. We demonstrate this procedure through the analysis of a particular task in mobile robotics: door traversal.
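The identification step above can be sketched as follows: log (sensor, motor) pairs from a human-driven run, then fit a transparent, linear-in-parameters polynomial model by least squares. This stands in for the full NARMAX machinery, and the synthetic "steer away from the nearer wall" behaviour below is invented for illustration, not the authors' door-traversal data.

```python
import numpy as np

rng = np.random.default_rng(0)
left = rng.uniform(0.2, 2.0, 200)      # logged left range reading (m)
right = rng.uniform(0.2, 2.0, 200)     # logged right range reading (m)
# synthetic demonstrated behaviour: turn rate proportional to range difference
omega = 0.8 * (left - right)

# regressor matrix of polynomial terms: [1, l, r, l*r, l^2, r^2]
X = np.column_stack([np.ones_like(left), left, right,
                     left * right, left ** 2, right ** 2])
theta, *_ = np.linalg.lstsq(X, omega, rcond=None)

# the fitted polynomial is small enough to inspect term by term,
# which is what enables the formal analysis the abstract describes
print(theta.round(3))   # recovers ~0.8 on l, ~-0.8 on r, ~0 elsewhere
```

Because the result is an explicit polynomial rather than an opaque controller, one can read off which sensor terms actually drive the motor command, formulate hypotheses, and edit the model directly instead of refining code by trial and error.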

12.
This research investigates a novel robot-programming approach that applies machine-vision techniques to generate a robot program automatically. The hand motions of a demonstrator are initially recorded as a long sequence of images using two CCD cameras. Machine-vision techniques are then used to recognize the hand motions in three-dimensional space, including open, closed, grasp, release and move. The individual hand feature and its corresponding hand position in each sample image are translated into manipulator-level robot instructions. Finally, the robot plays back the task using the automatically generated program. A robot can thus imitate the hand motions demonstrated by a human master using the proposed machine-vision approach. Compared with the traditional lead-through and structural programming-language methods, the robot's user does not have to physically move the robot arm through the desired motion sequence or learn complicated robot-programming languages. The approach is currently focused on the classification of hand features and motions of a human arm and is therefore restricted to simple pick-and-place applications. Only one arm of the human master can be present in the image scene, and the master must not wear long-sleeved clothes during demonstration to prevent false identification. Analysis and classification of hand motions in a long sequence of images are time-consuming, so the automatic robot programming currently developed is performed off-line.
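The translation step above (recognized hand states plus 3-D positions becoming manipulator-level instructions for playback) can be sketched as follows. The state labels and the instruction set are hypothetical, and the vision pipeline itself is out of scope here.

```python
def to_program(samples):
    """samples: (hand_state, position) per frame, from the recognition stage."""
    program, prev = [], None
    for state, pos in samples:
        if state != prev:                       # emit only on a state change
            if state == "grasp":
                program += [f"MOVE {pos}", "CLOSE_GRIPPER"]
            elif state == "release":
                program += [f"MOVE {pos}", "OPEN_GRIPPER"]
            prev = state
    return program

# hypothetical recognized sequence for one pick-and-place demonstration
samples = [("open", (0, 0, 5)), ("grasp", (2, 1, 0)), ("move", (4, 3, 2)),
           ("release", (6, 4, 0))]
print(to_program(samples))
```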

13.
The goal of robotics research is to design a robot to fulfill a variety of tasks in the real world. Inherent in the real world is a high degree of uncertainty about the robot’s behavior and about the world. We introduce a robot task architecture, DTRC, that generates plans with actions that incorporate costs and uncertain effects, and states that yield rewards. The use of a decision-theoretic planner in a robot task architecture is demonstrated on the mobile robot domain of miniature golf. The miniature golf domain shows the application of decision-theoretic planning in an inherently uncertain domain, and demonstrates that by using decision-theoretic planning as the reasoning method in a robot task architecture, accommodation for uncertain information plays a direct role in the reasoning process.
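The decision-theoretic selection the abstract relies on can be sketched in a few lines: each action has a cost and a distribution over uncertain outcomes, and the planner picks the action maximizing expected reward minus cost. The putt actions, probabilities and rewards below are invented for illustration, not the paper's domain model.

```python
# Hypothetical miniature-golf actions: (probability, reward) outcome pairs.
actions = {
    "safe_putt":  {"cost": 1.0, "outcomes": [(0.9, 5.0), (0.1, 0.0)]},
    "risky_putt": {"cost": 1.0, "outcomes": [(0.4, 10.0), (0.6, -2.0)]},
}

def expected_utility(a):
    """Expected reward over uncertain outcomes, minus the action's cost."""
    return sum(p * r for p, r in a["outcomes"]) - a["cost"]

best = max(actions, key=lambda name: expected_utility(actions[name]))
print(best, expected_utility(actions[best]))   # safe_putt 3.5
```

The point the abstract makes is visible here: the uncertainty (the outcome probabilities) enters the reasoning directly, rather than being handled by a separate recovery layer after a deterministic plan fails.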

14.
Travel recommendation systems have become very popular applications for organizing and planning tourist trips. Among other challenges, these applications are faced with the task of maintaining updated information about popular tourist destinations, as well as providing useful tourist guides that meet users' preferences. In this work we present PlanTour, a system that creates personalized tourist plans using human-generated information gathered from the minube traveling social network. The system follows an automated planning approach to generate a multiple-day plan with the most relevant points of interest of the city/region being visited. In particular, the system collects information about users and points of interest from minube and groups these points with clustering techniques to split the problem into per-day sub-problems. It then uses an off-the-shelf, domain-independent automated planner to find good-quality tourist plans. Unlike other tourist recommender systems, the PlanTour planner is able to organize relevant points of interest taking into account the user's expected drives and user scores from a real social network. The paper also highlights how to use human-provided recommendations to guide the search for solutions of combinatorial tasks. The resulting intelligent system opens new possibilities for combining human-generated knowledge with efficient automated techniques when solving hard computational tasks. From an engineering perspective, we advocate the use of declarative representations of problem-solving tasks, which have been shown to improve the modeling and maintenance of intelligent systems.
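The per-day decomposition step above (grouping points of interest by location so each cluster becomes one day's sub-problem for the planner) can be sketched with plain k-means on toy coordinates; the POI positions and the two-day split are invented stand-ins for the system's clustering over minube data.

```python
import numpy as np

# Hypothetical POI coordinates: two spatially separated groups in a city.
pois = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1],
                 [5.2, 4.9], [0.2, 0.1], [4.9, 5.0]])

centroids = pois[:2].copy()              # deterministic init: first two POIs
for _ in range(10):                      # a few Lloyd iterations
    dists = ((pois[:, None] - centroids[None]) ** 2).sum(-1)
    labels = np.argmin(dists, axis=1)    # assign each POI to nearest centroid
    centroids = np.array([pois[labels == k].mean(axis=0) for k in range(2)])

# each cluster becomes one day's sub-problem for the automated planner
day_plan = {f"day{k + 1}": sorted(np.where(labels == k)[0].tolist())
            for k in range(2)}
print(day_plan)
```

Each per-day POI set is then handed to the domain-independent planner as an independent, much smaller problem, which is what makes the overall multi-day task tractable.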

15.
In this paper, a new control method for a planar bipedal robot, which we call Graph-based Model Predictive Control, is proposed. This method makes use of a directed graph constructed on the state space of the robot. The vertices of the directed graph are called waypoints, and they serve as intermediate target states to compose complex motions of the robot. By simply tracing the directed edges of the graph, one can achieve Model Predictive Control over the waypoint set. Such a directed graph is pre-designed and stored into the controller’s memory to significantly reduce the computational effort required in real time. In addition, by constructing multiple directed graphs based on different objective functions, one can design multiple motions and switching trajectories among them in a uniform way. The proposed method is applied to variable-speed walking control of a bipedal walker on a two-dimensional plane, and its effectiveness is verified by numerical simulations.
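The "trace the directed edges" idea above amounts to an offline-built graph that the controller only searches at run time instead of solving an optimization. A minimal sketch, assuming hypothetical gait states as waypoints (the paper's actual graph and objective functions are not reproduced here):

```python
from collections import deque

# Pre-designed directed waypoint graph, stored in the controller's memory.
# Vertices are hypothetical gait states; edges are feasible transitions.
graph = {
    "stand":     ["slow_walk"],
    "slow_walk": ["stand", "fast_walk"],
    "fast_walk": ["slow_walk"],
}

def waypoint_path(start, goal):
    """Breadth-first search over the pre-designed waypoint graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(waypoint_path("stand", "fast_walk"))   # ['stand', 'slow_walk', 'fast_walk']
```

The real-time saving comes from the division of labor: all expensive motion design happens offline when the graph is built, and the online controller only performs this cheap graph lookup to sequence intermediate target states.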

16.
To achieve the ever-increasing demand for science return, planetary exploration rovers require more autonomy to successfully perform their missions. Indeed, the communication delays are such that teleoperation is unrealistic. Although the current rovers (such as MER) demonstrate limited navigation autonomy and mostly rely on ground mission planning, the next generation (e.g., NASA Mars Science Laboratory and ESA ExoMars) will have to regularly achieve long-range autonomous navigation tasks. However, fully autonomous long-range navigation in partially known planetary-like terrains is still an open challenge for robotics. Navigating hundreds of meters without any human intervention requires the robot to be able to build adequate representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to control its motions, and to localize itself as it moves. All these activities have to be planned, scheduled, and performed according to the rover context, and controlled so that the mission is correctly fulfilled. To achieve these objectives, we have developed a temporal planner and an execution controller, which exhibit plan repair and replanning capabilities. The planner is in charge of producing plans composed of actions for navigation, science activities (moving and operating instruments), and communication with Earth and with an orbiter or a lander, while managing resources (power, memory, etc.) and respecting temporal constraints (communication visibility windows, rendezvous, etc.). High-level actions also need to be refined and their execution temporally and logically controlled.
Finally, in such critical applications, we believe it is important to deploy a component that protects the system against dangerous or even fatal situations resulting from unexpected interactions between subsystems (e.g., moving the robot while its arm is unstowed) and/or software components (e.g., taking and storing a picture in a buffer while the previous one is still being processed). In this article we review the aforementioned capabilities, which have been developed, tested, and evaluated on board our rovers (Lama and Dala). After an overview of the architecture design principle adopted, we summarize the perception, localization, and motion generation functions required by autonomous navigation, and their integration and concurrent operation in a global architecture. We then detail the decisional components: a high-level temporal planner that produces the robot activity plan on board, and temporal and procedural execution controllers. We show how some failures or execution delays are taken care of with online local repair, or replanning. © 2007 Wiley Periodicals, Inc.
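One concrete constraint the temporal planner above manages is fitting a communication action inside an orbiter visibility window while navigation fills the rest of the timeline. A minimal greedy sketch, with invented activity durations and window times (the actual planner handles far richer resource and temporal constraints):

```python
window = (40.0, 60.0)            # hypothetical orbiter visibility window (s)
activities = [("navigate", 35.0), ("science", 10.0), ("communicate", 15.0)]

def schedule(activities, window):
    """Place activities back to back; communication must fit in its window."""
    t, plan = 0.0, []
    for name, dur in activities:
        if name == "communicate":
            t = max(t, window[0])            # wait for the window to open
            if t + dur > window[1]:
                return None                  # would miss the window: replan
        plan.append((name, t, t + dur))
        t += dur
    return plan

print(schedule(activities, window))
```

Returning `None` is the hook where the abstract's plan repair/replanning would kick in: an execution delay that pushes communication past its window invalidates the plan, triggering online local repair.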

17.
Robot navigation in the presence of humans raises new issues for motion planning and control when the humans must be taken explicitly into account. We claim that a human aware motion planner (HAMP) must not only provide safe robot paths, but also synthesize good, socially acceptable and legible paths. This paper focuses on a motion planner that takes explicitly into account its human partners by reasoning about their accessibility, their vision field and their preferences in terms of relative human-robot placement and motions in realistic environments. This planner is part of a human-aware motion and manipulation planning and control system that we aim to develop in order to achieve motion and manipulation tasks in the presence or in synergy with humans.

18.
In this paper, a voice-activated robot arm with intelligence is presented. The robot arm is controlled with natural connected-speech input. The language input allows a user to interact with the robot in terms which are familiar to most people. The advantages of speech-activated robots are hands-free and fast data-input operations. The proposed robot is capable of understanding the meaning of natural-language commands. After interpreting the voice commands, a series of control data for performing a task is generated. Finally, the robot actually performs the task. Artificial Intelligence techniques are used to make the robot understand voice commands and act in the desired mode. It is also possible to control the robot using the keyboard input mode.
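The command-interpretation step above (turning a natural-language utterance into control data) can be sketched with a simple keyword lookup, which stands in for the AI techniques the paper uses; the action names, object coordinates and grammar below are hypothetical.

```python
# Hypothetical vocabulary: spoken keywords -> control actions / workspace targets.
ACTIONS = {"pick": "GRIPPER_CLOSE", "place": "GRIPPER_OPEN", "move": "MOVE_TO"}
OBJECTS = {"block": (10, 5), "tray": (0, 20)}

def parse_command(utterance):
    """Map a connected-speech transcript to (action, target) control data."""
    words = utterance.lower().split()
    action = next((ACTIONS[w] for w in words if w in ACTIONS), None)
    target = next((OBJECTS[w] for w in words if w in OBJECTS), None)
    return action, target

print(parse_command("please pick up the block"))   # ('GRIPPER_CLOSE', (10, 5))
```

Filler words ("please", "up", "the") are simply ignored, which is one reason even a crude parser lets users speak in terms familiar to them rather than in a rigid command syntax.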

19.
A robot model incorporates possible discontinuous nonlinearities with unknown forms and values, an unknown payload and unknown predictable external disturbance variations, all within known bounds. A control algorithm is synthesized to guarantee the following: (1) robust global stability and attraction, with finite reachability time, of an appropriately chosen sliding set; (2) on the sliding set, the robot motions reach a desired motion in a prespecified finite time; (3) robust stability and global attraction, with finite reachability time, of the given desired robot motion; and (4) a prespecified convergence quality of real motions to the desired motion, independently of the internal dynamics of the system and without oscillations, hence without chattering in the sliding mode. Robot control robustness means that the controller realizes the control without using information about the real robot's internal dynamics. All this is achieved by using the Lyapunov method in a new way, combined with a sliding mode approach but without variation of the controller structure. The theoretical results are applied to a rotational 3-degree-of-freedom robot. The simulations verify the robustness of the control algorithm and the high quality of robot motions with a prespecified reachability time. ©1999 John Wiley & Sons, Inc.
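A minimal sketch of the sliding-set idea behind the abstract above: choose a sliding variable s = de/dt + λe, drive s to zero in finite time, and the error then decays along the surface. A unit-mass double integrator with a saturated (boundary-layer) reaching law stands in for the 3-degree-of-freedom robot and its Lyapunov-based controller; the gains are illustrative assumptions.

```python
import numpy as np

lam, k, dt = 2.0, 5.0, 0.001
e, de = 1.0, 0.0                 # tracking error and its rate
for _ in range(5000):            # 5 s of simulated motion
    s = de + lam * e             # sliding variable: s = 0 defines the surface
    # reaching law ds/dt = -k * sat(s): finite-time convergence to the surface,
    # with a boundary layer (width 0.05) to avoid chattering
    u = -lam * de - k * np.clip(s / 0.05, -1.0, 1.0)
    de += dt * u                 # error acceleration = u (unit-mass model)
    e += dt * de
print(round(e, 4), round(de, 4))   # both close to zero
```

With |s| shrinking at rate at least k outside the boundary layer, the surface is reached in finite time regardless of the (bounded) uncertainty; once on it, e obeys de/dt ≈ -λe, giving the prescribed convergence quality without chattering.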

20.
This paper presents a task planner based on decision trees. Two different types of cooperative tasks are described: common tasks and parallel tasks. In the first type, two or more robots are required to accomplish the task. In the second, several tasks can be performed in parallel by different robots to reduce the total disassembly time. The planner presented is based on a hierarchical representation of the product and distributes the tasks among robots using decision trees. The system takes into consideration the work area of each robot and its own characteristics. The work cell can be composed of j robotic manipulators. Finally, a practical application to a PC disassembly system is shown.
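The common/parallel distinction above can be sketched as a simple allocation rule: a parallel task goes to any single robot whose work area covers the part, while a common task needs two robots. The task data, work areas and selection rule below are illustrative assumptions, not the paper's decision-tree planner.

```python
# Hypothetical work cell: which parts each manipulator's work area can reach.
robots = {"r1": {"cover", "disk"}, "r2": {"disk", "board"}}

tasks = [
    {"name": "remove_cover", "part": "cover", "type": "parallel"},
    {"name": "extract_disk", "part": "disk",  "type": "common"},   # needs 2 robots
    {"name": "unplug_board", "part": "board", "type": "parallel"},
]

def assign(tasks, robots):
    """Allocate each task to the robot(s) whose work area covers its part."""
    plan = []
    for t in tasks:
        able = sorted(r for r, area in robots.items() if t["part"] in area)
        needed = 2 if t["type"] == "common" else 1
        if len(able) >= needed:
            plan.append((t["name"], tuple(able[:needed])))
    return plan

print(assign(tasks, robots))
```

Since `remove_cover` and `unplug_board` land on different robots, they can run in parallel, while `extract_disk` binds both manipulators, which is exactly the trade-off the planner exploits to reduce total disassembly time.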
