Similar Documents
20 similar documents found (search time: 187 ms)
1.
ABSTRACT

The present paper introduces a near-future perception system called Previewed Reality. In an environment where a human and a robot coexist, unexpected collisions between them must be avoided to the extent possible. In many cases, the robot is controlled carefully so as not to collide with a human. However, it is almost impossible to predict human behavior perfectly in advance. On the other hand, if a user can see the motion of a robot in advance, he or she can avoid a hazardous situation and coexist safely with the robot. To ensure that the user perceives future events naturally, Previewed Reality consists of an informationally structured environment, a VR or AR display, and a dynamics simulator. A number of sensors are embedded in the informationally structured environment, and information such as the positions of furniture, objects, humans, and robots is sensed and stored structurally in a database. We can therefore forecast possible subsequent events using a robot motion planner and a dynamics simulator, and can synthesize virtual images, from the viewpoint of the user, of what will actually occur in the near future. The viewpoint of the user, i.e., the position and orientation of the VR or AR display, is tracked either by an optical tracking system in the informationally structured environment or by SLAM on the AR display. The synthesized images are presented to the user by overlaying them on the real scene using the VR or AR display. This system provides human-friendly communication between a human and a robotic system: by intuitively showing the human possible hazardous situations in advance, it allows a human and a robot to coexist safely.

2.
Abstract— The Multi‐User 3‐D Television Display (MUTED), designed to provide three‐dimensional television (3‐D TV) by displaying autostereoscopic imagery to multiple viewers, each of whom enjoys freedom of movement, is described. An autostereoscopic display system that serves multiple viewers simultaneously by means of head tracking was previously demonstrated for TV applications in the ATTEST project. However, the requirement for a dynamically addressable, steerable backlight presented several problems for the illumination source. The MUTED system demonstrates significant advances in the realization of a multi‐user autostereoscopic display, partly due to a dynamic backlight employing a novel holographic laser projector. This technology provides significant advantages in terms of brightness, efficiency, laser speckle, and the ability to correct for optical aberrations compared to both imaging and scanned‐beam projection technologies.

3.
S. Hoshino  K. Maki 《Advanced Robotics》2013,27(17):1095-1109
In order for robots to coexist with humans, safety for the humans has to be strictly ensured. On the other hand, safety measures might decrease the working efficiency of the robots; this is a trade-off between human safety and robot efficiency in the field of human–robot interaction. For this problem, we propose a novel motion planning technique for multiple mobile robots. Two artificial potentials are presented for generating repulsive forces. The first potential is provided for humans: the von Mises distribution is used to model the behavioral properties of humans. The second potential is provided for the robots: kernel density estimation is used to account for global robot congestion. Through simulation experiments, the effectiveness of the behavior and congestion potentials of the motion planning technique for human safety and robot efficiency is discussed. Moreover, a sensing system for humans in a real environment is developed. From the experimental results, the significance of the behavior potential based on actual humans is discussed. For the coexistence of humans and robots, it is important to evaluate their mutual influence. For this purpose, a virtual space is built using projection mapping. Finally, the effectiveness of the motion planning technique for human–robot interaction is discussed from the point of view of not only the robots but also the humans.
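The direction-dependent repulsion described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes a 2D workspace and uses an unnormalised von Mises density over the bearing from the human to the robot, so that repulsion is concentrated in the human's heading direction. The gain and concentration parameters are invented for the example.

```python
import numpy as np

def von_mises_repulsion(robot_pos, human_pos, human_heading,
                        gain=1.0, kappa=2.0):
    """Direction-dependent repulsive potential around a human.

    An (unnormalised) von Mises density over the bearing from the
    human to the robot concentrates repulsion ahead of the human,
    reflecting that people mostly move, and need clearance, forward.
    All parameter names and values here are illustrative.
    """
    diff = np.asarray(robot_pos, float) - np.asarray(human_pos, float)
    dist = np.linalg.norm(diff)
    if dist < 1e-9:
        return np.inf
    bearing = np.arctan2(diff[1], diff[0])
    # von Mises shape peaked at the human's heading direction
    directional = np.exp(kappa * np.cos(bearing - human_heading))
    return gain * directional / dist

# Repulsion is stronger directly in front of the human than behind,
# for a human at the origin facing along +x.
front = von_mises_repulsion((1.0, 0.0), (0.0, 0.0), human_heading=0.0)
back = von_mises_repulsion((-1.0, 0.0), (0.0, 0.0), human_heading=0.0)
```

A congestion potential in the same spirit could replace the von Mises term with a kernel density estimate over nearby robot positions.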

4.
Head gaze, or the orientation of the head, is a very important attentional cue in face-to-face conversation. Some subtleties of gaze can be lost in common teleconferencing systems, because a single perspective warps spatial characteristics. A recent random hole display is a potentially interesting display for group conversation, as it allows multiple stereo viewers in arbitrary locations, without the restriction that conventional autostereoscopic displays place on viewing positions. We represented a remote person as an avatar on a random hole display. We evaluated this system by measuring the ability of multiple observers with different horizontal and vertical viewing angles to accurately and simultaneously judge which targets the avatar is gazing at. We compared three perspective conditions: a conventional 2D view, a monoscopic perspective-correct view, and a stereoscopic perspective-correct view. In the latter two conditions, the random hole display shows three and six views simultaneously, respectively. Although the random hole display does not provide a high-quality view, because it has to distribute display pixels among multiple viewers, the different views are easily distinguished. Results suggest the combined presence of perspective-correct and stereoscopic cues significantly improved the effectiveness with which observers were able to assess the avatar's head gaze direction. This motivates the need for stereo in future multiview displays.

5.
A solid-state dynamic parallax barrier autostereoscopic display mitigates some of the restrictions present in static barrier systems, such as fixed view-distance range, slow response to head movements, and fixed stereo operating mode. By dynamically varying barrier parameters in real time, viewers may move closer to the display and move faster laterally than with a static barrier system, and the display can switch between 3D and 2D modes by disabling the barrier on a per-pixel basis. Moreover, Dynallax can output four independent eye channels when two viewers are present, and both head-tracked viewers receive an independent pair of left-eye and right-eye perspective views based on their position in 3D space. The display device is constructed by using a dual-stacked LCD monitor where a dynamic barrier is rendered on the front display and a modulated virtual environment composed of two or four channels is rendered on the rear display. Dynallax was recently demonstrated in a small-scale head-tracked prototype system. This paper summarizes the concepts presented earlier, extends the discussion of various topics, and presents recent improvements to the system.
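One of the barrier parameters that must be varied with the tracked head position is the barrier pitch. The sketch below uses the standard two-view parallax-barrier geometry for a barrier mounted in front of the pixel plane; the actual Dynallax parameterisation may differ, and the pixel pitch and gap values are invented for the example.

```python
def barrier_pitch(pixel_pitch_mm, view_dist_mm, gap_mm):
    """Two-view parallax-barrier pitch for a barrier placed a small
    gap in front of the pixel plane, from similar-triangles geometry:
    slightly less than twice the pixel pitch, shrinking as the
    viewer approaches. Standard textbook relation, not necessarily
    the exact Dynallax model."""
    return 2.0 * pixel_pitch_mm * view_dist_mm / (view_dist_mm + gap_mm)

# As the tracked viewer moves closer, the dynamically rendered
# barrier pitch must shrink so the left/right zones stay on the eyes.
far = barrier_pitch(0.294, 600.0, 5.0)   # viewer at 60 cm
near = barrier_pitch(0.294, 400.0, 5.0)  # viewer at 40 cm
```

Rendering the barrier on the front LCD means this recomputation is just a redraw, which is what enables the per-pixel 2D/3D switching mentioned above.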

6.
This paper presents a robotic head that enables social robots to attend to scene saliency with bio-inspired saccadic behaviors. Scene saliency is determined by measuring low-level static scene information, motion, and object prior knowledge. Toward the extracted saliency spots, the designed robotic head is able to shift its gaze in a saccadic manner while obeying eye–head coordination laws with the proposed control scheme. The results of the simulation study and actual applications show the effectiveness of the proposed method in discovering scene saliency and producing human-like head motion. The proposed techniques could be applied to social robots to improve social sense and user experience in human–robot interaction.

7.
An important aspect of present-day humanoid robot research is to make such robots look realistic and human-like, both in appearance and in motion and mannerism. In this paper, we focus our study on advanced control leading to realistic motion coordination for a humanoid robot's neck and eyes while tracking an object. The motivating application for such controls is conversational robotics, in which a robot head "actor" should be able to detect and make eye contact with a human subject. In such a scenario, the 3D position and orientation of an object of interest in space should be tracked by the redundant head–eye mechanism, partly through the neck and partly through the eyes. We propose an optimization approach, combined with real-time visual feedback, to generate realistic robot motion and make it robust. We also offer experimental results showing that the neck–eye motion obtained from the proposed algorithm is realistic compared to the head–eye motion of humans.

8.
For robots operating in real-world environments, the ability to deal with dynamic entities such as humans, animals, vehicles, or other robots is of fundamental importance. The variability of dynamic objects, however, is large in general, which makes it hard to manually design suitable models for their appearance and dynamics. In this paper, we present an unsupervised learning approach to this model-building problem. We describe an exemplar-based model for representing the time-varying appearance of objects in planar laser scans as well as a clustering procedure that builds a set of object classes from given observation sequences. Extensive experiments in real environments demonstrate that our system is able to autonomously learn useful models for, e.g., pedestrians, skaters, or cyclists without being provided with external class information.

9.
Social and collaborative aspects of interaction with a service robot
To an increasing extent, robots are being designed to become a part of the lives of ordinary people. This calls for new models of the interaction between humans and robots, taking advantage of human social and communicative skills. Furthermore, human–robot relationships must be understood in the context of use of robots, and based on empirical studies of humans and robots in real settings. This paper discusses social aspects of interaction with a service robot, departing from our experiences of designing a fetch-and-carry robot for motion-impaired users in an office environment. We present the motivations behind the design of the Cero robot, especially its communication paradigm. Finally, we discuss experiences from a recent usage study, and research issues emerging from this work. A conclusion is that addressing only the primary user in service robotics is unsatisfactory, and that the focus should be on the setting, activities and social interactions of the group of people where the robot is to be used.

10.
Human responses to android and humanoid robots have become an important topic for social scientists due to the increasing prevalence of social and service robots in everyday life. The present research connects work on the effects of lateral (sideward) head tilts, an eminent feature of nonverbal human behavior, to the experience of android and humanoid robots. In two experiments (N = 402; N = 253), the influence of lateral head tilts on user perceptions of android and humanoid robots was examined. Photo portrayals of three different robots (Asimo, Kojiro, Telenoid) were manipulated. The stimuli included head tilts of −20°, −10° (left tilt), +10°, +20° (right tilt), and 0° (upright position). Compared to an upright head posture, we found higher scores for attributed human likeness, cuteness, and spine-tinglingness when the identical robots conveyed a head tilt. Results for perceived warmth, eeriness, attractiveness, and dominance either varied with the robot or showed no effect of head tilt. Implications for the development and marketing of android and humanoid robots are discussed.

11.
A dual-layered display, also called a tensor display, consists of two panels in a stack and can present full‐parallax 3D images with high resolution and continuous motion parallax by reconstructing the corresponding light ray field within a viewing angle. The depth range where 3D images can be displayed at reasonable resolution, however, is limited to around the panel stack. In this paper, we propose a dual-layered display that can present stereoscopic images to multiple viewers located at arbitrary positions in the observer space with high resolution and a large depth range. Combined with a viewer tracking system, the proposed method provides a practical way to realize a high‐resolution, large‐depth autostereoscopic 3D display for multiple observers without restrictions on observer position and head orientation.

12.
Safety, legibility, and efficiency are essential for autonomous mobile robots that interact with humans. A key factor in this respect is bi-directional communication of navigation intent, which we focus on in this article with a particular focus on industrial logistics applications. In the robot-to-human direction, we study how a robot can communicate its navigation intent using Spatial Augmented Reality (SAR) such that humans can intuitively understand the robot's intention and feel safe in its vicinity. We conducted experiments with an autonomous forklift that projects various patterns on the shared floor space to convey its navigation intentions. We analyzed trajectories and eye-gaze patterns of humans while they interacted with the autonomous forklift, and carried out stimulated recall interviews (SRI) in order to identify desirable features for the projection of robot intentions. In the human-to-robot direction, we argue that robots in human-cohabited environments need human-aware task and motion planning to support safety and efficiency, ideally responding to people's motion intentions as soon as they can be inferred from human cues. Eye gaze can convey information about intentions beyond what can be inferred from the trajectory and head pose of a person. Hence, we propose eye-tracking glasses as safety equipment in industrial environments shared by humans and robots. In this work, we investigate the possibility of human-to-robot implicit intention transference solely from eye-gaze data and evaluate how the observed eye-gaze patterns of the participants relate to their navigation decisions. We again analyzed trajectories and eye-gaze patterns of humans interacting with the autonomous forklift for clues that could reveal direction intent. Our analysis shows that people primarily gazed at the side of the robot on which they ultimately decided to pass. We discuss the implications of these results and relate them to a control approach that uses human gaze for early obstacle avoidance.

13.
《Advanced Robotics》2013,27(5-6):581-603
There have been two major streams of research for the motion control of mobile robots: model-based deliberate control and sensor-based reactive control. Since the two schemes have complementary advantages and disadvantages, each cannot completely replace the other. There are a variety of environmental conditions that affect the performance of navigation. The motivation of this study is that multiple motion control schemes are required to survive in dynamic real environments. In this paper, we exploit two discrete motion controllers for mobile robots. One is the deliberate trajectory tracking controller and the other is the reactive dynamic window approach. We propose the selective coordination of the two controllers on the basis of the generalized stochastic Petri net (GSPN) framework. The major scope of this paper is to clarify the advantage of the proposed controller based on the coordination of multiple controllers from the results of quantitative performance comparison among motion controllers. For quantitative comparison, both simulations and experiments in dynamic environments were carried out. In addition, it is shown that navigation experiences are accumulated in the GSPN formalism. The performance of the navigation service can be significantly improved owing to the automatically stored experiences.

14.
We present a fully procedural method capable of generating in real time a wide range of locomotion for multilegged characters in a dynamic environment, without using any motion data. The system consists of several independent blocks: a Character Controller, a Gait/Tempo Manager, a three‐dimensional (3D) Path Constructor, and a Footprints Planner. The four modules work cooperatively to calculate in real time the footprints and the 3D trajectories of the feet and the pelvis. Our system can animate dozens of creatures using dedicated level-of-detail techniques and is fully controllable, allowing the user to design a multitude of locomotion styles through a user‐friendly interface. The result is a complete lower-body animation that is sufficient for most of the chosen multilegged characters: arachnids, insects, imaginary n‐legged robots, and so on. Copyright © 2012 John Wiley & Sons, Ltd.

15.
Traditional display systems usually display 3D objects on static screens (monitor, wall, etc.) and the manipulation of virtual objects by the viewer is usually achieved via indirect tools such as keyboard or mouse. It would be more natural and direct if we display the object onto a handheld surface and manipulate it with our hands as if we were holding the real 3D object. In this paper, we propose a prototype system by projecting the object onto a handheld foam sphere. The aim is to develop an interactive 3D object manipulation and exhibition tool without the viewer having to wear spectacles. In our system, the viewer holds the sphere with his hands and moves it freely. Meanwhile we project well-tailored images onto the sphere to follow its motion, giving the viewer a virtual perception as if the object were sitting inside the sphere and being moved by the viewer. The design goal is to develop a low-cost, real-time, and interactive 3D display tool. An off-the-shelf projector-camera pair is first calibrated via a simple but efficient algorithm. Vision-based methods are proposed to detect the sphere and track its subsequent motion. The projection image is generated based on the projective geometry among the projector, sphere, camera and the viewer. We describe how to allocate the view spot and warp the projection image. We also present the result and the performance evaluation of the system.

16.
We present the path-planning techniques of a fire-escape system for intelligent buildings, and use multiple mobile robots to demonstrate the experimental scenario. The fire-escape system contains a supervisory computer, an experimental platform, several fire-detection robots, and several navigation robots. Each mobile robot has the shape of a cylinder; its diameter, height, and weight are 10 cm, 15 cm, and 1.5 kg, respectively. The mobile robot contains a controller module, two DC servomotors (including drivers), three IR sensor modules, a voice module, and a wireless RF module. The controller of the mobile robot acquires detection signals from the reflective IR sensors through I/O pins and receives commands from the supervisory computer via the wireless RF interface. The fire-detection robot carries a flame sensor to detect fire sources while moving on the grid-based experimental platform, and computes a safer escape path from among the candidate escape paths using piecewise cubic Bézier curves. The user interface then uses the A* search algorithm to plan an escape path that approximates the Bézier curve on the grid-based platform. The navigation robot guides people to the safe area or exit door along the planned escape path. In the experimental results, the supervisory computer plans the escape paths using the proposed algorithms and presents the movement scenario using the multiple smart mobile robots on the experimental platform. In the experimental scenario, the user interface transmits motion commands to the mobile robots moving on the grid-based platform, and locates the positions of fire sources via the fire-detection robots. The navigation robot guides people away from the fire sources along the low-risk escape path and moves to the exit door.
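The grid-based A* step of such a system can be sketched compactly. The following is a minimal, self-contained A* on a 4-connected occupancy grid; the grid, costs, and cell layout are illustrative, not the paper's platform, and the Bézier-smoothing stage is omitted.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* over a 4-connected occupancy grid (1 = blocked).
    Returns the list of (row, col) cells from start to goal, or None.
    Uses a Manhattan-distance heuristic, admissible on this grid."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]  # (f, g, cell)
    came_from, g_cost, closed = {}, {start: 0}, set()
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:
            path = [cur]
            while cur in came_from:  # walk parents back to start
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None

# Toy floor plan: a wall (1s) forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

In the paper's pipeline, the resulting cell sequence would then be fitted with piecewise cubic Bézier curves to produce a smooth escape path.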

17.
With the increasing presence of robots in our daily life, there is a strong need and demand for strategies that achieve high-quality interaction between robots and users by enabling robots to understand users' mood, intention, and other states. During human-human interaction, personality traits have an important influence on human behavior, decisions, mood, and much else. Therefore, we propose an efficient computational framework to endow the robot with the capability of understanding the user's personality traits based on the user's nonverbal communication cues, represented by three visual features, namely head motion, gaze, and body motion energy, and three vocal features, namely voice pitch, voice energy, and mel-frequency cepstral coefficients (MFCC). We used the Pepper robot in this study as a communication robot to interact with each participant by asking questions; meanwhile, the robot extracts the nonverbal features from each participant's habitual behavior using its on-board sensors. Each participant's personality traits are evaluated with a questionnaire. We then train ridge regression and linear support vector machine (SVM) classifiers using the nonverbal features and the personality trait labels from the questionnaire, and evaluate the performance of the classifiers. We have verified the validity of the proposed models, which showed promising binary classification performance in recognizing each of the Big Five personality traits of the participants based on individual differences in nonverbal communication cues.
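The ridge-regression branch of such a classifier can be sketched in closed form. The data below is synthetic (random stand-ins for the six nonverbal cues with an invented label rule), so this is only an illustration of the training/evaluation shape, not the paper's dataset or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the six nonverbal cues per participant:
# head motion, gaze, body-motion energy, voice pitch, voice energy, MFCC.
X = rng.normal(size=(80, 6))
# Binary label for one Big Five trait, illustratively tied to the gaze
# and voice-energy columns plus noise (not real participant data).
y = np.where(X[:, 1] + X[:, 4] + 0.3 * rng.normal(size=80) > 0, 1.0, -1.0)

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression, w = (X'X + lam*I)^-1 X'y, used as
    a linear classifier by thresholding the prediction at zero."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Train on the first 60 "participants", test on the remaining 20.
w = fit_ridge(X[:60], y[:60])
accuracy = np.mean(np.sign(X[60:] @ w) == y[60:])
```

A linear SVM would slot into the same train/test split by swapping `fit_ridge` for a hinge-loss solver.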

18.
Interaction between a personal service robot and a human user is contingent on being aware of the posture and facial expression of users in the home environment. In this work, we propose algorithms to robustly and efficiently track the head, facial gestures, and upper-body movements of a user. The face-processing module consists of 3D head pose estimation, modeling of nonrigid facial deformations, and expression recognition. It can thus detect and track the face, and classify expressions under various poses, which is key for human–robot interaction. For body pose tracking, we develop an efficient algorithm based on bottom-up techniques to search a tree-structured 2D articulated body model, and identify multiple pose candidates to represent the state of the current body configuration. We validate these face and body modules in various experiments with different datasets, and the experimental results are reported. The implementation of both modules runs in real time, which meets the requirement for real-world human–robot interaction tasks. These two modules have been ported onto a real robot platform by the Electronics and Telecommunications Research Institute.

19.
This paper proposes a novel, low-cost, and portable 360-degree cylindrical interactive autostereoscopic 3D display system. The proposed system consists of three parts: the optical architecture (for back-projecting the image correctly on the cylindrical screen), the projection-image transformation workflow (for image rectification and generating multi-view images), and the 360-degree motion detection module (for identifying viewers' locations and providing the corresponding views). Based on the proposed design, only one commercial micro projector is employed for the cylindrical screen. The proposed display offers good depth perception (stereoacuity) with a specially designed thick barrier sheet attached to the screen. The viewers are not required to wear special glasses, and within an appropriate range (<5 m) the viewers can view the screen at any distance and angle. A user study verified that the proposed display offers satisfactory depth perception (binocular parallax, shading distribution, and linear perspective) for various viewing distances and angles without noticeable discomfort. The production cost of the current prototype is about USD 300. With mass production, the unit cost is expected to decline to within USD 60. The proposed display system has the advantages of ease of use, low production cost, and high portability and mobility. It is suitable for applications such as museum virtual exhibitions, remote meetings, multi-user online games, etc. We believe that the proposed system is very promising for the market of low-cost portable 360-degree interactive autostereoscopic displays.

20.
Underwater robot technology has shown impressive results in applications such as underwater resource detection. For underwater applications that require extremely high flexibility, robots cannot yet replace skills that require human dexterity, and thus humans are often required to directly perform most underwater operations. Wearable robots (exoskeletons) have shown outstanding results in enhancing human movement on land, and they are expected to have great potential to enhance human underwater movement. The purpose of this survey is to analyze the state of the art of underwater exoskeletons for human enhancement, focusing on applications for movement assistance and excluding underwater robotic devices that keep temperature and pressure within ranges people can withstand. This work discusses the challenges of existing exoskeletons for human underwater movement assistance, which mainly include human underwater motion-intention perception, underwater exoskeleton modeling, and human-cooperative control. Future research should focus on developing novel wearable robotic structures for underwater motion assistance, exploiting advanced sensors and fusion algorithms for human underwater motion-intention perception, building up dynamic models of underwater exoskeletons, and exploring human-in-the-loop control for them.


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23
