Similar Literature
20 similar records found (search time: 15 ms)
1.
In the emerging paradigm of animate vision, visual processes are not thought of as independent of cognitive or motor processing, but as an integrated system within the context of visual behavior. Intimate coupling of sensory and motor systems has been found to significantly improve the performance of behavior-based vision systems. Studying active vision therefore requires sensory-motor systems, and designing, building, and operating such a test bed is a challenging task. In this paper we describe the status of ongoing work in developing a sensory-motor robotic system, R2H, with ten degrees of freedom (DOF) for research in active vision. To complement the R2H system, a Graphical Simulation and Animation (GSA) environment is also developed. The objective of the GSA system is to provide a comprehensive tool for designing and studying the behavior of active systems and their interactions with the environment, helping researchers develop high-performance, reliable software and hardware effectively. The GSA environment integrates sensing and motor actions and features a complete kinematic simulation of the R2H system, its sensors, and its workspace. With the aid of the GSA environment, Depth from Focus (DFF), Depth from Vergence, and Depth from Stereo modules are implemented and tested. The power and usefulness of the GSA system as a research tool is demonstrated by acquiring and analyzing images in the real and virtual worlds using the same software implemented and tested in the virtual world. This research was supported by the U.S. Department of Energy under the DOE's University Program in Robotics for Advanced Reactors (Universities of Florida, Michigan, Tennessee, Texas, and the Oak Ridge National Laboratory) under Contract No. DOE DE-FG02-86NE37968.
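The depth-recovery modules mentioned above reduce, in the simplest rectified two-camera case (Depth from Stereo), to the standard triangulation formula Z = fB/d. A minimal sketch, with illustrative numbers that are not taken from the R2H system:

```python
# Minimal depth-from-stereo triangulation: for rectified cameras with
# focal length f (pixels) and baseline B (metres), depth Z = f * B / d,
# where d is the pixel disparity. Numbers below are illustrative only.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

depth = depth_from_disparity(f_px=700.0, baseline_m=0.12, disparity_px=21.0)
print(round(depth, 2))  # 700 * 0.12 / 21 ≈ 4.0 metres
```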

2.
Joint errors are inevitable in robot manipulation. These uncertainties propagate to give rise to translational and orientational errors in the position and orientation of the robot end-effector. The displacement of the active vision head mounted on the robot end-effector distorts the projection of the object on the image. Under active visual inspection, the observed dimension of a mechanical part is given by the measurement of the projected edge segment on the image. The difference between the observed dimension and the actual dimension is the displacement error in active vision; different motions of the active vision head produce different displacement errors. Given the uncertainties of the robot manipulator's joint errors, constraint propagation can be employed to assign the motion of the active sensor so as to satisfy the tolerance of the displacement errors for inspection. In this article, we define constraint consistency and network satisfaction in the constraint network for the displacement-error problem in active vision. A constraint network is a network whose nodes represent variables or constraints and whose arcs represent the relationships between the output and input variables of the constraints. In the displacement-error problem, the tolerance of the displacement errors and the translational and orientational errors of the robot manipulator take interval values, while the sensor motion takes real values. Constraint propagation is developed to propagate the tolerance of displacement errors through the hierarchical interval constraint network in order to find feasible robot motions. © 2002 Wiley Periodicals, Inc.
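A toy step of interval constraint propagation in the spirit of this abstract: joint-error contributions are intervals, interval addition propagates them to a displacement-error interval, and consistency means containment in the tolerance interval. The linear error model and all numbers below are illustrative assumptions, not the paper's model.

```python
# Interval propagation sketch: two joints each contribute an interval of
# displacement error (mm); the propagated end-effector error is their
# interval sum, and the motion is acceptable if that interval is
# contained in the inspection tolerance.

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def contained(x, tol):
    """Is interval x contained in the tolerance interval tol?"""
    return tol[0] <= x[0] and x[1] <= tol[1]

joint1_mm = (-0.75, 0.75)   # illustrative error contribution of joint 1
joint2_mm = (-0.75, 0.75)   # and of joint 2
disp = interval_add(joint1_mm, joint2_mm)
print(disp)                           # (-1.5, 1.5)
print(contained(disp, (-2.0, 2.0)))   # True: within inspection tolerance
```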

3.
4.
This paper presents the results of an investigation and pilot study into an active binocular vision system that combines binocular vergence, object recognition and attention control in a unified framework. The prototype developed is capable of identifying, targeting, verging on and recognising objects in a cluttered scene without the need for calibration or other knowledge of the camera geometry. This is achieved by implementing all image analysis in a symbolic space without creating explicit pixel-space maps. The system structure is based on the ‘searchlight metaphor’ of biological systems. We present results of an investigation that yield a maximum vergence error of ~6.5 pixels, while ~85% of known objects were recognised in five different cluttered scenes. Finally a ‘stepping-stone’ visual search strategy was demonstrated, taking a total of 40 saccades to find two known objects in the workspace, neither of which appeared simultaneously within the field of view resulting from any individual saccade.

5.
Zeng Rui, Wen Yuhui, Zhao Wang, Liu Yong-Jin. Computational Visual Media, 2020, 6(3): 225-245
Rapid development of artificial intelligence motivates researchers to expand the capabilities of intelligent and autonomous robots. In many robotic applications, robots...

6.
7.
In this paper, we propose a salient-human detection method for robot vision that uses pre-attentive features and a support vector machine (SVM). From three pre-attentive features (color, luminance and motion), we extracted three feature maps and combined them into a salience map. Using these features, we estimated a given object's location without prior assumptions or semi-automatic interaction, and were able to choose the most salient object even when multiple objects existed. We also used the SVM to decide whether a candidate object region was human. For the SVM, we used a new feature-extraction method based on an edged-mosaic image, which reduces the feature dimensions while reflecting the variation of local features to the classifier. The main advantage of the proposed method is that our algorithm detects salient humans regardless of the amount of movement, and also distinguishes salient humans from non-salient humans. The proposed algorithm can be easily applied to human-robot interfaces for human-like vision systems.
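The feature-map fusion described above can be sketched in a few lines: three pre-attentive maps are min-max normalised and averaged, and the most salient location is the arg-max of the combined map. Equal weighting and the synthetic maps are assumptions; the paper's exact fusion rule may differ.

```python
# Salience-map combination sketch: normalise color, luminance and motion
# maps to [0, 1], average them, and pick the most salient pixel.
import numpy as np

def combine_salience(color, luminance, motion):
    maps = []
    for m in (color, luminance, motion):
        m = m.astype(float)
        rng = m.max() - m.min()
        maps.append((m - m.min()) / rng if rng > 0 else np.zeros_like(m))
    return sum(maps) / 3.0

rng = np.random.default_rng(0)
color, lum, motion = (rng.random((48, 64)) for _ in range(3))
motion[20:30, 40:50] += 3.0    # a strongly moving region dominates motion
sal = combine_salience(color, lum, motion)
y, x = np.unravel_index(np.argmax(sal), sal.shape)
print((y, x))   # the most salient pixel falls inside the moving region
```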

8.
Advanced Robotics, 2013, 27(10): 1097-1113
This paper proposes a real-time, robust and efficient three-dimensional (3D) model-based tracking algorithm. A virtual visual servoing approach is used for monocular 3D tracking; this method is similar to more classical non-linear pose-computation techniques. A concise method for deriving efficient distance-to-contour interaction matrices is described. An oriented edge detector is used to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the virtual visual control law via an iteratively re-weighted least-squares implementation. The method presented in this paper has been validated in several visual servoing experiments considering various objects. Results show the method to be robust to occlusion, changes in illumination and mis-tracking.
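The iteratively re-weighted least-squares (IRLS) mechanism used above to fold an M-estimator into the control law can be illustrated on a deliberately simple 1-D problem, a robust mean with Huber weights; the pose-estimation case re-weights the visual-servoing residuals the same way. The data and tuning constant are illustrative assumptions.

```python
# IRLS with a Huber M-estimator: residuals larger than c get weight
# c/|r|, so a gross outlier barely influences the fit.
def irls_mean(samples, c=1.345, iters=20):
    mu = sum(samples) / len(samples)            # least-squares start
    for _ in range(iters):
        w = [1.0 if abs(s - mu) <= c else c / abs(s - mu) for s in samples]
        mu = sum(wi * si for wi, si in zip(w, samples)) / sum(w)
    return mu

data = [1.0, 1.2, 0.9, 1.1, 25.0]   # one gross outlier
print(round(irls_mean(data), 2))    # ≈ 1.39, far from the naive mean 5.84
```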

9.
Autonomous robot calibration using vision technology
Yan  Hanqi   《Robotics and Computer》2007,23(4):436-446
Unlike traditional robot calibration methods, which need expensive external calibration apparatus and elaborate setups to measure 3D feature points in the reference frame, a vision-based self-calibration method for a serial robot manipulator, which only requires a ground-truth scale in the reference frame, is proposed in this paper. The proposed algorithm assumes that the camera is rigidly attached to the robot end-effector, which makes it possible to obtain the pose of the manipulator from the pose of the camera. By designing a manipulator movement trajectory, the camera poses can be estimated up to a scale factor at each configuration with the factorization method, and a nonlinear least-squares algorithm is applied to improve robustness. An efficient approach is proposed to estimate this scale factor. The great advantage of this self-calibration method is that only image sequences of a calibration object and a ground-truth length are needed, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Simulations and experimental studies on a PUMA 560 robot reveal the convenience and effectiveness of the proposed robot self-calibration approach.
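The ground-truth-scale step this abstract relies on has a very simple core: structure recovered by the factorization method is only defined up to a global scale, which one known length in the scene fixes. The endpoint coordinates and length below are illustrative assumptions.

```python
# Metric scale from one known length: measure the reconstructed distance
# between two points whose true separation is known, and take the ratio.
def metric_scale(p_a, p_b, true_length):
    """Scale mapping the up-to-scale reconstruction to metric units."""
    est = sum((a - b) ** 2 for a, b in zip(p_a, p_b)) ** 0.5
    return true_length / est

# reconstructed endpoints of a bar known to be 250 mm long
s = metric_scale((0.0, 0.0, 0.0), (3.0, 4.0, 0.0), 250.0)
print(s)   # 250 / 5 = 50.0: multiply all reconstructed points by 50
```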

10.
This paper focuses on the structure of robot sensing systems and on techniques for measuring and preprocessing 3-D data. To obtain the information required for controlling a given robot function, the sensing of 3-D objects is divided into four basic steps: transduction of relevant object properties (primarily geometric and photometric) into a signal; preprocessing the signal to improve it; extracting 3-D object features; and interpreting them. Each of these steps can usually be executed by several alternative techniques (tools). Tools for the transduction and preprocessing of 3-D data are surveyed. The performance of each tool depends on the specific vision task and its environmental conditions, both of which are variable. Such a system includes so-called toolboxes, one for each sensing step, and a supervisor, which controls iterative sensing feedback loops and consists of a rule-based program generator and a program execution controller. Sensing-step sequences and tools are illustrated for two 3-D vision applications at SRI International: visually guided robot arc welding and locating identical parts in a bin.

11.
Gao Zheng. Computer Engineering and Design, 2004, 25(11): 2096-2097, 2107
The humanoid robot TBIPR-I, developed under Tsinghua University's "985" key research project on humanoid robot technology and application systems, includes a vision system. The new software for this system applies DirectShow technology and implements its main parts as components. The new and old systems are compared, and sample code is given.

12.
For industrial-robot applications such as arc welding, cutting, and glue dispensing, this paper introduces a path-planning system for a vision-guided industrial robot. It describes the camera coordinate calibration method, uses image acquisition to obtain environment information, and applies image-processing techniques to the acquired signals. Real-time robot control and offline programming are realized. Practice shows that the system has high repeatability, meets the requirements of arc welding, cutting, and glue dispensing, and is suitable for production lines.

13.
The vision system is an indispensable part of a soccer robot; the robot relies solely on its vision system for information about the playing field. This paper discusses an economical, small CMUcam vision module mounted on the "能力风暴" (Abilix) intelligent robot platform, giving each robot independent vision and thus making it a fully autonomous soccer robot.

14.
Design of a vision system for a Chinese-chess robot
Vision is a key component of the Chinese-chess robot's software; its core tasks are board-image binarization, piece detection, and piece recognition. To address the problems of global binarization of the board image, a binarization method based on a threshold on the gray-level difference of adjacent pixels is proposed; for pieces whose characters may face in any direction, a character-recognition method based on "annual-ring" statistics is proposed. Practice shows that these methods are fast and give good recognition results.
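The adjacent-pixel gray-difference idea can be sketched on a single scanline: a pixel is marked foreground when its gray value differs from its neighbour's by more than a threshold T. The 1-D form, the threshold value, and the sample values are illustrative assumptions; the paper's 2-D method may differ in detail.

```python
# Binarization driven by adjacent-pixel gray differences: large jumps in
# gray level (board-to-piece transitions) mark foreground pixels, which
# sidesteps the uneven lighting that defeats one global threshold.
def binarize_by_gradient(row, T=30):
    out = [0] * len(row)
    for i in range(1, len(row)):
        if abs(row[i] - row[i - 1]) > T:
            out[i] = 1
            out[i - 1] = 1          # mark both sides of the jump
    return out

scanline = [200, 198, 202, 60, 58, 61, 205, 204]   # board - piece - board
print(binarize_by_gradient(scanline))   # [0, 0, 1, 1, 0, 1, 1, 0]
```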

15.
The relationship between the three-dimensional coordinates of a point and the corresponding two-dimensional coordinates of its image, as seen by a camera, can be expressed in terms of a 3 by 4 matrix using the homogeneous coordinate system. This matrix is known more generally as the transformation matrix and it is well known that such a matrix can be determined experimentally by measuring the image coordinates of six or more points in space, whose three-dimensional coordinates are known.

Such a transformation matrix can be derived analytically from knowledge of the camera position, orientation, focal length and scaling and translation parameters in the image plane. However, the inverse problem of computing the camera location and orientation from the transformation matrix involves solution of simultaneous nonlinear equations in several variables and is considered difficult.

In this paper we present a new and simple analytical technique that accomplishes this inversion rather easily. This technique is quite powerful and has applications to a wide variety of problems in Computer Vision for both static and dynamic scenes. The technique has been implemented as a C program running under Unix and works well on real data.
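The inverse problem described above has a standard linear-algebra core: the camera centre is the null vector of the 3 by 4 transformation matrix. The sketch below verifies this on a synthetic camera; it is a generic illustration, not the paper's specific analytical technique, and all camera parameters are invented for the example.

```python
import numpy as np

# For P = K [R | -R C], the homogeneous camera centre C satisfies
# P @ C = 0, so it is the right null vector of P.
def camera_centre(P):
    _, _, Vt = np.linalg.svd(P)
    C = Vt[-1]                      # null vector (smallest singular value)
    return C[:3] / C[3]

# Build a synthetic P from a known camera and recover the centre.
f = 800.0
K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]])
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
C_true = np.array([1.0, 2.0, -5.0])
P = K @ np.hstack([R, (-R @ C_true).reshape(3, 1)])
print(np.allclose(camera_centre(P), C_true))   # True
```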

16.
A real-time vision module for interactive perceptual agents
Interactive robotics demands real-time visual information about the environment. Real-time vision processing, however, places a heavy load on the robot's limited resources, which must accommodate multiple other processes running simultaneously. This paper describes a vision module capable of providing real-time information from ten or more operators while maintaining at least a 20-Hz frame rate and leaving sufficient processor time for a robot's other capabilities. The vision module uses a probabilistic scheduling algorithm to ensure both timely information flow and a fast frame capture. In addition, it tightly integrates the vision operators with control of a pan-tilt-zoom camera. The vision module makes its information available to other modules in the robot architecture through a shared memory structure. The information provided by the vision module includes the operator information along with a time stamp indicating information relevance. Because of this design, our robots are able to react in a timely manner to a wide variety of visual events.
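A toy probabilistic scheduler in the spirit of the module described above: each frame, vision operators are drawn with probability proportional to priority until the per-frame time budget (50 ms for a 20 Hz rate) is spent. The operator names, costs, and priorities are illustrative assumptions, not the paper's actual operator set or algorithm.

```python
import random

# Priority-weighted random scheduling under a per-frame time budget:
# higher-priority operators run more often, and no plan exceeds 50 ms.
def schedule_frame(operators, budget_ms, rng):
    remaining = dict(operators)        # name -> (cost_ms, priority)
    plan, spent = [], 0.0
    while remaining:
        names = list(remaining)
        weights = [remaining[n][1] for n in names]
        name = rng.choices(names, weights=weights)[0]
        cost = remaining.pop(name)[0]
        if spent + cost > budget_ms:
            continue                   # too expensive for this frame
        plan.append(name)
        spent += cost
    return plan, spent

ops = {"motion": (8.0, 5), "color_blob": (6.0, 3),
       "face": (30.0, 2), "edge": (12.0, 1)}
plan, spent = schedule_frame(ops, budget_ms=50.0, rng=random.Random(1))
print(spent <= 50.0)   # True: the plan never exceeds the 20 Hz budget
```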

17.
As a typical "hand-eye system", the visually servoed table-tennis robot is an ideal platform for studying high-speed visual perception and fast servo motion. The key technologies involved, high-speed object recognition and tracking, fast and accurate trajectory prediction, and accurate return strokes by a servoed manipulator, have broad application prospects in industry, the military, and other fields. This paper presents an implementation of a high-speed visual servo system for a table-tennis robot, including a target-recognition algorithm based on feature-histogram statistics and fast contour search; motion-state estimation and trajectory prediction based on model-parameter learning and adaptive model adjustment; and a return-stroke planning algorithm for the dexterous arm based on the predicted trajectory. Experiments verify the real-time performance and efficiency of each algorithm, and robot-versus-robot and robot-versus-human rallies were successfully realized on the 165 cm tall humanoid robots "Wu" and "Kong".
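The trajectory-prediction step can be illustrated with the simplest drag-free ballistic model: given an estimated ball state, predict where and when the ball crosses the robot's hitting plane. The paper's learned, adaptively adjusted model is more sophisticated; the state values below are illustrative assumptions.

```python
# Ballistic crossing prediction: constant horizontal velocity, gravity
# acting on z. Positions in metres, velocities in m/s.
def predict_crossing(p0, v0, x_plane, g=9.81):
    """Time and (x, y, z) point where the ball reaches x = x_plane."""
    t = (x_plane - p0[0]) / v0[0]
    y = p0[1] + v0[1] * t
    z = p0[2] + v0[2] * t - 0.5 * g * t * t
    return t, (x_plane, y, z)

# ball leaves (0, 0, 1.0 m) at 4 m/s toward a hitting plane at x = 2 m
t_hit, hit = predict_crossing(p0=(0.0, 0.0, 1.0),
                              v0=(4.0, 0.5, 1.0), x_plane=2.0)
print(round(t_hit, 3), tuple(round(c, 3) for c in hit))
```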

18.
This paper introduces the architecture of an autonomous robot's vision system, studies its vision-processing techniques and main algorithms, and proposes its own improved algorithms.

19.
20.
Online robot calibration based on vision measurement
Robot calibration is a useful diagnostic method for improving positioning accuracy in robot production and maintenance. Unlike traditional calibration methods that require expensive equipment and complex steps, a vision-based online robot calibration method that only requires several reference images is presented in this paper. The method requires a camera rigidly attached to the robot end-effector (EE) and a calibration board placed near the robot where the camera can see it. An efficient automatic approach to detecting the corners in images of the calibration board is proposed; the poses of the robot can be estimated from the detected corners, and the kinematic parameters can then be identified automatically from the known poses. Unlike existing self-calibration methods, the great advantage of this online self-calibration method is that the entire calibration process is automatic, without any manual intervention, enabling calibration to be completed online while the robot is working. The proposed approach is therefore particularly suitable for unknown environments, such as the deep sea or outer space, where high temperature and/or high pressure make the shapes of the robot links prone to change. In the experiments, the robot kinematic parameters were changed by having the robot grasp objects of different masses, to verify the performance of online calibration. Experimental studies on a GOOGOL GRB3016 robot show that the proposed method has high accuracy, convenience, and efficiency.


Copyright©北京勤云科技发展有限公司    京ICP备09084417号-23
