Similar Documents
20 similar documents found (search time: 15 ms)
1.
This paper presents some of the computer vision techniques that were employed to automatically select features, measure feature displacements, and evaluate measurements during robotic visual servoing tasks. We experimented with many different techniques, but the most robust proved to be the Sum-of-Squared Differences (SSD) optical flow technique. In addition, several techniques for the evaluation of the measurements are presented. One important characteristic of these techniques is that they can also be used for the selection of features for tracking, in conjunction with several numerical criteria that guarantee the robustness of the servoing. These techniques are important aspects of our work since they can be used either on-line or off-line. An extension of the SSD measure to color images is presented, and the results from the application of these techniques to real images are discussed. Finally, the derivation of depth maps through the controlled motion of the hand-eye system is outlined, and the important role of the automatic feature selection algorithm in the accurate computation of the depth-related parameters is highlighted. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the funding agencies. Tel. (612) 625-0163; e-mail address: npapas@cs.umn.edu.
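As a minimal sketch of the SSD matching idea described above: a brute-force search for the window in the next frame that best matches a template from the previous frame. This is an illustrative reconstruction under assumed window and search sizes, not the authors' implementation; the helper names are hypothetical.

```python
import numpy as np

def ssd(patch_a, patch_b):
    """Sum-of-squared differences between two equally sized patches."""
    d = patch_a.astype(float) - patch_b.astype(float)
    return float(np.sum(d * d))

def track_feature(prev_img, next_img, center, half=4, search=3):
    """Locate in next_img the window that best matches the window
    around `center` in prev_img, searching +/- `search` pixels."""
    r, c = center
    template = prev_img[r - half:r + half + 1, c - half:c + half + 1]
    best, best_pos = np.inf, center
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = next_img[rr - half:rr + half + 1, cc - half:cc + half + 1]
            s = ssd(template, cand)
            if s < best:
                best, best_pos = s, (rr, cc)
    return best_pos
```

The same residual also serves as a confidence measure: a large minimum SSD flags a feature whose measurement should be distrusted, which is the evaluation role the abstract mentions.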

2.
To address the problem of aiming at and tracking external targets with visual perception on continuum robots, the team of Professor Peng Haijun at Dalian University of Technology designed and developed an autonomous continuum robot based on visual servoing. After rigorous on-site testing officially certified by the World Record Certification Agency (WRCA), the robot was recognized as the "autonomous continuum robot hitting a moving bullseye at the longest distance," with a maximum effective tracking distance of 30.129 m.

3.
郭军军  韩崇昭 《自动化学报》2018,44(8):1425-1435
For the target tracking problem in large-scale sensor networks, this paper proposes a novel multi-sensor target tracking algorithm based on sensor selection within the Bayesian framework. The algorithm proceeds as follows: first, within the Bayes framework, an objective function for sensor selection is derived according to the different management goals; next, a sensor selection scheme is computed from this objective function; finally, the data from the selected sensors are fused to obtain the tracking result of the sensor network. Compared with traditional tracking algorithms based on rejecting measurement outliers and with sensor registration algorithms based on systematic bias estimation, the proposed sensor-selection-based multi-sensor tracking algorithm achieves both higher tracking accuracy and more stable tracking performance. The proposed sensor selection algorithm is also applicable to tracking scenarios with little clutter. Simulation results demonstrate the effectiveness of the proposed algorithm.
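As a rough illustration of sensor selection in a Bayesian setting (not the paper's derived objective, which depends on its specific management goals), a common criterion is to pick the sensor whose measurement most reduces the trace of the posterior covariance in a linear-Gaussian update; the helper names below are assumptions.

```python
import numpy as np

def posterior_cov(P, H, R):
    """Kalman measurement update of covariance P for a sensor with
    observation matrix H and measurement noise covariance R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return (np.eye(P.shape[0]) - K @ H) @ P

def select_sensor(P, sensors):
    """Return the index of the sensor (given as (H, R) pairs) whose
    measurement minimises the trace of the posterior covariance."""
    traces = [np.trace(posterior_cov(P, H, R)) for H, R in sensors]
    return int(np.argmin(traces))
```

The selected sensors' measurements would then be fused (e.g. by sequential Kalman updates) to produce the network's tracking result, mirroring the three-stage pipeline the abstract outlines.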

4.
Detection and tracking for robotic visual servoing systems
Robot manipulators require knowledge about their environment in order to perform their desired actions. In several robotic tasks, vision sensors play a critical role by providing the necessary quantity and quality of information regarding the robot's environment. For example, "visual servoing" algorithms may control a robot manipulator in order to track moving objects that are being imaged by a camera. Current visual servoing systems often lack the ability to automatically detect objects that appear within the camera's field of view. In this research, we present a robust "figure/ground" framework for visually detecting objects of interest. An important contribution of this research is a collection of optimization schemes that allow the detection framework to operate within the real-time limits of visual servoing systems. The most significant of these schemes involves the use of "spontaneous" and "continuous" domains. The number and location of continuous domains are allowed to change over time, adjusting to the dynamic conditions of the detection process. We have developed actual servoing systems in order to test the framework's feasibility and to demonstrate its usefulness for visually controlling a robot manipulator.

5.
The use of active deformable models in model-based robotic visual servoing
This paper presents a new approach for visual tracking and servoing in robotics. We introduce deformable active models as a powerful means for tracking a rigid or semi-rigid (possibly partially occluded) object in movement within the manipulator's workspace. Deformable models imitate, in real-time, the dynamic behavior of elastic structures. These computer-generated models are designed to capture the silhouette of objects with well-defined boundaries, in terms of image gradient. By means of an eye-in-hand robot arm configuration, the desired motion of the end-effector is computed with the objective of keeping the target's position and shape invariant with respect to the camera frame. Optimal estimation and control techniques (LQG regulator) have been successfully implemented in order to deal with noisy measurements provided by our vision sensor. Experimental results are presented for the tracking of a rigid or semi-rigid object. The experiments performed in a real-time environment show the effectiveness and robustness of the proposed method for servoing tasks based on visual feedback.

6.
In this paper we present a modular scheme for designing and evaluating different control systems for position-based dynamic look-and-move visual servoing systems. This scheme is applied in particular to a 6-DOF industrial manipulator equipped with a camera mounted on its end effector. The manipulator with its actuators and its current feedback loops can be modeled as a Cartesian device commanded through a serial line. In this case the manipulator can be considered as a decoupled system with 6 independent loops. The use of computer vision as a feedback transducer strongly affects the closed-loop dynamics of the overall system, so a visual controller is required for achieving fast response and high control accuracy. Due to the long delay in generating the control signal, it is necessary to carefully select the visual controller. In this paper we present a framework that allows the study of some conventional and new techniques for designing this visual controller. In addition, an experimental setup has been built and used to evaluate and compare the performance of the position-based dynamic look-and-move system with different controllers. Some criteria for selecting the best strategy for each task are established. Extensive results for different trajectory-tracking control strategies are presented, showing both simulation and real platform responses.

7.
Robotic manipulation systems that operate in unstructured environments must be responsive to feedback from sensors that are disparate in both location and modality. This paper describes a distributed framework for assimilating the disparate feedback provided by force and vision sensors, including active vision sensors, for robotic manipulation systems. The main components of the expectation-based framework include object schemas and port-based agents. Object schemas represent the manipulation task internally in terms of geometric models with attached sensor mappings. Object schemas are dynamically updated by sensor feedback, and thus provide an ability to perform three-dimensional spatial reasoning during task execution. Because object schemas possess knowledge of sensor mappings, they are able both to select appropriate sensors and to guide active sensors based on task characteristics. Port-based agents are the executors of reference inputs provided by object schemas and are defined in terms of encapsulated control strategies. Experimental results demonstrate the capabilities of the framework in two ways: the performance of manipulation tasks with active camera-lens systems, and the assimilation of force and vision sensory feedback.

8.
A survey of visual inspection and control methods for intelligent manufacturing equipment
To meet the great demand of the intelligent manufacturing equipment industry for machine vision technology, this paper proposes a general technical framework for machine-vision-based inspection and control, taking into account the technical characteristics of such equipment and its special application environments, and thereby addressing gaps in current research. The paper first summarizes and elaborates the key core technologies of this framework: the imaging system, automatic image acquisition, image preprocessing, calibration and segmentation, recognition and inspection, and visual servoing and optimal control. It then presents general principles for the design of visual inspection and control systems, and explains their concrete application in detail with three typical types of equipment. Finally, in view of the growing demands of intelligent manufacturing equipment for high reliability, intelligence, and high-speed, high-precision operation, the new problems and challenges facing visual inspection and control technology are discussed.

9.
Automatic disassembly tasks in the engine compartment of a used car constitute a challenge for control of a disassembly robot by machine vision. Experience in exploratory experiments under such conditions forced us to abandon data-driven aggregation of edge elements into straight-line data segments in favor of a direct association of individual edge elements with model segments obtained from scene-domain models of tools and workpieces. In addition, we had to switch from a conventional single-camera hand-eye configuration to a movable stereo configuration mounted on a separate observer robot. A generalisation of our model-based tracking includes the parameters which characterize the relative pose of one camera with respect to the other in the stereo-camera set-up into the set of parameters to be re-estimated for each new stereo image pair. This results in a continuous re-calibration during a relative movement between the stereo-camera set-up and the tracked objects. Our approach had to be extended further in order to cope with non-polyhedral objects. The methodological improvements of machine vision in the course of this research are treated in detail. We discuss, moreover, the systematic trade-off of computational resources for increased robustness, which is vital for visual control of automatic disassembly robots.

10.
Initializing a student model for individualized tutoring in educational applications is a difficult task, since very little is known about a new student. On the other hand, fast and efficient initialization of the student model is necessary; otherwise the tutoring system may lose its credibility in the first interactions with the student. In this paper we describe a framework for the initialization of student models in Web-based educational applications. The framework is called ISM. The basic idea of ISM is to set initial values for all aspects of student models using an innovative combination of stereotypes and the distance-weighted k-nearest neighbor algorithm. In particular, a student is first assigned to a stereotype category concerning her/his knowledge level of the domain being taught. Then, the model of the new student is initialized by applying the distance-weighted k-nearest neighbor algorithm among the students that belong to the same stereotype category as the new student. ISM has been applied in a language learning system, which has been used as a test-bed. The quality of the student models created using ISM has been evaluated in an experiment involving classroom students and their teachers. The results from this experiment showed that the initialization of student models was improved using the ISM framework.
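The distance-weighted k-nearest neighbor step can be sketched as follows; this is an illustrative reconstruction (the feature encoding, weighting constant, and helper name are assumptions, not ISM's actual implementation):

```python
import math

def init_student_model(new_features, peers, k=3):
    """Estimate one model attribute of a new student from the k nearest
    peers in the same stereotype category. `peers` is a list of
    (feature_vector, attribute_value) pairs; nearer peers get larger
    weights (inverse squared distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(peers, key=lambda p: dist(new_features, p[0]))[:k]
    num = den = 0.0
    for feats, value in nearest:
        w = 1.0 / (dist(new_features, feats) ** 2 + 1e-9)  # avoid /0
        num += w * value
        den += w
    return num / den
```

Restricting the neighbor pool to the student's stereotype category keeps the estimate consistent with the coarse classification made in the first step, which is the core of ISM's two-stage idea.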

11.
Based on the emerging technology of artificial vision, and building on research into artificial-vision imaging models and simulated evaluation experiments, this paper proposes a pixelized imaging model based on the generation of salient local features, and designs a subjective-scoring simulation experiment to examine the model's performance. The experimental results provide preliminary confirmation that the model preferentially presents the feature-salient regions of the original image to subjects, so that subjects perceive richer visual information. The model can thus serve as a reference for the development of this emerging field.

12.
Y. Iwatani 《Advanced Robotics》2013,27(17):1351-1359
This paper proposes a robust template-based visual tracking algorithm. The proposed algorithm combines global optimization and local optimization. The global optimization is performed by translational matching, and the local optimization is implemented by gradient descent in homography-based matching. Translational matching is robust to large translation of the reference image, although it is not robust to rotation or scaling. In contrast, homography-based matching is robust to rotation and scaling, although it is not robust to large translation. The proposed algorithm is a feedback combination of the two matching algorithms: translational matching provides the initial value for gradient descent in homography-based matching, and homography-based matching updates the reference image for translational matching. The proposed feedback combination inherits advantages from both translational matching and homography-based matching. Robot experiments demonstrate the robustness of the proposed feedback combination to composite transformations of translation, rotation, and scaling.

13.
The relative pose between the inertial and visual sensors equipped on autonomous robots is calibrated in two steps. In the first step, the sensing system is moved along a line, and the orientation component of the relative pose is computed from at least five corresponding points in the two images captured before and after the movement. In the second step, the translation parameters of the relative pose are obtained from at least two corresponding points in the two images captured before and after one further step of motion. Experiments are conducted to verify the effectiveness of the proposed method.

14.
In this paper, a framework is proposed for the distributed control and coordination of multiagent systems (MASs). In the proposed framework, the control of MASs is regarded as achieving decentralized control and coordination of agents. Each agent is modeled as a coordinated hybrid agent, which is composed of an intelligent coordination layer and a hybrid control layer. The intelligent coordination layer takes the coordination input, plant input, and workspace input. In the proposed framework, we describe the coordination mechanism in a domain-independent way, i.e., as simple abstract primitives in a coordination rule base for certain dependence relationships between the activities of different agents. The intelligent coordination layer deals with the planning, coordination, decision making, and computation of the agent. The hybrid control layer of the proposed framework takes the output of the intelligent coordination layer and generates discrete and continuous control signals to control the overall process. To verify the feasibility of the proposed framework, experiments for both heterogeneous and homogeneous MASs are implemented. The proposed framework is applied to a multicrane system, a multiple robot system, and a MAS consisting of an overhead crane, a mobile robot, and a robot manipulator. It is demonstrated that the proposed framework can model the three MASs. The agents in these systems are able to cooperate and coordinate to achieve a global goal. In addition, the stability of systems modeled using the proposed framework is also analyzed.

15.
Particle tracking methods are a versatile computational technique central to the simulation of a wide range of scientific applications. In this paper, we present a new parallel particle tracking framework for scientific computing applications. The framework includes the in-element particle tracking method, which is based on the assumption that particle trajectories are computed from problem data localized to individual elements, as well as the dynamic partitioning of particle-mesh computational systems. The ultimate goal of this research is to develop a parallel in-element particle tracking framework capable of interfacing with ordinary differential equation (ODE) solvers of different orders of accuracy. The parallel efficiency of such particle-mesh systems depends on the partitioning of both the mesh elements and the particles; this distribution can change dramatically because of movement of the particles and adaptive refinement of the mesh. To address this problem we introduce a combined load function that depends on both the particle and mesh element distributions. We present experimental results that detail the performance of this parallel load balancing approach for a three-dimensional particle-mesh test problem on an unstructured, adaptive mesh, and demonstrate the ability to interface with different ODE solvers.
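The combined load function can be sketched very simply; the weights and helper names below are illustrative assumptions (the paper would tune them for its particle-mesh test problems):

```python
def combined_load(n_elements, n_particles, alpha=1.0, beta=0.5):
    """Combined load of one partition: a weighted sum of its mesh-element
    count and its particle count. alpha and beta are assumed weights."""
    return alpha * n_elements + beta * n_particles

def most_loaded(partitions):
    """Index of the partition with the highest combined load; a load
    balancer would migrate work away from this partition."""
    loads = [combined_load(e, p) for e, p in partitions]
    return loads.index(max(loads))
```

The point of combining both counts is that a partition with few elements can still be overloaded if particles cluster inside it, which a mesh-only load metric would miss.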

16.
Sensor scheduling methods for target tracking in active sensor networks
A wireless sensor network (WSN) of active sensors suffers from serious inter-sensor interference (ISI), which imposes new design and implementation challenges. In this paper, based on an ultrasonic sensor network, two time-division-based distributed sensor scheduling schemes are proposed to deal with ISI by scheduling sensors periodically and adaptively, respectively. An extended Kalman filter (EKF) is used as the tracking algorithm in a distributed manner. Simulation results show that the adaptive sensor scheduling scheme can achieve superior tracking accuracy with faster tracking convergence.
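The EKF tracker itself is standard; below is a minimal single-sensor sketch for a constant-velocity target with a range-only (ultrasonic-style) measurement. The process/measurement noise values and the helper name are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def ekf_step(x, P, z, sensor_pos, dt=1.0, q=0.01, r=0.1):
    """One EKF predict/update cycle. State x = [px, py, vx, vy];
    z is a range measurement from a sensor at sensor_pos."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    # predict with a constant-velocity model
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)
    # linearise the range measurement h(x) = ||pos - sensor_pos||
    dx, dy = x[0] - sensor_pos[0], x[1] - sensor_pos[1]
    rng = np.hypot(dx, dy)
    H = np.array([[dx / rng, dy / rng, 0.0, 0.0]])
    S = H @ P @ H.T + r
    K = (P @ H.T) / S
    x = x + (K * (z - rng)).ravel()
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In the distributed schemes of the paper, each scheduled sensor would run such an update on its own measurement slot, with the time-division schedule deciding which sensor fires when.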

17.
Event-based cameras are a new class of biologically inspired vision sensors that capture scene changes efficiently and in real time. Unlike conventional frame-based cameras, an event camera reports only triggered pixel-level brightness changes (called events) and outputs an asynchronous event stream with microsecond resolution. This type of vision sensor has gradually become a research focus in image processing, computer vision, robotic perception and state estimation, neuromorphics, and related fields. This paper first describes the basic principles, development history, advantages, and challenges of event cameras. It then introduces three typical event cameras (the DVS (dynamic vision sensor), ATIS (asynchronous time based image sensor), and DAVIS (dynamic and active pixel vision sensor)) as well as several newer designs. Next, it reviews applications of event cameras in feature extraction, depth estimation, optical flow estimation, intensity image estimation and 3D reconstruction, object recognition and tracking, localization and pose estimation, visual odometry and SLAM, multi-sensor fusion, and beyond. Finally, it summarizes research progress on event cameras and discusses future development trends.

18.
Systems utilizing multiple sensors are required in many domains. In this paper, we specifically concern ourselves with applications where dynamic objects appear randomly and the system is employed to obtain some user-specified characteristics of such objects. For such systems, we deal with the tasks of determining measures for evaluating their performance and of determining good sensor configurations that would maximize such measures for better system performance. We introduce a constraint in sensor planning that has not been addressed earlier: visibility in the presence of random occluding objects. Occlusion causes random loss of object capture from certain viewpoints and necessitates the use of other sensors that have visibility of the object. Two techniques are developed to analyze such visibility constraints: a probabilistic approach to determine "average" visibility rates and a deterministic approach to address worst-case scenarios. Apart from this constraint, other important constraints to be considered include image resolution, field of view, capture orientation, and algorithmic constraints such as stereo matching and background appearance. Integration of such constraints is performed via the development of a probabilistic framework that allows one to reason about different occlusion events and integrates different multi-view capture and visibility constraints in a natural way. Integration of the thus obtained capture quality measure across the region of interest yields a measure for the effectiveness of a sensor configuration, and maximization of this measure yields sensor configurations that are best suited for a given scenario. The approach can be customized for use in many multi-sensor applications, and our contribution is especially significant for those that involve randomly occurring objects capable of occluding each other. These include security systems for surveillance in public places, industrial automation, and traffic monitoring.
Several examples illustrate such versatility by application of our approach to a diverse set of different and sometimes multiple system objectives. Most of this work was done while A. Mittal was with Real-Time Vision and Modeling Department, Siemens Corporate Research, Princeton, NJ 08540.
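The "average visibility" idea can be illustrated with the simplest independent-occlusion model (an assumption for illustration; the paper's probabilistic framework reasons about correlated occlusion events in far more detail): the probability that at least one sensor captures the object is one minus the probability that every sensor's view is blocked.

```python
def capture_probability(view_probs):
    """Probability that at least one sensor captures the object, given
    each sensor's independent probability of unoccluded visibility."""
    miss = 1.0
    for p in view_probs:
        miss *= (1.0 - p)  # probability this sensor's view is blocked
    return 1.0 - miss
```

Integrating such a quality value over the region of interest, and maximizing over candidate sensor placements, gives the configuration-evaluation loop the abstract describes.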

19.
A central task of computer vision is to automatically recognize objects in real-world scenes. The parameters defining image and object spaces can vary due to lighting conditions, camera calibration and viewing position. It is therefore desirable to look for geometric properties of the object which remain invariant under such changes in the observation parameters. The study of such geometric invariance is a field of active research. This paper presents the theory and computation of projective invariants formed from points and lines using the geometric algebra framework. This work shows that geometric algebra is a very elegant language for expressing projective invariants using n views. The paper compares projective invariants involving two and three cameras using simulated and real images. Illustrations of the application of such projective invariants in visually guided grasping, camera self-localization and reconstruction of shape and motion complement the experimental part.

20.
毕超  郝雪  刘孟晨  刘勇 《传感技术学报》2019,32(10):1515-1521
To enable the four-axis vision coordinate measuring system constructed in this work to inspect the positions of batches of film-cooling holes, and in view of the characteristics of the four-axis motion system and the industrial camera, this paper studies the establishment of coordinate systems and the transformation relationships needed to convert measurement data from image space to blade space. In the application, the pixel coordinate system, the physical image coordinate system, the reference coordinate system, the rotary-table coordinate system, and the blade coordinate system are established, and the transformation relationships among them are determined, so that the image data acquired by the industrial camera can be converted into physical measurement data and ultimately expressed in the blade coordinate system. Finally, to verify the correctness and effectiveness of the method, a high-pressure turbine guide vane was selected as the measured object. First, the position of one film-cooling hole on the vane was measured repeatedly under equal-precision conditions to check the repeatability of the measuring system; then, a row of 12 film-cooling holes on the vane was measured. The experimental results show that the proposed method for establishing and transforming coordinate systems can accomplish the task of inspecting the positions of film-cooling holes, laying a solid foundation for subsequent comparison with, and evaluation against, the design data of the holes.
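The chain of coordinate frames described above can be sketched with homogeneous transforms. This is a simplified illustration (the helper names, the planar pixel-to-image mapping, and the identity/translation transforms are assumptions; the actual system's calibration involves more parameters):

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the z axis (e.g. the rotary table)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(tx, ty, tz):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def pixel_to_blade(u, v, pixel_size, principal_point,
                   T_ref_cam, T_table_ref, T_blade_table):
    """Chain pixel -> physical image -> reference -> rotary-table ->
    blade coordinates, each step a homogeneous transform."""
    # pixel -> physical image coordinates (camera frame, z = 0 plane)
    x = (u - principal_point[0]) * pixel_size
    y = (v - principal_point[1]) * pixel_size
    p_cam = np.array([x, y, 0.0, 1.0])
    return T_blade_table @ T_table_ref @ T_ref_cam @ p_cam
```

Composing the transforms right-to-left mirrors the order in which the abstract establishes the frames: image data first become physical camera-frame coordinates, then pass through the reference and rotary-table frames into the blade frame where hole positions are finally evaluated.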
