Similar Literature
20 similar documents retrieved (search time: 427 ms)
1.
Robotic soccer is nowadays a popular research domain in the area of multi-robot systems. In the context of RoboCup, the Middle Size League is one of the most challenging. This paper presents an efficient omnidirectional vision system for real-time object detection, developed for CAMBADA, the robotic soccer team of the University of Aveiro. The vision system is used to find the ball and the white lines, which are used for self-localization, as well as to detect the presence of obstacles. Algorithms for detecting these objects, and for calibrating most of the parameters of the vision system, are presented in this paper. We also propose an efficient approach for detecting arbitrary FIFA balls, an important topic of research in the Middle Size League. The experimental results show the effectiveness of our algorithms, both in terms of accuracy and processing time, and are reflected in the results the team has achieved: 1st place in RoboCup 2008, 3rd place in RoboCup 2009, and 1st place in the mandatory technical challenge of RoboCup 2009, in which the robots have to play with an arbitrary standard FIFA ball.

2.
This paper presents a novel stereo vision technique based on the combination of an omnidirectional camera and a perspective camera. The technique combines the 360° field of view of the omnidirectional camera with the long-range field of view of the perspective camera. We describe the setup of such a camera system and how it can be used to obtain 3D position estimates. Furthermore, we develop a maximum-likelihood approach and a Bayesian approach that fuse monocular and binocular observations of the same object to estimate its position and movement, and we show how this technique can be applied successfully in the RoboCup Middle Size League.
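The fusion step can be illustrated with a minimal maximum-likelihood sketch: assuming each observation of the object position comes with a Gaussian covariance, the ML estimate is the inverse-variance weighted mean of the observations. The function name and covariance values below are hypothetical, not taken from the paper.

```python
import numpy as np

def fuse_observations(positions, covariances):
    """Maximum-likelihood fusion of independent Gaussian position estimates.

    positions   : list of (3,) arrays, e.g. one monocular and one stereo estimate
    covariances : list of (3, 3) covariance matrices, one per estimate
    Returns the fused position and its covariance (inverse-variance weighting).
    """
    info = np.zeros((3, 3))      # accumulated information matrix (sum of inverse covariances)
    info_mean = np.zeros(3)      # accumulated information-weighted positions
    for p, cov in zip(positions, covariances):
        w = np.linalg.inv(cov)
        info += w
        info_mean += w @ p
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_mean, fused_cov

# Hypothetical example: an imprecise monocular estimate fused with a stereo one.
mono = np.array([2.0, 0.5, 0.0]); mono_cov = np.diag([0.25, 0.25, 1.0])
stereo = np.array([2.1, 0.4, 0.1]); stereo_cov = np.diag([0.04, 0.04, 0.09])
pos, cov = fuse_observations([mono, stereo], [mono_cov, stereo_cov])
```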

3.
A new biologically inspired vision sensor made of one hundred “eyes” is presented, which is suitable for real-time acquisition and processing of 3-D image sequences. This device, named the Panoptic camera, consists of a layered arrangement of approximately 100 classical CMOS imagers distributed over a hemisphere 13 cm in diameter. The Panoptic camera is a polydioptric system in which each imager has its own view of the world and a distinct focal point, which is a specific feature of the Panoptic system. This enables the recording of 3-D information, such as omnidirectional stereoscopy or depth estimation, through appropriate signal processing. The algorithms dictating the image reconstruction of an omnidirectional observer located at any point inside the hemisphere are presented. A hardware architecture capable of handling these algorithms, with the flexibility to support additional image processing in real time, has been developed as a two-layer system based on FPGAs. The details of the hardware architecture, its internal blocks, the mapping of the algorithms onto these elements, and the device calibration procedure are presented, along with imaging results.

4.
Recent developments in wireless sensor networks have made distributed camera networks feasible, in which cameras and processing nodes may be spread over a wide geographical area, with no centralized processor and limited ability to communicate large amounts of information over long distances. This paper overviews distributed algorithms for the calibration of such camera networks, that is, the automatic estimation of each camera's position, orientation, and focal length. In particular, we discuss a decentralized method for obtaining the vision graph for a distributed camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. We next describe a distributed algorithm in which each camera performs a local, robust nonlinear optimization over the camera parameters and scene points of its vision-graph neighbors in order to obtain an initial calibration estimate. We then show how a distributed inference algorithm based on belief propagation can refine the initial estimate to be both accurate and globally consistent.
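As a rough illustration of the vision-graph idea (not the paper's decentralized protocol), the sketch below adds an edge between two cameras whenever they share enough matched features; the threshold and the data layout are assumptions.

```python
def build_vision_graph(matches, min_shared=30):
    """matches[(i, j)] = number of feature correspondences between cameras i and j.
    Returns an adjacency list; an edge means the two cameras image enough of the
    same environment to be calibrated jointly."""
    graph = {}
    for (i, j), count in matches.items():
        if count >= min_shared:
            graph.setdefault(i, set()).add(j)
            graph.setdefault(j, set()).add(i)
    return graph

# Hypothetical match counts between four cameras.
pairwise = {(0, 1): 55, (0, 2): 12, (1, 2): 40, (2, 3): 75, (1, 3): 5}
print(build_vision_graph(pairwise))   # {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```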

5.
We propose a homology between thermodynamic systems and images for the treatment of time-varying imagery. A physical system colder than its surroundings absorbs heat from them, and the absorbed heat increases the entropy of the system, which is closely related to its disorder as given by the definitions of Clausius and Boltzmann. Because the pixels of an image can be viewed as a state of lattice-like molecules in a thermodynamic system, reckoning the entropy variations of pixels is similar to estimating their degrees of disorder. We apply this homology to the uncalibrated stereo matching problem. The absence of calibration reduces the effort needed to install stereo cameras and lets users freely modify the camera arrangement. The proposed method is also robust to differences in brightness, white balancing, and even focusing between the images of a stereo pair. These properties enable users to estimate the depths of objects of interest in practical applications without much effort to set up and maintain a stereo vision rig; consequently, two webcams can be used as a stereo camera.
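The entropy analogy can be made concrete with a small sketch that computes the Shannon entropy of the gray-level distribution in a local window, one simple way to quantify the "disorder" of a pixel neighborhood; the window size and binning are assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_entropy(image, y, x, win=7, bins=32):
    """Shannon entropy (in bits) of the gray-level histogram in a win x win
    neighborhood around pixel (y, x); higher values mean more 'disorder'."""
    h = win // 2
    patch = image[max(0, y - h):y + h + 1, max(0, x - h):x + h + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# Hypothetical usage on a random 8-bit image.
img = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
print(local_entropy(img, 60, 80))
```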

6.
Pose estimation of a sphere from a single-view image
This paper presents a vision-based inspection technique for estimating the spatial pose of a 3D target sphere and its central axis hole from image information. If the camera focal length is known and the shape parameters of the sphere and circle features are given, the positions of the sphere center and the circle-feature centers, along with their normal directions, can be estimated from a single view, so that a preliminary estimate of the target sphere's pose can be obtained from the multiple circle features formed by the sphere and its central axis. Because of image noise and errors in fitting the projected ellipses, the independent estimates from the individual features are not fully consistent; a nonlinear least-squares method is therefore introduced to optimize the preliminary results and improve the estimation accuracy. Simulation and real-image processing results verify the effectiveness of the algorithm.
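The final refinement step can be sketched as a generic nonlinear least-squares problem: stack the residuals between each circle feature's independent estimate and a common pose hypothesis and minimize them jointly. The residual model below is a placeholder, not the paper's exact parameterization.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical independent estimates of the sphere/axis direction obtained from
# several circle features (unit normals), to be reconciled into a single pose.
normals = np.array([[0.05, 0.02, 0.998],
                    [0.03, 0.04, 0.999],
                    [0.06, 0.01, 0.998]])

def residuals(axis):
    axis = axis / np.linalg.norm(axis)          # keep the axis a unit vector
    return (normals - axis).ravel()             # disagreement of each feature

refined = least_squares(residuals, x0=normals.mean(axis=0)).x
refined /= np.linalg.norm(refined)              # optimized common axis direction
```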

7.
In this paper we present a new technique for ball detection in the RoboCup environment. The proposed method is an advanced version of the Hough transform whose main detection feature is local rotational invariance. This feature is exploited by means of the structure tensor technique, which optimizes linear parameters in partial differential equations within a local neighborhood. For our application we use the vanishing angular derivative that describes the rotational invariance of the ball. Since no color information is used, the method can detect an arbitrarily colored ball with a vision system. Compared to the standard Hough transform for circle detection, our version is considerably more robust, because the structure tensor technique directly provides the circles' center coordinates as well as the radii, and the accumulator space is only used to improve accuracy.
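To make the structure-tensor idea concrete, here is a minimal sketch that computes the local structure tensor of a grayscale image from Gaussian-smoothed gradient products; the smoothing scale and the simple rotational-symmetry cue are assumptions, and the voting/accumulator stage of the method is not reproduced.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor(image, sigma=2.0):
    """Return the three distinct components (Jxx, Jxy, Jyy) of the local
    structure tensor, i.e. Gaussian-averaged outer products of the gradient."""
    img = image.astype(float)
    gx = sobel(img, axis=1)
    gy = sobel(img, axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    return jxx, jxy, jyy

# At strongly curved (circular) structures both eigenvalues of the tensor are
# large; a simple cue is the determinant minus a fraction of the squared trace.
def circular_response(jxx, jxy, jyy, k=0.05):
    return jxx * jyy - jxy ** 2 - k * (jxx + jyy) ** 2
```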

8.
A sphere-center localization method for target spheres in vision-guided laser tracking measurement
To address the large search region and slow localization of existing laser tracking measurement systems, a method is proposed that uses a monocular vision measurement system to compute the 3D coordinates of the laser tracker's target-sphere center and uses this as guidance information to steer a single laser tracker through the automatic measurement of multiple reference points. The ellipse recognition and monocular sphere-center localization algorithms used in the vision measurement are studied. First, based on the imaging characteristics of the target sphere, a Gestalt-based ellipse recognition algorithm is introduced to automatically extract and recognize the image features of the target sphere. Then, a monocular sphere-center localization algorithm is introduced, which solves for the 3D coordinates of the sphere center from the image ellipse equation, the camera focal length, and the sphere radius, and is verified by simulation. Finally, experiments on automatic recognition of real target-sphere images and sphere-center localization are carried out. The results show that, when measuring a known distance of about 550 mm between two target spheres, the RMS error of the proposed method is 1.7282 mm. The search range of the laser tracker's beam is greatly reduced, largely meeting the speed and efficiency requirements of automatic measurement.
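The monocular sphere-center computation can be illustrated with a common small-angle approximation (not necessarily the exact algorithm of the paper): the depth of the sphere center follows from the known radius, the focal length, and the apparent radius of the fitted ellipse, and the lateral coordinates follow from back-projecting the ellipse center. All numbers below are hypothetical.

```python
import numpy as np

def sphere_center_from_ellipse(u, v, r_img, f, cx, cy, R):
    """Approximate 3D sphere center in the camera frame.

    (u, v)  : pixel coordinates of the fitted ellipse center
    r_img   : mean apparent radius of the ellipse in pixels
    f       : focal length in pixels, (cx, cy) the principal point
    R       : known physical sphere radius (same unit as the result)
    """
    Z = f * R / r_img                   # pinhole similar-triangle depth estimate
    X = (u - cx) * Z / f                # back-project the ellipse center
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

# Hypothetical numbers: a 19.05 mm radius target sphere imaged at ~40 px radius.
print(sphere_center_from_ellipse(710.0, 420.0, 40.0, 2400.0, 640.0, 480.0, 19.05))
```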

9.
Depth sensors such as time-of-flight (TOF) depth cameras can acquire depth information accurately in real time and have attracted wide attention in computer vision. Aiming at obtaining both the texture and the depth information of a scene under the same field of view, and considering the low resolution and large radial distortion of TOF depth cameras, this paper proposes a joint calibration method for a TOF depth camera and a color camera that builds on the traditional checkerboard calibration method and uses a checkerboard with sparse corners as the calibration target to improve the accuracy of corner detection. First, the color camera and the depth camera are calibrated separately: the intrinsic parameters of the color camera are obtained with the traditional checkerboard method, while the intrinsic parameters of the depth camera are obtained after distortion correction of the intensity images captured by the depth camera. Then, with the intrinsic parameters of both cameras fixed, the relative pose between the two cameras is obtained from multiple experiments and optimized with the least-squares method. Experimental results show that the method achieves high calibration accuracy: the reprojection error of the color camera is reduced by up to 0.15 pixel and that of the depth camera by up to 0.09 pixel, and the depth map captured by the depth camera, when projected into the view of the color camera according to the calibration result, aligns accurately with the color image.
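The last step, fixing the intrinsics and fusing the per-experiment relative poses, can be sketched as follows; the chordal rotation averaging via SVD projection and the plain averaging of translations are generic choices, not necessarily the paper's exact least-squares formulation.

```python
import numpy as np

def average_relative_pose(rotations, translations):
    """Fuse several estimates of the depth-to-color-camera pose.

    rotations    : list of (3, 3) rotation matrices from repeated calibrations
    translations : list of (3,) translation vectors
    The rotation mean is the chordal-L2 average: project the summed matrices
    back onto SO(3) with an SVD; the translation mean is the arithmetic mean.
    """
    M = np.sum(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # keep a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    t = np.mean(translations, axis=0)
    return R, t
```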

10.
Panoptic is a custom spherical light field camera used as a polydioptric system in which imagers are distributed over a hemispherical surface, each having its own view of the surroundings and a distinct focal plane. The spherical light field camera records light information from any direction around its center. This paper revisits the previously developed Nearest Neighbor and Linear blending techniques, and presents novel Gaussian blending and Restricted Gaussian blending techniques for reconstructing the view of a virtual observer located inside the spherical geometry. These new blending techniques improve the quality of the reconstructed image with respect to ordinary stitching techniques and simpler image blending algorithms. A comparison of the developed blending algorithms is also given. A hardware architecture based on Field Programmable Gate Arrays (FPGAs) enabling real-time implementation of the blending algorithms is presented, along with imaging results and a comparison of resource utilization. A recorded omnidirectional video is attached as supplementary material.
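The Gaussian blending idea can be illustrated by weighting each contributing camera by a Gaussian of the angle between its optical axis and the ray the virtual observer wants to render; the angular sigma and the normalization in this sketch are assumptions.

```python
import numpy as np

def gaussian_blend(ray_dir, cam_dirs, cam_pixels, sigma_deg=20.0):
    """Blend per-camera pixel values for one viewing ray of the virtual observer.

    ray_dir    : (3,) unit vector of the desired viewing direction
    cam_dirs   : (N, 3) unit optical-axis directions of the contributing imagers
    cam_pixels : (N, C) pixel values each imager contributes for that ray
    """
    cos_ang = np.clip(cam_dirs @ ray_dir, -1.0, 1.0)
    ang = np.degrees(np.arccos(cos_ang))              # angular distance per camera
    w = np.exp(-0.5 * (ang / sigma_deg) ** 2)         # Gaussian falloff with angle
    w /= w.sum()
    return w @ cam_pixels                             # weighted average color
```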

11.
A stereo vision system can determine the 3D contour of an arbitrary object and the position and depth information of any point on its surface. This paper applies stereo vision to human head detection and designs a human head recognition system based on the processing of image depth information. The system uses an Xtion camera to capture depth images of the scene and analyzes their features; the head region is determined according to the characteristics of the depth image and of the head. The Mean Shift algorithm is then applied to cluster the target region and obtain clear image edges. Finally, head segmentation and recognition are achieved with a one-dimensional entropy-based thresholding method using a dynamic threshold. The system can lock onto the target region quickly, reducing the computational load of the algorithm and greatly increasing the recognition speed. In addition, the Xtion camera that captures the depth images is suspended directly above the target scene, which largely solves the occlusion problem in current human target detection. Experiments demonstrate that the system achieves high recognition accuracy.
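The final segmentation step, thresholding by a one-dimensional entropy criterion, can be sketched as follows: for each candidate threshold, the entropies of the foreground and background depth distributions are summed and the threshold that maximizes the sum is selected. The binning and the per-frame "dynamic" adjustment are simplifications.

```python
import numpy as np

def entropy_threshold(depth, bins=256):
    """Kapur-style 1-D maximum-entropy threshold on a depth image."""
    hist, edges = np.histogram(depth, bins=bins)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]          # depth value separating head from background
```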

12.
Dual camera intelligent sensor for high definition 360 degrees surveillance
A novel integrated multi-camera video sensor (panoramic scene analysis, PSA) system is proposed for surveillance applications. In the proposed setup, an omnidirectional imaging device is used in conjunction with a pan-tilt-zoom (PTZ) camera, leading to an innovative kind of sensor that is able to automatically track, at a higher zoom level, any moving object within the guarded area. In particular, the catadioptric sensor is calibrated and used to track every moving object within its 360-degree field of view. Omnidirectional image portions are then rectified, and the pan, tilt and zoom parameters of the moving camera are automatically adjusted by the system in order to track the detected objects. In addition, a cooperative strategy was developed for selecting the object to be tracked by the PTZ sensor in the case of multiple targets.

13.
When a team of robots is built with the objective of playing soccer, the coordination and control algorithms must reason, decide and act based on the current conditions of the robot and its surroundings. This is where sensor and information fusion techniques come in, providing the means to build an accurate model of the world around the robot, based on its own limited sensor information and on the equally limited information obtained through communication with its teammates. One of the most important elements of the world model is the robot's self-localization: to decide what to do in an effective way, it must know its position on the field of play. In this paper, the team localization algorithm is presented, focusing on the integration of visual and compass information. An important element in a soccer game, perhaps the most important, is the ball. To improve the estimates of the ball position and velocity, two different techniques have been developed. A study of the visual sensor noise is presented and, according to this analysis, the resulting noise variance is used to define the parameters of a Kalman filter for ball position estimation. Moreover, linear regression is used for velocity estimation, both for the ball and for the robot. This implementation of linear regression has an adaptive buffer size so that, on hard deviations from the path (detected using the Kalman filter), the regression converges faster. A team cooperation method based on sharing the ball position is presented. Another important kind of data during a soccer game is obstacle data. This is an important challenge for cooperation purposes, allowing the improvement of team strategy with ball covering, dribble corridor estimation, pass lines, among other strategic possibilities. Thus, merely detecting obstacles is no longer enough; identifying which obstacles are teammates and which are opponents is becoming a need. An approach for this identification is presented, considering the visual information, the known characteristics of the team robots, and the localization shared among team members. The described work was implemented on the CAMBADA team and allowed it to achieve particularly good performances in the last two years, with a 1st and a 3rd place in the RoboCup 2008 and RoboCup 2009 world championships, respectively, as well as 1st place in the 2008 and 2009 editions of the Portuguese Robotics Open.
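The velocity estimation described above can be illustrated with a short sketch: fit a line to the most recent position samples and take its slope as the velocity, shrinking the buffer when the filter reports a large innovation so the regression re-converges faster after kicks or collisions. The buffer sizes and the innovation gate are assumptions, and a single scalar coordinate is used for brevity.

```python
import numpy as np
from collections import deque

class AdaptiveVelocityEstimator:
    def __init__(self, max_len=9, min_len=3, innovation_gate=0.5):
        self.buf = deque(maxlen=max_len)          # (timestamp, position) samples
        self.min_len = min_len
        self.gate = innovation_gate               # metres; hypothetical gate

    def update(self, t, pos, predicted_pos=None):
        # On a hard deviation from the predicted path, keep only recent samples
        # so the regression adapts quickly to the new trajectory.
        if predicted_pos is not None and abs(pos - predicted_pos) > self.gate:
            recent = list(self.buf)[-self.min_len:]
            self.buf.clear()
            self.buf.extend(recent)
        self.buf.append((t, pos))
        if len(self.buf) < 2:
            return 0.0
        ts, xs = map(np.array, zip(*self.buf))
        slope, _ = np.polyfit(ts, xs, 1)          # velocity = slope of the fit
        return float(slope)
```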

14.
This paper studies a particular type of panoramic vision system composed of a catadioptric assembly of a hyperboloidal mirror and a spherical mirror combined with an ordinary perspective camera. The imaging model of this omnidirectional camera and the procedure for computing its epipolar curves are derived, the properties of the epipolar curves are analyzed, and the theory is finally verified experimentally. The experimental results show that the proposed computation method obtains the epipolar curves quickly and accurately, which facilitates the search for corresponding points in stereo matching of panoramic images.

15.
To overcome the shortcomings of existing methods for extracting surface defects of bearing steel balls, an algorithm is designed that extracts the surface defects of steel ball products from images. The algorithm first enhances small defects on the steel ball surface with a piecewise linear gray-level transform, then combines this with maximum-entropy thresholding to achieve automatic segmentation of the surface defects, and finally uses a projection principle and a two-dimensional joint statistics algorithm to rapidly extract the defects and classify the defect regions. Experiments show that the algorithm achieves good results in extracting five types of steel ball surface defects; with a Basler industrial camera at a resolution of 900×560, the algorithm takes less than 30 ms, meeting the real-time requirements of steel ball surface defect inspection.
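The first step, piecewise linear gray-level stretching to enhance faint defects, can be sketched as follows; the breakpoints are hypothetical and would be tuned to the steel-ball images.

```python
import numpy as np

def piecewise_linear_enhance(img, r1=80, s1=30, r2=160, s2=220):
    """Piecewise linear gray transform: compress [0, r1] and [r2, 255],
    stretch the mid-range [r1, r2] where faint surface defects tend to lie."""
    img = img.astype(float)
    out = np.empty_like(img)
    lo, mid, hi = img <= r1, (img > r1) & (img <= r2), img > r2
    out[lo] = img[lo] * (s1 / r1)
    out[mid] = s1 + (img[mid] - r1) * ((s2 - s1) / (r2 - r1))
    out[hi] = s2 + (img[hi] - r2) * ((255 - s2) / (255 - r2))
    return out.astype(np.uint8)
```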

16.
Massively parallel processor-per-pixel single-instruction multiple-data arrays are being used successfully for early vision applications in smart sensor systems; however, they are inherently inefficient when executing algorithms that involve the propagation of binary signals, such as geodesic reconstruction. Yet these algorithms, at the interface between pixel-level and object-level image processing, should be implemented on the vision chip to facilitate data reduction at the sensor level. A cellular asynchronous network that can execute binary propagation operations is presented in this paper. The proposed circuit is optimized in terms of speed and power consumption. In 0.35-μm technology, the simulated propagation speed is 0.18 ns per pixel and the total energy expended per propagation is 0.37 pJ per cell. In this brief, implementation issues are discussed and simulation results, including image processing examples, are presented.
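For reference, the binary propagation that such a circuit accelerates is the classical geodesic reconstruction, which a software model can express as repeated dilation of a marker image masked by the original; a minimal NumPy/SciPy sketch is given below.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def geodesic_reconstruction(marker, mask):
    """Reconstruction by dilation: grow the marker inside the mask until stable.
    Each iteration corresponds to one step of the binary propagation."""
    prev = np.zeros_like(marker, dtype=bool)
    cur = marker.astype(bool) & mask.astype(bool)
    while not np.array_equal(cur, prev):
        prev = cur
        cur = binary_dilation(cur) & mask      # propagate one pixel, stay in mask
    return cur
```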

17.
Multidimensional sensors, such as the digital camera sensors in visual sensor networks (VSNs), generate a huge amount of information compared with the scalar sensors in wireless sensor networks (WSNs). Processing and transmitting such data from low-power sensor nodes is a challenging issue because of their limited computational resources and restricted bandwidth in a hardware-constrained environment. Source coding can be used to reduce the size of the vision data collected by the sensor nodes before sending it to its destination. With image compression, more efficient processing and transmission can be obtained by removing the redundant information from the captured raw image data. In this paper, a survey of the main conventional state-of-the-art image compression standards, such as JPEG and JPEG2000, is provided. A literature review of the advantages and shortcomings of applying these algorithms in the VSN hardware environment is given. Moreover, the main factors influencing the design of compression algorithms in the context of VSNs are presented. The selected compression algorithm should have hardware-oriented properties such as simplicity of coding, low memory needs, low computational load, and a high compression rate. This survey argues that an energy-efficient, hardware-based image compression scheme is strongly needed to counter the severe hardware constraints in WSNs.

18.
This paper considers the properties a multirobot system should exhibit to perform an assigned task cooperatively. Our experiments concern specifically the domain of RoboCup Middle Size League (MSL) competitions, but the illustrated techniques can also be usefully applied to other service robotics fields such as video surveillance. Two issues are addressed in the paper. The first is the problem of dynamic role assignment in a team of robots; the second concerns the problem of sharing sensory information to cooperatively track moving objects. Both problems have been extensively investigated over the past years by MSL robot teams. In our work, each individual robot has been designed to become reactively aware of the environment configuration. In addition, a dynamic role assignment policy among teammates is activated, based on the knowledge about the best behavior that the team is able to acquire through the shared sensory information. We present the successful performance of the Artisti Veneti robot team at the MSL Challenge competitions of RoboCup 2003 to show the effectiveness of our proposed hybrid architecture, as well as some laboratory tests that validate the omnidirectional distributed vision system which allows us to share the information gathered by the omnidirectional cameras of our robots.
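The dynamic role-assignment policy can be illustrated with a generic utility-based sketch: each robot computes a cost for each role from the shared world model, and roles are assigned by solving the resulting assignment problem. The role names and the cost matrix are hypothetical, not the team's actual policy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

ROLES = ["striker", "supporter", "defender", "goalie"]

def assign_roles(costs):
    """costs[i][j] = cost for robot i to take role j (e.g. distance to the ball
    for the striker, distance to own goal for the goalie, ...).
    Returns a dict robot -> role minimizing the total team cost."""
    rows, cols = linear_sum_assignment(np.asarray(costs))
    return {int(r): ROLES[c] for r, c in zip(rows, cols)}

# Hypothetical 4-robot cost matrix built from the shared world model.
print(assign_roles([[1.0, 2.5, 3.0, 9.0],
                    [2.0, 1.0, 2.0, 9.0],
                    [4.0, 3.0, 0.5, 9.0],
                    [9.0, 9.0, 9.0, 0.1]]))
```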

19.
A calibration method for line structured-light vision sensors based on a planar target
A calibration method for line structured-light vision sensors suitable for on-site use is proposed. A mathematical model of the calibration is established, a planar dot-matrix target is designed, and a coordinate mapping method is presented. From the image coordinates of the feature points on the structured-light stripe and their corresponding coordinates in the target coordinate system, together with the camera intrinsic parameters, the transformation matrix from the target coordinate system to the camera coordinate system is computed, and the coordinates of the feature points in the camera coordinate system are then obtained from this transformation. Within the field of view, the planar target is placed several times in different poses to collect all the feature points projected onto it; a plane is fitted to these feature points to obtain the equation of the structured-light plane in the camera coordinate system. The calibration accuracy is verified experimentally, and the experiments show that the calibration procedure of the proposed method is simple and accurate, making it suitable for on-site calibration of structured-light sensors.
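The last step, fitting a plane to the accumulated stripe feature points in the camera frame, can be sketched with a standard total-least-squares fit (SVD of the centered points); this is a generic implementation of the plane-fitting stage, not the paper's specific code.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through N >= 3 3-D points.

    points : (N, 3) array of stripe feature points in the camera frame
    Returns (n, d) with unit normal n and offset d such that n . x + d = 0.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                       # direction of smallest variance
    d = -normal @ centroid
    return normal, d
```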

20.
For highly dynamic gas-liquid two-phase flow, a virtual stereo vision sensor consisting of a single high-speed camera and a set of mirrors was optimally designed based on the principle of binocular stereo vision, and used to measure in 3D the characteristic parameters of bubbles rising vertically in a bubble generator. A 3D measurement model of the virtual stereo vision sensor was established, and the structural parameters of the sensor were determined by simulating their influence on three aspects of performance, taking the actual field of view, the sensor structure, and the measurement error into account. Experimental results show that the spatial distance error of the sensor is better than 0.14 mm and the relative error better than 0.49%, which is suitable for dynamic measurement of gas-liquid two-phase flow and enables 3D reconstruction of bubble motion.
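The core of such a virtual stereo measurement is ordinary two-view triangulation: the mirror set turns one physical camera into two virtual cameras with known projection matrices, and a bubble feature seen in both halves of the image is triangulated linearly. The sketch below is the standard DLT triangulation, with the projection matrices assumed to come from a prior calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : (3, 4) projection matrices of the two virtual cameras
    x1, x2 : (u, v) pixel coordinates of the same bubble feature in each view
    """
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                   # homogeneous -> Euclidean 3-D point
```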
