Similar Documents
Found 19 similar documents (search time: 250 ms)
1.
汤一平, 姜荣剑, 林璐璐. 《计算机科学》, 2015, 42(3): 284-288, 315
To address the heavy computational load, poor real-time performance, and limited detection range of existing mobile-robot vision systems, an obstacle detection method based on an active omnidirectional vision sensor (AODVS) is proposed. First, a single-viewpoint omnidirectional vision sensor (ODVS) is integrated with a plane-laser generator assembled from four red line lasers arranged in a single plane, so that obstacles around the mobile robot can be detected by active panoramic vision. Second, the panoramic perception module on the robot analyzes the laser light projected by the plane-laser generator onto surrounding obstacles and, through image processing, recovers the distance and bearing of each obstacle. Finally, an omnidirectional obstacle-avoidance strategy built on this information enables fast avoidance. Experimental results show that the AODVS-based detection method achieves fast and effective obstacle avoidance while reducing the computational resources required on the mobile robot.
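The distance recovery in such a scheme rests on simple triangulation between the laser plane and the single viewpoint of the ODVS. Below is a minimal sketch of that geometry, assuming the laser plane sits at a known height below the camera viewpoint and that the ODVS calibration already maps a pixel to an elevation angle; all dimensions and the function names are hypothetical, not taken from the paper.

```python
import numpy as np

# Assumed geometry (hypothetical values): the ODVS viewpoint sits H_CAM above the
# floor and the red plane-laser sheet is horizontal at height H_LASER.
H_CAM = 0.60    # m, single viewpoint of the ODVS above the floor
H_LASER = 0.20  # m, height of the plane-laser sheet above the floor

def obstacle_range(elevation_rad: float) -> float:
    """Horizontal distance to the laser stripe seen at a given downward angle.

    elevation_rad is the angle below the horizon at which the laser point appears
    from the ODVS viewpoint; it would come from the calibrated pixel-to-angle map.
    """
    dh = H_CAM - H_LASER               # vertical offset between viewpoint and laser plane
    return dh / np.tan(elevation_rad)  # triangulation: range = dh / tan(angle)

def obstacle_position(azimuth_rad: float, elevation_rad: float) -> tuple[float, float]:
    """Convert the (azimuth, elevation) of a detected laser point into robot-frame x, y."""
    r = obstacle_range(elevation_rad)
    return r * np.cos(azimuth_rad), r * np.sin(azimuth_rad)

# Example: a laser point seen 25 degrees below the horizon, 90 degrees to the left.
print(obstacle_position(np.deg2rad(90.0), np.deg2rad(25.0)))
```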

2.
For monocular-vision obstacle avoidance and navigation of a mobile robot, image segmentation and localization of multiple obstacle targets in indoor environments are studied. An image segmentation method is proposed that combines sequential segmentation in the HSI color space with Otsu threshold selection, and a power-law transform based on the mean intensity improves contrast in the intensity channel, so that the pixel regions of multiple targets can be extracted from the background. Based on the perspective projection principle, a geometric localization method then yields the spatial coordinates of each target. The method was tested on a Pioneer-2 mobile robot platform, demonstrating its practicality and effectiveness.
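As a rough illustration of that front end (color conversion, mean-intensity-driven power-law stretching, Otsu thresholding), the sketch below uses OpenCV's HSV conversion as a stand-in for HSI; the gamma schedule, channel choice, and file names are assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def segment_targets(bgr: np.ndarray) -> np.ndarray:
    """Segment candidate target regions from an indoor image (illustrative only)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)          # HSV as a stand-in for HSI
    h, s, v = cv2.split(hsv)

    # Power-law (gamma) transform driven by the mean intensity: brighten dark
    # scenes (gamma < 1), compress bright ones (gamma > 1).
    mean_v = v.mean() / 255.0
    gamma = 0.5 if mean_v < 0.4 else (1.5 if mean_v > 0.7 else 1.0)
    v_stretched = np.clip(255.0 * (v / 255.0) ** gamma, 0, 255).astype(np.uint8)

    # Otsu's method picks the threshold on the contrast-enhanced intensity channel.
    _, mask = cv2.threshold(v_stretched, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask  # nonzero pixels mark the extracted target regions

if __name__ == "__main__":
    img = cv2.imread("indoor_scene.png")    # hypothetical test image
    if img is not None:
        cv2.imwrite("targets_mask.png", segment_targets(img))
```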

3.
Theft Event Detection Based on Omnidirectional Computer Vision (total citations: 1; self-citations: 0; citations by others: 1)
To make security monitoring of public places more intelligent, a theft event detection system is designed by combining an omnidirectional vision sensor (ODVS) with dynamic image processing. First, the ODVS provides a wide-area, 360° blind-spot-free panoramic video of the anti-theft surveillance region. Second, a dynamic image-processing method is proposed that builds the background with a mixture-of-Gaussians model using two different frame update rates; it extracts "special background objects" while still distinguishing moving objects from the pure background, and a stolen item is detected as such a special background object. Experimental results show that the system offers wide coverage, high detection accuracy, good robustness, and real-time performance.
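The two-update-rate idea can be mimicked with two background subtractors run at different learning rates: an object that the fast model has already absorbed but the slow model still flags shows up as a persistent difference. A minimal sketch with OpenCV's MOG2; the learning rates, file name, and morphology settings are assumptions.

```python
import cv2
import numpy as np

# Two mixture-of-Gaussians background models with different effective update rates.
fast_bg = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)
slow_bg = cv2.createBackgroundSubtractorMOG2(history=2000, detectShadows=False)
kernel = np.ones((3, 3), np.uint8)

cap = cv2.VideoCapture("panoramic.avi")      # hypothetical ODVS video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_fast = fast_bg.apply(frame, learningRate=0.02)   # absorbs changes quickly
    fg_slow = slow_bg.apply(frame, learningRate=0.001)  # remembers the old scene

    # Pixels still foreground in the slow model but already background in the fast
    # one are "special background objects" -- e.g. a removed (stolen) item.
    special = cv2.bitwise_and(fg_slow, cv2.bitwise_not(fg_fast))
    special = cv2.morphologyEx(special, cv2.MORPH_OPEN, kernel, iterations=2)
    cv2.imshow("special background objects", special)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```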

4.
To tackle the two main challenges of binocular-vision obstacle detection on a mobile robot, real-time performance and accuracy, an obstacle detection method suited to cluttered indoor backgrounds is proposed. Retinex image enhancement removes the effect of uneven illumination; a scan-line seed-filling algorithm segments the image into floor, background, and obstacles; and binarization and edge detection extract the contour and position of each obstacle. Experiments on the AS-RE wheeled robot platform show that the robot avoids obstacles autonomously and reliably in indoor environments, confirming the feasibility of the proposed binocular obstacle detection algorithm.
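A compact sketch of that enhancement-plus-segmentation front end: single-scale Retinex to suppress uneven illumination, a flood fill seeded on the floor as a simple stand-in for the scan-line seed-filling step, and Canny edges for the obstacle contours. The seed location, blur scale, and fill tolerances are assumptions.

```python
import cv2
import numpy as np

def single_scale_retinex(gray: np.ndarray, sigma: float = 60.0) -> np.ndarray:
    """log(I) - log(Gaussian-blurred I): removes slowly varying illumination."""
    img = gray.astype(np.float32) + 1.0
    illum = cv2.GaussianBlur(img, (0, 0), sigma)
    ssr = np.log(img) - np.log(illum)
    return cv2.normalize(ssr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def detect_obstacles(bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    enhanced = single_scale_retinex(gray)

    # Flood fill from a seed assumed to lie on the floor (bottom-centre pixel),
    # standing in for the scan-line seed-filling segmentation.
    h, w = enhanced.shape
    mask = np.zeros((h + 2, w + 2), np.uint8)
    filled = enhanced.copy()
    cv2.floodFill(filled, mask, seedPoint=(w // 2, h - 5), newVal=255,
                  loDiff=10, upDiff=10)
    floor = mask[1:-1, 1:-1] * 255        # nonzero where the fill reached: floor region

    # Everything outside the floor region is an obstacle candidate; Canny gives contours.
    obstacles = cv2.bitwise_and(enhanced, enhanced, mask=cv2.bitwise_not(floor))
    return cv2.Canny(obstacles, 50, 150)
```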

5.
Existing monitoring systems for elderly people living alone suffer from high algorithmic complexity, low monitoring efficiency, and poor protection of the person's everyday privacy. A monitoring system based on omnidirectional vision is therefore proposed. An omnidirectional vision sensor (ODVS) captures panoramic video of the elderly person; a motion history/energy image algorithm is designed for target tracking; according to the imaging characteristics of the ODVS, human-body models that vary with distance from the ODVS are used for posture recognition; a mapping between the home space and environmental elements is built to improve the robustness and reliability of behavior detection; and the person's activity level is obtained through ODVS calibration and human tracking. Experimental results show that the system is robust, runs in real time, and achieves high detection accuracy, meeting the needs of monitoring elderly people who live alone.
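The motion history/energy image idea can be sketched directly in NumPy: each new silhouette stamps the current timestamp into a history image, old entries expire, and the thresholded history is the motion energy image. The frame-differencing silhouettes, decay duration, and threshold below are assumptions.

```python
import cv2
import numpy as np

MHI_DURATION = 1.0    # seconds of motion history kept (assumed)
prev_gray, mhi = None, None

cap = cv2.VideoCapture(0)     # hypothetical camera index
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    t = cv2.getTickCount() / cv2.getTickFrequency()
    if prev_gray is None:
        prev_gray, mhi = gray, np.zeros(gray.shape, np.float32)
        continue

    # Silhouette of motion from simple frame differencing.
    _, motion = cv2.threshold(cv2.absdiff(gray, prev_gray), 30, 1, cv2.THRESH_BINARY)
    prev_gray = gray

    # Motion history image: stamp current time where motion occurs, expire old pixels.
    mhi = np.where(motion > 0, t, np.where(mhi < t - MHI_DURATION, 0, mhi))

    # Motion energy image: any pixel that moved within the last MHI_DURATION seconds.
    mei = (mhi > 0).astype(np.uint8) * 255
    cv2.imshow("motion energy", mei)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```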

6.
The use of an omnidirectional vision sensor (ODVS) and computer vision techniques for intelligent video surveillance of traffic violations at intersections is explored. The ODVS captures panoramic video of the whole intersection; a mixture-of-Gaussians model extracts vehicle foregrounds from the panoramic video; the CamShift algorithm tracks the moving vehicles; and, according to traffic regulations, violations are judged by comparing the states of the traffic lights, the lanes, and the vehicle motion. Experimental results show that the system automatically detects several kinds of violations, providing traffic-enforcement authorities with an intelligent detection tool.
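The detect-then-track part of this pipeline maps naturally onto OpenCV's background subtraction plus CamShift. A minimal sketch, assuming the initial bounding box comes from the largest foreground blob and tracking runs on a hue back-projection; the histogram settings, size threshold, and file name are assumptions, and the traffic-light/lane rule logic is omitted.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("intersection.avi")          # hypothetical panoramic video
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window, roi_hist = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    if track_window is None:
        # Detection phase: take the largest foreground blob as a vehicle candidate.
        fg = bg.apply(frame)
        contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            if w * h > 500:                          # assumed minimum vehicle size
                track_window = (x, y, w, h)
                roi = hsv[y:y + h, x:x + w]
                roi_hist = cv2.calcHist([roi], [0], None, [32], [0, 180])
                cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # Tracking phase: CamShift follows the vehicle via hue back-projection.
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        rot_box, track_window = cv2.CamShift(back_proj, track_window, term)
        pts = np.asarray(cv2.boxPoints(rot_box), dtype=np.int32)
        cv2.polylines(frame, [pts], True, (0, 255, 0), 2)

    cv2.imshow("vehicle tracking", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```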

7.
To improve the obstacle detection capability of indoor mobile robots, a monocular-vision detection scheme is proposed. The captured image is first converted to the hue-saturation-intensity (HSI) color space. Then, for separating targets from the background in indoor images, a small-target threshold selection method is proposed, which improves segmentation accuracy in this specific environment. Finally, scene matching and projection matching are combined to compute the change in the segmented target pixels and in their projection, so as to decide whether a target is an obstacle with height or merely a pattern on the floor. Experimental results demonstrate the effectiveness and feasibility of the scheme, which can provide good navigation information for small indoor mobile robots.

8.
With the progress of computer image-processing capability and technology, vision sensors are receiving more and more attention in mobile-robot navigation and obstacle recognition. The AdaBoost algorithm is applied to obstacle recognition for an intelligent wheelchair: on the Visual C++ 6.0 platform, AdaBoost is used to train a strong classifier for obstacle detection, which is then used to detect target obstacles, and a fuzzy neural network method is used to process the wheelchair's sonar information, visual...
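The training step amounts to boosting weak classifiers over labeled obstacle/non-obstacle feature vectors. The original work trains its strong classifier in Visual C++ 6.0; as a language-neutral illustration only, the sketch below does the equivalent with scikit-learn's AdaBoostClassifier on synthetic feature vectors (the feature choice, labels, and parameters are all assumptions).

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row is a feature vector (e.g. HOG-like) extracted from
# an image patch, labeled 1 for "obstacle" and 0 for "background".
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))
y = (X[:, :8].sum(axis=1) > 0).astype(int)       # synthetic labels for the demo

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# AdaBoost builds a strong classifier from many weak ones (decision stumps by default).
clf = AdaBoostClassifier(n_estimators=100, learning_rate=0.5, random_state=0)
clf.fit(X_train, y_train)
print("obstacle-vs-background accuracy:", clf.score(X_test, y_test))

# At run time, each candidate patch's feature vector is classified the same way.
print("is obstacle:", bool(clf.predict(X_test[:1])[0]))
```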

9.
Mobile-robot weapons are a concrete application of modern high technology in warfare, and their target-tracking systems have characteristics that distinguish them from those of other weapon systems. Taking machine vision as the theoretical basis, the image sequences acquired by the weapon system are denoised with the wavelet transform, and a method for detecting and tracking moving targets against complex backgrounds is proposed, providing a theoretical basis for the research and design of target-tracking systems for mobile-robot weapons.
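Wavelet denoising of each frame before detection can be sketched with PyWavelets: decompose, soft-threshold the detail coefficients, reconstruct. The wavelet family, decomposition level, and threshold rule below are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_denoise(gray: np.ndarray, wavelet: str = "db4", level: int = 2) -> np.ndarray:
    """Denoise one frame by soft-thresholding its 2-D wavelet detail coefficients."""
    coeffs = pywt.wavedec2(gray.astype(np.float32), wavelet, level=level)

    # Universal threshold estimated from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(gray.size))

    denoised = [coeffs[0]]                    # keep the approximation band untouched
    for cH, cV, cD in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD)))

    out = pywt.waverec2(denoised, wavelet)
    return np.clip(out[:gray.shape[0], :gray.shape[1]], 0, 255).astype(np.uint8)
```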

10.
张时进. 《信息与电脑》, 2023, (11): 195-197
Because existing robot obstacle-avoidance methods cannot return to the origin promptly after bypassing an obstacle, an obstacle-avoidance method for a mobile robot with an infrared monocular camera, based on deep reinforcement learning, is studied. In the neural network, convolution traverses the whole image region for feature learning and the pooling layer removes redundant feature information; images are fed into an obstacle detection network to generate a depth map of the avoidance scene, and the information collected by the infrared monocular camera and vision sensor is used for training to accomplish the avoidance task. Experimental results show that, in different driving environments, all three groups of mobile robots returned accurately to the origin (0, 0) after bypassing obstacles.

11.
A mobile platform mounted with an omnidirectional vision sensor (ODVS) can be used to monitor large areas and detect interesting events such as independently moving persons and vehicles. To avoid false alarms due to extraneous features, the image motion induced by the moving platform should be compensated. This paper describes a formulation and application of parametric egomotion compensation for an ODVS. Omni images give a 360° view of the surroundings but undergo considerable image distortion. To account for these distortions, the parametric planar motion model is integrated with the transformations into omni image space. Prior knowledge of approximate camera calibration and camera speed is integrated with the estimation process using a Bayesian approach. Iterative, coarse-to-fine, gradient-based estimation is used to correct the motion parameters for vibrations and other inaccuracies in prior knowledge. Experiments with a camera mounted on various types of mobile platforms demonstrate successful detection of moving persons and vehicles. Published online: 11 October 2004.
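OpenCV's ECC alignment is one readily available iterative, gradient-based way to estimate a parametric planar motion between consecutive frames and compensate it before differencing; it is offered here only as a rough analog of the paper's Bayesian, omni-aware formulation. The warp model, iteration counts, and differencing threshold are assumptions.

```python
import cv2
import numpy as np

def egomotion_compensated_diff(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Estimate an affine planar motion between frames with ECC and difference them."""
    warp = np.eye(2, 3, dtype=np.float32)                 # initial guess: identity
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-5)

    # Iterative, gradient-based (ECC) estimation of the affine warp between frames.
    _, warp = cv2.findTransformECC(curr_gray, prev_gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)

    # Warp the previous frame into the current frame's coordinates, then difference:
    # residual motion the parametric model cannot explain is a candidate moving object.
    h, w = curr_gray.shape
    prev_warped = cv2.warpAffine(prev_gray, warp, (w, h),
                                 flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(curr_gray, prev_warped)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return mask
```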

12.
Moving-target detection and tracking are active research topics in image processing. To study their value for mobile robots, a moving-target detection and tracking system for a hexapod robot is designed. A region-merging algorithm is proposed for non-rigid moving targets, which tend to be detected as several scattered regions; the moving background is compensated precisely through symmetric matching and adaptive outlier rejection, and moving targets are then detected accurately on the basis of background compensation. The application of the KCF (Kernel Correlation Filter) tracking algorithm on the hexapod platform is studied, and an adaptive tracking algorithm is designed so that the hexapod robot tracks the bearing of a moving target. The detection and tracking algorithms were deployed on the hexapod robot system; experiments show that the system detects and tracks moving targets accurately while the robot is walking.
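The tracking stage maps onto OpenCV's stock KCF tracker (shipped with opencv-contrib; the factory name differs slightly across versions). A minimal sketch, assuming the initial bounding box and the video file name come from the detection stage and are hypothetical.

```python
import cv2

cap = cv2.VideoCapture("hexapod_onboard.avi")      # hypothetical onboard video
ok, frame = cap.read()

# KCF tracker from opencv-contrib; older builds expose it as cv2.TrackerKCF_create,
# newer ones also under cv2.legacy.TrackerKCF_create.
tracker = cv2.TrackerKCF_create()
init_box = (200, 150, 80, 60)                      # assumed detector output (x, y, w, h)
tracker.init(frame, init_box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = map(int, box)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # The horizontal offset of the box centre from the image centre would drive
        # the robot's angular (bearing) tracking described above.
        bearing_error = (x + w / 2) - frame.shape[1] / 2
    cv2.imshow("KCF tracking", frame)
    if cv2.waitKey(1) == 27:
        break
cap.release()
```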

13.
Because it is difficult to track the nodes of a robot trajectory, the actual trajectory deviates from the desired one; a trajectory-tracking control method for an omnidirectional mobile robot based on visual images is therefore designed. A kinematic model of the omnidirectional mobile robot is built and used to derive a mathematical model of its trajectory. On this basis, the robot's motion images are segmented according to visual-image partition criteria, kinematic feature parameters are extracted by separating target nodes, and trajectory-node tracking is completed. Combining the node-tracking results, kinematic inequalities and error vectors are taken as the constraints of trajectory-tracking control, and a sliding-mode variable-structure controller is built to realize trajectory tracking of the omnidirectional mobile robot. Comparative experiments show that with the proposed method the angular-velocity and linear-velocity curves of the omnidirectional mobile robot fit the desired trajectory curves to better than 90%, meeting the trajectory-tracking control requirements.
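As a toy numerical sketch of the control idea only: for an omnidirectional robot modeled kinematically (velocity commands act directly on the pose, expressed here in the world frame), a sliding-mode-style law drives the tracking error onto a sliding surface with a smoothed switching term. The gains, reference trajectory, and saturation used to soften chattering are all assumptions, not the paper's design.

```python
import numpy as np

DT, LAM, K = 0.02, 2.0, 0.8          # sample time, surface slope, switching gain (assumed)

def reference(t: float) -> np.ndarray:
    """Assumed desired trajectory: a unit circle traversed at 0.5 rad/s."""
    return np.array([np.cos(0.5 * t), np.sin(0.5 * t), 0.5 * t])

def reference_dot(t: float) -> np.ndarray:
    return np.array([-0.5 * np.sin(0.5 * t), 0.5 * np.cos(0.5 * t), 0.5])

pose = np.array([1.2, -0.3, 0.0])    # x, y, heading of the omnidirectional robot
for step in range(1500):
    t = step * DT
    e = reference(t) - pose                      # tracking error
    s = e                                        # kinematic case: sliding surface s = e
    # Sliding-mode-style command: feedforward + proportional term + smoothed switching.
    cmd = reference_dot(t) + LAM * e + K * np.tanh(10.0 * s)
    pose = pose + DT * cmd                       # omnidirectional kinematics: pose_dot = cmd
print("final tracking error:", np.linalg.norm(reference(1500 * DT) - pose))
```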

14.
Optimal representative blocks are proposed for an efficient tracking of a moving object and it is verified experimentally by using a mobile robot with a pan‐tilt camera. The key idea comes from the fact that when the image size of a moving object is shrunk in an image frame according to the distance between the camera of mobile robot and the moving object, the tracking performance of a moving object can be improved by shrinking the size of representative blocks according to the object image size. Motion estimation using edge detection (ED) and block‐matching algorithm (BMA) are often used in the case of moving object tracking by vision sensors. However, these methods often miss the real‐time vision data since these schemes suffer from the heavy computational load. To overcome this problem and to improve the tracking performance, the optimal representative block that can reduce a lot of data to be computed is defined and optimized by changing the size of the representative block according to the size of object in the image frame. The proposed algorithm is verified experimentally by using a mobile robot with a two degree‐of‐freedom active camera. © 2004 Wiley Periodicals, Inc.
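The core trick, shrinking the matching block with the apparent object size, can be illustrated with plain template matching: the representative block is resampled in proportion to the tracked object's current image size before each search. Everything below (block sizes, scale definition, function name) is illustrative rather than the authors' formulation.

```python
import cv2

def match_representative_block(frame_gray, rep_block, object_scale: float):
    """Resize the representative block to the object's apparent scale, then match.

    object_scale ~ current object image size / size when the block was captured;
    it shrinks as the object moves away from the camera.
    """
    h = max(8, int(rep_block.shape[0] * object_scale))
    w = max(8, int(rep_block.shape[1] * object_scale))
    block = cv2.resize(rep_block, (w, h), interpolation=cv2.INTER_AREA)

    # A smaller block means proportionally less data to correlate per search.
    result = cv2.matchTemplate(frame_gray, block, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, (w, h), score
```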

15.
Objective: In moving-target detection with a rotating, scanning camera, the traditional linear model cannot handle the nonlinear transformation between images induced by the camera's rotational scanning motion, so image compensation is inaccurate; this introduces large errors into moving-target detection and causes false detections. To solve this problem, an image-compensation method for an area-array camera under rotational scanning is proposed, whose key feature is that background-motion compensation and nonlinear image-transformation compensation are performed simultaneously, enabling fast and reliable moving-target detection. Method: Images are first matched; a nonlinear model of the camera's rotational scanning is then established and converted into a linear estimation problem through a parameter-space transformation, and the Hough transform is used for fast, robust estimation of the model parameters. This resolves the nonlinear transformation between images acquired under rotational scanning and yields accurate image compensation, after which moving targets can be detected by methods such as frame differencing. Results: Experiments show that under rotational scanning the method compensates background motion and the nonlinear inter-image transformation simultaneously and removes most of the matching errors caused by parallax effects; in the experiments it runs at 50 frames/s, meeting real-time requirements. Conclusion: Compared with traditional linear-model-based image compensation, the method quickly and accurately handles the nonlinear inter-image transformation on top of background compensation for a rotating, scanning area-array camera, extracting moving targets better and showing practical value.

16.
A robust topological navigation strategy for an omnidirectional mobile robot using an omnidirectional camera is described. The navigation system is composed of on-line and off-line stages. During the off-line learning stage, the robot executes paths based on a motion model of its omnidirectional motion structure and records a set of ordered key images from the omnidirectional camera. From this sequence a topological map is built using a probabilistic technique and a loop-closure detection algorithm, which can deal with the perceptual aliasing problem in the mapping process. Each topological node provides a set of omnidirectional images characterized by geometrical affine- and scale-invariant keypoints, combined with a GPU implementation. Given a topological node as a target, the robot navigation mission is a concatenation of topological node subsets. In the on-line navigation stage, the robot hierarchically localizes itself to the most likely node through a robust probability-distribution global localization algorithm, and estimates the relative robot pose within a topological node with an effective solution to the classical five-point relative pose estimation algorithm. Then the robot is controlled by a vision-based control law adapted to omnidirectional cameras to follow the visual path. Experimental results carried out with a real robot in an indoor environment show the performance of the proposed method.
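The relative-pose step corresponds to the classical five-point algorithm, which OpenCV exposes through the essential matrix. A minimal sketch on matched keypoints, assuming for simplicity a calibrated perspective (not omnidirectional) camera model; the intrinsics and synthetic test data are assumptions.

```python
import cv2
import numpy as np

def relative_pose(kp1: np.ndarray, kp2: np.ndarray, K: np.ndarray):
    """Estimate rotation R and translation direction t between two views.

    kp1, kp2: Nx2 arrays of matched image points; K: 3x3 camera intrinsics.
    """
    # Five-point algorithm inside a RANSAC loop to reject bad matches.
    E, inliers = cv2.findEssentialMat(kp1, kp2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # The cheirality check selects the one physically valid (R, t) decomposition.
    _, R, t, _ = cv2.recoverPose(E, kp1, kp2, K, mask=inliers)
    return R, t          # t is known only up to scale from images alone

if __name__ == "__main__":
    # Synthetic check: random 3-D points seen from two poses separated by a small motion.
    K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
    X = np.random.uniform([-1, -1, 4], [1, 1, 8], size=(50, 3))
    R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.05], [0.0]]))
    t_true = np.array([0.2, 0.0, 0.0])
    p1 = X @ K.T
    p1 = p1[:, :2] / p1[:, 2:]
    X2 = X @ R_true.T + t_true
    p2 = X2 @ K.T
    p2 = p2[:, :2] / p2[:, 2:]
    R, t = relative_pose(p1, p2, K)
    print(np.round(R, 3), t.ravel())
```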

17.
A moving-target detection and extraction algorithm based on inter-frame background image matching is proposed for video sequences with a slowly moving background. An affine transformation model describes the motion of the slowly moving background, and its parameters are solved with an optical-flow-constraint-based method, achieving background matching between adjacent frames. The difference between the two background-matched frames is then used for target detection, with adaptive binarization distinguishing changed from unchanged regions. Finally, morphological and other image operations are applied as post-processing to extract the moving target. Experiments confirm that the algorithm extracts moving targets effectively when the background moves slowly.
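A sketch of that pipeline: sparse optical flow supplies point correspondences, an affine model is fitted to them (a RANSAC fit here, standing in for the paper's optical-flow-constraint solution), the previous frame is warped to match the background, and the compensated difference is binarized adaptively and cleaned up morphologically. Feature counts and thresholds are assumptions.

```python
import cv2
import numpy as np

def extract_moving_target(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    # 1. Point correspondences from sparse optical flow on good features.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=8)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]

    # 2. Affine background-motion model fitted to the correspondences.
    A, _ = cv2.estimateAffine2D(good0, good1, method=cv2.RANSAC, ransacReprojThreshold=3.0)

    # 3. Background matching: warp the previous frame into the current frame.
    h, w = curr_gray.shape
    prev_warped = cv2.warpAffine(prev_gray, A, (w, h))

    # 4. Compensated frame difference with adaptive (Otsu) binarization.
    diff = cv2.absdiff(curr_gray, prev_warped)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 5. Morphological post-processing to clean up and fill the target region.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```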

18.
《Advanced Robotics》2013,27(6):737-762
Latest advances in hardware technology and state-of-the-art of mobile robots and artificial intelligence research can be employed to develop autonomous and distributed monitoring systems. A mobile service robot requires the perception of its present position to co-exist with humans and support humans effectively in populated environments. To realize this, a robot needs to keep track of relevant changes in the environment. This paper proposes localization of a mobile robot using images recognized by distributed intelligent networked devices in intelligent space (ISpace) in order to achieve these goals. This scheme combines data from the observed position, using dead-reckoning sensors, and the estimated position, using images of moving objects, such as a walking human captured by a camera system, to determine the location of a mobile robot. The moving object is assumed to be a point-object and projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the ISpace. Using the a priori known path of a moving object and a perspective camera model, the geometric constraint equations that represent the relation between image frame coordinates for a moving object and the estimated robot's position are derived. The proposed method utilizes the error between the observed and estimated image coordinates to localize the mobile robot, and the Kalman filtering scheme is used for the estimation of the mobile robot location. The proposed approach is applied for a mobile robot in ISpace to show the reduction of uncertainty in determining the location of a mobile robot, and its performance is verified by computer simulation and experiment.
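The fusion step is a standard Kalman filter: dead reckoning drives the prediction, the camera-derived position drives the correction. A stripped-down planar sketch in NumPy, with the state reduced to (x, y) and all noise covariances assumed; it is not the ISpace formulation itself.

```python
import numpy as np

# State: planar robot position [x, y]; both noise covariances are assumed values.
Q = np.diag([0.01, 0.01])      # process noise of the dead-reckoning (odometry) model
R = np.diag([0.05, 0.05])      # measurement noise of the camera-based position estimate

x_est = np.zeros(2)            # initial position estimate
P = np.eye(2)                  # initial estimate covariance

def kalman_step(x_est, P, odom_delta, cam_measurement):
    """One predict/update cycle fusing odometry with a vision-based position fix."""
    # Predict: dead reckoning simply adds the measured displacement.
    x_pred = x_est + odom_delta
    P_pred = P + Q

    # Update: correct the prediction with the camera-derived position (H = I here).
    S = P_pred + R                          # innovation covariance
    K = P_pred @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (cam_measurement - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

# Example: the robot believes it moved 0.10 m forward; the camera system disagrees slightly.
x_est, P = kalman_step(x_est, P, odom_delta=np.array([0.10, 0.0]),
                       cam_measurement=np.array([0.08, 0.01]))
print("fused position:", x_est, "covariance:\n", P)
```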

19.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot and also the independent motion that might be present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera‐robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects of the scene. In this paper, we introduce the analysis of the intrinsic features of the omnidirectional motion fields, in combination with gyroscopical information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.
