20 similar records found (search time: 31 ms)
1.
Julio C. Sosa Jose A. Boluda Fernando Pardo Rocío Gómez-Fabela 《Journal of Real-Time Image Processing》2007,2(4):259-270
Optical flow computation has been extensively used for motion estimation of objects in image sequences. Most optical flow techniques are computationally intensive due to the large amount of data involved. A new change-based data flow pipelined architecture has been developed implementing the Horn and Schunck smoothness constraint; pixels of the image sequence that change significantly fire the execution of the operations related to the image processing algorithm. This strategy reduces the data and, combined with the custom hardware implemented, achieves a significant optical flow computation speed-up with no loss of accuracy. This paper presents the basis of the change-driven data flow image processing strategy, as well as the implementation of custom hardware developed using an Altera Stratix PCI development board.
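As an illustration of the change-driven idea, the sketch below gates a classical Horn–Schunck iteration by a simple inter-frame change mask; the threshold, parameters and function names are assumptions for a software sketch, not the paper's FPGA data-flow design.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck_change_driven(I0, I1, alpha=10.0, tau=8.0, n_iter=50):
    """Horn-Schunck optical flow updated only where |I1 - I0| > tau
    (a software sketch of the change-driven strategy)."""
    I0, I1 = I0.astype(float), I1.astype(float)
    Ix = np.gradient(I0, axis=1)          # spatial derivatives
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0                          # temporal derivative
    changed = np.abs(It) > tau            # pixels that "fire" an update
    u = np.zeros_like(I0)
    v = np.zeros_like(I0)
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0
    for _ in range(n_iter):
        u_avg = convolve(u, avg, mode='nearest')
        v_avg = convolve(v, avg, mode='nearest')
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        # Classic Horn-Schunck update, applied only at changed pixels.
        u = np.where(changed, u_avg - Ix * num / den, u)
        v = np.where(changed, v_avg - Iy * num / den, v)
    return u, v
```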
Julio C. Sosa received the degree in electronic engineering in 1997 from the Instituto Tecnológico de Lázaro Cárdenas, México, and the M.Sc. degree in electrical engineering in 2000 from the Centro de Investigación y de Estudios Avanzados del I.P.N., México, and is a Ph.D. candidate at the University of Valencia, Spain. He is currently an associate professor in the Postgraduate Department of the Escuela Superior de Cómputo—I.P.N., México. His research interests include hardware architectures, artificial intelligence and microelectronics. Jose A. Boluda was born in Xàtiva (Spain) in 1969. He graduated in physics (1992) and received his Ph.D. (2000) in physics, both at the University of Valencia. From 1993, he was with the Electronics and Computer Science Department of the University of Valencia, Spain, where he collaborated in several projects related to ASIC design and image processing. He has been a visiting researcher with the Department of Electrical Engineering at the University of Virginia, USA, and the Department of Applied Informatics at the University of Macedonia, Greece. He is currently Titular Professor in the Department of Informatics at the University of Valencia. His research interests include reconfigurable systems, VHDL hardware design, programmable logic synthesis and sensor design. Fernando Pardo received the M.S. degree in physics from the University of Valencia, Valencia, Spain, in 1991, and the Ph.D. in computer engineering from the University of Valencia in 1997. From 1991 to 1993, he was with the Electronics and Computer Science Department of the University of Valencia, Spain, where he collaborated in several research projects. In 1994 he was with the Integrated Laboratory for Advanced Robotics at the University of Genoa, Italy, where he worked on space-variant image processing. In 1994 he joined IMEC (Interuniversity Micro-Electronics Centre), Belgium, where he worked on projects related to CMOS space-variant image sensors. In 1995 he joined the University of Valencia, Spain, where he is currently Associate Professor and Head of the Computer Engineering Department. He is currently leading several projects regarding architectures for high-speed image processing and bio-inspired image sensors. Rocío Gómez-Fabela was born in México City in 1979. She received the Computer Engineering degree in 2001 from the Escuela Superior de Cómputo, México. She is currently studying towards the Ph.D. in the Department of Informatics, University of Valencia, Spain. Her current research interests are soft computing, reconfigurable systems and VHDL hardware design.
2.
Design and Implementation of a Vision-Based Augmented Reality System — Cited by 7 in total (0 self-citations, 7 by others)
Addressing the problem of motion tracking and registration, which has so far received little attention in the augmented reality field, this paper proposes an algorithm that estimates motion parameters from the optical flow of four colored marker points and determines the relative pose between the moving object and the camera by combining rigid-body motion properties with a perspective projection model. The algorithm is applied to an augmented reality system built around an optical see-through head-mounted display. The system is simple in structure, lightweight, practical, and easy to implement. In general, only four planar markers are needed to achieve 3D tracking and registration of a moving object; the working range is large, so the method can even be applied to outdoor augmented reality systems; and the numerical solution is a linear process with small error, satisfying the high-precision 3D registration requirements of augmented reality systems.
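For illustration only, a standard four-point planar pose solve of the kind this registration problem involves can be sketched with OpenCV; the marker layout, image points and camera matrix below are hypothetical, and this is not the authors' optical-flow-based algorithm.

```python
import numpy as np
import cv2

# Hypothetical planar positions (metres) of four coloured markers and their
# detected image locations; K is an assumed pinhole camera matrix.
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]],
                      dtype=np.float64)
image_pts = np.array([[320, 240], [400, 238], [398, 318], [322, 320]],
                     dtype=np.float64)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

# Relative pose (rotation vector, translation) of the marker plane
# with respect to the camera.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
```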
3.
Chin-Hung Teng Shang-Hong Lai Yung-Sheng Chen Wen-Hsing Hsu 《Computer Vision and Image Understanding》2005,97(3):315-346
In this paper, we present a very accurate algorithm for computing optical flow with non-uniform brightness variations. The proposed algorithm is based on a generalized dynamic image model (GDIM) in conjunction with a regularization framework to cope with the problem of non-uniform brightness variations. To alleviate flow constraint errors due to image aliasing and noise, we employ a reweighted least-squares method to suppress unreliable flow constraints, thus leading to robust estimation of optical flow. In addition, a dynamic smoothness adjustment scheme is proposed to efficiently suppress the smoothness constraint in the vicinity of the motion and brightness variation discontinuities, thereby preserving motion boundaries. We also employ a constraint refinement scheme, which aims at reducing the approximation errors in the first-order differential flow equation, to refine the optical flow estimation especially for large image motions. To efficiently minimize the resulting energy function for optical flow computation, we utilize an incomplete Cholesky preconditioned conjugate gradient algorithm to solve the large linear system. Experimental results on some synthetic and real image sequences show that the proposed algorithm compares favorably to most existing techniques reported in the literature in terms of accuracy in optical flow computation with 100% density.
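A generalized dynamic image model of this kind is commonly written with a multiplier field and an offset field relaxing brightness constancy; the notation below is an assumed sketch, not copied from the paper.

```latex
% GDIM: brightness constancy relaxed by a multiplier m(x) and an offset c(x).
I(\mathbf{x}+\mathbf{v},\, t+1) = m(\mathbf{x})\, I(\mathbf{x}, t) + c(\mathbf{x})
% A first-order Taylor expansion of the left-hand side yields the generalized
% flow constraint replacing the classical  I_x u + I_y v + I_t = 0 :
I_x u + I_y v + I_t + \bigl(1 - m(\mathbf{x})\bigr)\, I(\mathbf{x}, t) - c(\mathbf{x}) \approx 0
```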
4.
5.
H. Luo 《Microsystem Technologies》2006,12(4):324-329
This paper focuses on tracking, reconstruction and motion estimation of a well-defined MEMS optical switch from a microscopic view. For out-of-view reconstruction, a homography capable of transforming feature points and feature lines between a microscopic image and a CAD model of the switch is implemented. The homography between two sequential microscopic images is decomposed and factorized for motion estimation. Optical flow has also been explored to provide rough estimations of the rotation centre and angle. The paper also illustrates motion parameter optimization principles to deal with the uncertainty inherent in the micro-world. After non-linear optimization, the estimation accuracy for rotation angle and rotation centre can reach 0.06° and pixel level, respectively.
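The homography estimation and decomposition step between two sequential views can be sketched with OpenCV as below; the point correspondences and intrinsic matrix are hypothetical placeholders, and the abstract's CAD-model matching and non-linear refinement are not shown.

```python
import numpy as np
import cv2

# Hypothetical correspondences between two sequential microscope images
# (pixel coordinates) and an assumed intrinsic matrix K.
pts_prev = np.array([[102, 87], [210, 90], [205, 190], [98, 185]], np.float64)
pts_curr = np.array([[110, 80], [218, 85], [211, 184], [105, 178]], np.float64)
K = np.array([[1200, 0, 160], [0, 1200, 120], [0, 0, 1]], np.float64)

# Homography between the two views, then its decomposition into candidate
# rotations and translations -- the factorization step for motion estimation.
H, _ = cv2.findHomography(pts_prev, pts_curr)
n_sol, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
```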
6.
The computation of optical flow within an image sequence is one of the most widely used techniques in computer vision. In this paper, we present a new approach to estimate the velocity field for motion-compensated compression. It is derived by a nonlinear system using the direct temporal integral of the brightness conservation constraint equation, or Displaced Frame Difference (DFD) equation. To solve the nonlinear system of equations, an adaptive framework is used, which employs velocity field modeling, a nonlinear least-squares model, Gauss–Newton and Levenberg–Marquardt techniques, and an algorithm of progressive relaxation of the over-constraint. The three criteria by which successful motion-compensated compression is judged are (1) the fidelity with which the estimated optical flow matches the ground truth motion, (2) the relative absence of artifacts and “dirty window” effects for frame interpolation, and (3) the cost to code the motion vector field. We base our estimated flow field on a single minimized target function, which leads to motion-compensated predictions without incurring penalties in any of these three criteria. In particular, we compare our proposed algorithm results with those from Block-Matching Algorithms (BMA), and show that with nearly the same number of displacement vectors per fixed block size, the performance of our algorithm exceeds that of BMA on all three of the above criteria. We also test the algorithm on synthetic and natural image sequences, and use it to demonstrate applications for motion-compensated compression.
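The displaced frame difference and the Gauss–Newton/Levenberg–Marquardt minimization it feeds can be sketched in generic notation as follows; the symbols are assumptions rather than the paper's exact formulation.

```latex
% Displaced frame difference (DFD): the integral form of brightness conservation.
\mathrm{DFD}(\mathbf{x};\mathbf{v}) = I(\mathbf{x}+\mathbf{v},\, t+1) - I(\mathbf{x},\, t)
% Nonlinear least-squares objective over a modeled velocity field v(x; \theta):
E(\theta) = \sum_{\mathbf{x}} \mathrm{DFD}\bigl(\mathbf{x};\, \mathbf{v}(\mathbf{x};\theta)\bigr)^{2}
% Levenberg-Marquardt step with residual vector r, Jacobian J = \partial r / \partial \theta,
% and damping \lambda (\lambda = 0 reduces to Gauss-Newton):
\theta \leftarrow \theta - \bigl(J^{\top} J + \lambda\, \mathrm{diag}(J^{\top} J)\bigr)^{-1} J^{\top} r
```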
7.
P.-C. Chung C.-L. Huang E.-L. Chen 《Pattern recognition》2007,40(3):1066-1077
The motion vector is one significant feature in moving object segmentation. However, the motion vector in this application is required to represent the actual motion displacement, rather than regions of visually significant similarity. In this paper, region-based selective optical flow back-projection (RSOFB), which back-projects the optical flows in a region to restore the region's motion vector from gradient-based optical flows, is proposed to obtain genuine motion displacement. The back-projection is performed by minimizing the projection mean square errors of the motion vector on gradient directions. As optical flows of various magnitudes and directions provide various degrees of reliability in the genuine motion restoration, the optical flows to be used in the RSOFB are optimally selected based on their sensitivity to noise and their tendency to cause motion estimation errors. In this paper a deterministic solution is also derived for performing the minimization and obtaining the genuine motion magnitude and motion direction.
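A minimal numerical sketch of this idea restores one motion vector per region by least-squares back-projection of per-pixel flows onto their gradient directions; the variable names and the crude reliability selection are simplifying assumptions, not the paper's optimal selection rule.

```python
import numpy as np

def region_motion_vector(flow, grad, mask, min_grad=1.0):
    """Solve for a single region motion vector v minimizing the projection
    mean-square error on gradient directions (simplified RSOFB sketch).
    flow : (H, W, 2) gradient-based optical flow (u, v)
    grad : (H, W, 2) spatial image gradients (Ix, Iy)
    mask : (H, W) boolean region mask"""
    g = grad[mask]                       # (N, 2) gradients in the region
    f = flow[mask]                       # (N, 2) per-pixel flows
    mag = np.linalg.norm(g, axis=1)
    keep = mag > min_grad                # crude reliability selection
    n = g[keep] / mag[keep, None]        # unit gradient directions
    # Normal component of each selected flow along its gradient direction.
    p = np.sum(n * f[keep], axis=1)
    # Normal equations:  (sum n n^T) v = sum n p  ->  closed-form 2x2 solve.
    A = n.T @ n
    b = n.T @ p
    return np.linalg.solve(A, b)         # region motion vector (vx, vy)
```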
8.
Masayuki Fukumoto Takehito Ogata Joo Kooi Tan Hyoung Seop Kim Seiji Ishikawa 《Artificial Life and Robotics》2008,13(1):326-330
In this paper, we describe a technique for representing and recognizing human motions using directional motion history images. A motion history image is a single human motion image produced by superposing binarized successive motion image frames so that older frames have smaller weights. It has, however, the difficulty that the latest motion overwrites older motions, resulting in inexact motion representation and therefore incorrect recognition. To overcome this difficulty, we propose directional motion history images, which describe a motion with respect to four directions of movement, i.e. up, down, right and left, employing optical flow. The directional motion history images are thus a set of four motion history images defined on four optical flow images. Experimental results show that the proposed technique achieves better performance in the recognition of human motions than the existing motion history images.
This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008
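One possible NumPy sketch of directional motion history images gates the usual MHI update by the sign of the optical-flow components; the thresholds, decay and array layout are illustrative assumptions.

```python
import numpy as np

def update_directional_mhi(mhi, flow, tau=255, delta=16, thr=0.5):
    """Update four motion history images (up, down, left, right) from a
    dense optical-flow field -- a sketch of the directional-MHI idea.
    mhi  : (4, H, W) current history images
    flow : (H, W, 2) optical flow (u = x component, v = y component)"""
    u, v = flow[..., 0], flow[..., 1]
    masks = [v < -thr,    # up    (image y axis points down)
             v > thr,     # down
             u < -thr,    # left
             u > thr]     # right
    for k, m in enumerate(masks):
        # Standard MHI recurrence: set to tau where motion occurs in this
        # direction, otherwise decay by delta so older motion fades out.
        mhi[k] = np.where(m, tau, np.maximum(mhi[k] - delta, 0))
    return mhi
```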
9.
10.
Yasushi Yagi Wataru Nishi Nels Benson Masahiko Yachida 《Machine Vision and Applications》2003,14(2):112-120
Described here is a method for estimating the rolling and swaying motions of a mobile robot using optical flow. We have proposed an image sensor with a hyperboloidal mirror, named HyperOmni Vision, for the vision-based navigation of a mobile robot. The radial component of optical flow in HyperOmni Vision has a periodic characteristic, and the circumferential component of optical flow has a symmetric characteristic. The proposed method makes use of these characteristics to robustly estimate the rolling and swaying motion of the mobile robot.
Correspondence to: Y. Yagi e-mail: y-yagi@sys.es.osaka-u.ac.jp
11.
Crowd disturbance behavior poses a great threat to public safety and is one of the key targets of intelligent video surveillance. To address the low computational efficiency and low detection accuracy of existing crowd-disturbance detection algorithms, a behavior detection algorithm based on analyzing changes in group motion patterns is proposed. The method extracts optical-flow features of foreground pixels as the basis for behavior analysis, and uses K-means clustering and a Bayesian criterion to divide the people in the scene into groups. On this basis, it analyzes the changes in the motion patterns of all groups in the scene, constructs a maximum change factor, computes the variation of this factor, and detects crowd disturbance behavior. Experimental results show that the proposed method achieves low false-alarm and missed-alarm rates with a short average detection time.
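A rough sketch of the grouping step is shown below, clustering foreground optical-flow features with K-means; the 4-D feature layout, the number of clusters and the use of scikit-learn are assumptions, and the Bayesian criterion and maximum-change-factor analysis from the abstract are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_foreground_flow(points, flows, k=3):
    """Cluster foreground pixels into groups using position + optical flow.
    points : (N, 2) pixel coordinates of foreground pixels
    flows  : (N, 2) optical-flow vectors at those pixels
    Returns per-pixel group labels and each group's mean flow, which a
    change-analysis stage could track over time."""
    features = np.hstack([points, flows])          # simple 4-D feature
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    mean_flow = np.array([flows[labels == g].mean(axis=0) for g in range(k)])
    return labels, mean_flow
```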
12.
When H.263-encoded video streams are transmitted over the Internet, they are easily affected by channel errors and lose data. Since data loss affects not only the current frame but also propagates to subsequent decoded frames, causing severe degradation of image quality, measures must be taken to eliminate this effect. The most commonly used error concealment algorithms are temporal concealment algorithms, which use a reference frame to recover the damaged image data of the current frame but are computationally complex. This paper therefore proposes a temporal concealment algorithm based on the block-matching principle, and replaces full search with a three-step search to reduce the computational complexity. Simulation results show that the algorithm obtains images of good quality within a very short processing time, and can therefore meet the requirements of real-time applications such as video conferencing.
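A minimal sketch of the three-step block-matching search used for this kind of temporal concealment follows; the block size, SAD cost and boundary handling are assumptions.

```python
import numpy as np

def three_step_search(ref, cur_block, y0, x0, step=4):
    """Find the displacement in the reference frame that best matches a
    block of the current frame using the classic three-step search
    instead of a full search (a sketch of the concealment matching step)."""
    B = cur_block.shape[0]
    H, W = ref.shape

    def sad(dy, dx):
        y, x = y0 + dy, x0 + dx
        if y < 0 or x < 0 or y + B > H or x + B > W:
            return np.inf
        return np.abs(ref[y:y + B, x:x + B].astype(float) - cur_block).sum()

    best = (0, 0)
    while step >= 1:
        # Evaluate the centre and its 8 neighbours at the current step size.
        candidates = [(best[0] + dy, best[1] + dx)
                      for dy in (-step, 0, step) for dx in (-step, 0, step)]
        best = min(candidates, key=lambda d: sad(*d))
        step //= 2
    return best          # (dy, dx) motion vector used for concealment
```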
13.
Yasuaki Sakai Makoto Miyoshi Joo Kooi Tan Seiji Ishikawa 《Artificial Life and Robotics》2008,13(1):302-305
This paper describes a technique for extracting moving objects from a video image sequence taken by a slowly moving camera as well as by a fixed camera. The background subtraction method is effective for extracting moving objects from a video, but in the mobile-camera case the latest background image should be employed for the subtraction, so as not to be influenced by changes in light intensity. A temporal median technique is proposed in this paper which detects the background at every moment. The camera motion is estimated using a local correlation map, and the temporal median filter is applied to the common image area among a set of successive image frames to extract the background. The technique was applied to video images obtained at a junction from a hand-held camera and to those taken at a pedestrian crossing by a camera fixed in a car, and it successfully detected pedestrians.
This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008
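A compact sketch of the temporal-median background model is given below; the camera-motion alignment via a local correlation map is assumed to have been done beforehand, and the threshold is illustrative.

```python
import numpy as np

def temporal_median_foreground(aligned_frames, threshold=30):
    """Estimate the background as the per-pixel temporal median of a set
    of motion-aligned frames, then subtract it from the newest frame.
    aligned_frames : (T, H, W) grayscale frames warped to a common view
    Returns the foreground (moving-object) mask of the most recent frame."""
    background = np.median(aligned_frames, axis=0)     # robust to passers-by
    diff = np.abs(aligned_frames[-1].astype(float) - background)
    return diff > threshold
```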
14.
Research on a Real-Time Moving Object Detection Method Based on Optical Flow — Cited by 6 in total (0 self-citations, 6 by others)
Real-time detection of moving objects is one of the key technologies for automatic target detection, recognition and tracking. Based on an analysis of the constraint conditions of the optical flow model, this paper proposes a robust multi-resolution optical flow estimation method and discusses several practical issues in applying optical flow to object detection. Experimental results show that the algorithm has strong robustness and adaptability.
15.
In an infrared surveillance system (which must detect remote sources and thus has a very low resolution) in an aerospace environment, the estimation of the cloudy-sky velocity should lower the false alarm rate by discriminating the motion between various moving shapes by means of a background velocity map. The optical flow constraint equation, based on a Taylor expansion of the intensity function, is often used to estimate the motion for each pixel. One of the main problems in motion estimation is that, for one pixel, the real velocity cannot be found because of the aperture problem. Another kinematic estimation method is based on a matched filter [generalized Hough transform (GHT)]: it gives a global velocity estimation for a set of pixels. On the one hand we obtain a local velocity estimation for each pixel with little credibility, because the optical flow is very sensitive to noise; on the other hand, we obtain a robust global kinematic estimation, the same for all selected pixels. This paper aims to adapt and improve the GHT for our typical application, in which one must discern the global movement of objects (clouds), whatever their form may be (clouds with hazy edges or distorted shapes, or even clouds that have very little structure). We propose an improvement of the GHT algorithm by segmenting images with polar constraints on spatial gradients. One pixel, at time t, is matched with another one at time t + T only if the direction and modulus of the gradient are similar. This technique, which is very efficient, sharpens the peak and improves the motion resolution. Each of these estimations is calculated within windows belonging to the image, these windows being selected by means of an entropy criterion. The kinematic vector is computed accurately by means of the optical flow constraint equation applied to the displaced window. We show that, for small displacements, the optical flow constraint equation sharpens the results of the GHT. Thus a semi-dense velocity field is obtained for cloud edges. A velocity map computed on real sequences with these methods is shown. In this way, a kinematic parameter discriminates between a target and the cloudy background.
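A simplified sketch of the gradient-constrained voting step is shown below; the accumulator size, similarity thresholds and edge-pixel selection are assumptions, and the entropy-based window selection and optical-flow refinement are not included.

```python
import numpy as np

def ght_displacement(pts0, grad0, pts1, grad1, max_disp=8,
                     ang_tol=0.2, mag_tol=0.2):
    """Vote for a single global displacement between two edge-pixel sets,
    matching pixels only when gradient direction and modulus are similar
    (a sketch of the generalized-Hough-transform kinematic estimator).
    pts*  : (N, 2) integer pixel coordinates (y, x) of edge pixels
    grad* : (N, 2) spatial gradients (Iy, Ix) at those pixels"""
    size = 2 * max_disp + 1
    acc = np.zeros((size, size), int)
    ang0 = np.arctan2(grad0[:, 0], grad0[:, 1])
    ang1 = np.arctan2(grad1[:, 0], grad1[:, 1])
    mag0 = np.linalg.norm(grad0, axis=1)
    mag1 = np.linalg.norm(grad1, axis=1)
    for i in range(len(pts0)):
        for j in range(len(pts1)):
            dy, dx = pts1[j] - pts0[i]
            if abs(dy) > max_disp or abs(dx) > max_disp:
                continue
            # Polar constraints: similar gradient direction and modulus.
            if (abs(np.angle(np.exp(1j * (ang1[j] - ang0[i])))) < ang_tol and
                    abs(mag1[j] - mag0[i]) < mag_tol * (mag0[i] + 1e-6)):
                acc[dy + max_disp, dx + max_disp] += 1
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    return peak[0] - max_disp, peak[1] - max_disp      # global (dy, dx)
```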
16.
A mobile platform mounted with an omnidirectional vision sensor (ODVS) can be used to monitor large areas and detect interesting events such as independently moving persons and vehicles. To avoid false alarms due to extraneous features, the image motion induced by the moving platform should be compensated. This paper describes a formulation and application of parametric egomotion compensation for an ODVS. Omni images give a 360° view of the surroundings but undergo considerable image distortion. To account for these distortions, the parametric planar motion model is integrated with the transformations into omni image space. Prior knowledge of approximate camera calibration and camera speed is integrated with the estimation process using a Bayesian approach. Iterative, coarse-to-fine, gradient-based estimation is used to correct the motion parameters for vibrations and other inaccuracies in prior knowledge. Experiments with a camera mounted on various types of mobile platforms demonstrate successful detection of moving persons and vehicles.
Published online: 11 October 2004
17.
Humaira Nisar 《Pattern recognition》2009,42(3):475-61
A novel, computationally efficient and robust scheme for multiple initial point prediction is proposed in this paper. A combination of spatial and temporal predictors is used for initial motion vector prediction, determination of the magnitude and direction of motion, and search pattern selection. Initially, three predictors from the spatio-temporal neighboring blocks are selected. If all these predictors point to the same quadrant, then a simple search pattern based on the direction and magnitude of the predicted motion vector is selected. However, if the predictors belong to different quadrants, then we start the search from multiple initial points to get a clear idea of the location of the minimum point; in this case multiple rood search patterns are selected. We have also defined a local minimum elimination criterion to avoid being trapped in a local minimum. The predictive search center is closer to the global minimum and thus decreases the effect of the monotonic error surface assumption and its impact on the motion field. Its additional advantage is that it moves the search closer to the global minimum and hence increases the computation speed. Further computational speed-up is obtained by applying a zero-motion threshold to no-motion blocks. The image quality measured in terms of PSNR also shows good results.
18.
A combined 2D, 3D approach is presented that allows for robust tracking of moving people and recognition of actions. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. Heading-guided recognition (HGR) is proposed as an efficient method for adaptive classification of activity. The HGR approach is demonstrated using “motion history images” that are then recognized via a mixture-of-Gaussians classifier. The system is tested in recognizing various dynamic human outdoor activities: running, walking, roller blading, and cycling. In addition, experiments with real and synthetic data sets are used to evaluate stability of the trajectory estimator with respect to noise.
19.
To better exploit the temporal features of the input video and improve the accuracy of abnormal-behavior detection, a network branch built around a 3D autoencoder is used to encode and decode the spatio-temporal information of the video, and a temporal branch with an improved optical-flow fusion strategy is proposed to provide additional temporal information. The results of the two branches are fused and the reconstruction error is computed, on which basis abnormal behavior is judged. To address the shortcomings of current pixel-level evaluation metrics, an improved pixel-level detection metric is proposed. Results show that the fused results are better than those of each individual branch...
20.
The presence of a moving camera makes moving object detection against a complex background even more difficult. Based on the basic fact that the target and the background in a scene have different motions and that any scene can be divided into different motion regions, a new moving object detection algorithm based on an RBF neural network is proposed. After motion compensation, the optical flow between the reference frame and the compensated current frame is computed and combined with the current pixel coordinates and gray value to form a five-dimensional feature vector used as the input to the RBF network. The RBF network is trained by minimizing a loss function derived from Bayesian theory and energy minimization theory, and a learning vector quantization method adjusts the network centers; after convergence, the network output is the moving object region. Experimental results demonstrate the effectiveness of the algorithm.
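A rough sketch of the per-pixel feature construction and an RBF forward pass for this kind of detector is shown below; the Gaussian RBF form, all names and the thresholding step are assumptions, and the Bayesian/energy-derived loss and LVQ centre update described in the abstract are not reproduced.

```python
import numpy as np

def pixel_features(coords, gray, flow):
    """Build the five-dimensional per-pixel feature: (x, y, gray, u, v).
    coords : (N, 2) pixel coordinates, gray : (N,), flow : (N, 2)"""
    return np.hstack([coords, gray[:, None], flow])

def rbf_forward(features, centers, widths, weights):
    """Gaussian RBF network output per pixel; thresholding these scores
    after training would yield the moving-object region."""
    # Squared distances between every feature vector and every RBF centre.
    d2 = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * widths[None, :] ** 2))   # (N, n_centers)
    return phi @ weights                               # (N,) scores
```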