Similar Literature
20 similar documents found.
1.
One of the major areas in research on dynamic scene analysis is recovering 3-D motion and structure from optical flow information. Two problems which may arise due to the presence of noise in the flow field are examined. First, motion parameters of the sensor or a rigidly moving object may be extremely difficult to estimate because there may exist a large set of significantly incorrect solutions which induce flow fields similar to the correct one. The second problem is in the decomposition of the environment into independently moving objects. Two such objects may induce optical flows which are compatible with the same motion parameters, and hence, there is no way to refute the hypothesis that these flows are generated by one rigid object. These ambiguities are inherent in the sense that they are algorithm-independent. Using a mathematical analysis, situations where these problems are likely to arise are characterized. A few examples demonstrate the conclusions. Constraints and parameters which can be recovered even in ambiguous situations are presented.
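As a numerical illustration of this kind of ambiguity (not part of the paper itself), the sketch below computes the instantaneous image motion field induced by a rigid camera motion under perspective projection, using Longuet-Higgins/Prazdny-style equations with one assumed sign convention, and compares the fields produced by two different motions over a narrow field of view. The focal length, depth, and parameter values are illustrative assumptions; for a small field of view a lateral translation and a rotation about the orthogonal axis produce nearly identical flow, which is exactly the ambiguity discussed above.

import numpy as np

def rigid_motion_field(x, y, Z, T, w, f=1.0):
    # Image velocity (u, v) at image points (x, y) with depth Z, for camera
    # translation T = (Tx, Ty, Tz) and rotation w = (wx, wy, wz).
    # One common sign convention is assumed.
    Tx, Ty, Tz = T
    wx, wy, wz = w
    u = (Tz * x - Tx * f) / Z + wx * x * y / f - wy * (f + x**2 / f) + wz * y
    v = (Tz * y - Ty * f) / Z + wx * (f + y**2 / f) - wy * x * y / f - wz * x
    return u, v

# Narrow field of view: lateral translation vs. rotation about the Y axis.
x, y = np.meshgrid(np.linspace(-0.1, 0.1, 20), np.linspace(-0.1, 0.1, 20))
u1, v1 = rigid_motion_field(x, y, Z=5.0, T=(1.0, 0.0, 0.0), w=(0.0, 0.0, 0.0))
u2, v2 = rigid_motion_field(x, y, Z=5.0, T=(0.0, 0.0, 0.0), w=(0.0, 0.2, 0.0))
rms = np.sqrt(np.mean((u1 - u2)**2 + (v1 - v2)**2))
print("RMS difference between the two induced flow fields:", rms)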

2.
We present an algorithm for identifying and tracking independently moving rigid objects from optical flow. Some previous attempts at segmentation via optical flow have focused on finding discontinuities in the flow field. While discontinuities do indicate a change in scene depth, they do not in general signal a boundary between two separate objects. The proposed method uses the fact that each independently moving object has a unique epipolar constraint associated with its motion. Thus motion discontinuities due to self-occlusion can be distinguished from those due to separate objects. The use of epipolar geometry allows for the determination of individual motion parameters for each object as well as the recovery of relative depth for each point on the object. The algorithm assumes an affine camera where perspective effects are limited to changes in overall scale. No camera calibration parameters are required. A Kalman filter based approach is used for tracking motion parameters over time.
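The tracking step mentioned in the last sentence can be illustrated with a generic linear Kalman filter over a per-object parameter vector with a constant-velocity state model. This is a minimal sketch under assumed state layout and noise levels, not the filter actually used in the paper.

import numpy as np

class MotionParameterTracker:
    # Track a parameter vector p (e.g., an object's motion parameters) with
    # a constant-velocity state x = [p, dp/dt] and direct observation of p.
    def __init__(self, dim, q=1e-3, r=1e-2, dt=1.0):
        self.F = np.block([[np.eye(dim), dt * np.eye(dim)],
                           [np.zeros((dim, dim)), np.eye(dim)]])
        self.H = np.hstack([np.eye(dim), np.zeros((dim, dim))])
        self.Q = q * np.eye(2 * dim)   # process noise (assumed)
        self.R = r * np.eye(dim)       # measurement noise (assumed)
        self.x = np.zeros(2 * dim)
        self.P = np.eye(2 * dim)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the motion parameters z estimated from the current frame.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.H @ self.x         # filtered parameter estimate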

3.
In this paper a new approach to motion analysis from stereo image sequences using a unified temporal and spatial optical flow field (UOFF) is reported. Based on a four-frame rectangular model and the associated six UOFF field quantities, a set of equations is derived from which both position and velocity can be determined. It does not require feature extraction and correspondence establishment, which are known to be difficult and for which only partial solutions suitable for simplistic situations have been developed. Furthermore, it is capable of detecting multiple moving objects even when partial occlusion occurs, and is potentially suitable for nonrigid motion analysis. Unlike existing techniques for motion analysis from stereo imagery, the motion recovered by this new approach covers a whole continuous field instead of only a set of features. It is a purely optical flow approach. Two experiments are presented to demonstrate the feasibility of the approach.

4.
We present an integrated method to match multiple features, including points, regions, and lines, in two perspective images, and simultaneously segment them such that all features in each segment have the same 3D motion. The method uses a local affine (first-order) approximation of the displacement field under the assumption of locally rigid motion. Each distinct motion is represented in the image plane by a distinct set of values for six displacement parameters. To compute the values of these parameters, the 6D space is split into two 3D spaces, and each is exhaustively searched coarse-to-fine. This yields two results simultaneously: correspondences between features and segmentation of the features into subsets corresponding to locally rigid patches of moving objects. Since matching is based on the 2D approximation of 3D motion, problems due to motion or object boundaries and occlusion can be avoided. Large motion is also handled, unlike in methods based on the flow field. Integrated use of the multiple features not only gives a larger number of features (an overconstrained system) but also reduces the number of candidate matches for each feature, thus making matching less ambiguous. Experimental results are presented for four pairs of real images.
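The local affine approximation used here has a simple closed form: the displacement of each point in a locally rigid patch is modelled by six parameters, which can be fitted to candidate matches by linear least squares, and the fitting residual indicates whether a feature is consistent with the patch. The sketch below is an illustrative least-squares version of that idea; the variable names and any threshold applied to the residual are assumptions, not the paper's coarse-to-fine search procedure.

import numpy as np

def fit_affine_displacement(p, q):
    # Fit q - p ~ A p + t, i.e. the six displacement parameters
    # (a11, a12, a21, a22, tx, ty), from matched points p, q of shape (N, 2).
    d = (q - p).reshape(-1)            # interleaved (dx0, dy0, dx1, dy1, ...)
    N = p.shape[0]
    M = np.zeros((2 * N, 6))
    M[0::2, 0], M[0::2, 1], M[0::2, 4] = p[:, 0], p[:, 1], 1.0
    M[1::2, 2], M[1::2, 3], M[1::2, 5] = p[:, 0], p[:, 1], 1.0
    theta, *_ = np.linalg.lstsq(M, d, rcond=None)
    return theta

def affine_residual(theta, p, q):
    # Per-match residual under the fitted affine displacement model.
    A, t = theta[:4].reshape(2, 2), theta[4:]
    return np.linalg.norm(q - p - (p @ A.T + t), axis=1)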

5.
A theory of the motion fields of curves
This article reports a study of the motion field generated by moving 3-D curves that are observed by a camera. We first discuss the relationship between optical flow and the motion field and show that the assumptions made in the computation of the optical flow are difficult to defend. We then study the motion field of a general curve. We first consider the general case of a curve moving nonrigidly and introduce the notion of isometric motion. In order to do this, we introduce the notion of the spatio-temporal surface and study its differential properties up to the second order. We show that, contrary to what is commonly believed, the full motion field of the curve (i.e., the component tangent to the curve) cannot be recovered from this surface. We also give the equations that characterize the spatio-temporal surface completely, up to a rigid transformation. These equations are the expressions of the first and second fundamental forms and the Gauss and Codazzi-Mainardi equations. We then relate those differential expressions computed on the spatio-temporal surface to quantities that can be computed from the image intensities. The actual values depend upon the choice of the edge detector. We then show that the hypothesis of a rigid 3-D motion allows, in general, the structure and motion of the curve to be recovered, in fact without explicitly computing the tangential motion field, at the cost of introducing the three-dimensional accelerations. We first study the motion field generated by the simplest kind of rigid 3-D curves, namely lines. This study is illuminating in that it paves the way for the study of general rigid curves and yields useful results of its own. We then extend the results obtained in the case of lines to the case of general curves and show that at each point of the image curve two equations can be written relating the kinematic screw of the moving 3-D curve and its time derivative to quantities defined in the study of the general nonrigid motion that can be measured from the spatio-temporal surface, and therefore from the image. This shows that the structure and motion of the curve can be recovered from six image points only, without establishing any point correspondences. Finally, we study the cooperation between motion and stereo in the framework of this theory. The use of two cameras instead of one allows us to eliminate the three-dimensional accelerations, and the relations between the two spatio-temporal surfaces of the same rigidly moving 3-D curve can be used to help disambiguate stereo correspondences.

6.
Qualitative detection of motion by a moving observer
Two complementary methods for the detection of moving objects by a moving observer are described. The first is based on the fact that, in a rigid environment, the projected velocity at any point in the image is constrained to lie on a 1-D locus in velocity space whose parameters depend only on the observer motion. If the observer motion is known, an independently moving object can, in principle, be detected because its projected velocity is unlikely to fall on this locus. We show how this principle can be adapted to use partial information about the motion field and observer motion that can be rapidly computed from real image sequences. The second method utilizes the fact that the apparent motion of a fixed point due to smooth observer motion changes slowly, while the apparent motion of many moving objects, such as animals or maneuvering vehicles, may change rapidly. The motion field at a given time can thus be used to place constraints on the future motion field which, if violated, indicate the presence of an autonomously maneuvering object. In both cases, the qualitative nature of the constraints allows the methods to be used with the inexact motion information typically available from real image sequences. Implementations of the methods that run in real time on a parallel pipelined image processing system are described.
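For the special case of a purely translating observer with known motion, the 1-D locus degenerates to a simple direction constraint: rigid-scene flow must be aligned with the line through the focus of expansion. The sketch below flags flow vectors that violate that alignment; the FOE formula, the assumption Tz != 0, and the angular threshold are illustrative assumptions rather than the paper's full constraint.

import numpy as np

def flag_independent_motion(x, y, u, v, T, f=1.0, max_angle_deg=20.0):
    # x, y: image coordinates; u, v: measured flow; T = (Tx, Ty, Tz) is the
    # known observer translation (Tz != 0 assumed).
    Tx, Ty, Tz = T
    foe = np.array([f * Tx / Tz, f * Ty / Tz])            # focus of expansion
    radial = np.stack([x - foe[0], y - foe[1]], axis=-1)  # direction of the 1-D locus
    flow = np.stack([u, v], axis=-1)
    cosang = np.sum(radial * flow, axis=-1) / (
        np.linalg.norm(radial, axis=-1) * np.linalg.norm(flow, axis=-1) + 1e-9)
    angle = np.degrees(np.arccos(np.clip(np.abs(cosang), 0.0, 1.0)))
    return angle > max_angle_deg      # True where the flow direction violates the locus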

7.
In scenes with collectively moving objects, disregarding the individual objects and characterizing the motion of the entire group is a promising approach with wide application prospects. In contrast to studies on the segmentation of independently moving objects, our purpose is to construct a segmentation of these objects that characterizes their motion at a macroscopic level. In general, collectively moving objects in a group exhibit motion behavior very similar to that of their neighbors and appear as a kind of global collective motion. This paper presents a joint segmentation approach for these collectively moving objects. In our model, we extract these macroscopic movement patterns from optical flow field sequences. Specifically, a group of collectively moving objects corresponds to a region where the optical flow field has high magnitude and high local direction coherence. As a result, our problem can be addressed by identifying these coherent optical flow field regions. The segmentation is performed through the minimization of a variational energy functional derived from the Bayes classification rule. Specifically, we use a bag-of-words model to generate a codebook as a collection of prototypical optical flow patterns, and the class-conditional probability density functions for different regions are determined from these patterns. Finally, the minimization of the proposed energy functional results in a gradient descent evolution of the segmentation boundaries, which are implicitly represented by level sets. The proposed approach is applied to segment and track multiple groups of collectively moving objects in a large variety of real-world scenes.
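The region indicator described above (high flow magnitude together with high local direction coherence) can be computed directly from the flow field, for example by averaging unit flow vectors over a window and taking the length of the mean vector. The sketch below does exactly that with a box filter; the window size and the two thresholds are assumptions, and the variational/bag-of-words segmentation itself is not reproduced.

import numpy as np
from scipy.ndimage import uniform_filter

def collective_motion_mask(u, v, win=15, mag_thresh=0.5, coh_thresh=0.8):
    # Candidate collectively-moving regions from an optical flow field (u, v).
    mag = np.sqrt(u**2 + v**2)
    ux, uy = u / (mag + 1e-9), v / (mag + 1e-9)     # unit flow directions
    mean_ux = uniform_filter(ux, size=win)
    mean_uy = uniform_filter(uy, size=win)
    coherence = np.sqrt(mean_ux**2 + mean_uy**2)    # close to 1 where flow is locally parallel
    return (mag > mag_thresh) & (coherence > coh_thresh)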

8.
Human motion estimation based on a relative deformation model and regularization
To make the estimation of the motion and structure parameters of the arms and legs from a monocular dynamic image sequence of a walking person more reliable and robust, a human motion estimation method based on a relative deformation model and regularization is proposed. First, under an object-centered representation of motion, a non-rigid motion model based on the concept of relative deformation is obtained by adding deformation coefficients to a rigid motion model. The motion and structure parameters are then estimated with regularization under this non-rigid motion model, and prior knowledge of human motion is incorporated in regularized form, making the motion estimates more robust. Experimental results show that the method effectively captures the non-rigid motion patterns of the human body, and that the relative deformation coefficients introduced into the motion model also reflect, to some extent, the regularities of human motion.

9.
An algorithm for computing the three-dimensional (3-D) velocity field and motion parameters from a range image sequence is presented. It is based on a new integral 3-D rigid motion invariant feature: the trace of a 3x3 "feature matrix" related to the moment of inertia tensor. This trace can be computed at every point of a moving surface and provides a quantitative measure of the local shape of the surface. Based on the feature's conservation along the trajectory of every moving point, a 3-D Flow Constraint Equation is formulated and solved for the velocity field. The robustness of the feature in the presence of noise and discontinuities is analyzed.
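The idea of a rigid-motion-invariant local feature can be illustrated with the trace of the second-moment (scatter) matrix of the 3-D points in a neighborhood of a surface point, which is unchanged by rotation and translation of the patch. This is only a sketch of one such invariant; the paper's 3x3 "feature matrix" and neighborhood definition may differ.

import numpy as np

def inertia_trace_feature(points, center, radius):
    # Trace of the 3x3 scatter matrix of the 3-D points within `radius` of
    # `center`; equal to the mean squared distance to the local centroid,
    # hence invariant to rigid motion of the patch.
    nbrs = points[np.linalg.norm(points - center, axis=1) < radius]
    centered = nbrs - nbrs.mean(axis=0)
    J = centered.T @ centered / len(nbrs)
    return np.trace(J)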

10.
This article presents a method to estimate the motion parameters of a mobile robot equipped with a radial laser range-finder. Our approach is based on the spatial and temporal linearization of the range function, which leads to a velocity constraint equation for the scanned points. Assuming that the mobile robot moves in a rigid environment, a least-squares formulation is employed to obtain the motion estimate as well as the motion vectors of the scanned points as they move from scan to scan in the sequence. This motion field can be very useful for a number of applications, including detection and tracking of moving objects. Although this is preliminary work, experiments show that good results are achieved with both real and synthetic data.
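One possible form of the linearized velocity constraint for a planar scanner moving with velocity (vx, vy) and yaw rate omega in a rigid, static environment relates the temporal and angular derivatives of the range function r(theta, t) linearly to the three motion parameters, giving one equation per beam to be solved in the least-squares sense. The derivation and sign conventions below are assumptions made for illustration and are not taken from the paper.

import numpy as np

def estimate_planar_motion(theta, r_prev, r_curr, dt):
    # Least-squares (vx, vy, omega) from two consecutive range scans, using
    #   r_t = vx * (-cos(th) - (r_th / r) * sin(th))
    #       + vy * (-sin(th) + (r_th / r) * cos(th))
    #       + omega * r_th
    # where r_th = dr/dtheta and r_t = dr/dt (assumed conventions).
    r = 0.5 * (r_prev + r_curr)
    r_th = np.gradient(r, theta)           # angular derivative of the range function
    r_t = (r_curr - r_prev) / dt           # temporal derivative
    A = np.stack([-np.cos(theta) - (r_th / r) * np.sin(theta),
                  -np.sin(theta) + (r_th / r) * np.cos(theta),
                  r_th], axis=1)
    sol, *_ = np.linalg.lstsq(A, r_t, rcond=None)
    return sol                              # (vx, vy, omega) of the scanner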

11.
This study investigates a variational, active curve evolution method for dense three-dimensional (3D) segmentation and interpretation of optical flow in an image sequence of a scene containing moving rigid objects viewed by a possibly moving camera. The method jointly performs 3D motion segmentation, 3D interpretation (recovery of 3D structure and motion), and optical flow estimation. The objective functional contains two data terms for each segmentation region, one based on the motion-only equation, which relates the essential parameters of 3D rigid body motion to optical flow, and the other on the Horn and Schunck optical flow constraint. It also contains two regularization terms for each region, one for optical flow and the other for the region boundary. The necessary conditions for a minimum of the functional result in concurrent 3D motion segmentation, by active curve evolution via level sets, and linear estimation of each region's essential parameters and optical flow. Subsequently, the screw of 3D motion and the regularized relative depth are recovered analytically for each region from the estimated essential parameters and optical flow. Examples are provided which verify the method and its implementation.
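For reference, the Horn and Schunck constraint that serves as one of the data terms is Ix*u + Iy*v + It = 0, combined with a smoothness regularizer on the flow. A minimal standalone Horn-Schunck iteration (not the paper's joint region-based functional) can be sketched as follows; the derivative kernels, the weight alpha, and the iteration count are the usual textbook choices, taken here as assumptions.

import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=10.0, n_iter=100):
    # Minimal Horn-Schunck optical flow between two grayscale frames.
    I1, I2 = I1.astype(float), I2.astype(float)
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    kt = 0.25 * np.ones((2, 2))
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2, kt) - convolve(I1, kt)
    avg = np.array([[1.0, 2.0, 1.0], [2.0, 0.0, 2.0], [1.0, 2.0, 1.0]]) / 12.0
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)   # neighborhood averages
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v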

12.
In this paper, an accurate method for texture reconstruction in dynamic scenes containing undesirable moving objects is proposed. The task concerns off-line editing functions, and the main criteria are the accuracy and visual quality of the reconstructed results. The method is based on a spatio-temporal analysis and includes two stages. The first stage uses feature point tracking to locate the rigid objects accurately under the assumption of an affine motion model. The second stage performs the accurate reconstruction of the video sequence based on texture maps of smoothness, structural properties, and isotropy. These parameters are estimated by three separate back-propagation neural networks. The background reconstruction is realized by a tile method using a single texton, a line of textons, or a field of textons. The proposed technique was tested on reconstructed regions occupying up to 8–20% of the frame area. The experimental results demonstrate more accurate inpainting owing to the improved motion estimates and the modified texture parameters.

13.
In this paper we explore a multiple-hypothesis approach to estimating rigid motion from a moving stereo rig. More precisely, we introduce the use of Gaussian mixtures to model correspondence uncertainties for disparity and image velocity estimation. We show some properties of the disparity space and show how rigid transformations can be represented in it. An algorithm, derived from standard random-sampling-based robust estimators, that efficiently estimates rigid transformations from multi-hypothesis disparity maps and velocity fields is given.
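The random-sampling estimator mentioned in the last sentence can be illustrated with a generic RANSAC loop around a closed-form (SVD/Kabsch) rigid fit on 3-D point correspondences. This is a standard sketch rather than the paper's disparity-space algorithm; the sample size, iteration count, and inlier threshold are assumptions.

import numpy as np

def fit_rigid(P, Q):
    # Closed-form rigid transform (R, t) minimizing ||R P + t - Q|| (Kabsch/SVD).
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_rigid(P, Q, n_iter=500, inlier_thresh=0.05, seed=0):
    # RANSAC over minimal 3-point samples; returns the transform refitted on
    # the largest inlier set found.
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = fit_rigid(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        inliers = err < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_rigid(P[best_inliers], Q[best_inliers])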

14.
Estimation of motion and structure parameters from monocular human image sequences
汪亚明, 汪元美, 楼正国. 《软件学报》, 2001, 12(11): 1732-1738.
The motion and structure parameters of the arms and legs are estimated from a monocular dynamic image sequence of a walking person. Under an object-centered representation of motion, a non-rigid motion model based on the concept of elastic connections is proposed; by introducing elasticity coefficients, the non-rigid and rigid motion models are unified in a natural way. On the basis of this model, the Levenberg-Marquardt method is used to estimate the motion and structure parameters. Experiments demonstrate the effectiveness of the method, and the elasticity coefficients in the motion model also reflect the motion pattern to a certain extent.
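The Levenberg-Marquardt estimation step can be sketched with scipy's least-squares interface; the observation model below is a deliberately simple placeholder (a point moving on a circle), standing in for the articulated arm/leg model of the paper, so the parameters and residuals are purely illustrative.

import numpy as np
from scipy.optimize import least_squares

def project(params, t):
    # Hypothetical observation model: predicted 2-D feature position at frame t.
    radius, rate, phase = params
    return np.array([radius * np.cos(rate * t + phase),
                     radius * np.sin(rate * t + phase)])

def residuals(params, times, observed):
    pred = np.array([project(params, t) for t in times])
    return (pred - observed).ravel()

times = np.arange(10, dtype=float)
observed = np.array([project([1.0, 0.3, 0.1], t) for t in times])  # synthetic measurements
result = least_squares(residuals, x0=[0.8, 0.2, 0.0],
                       args=(times, observed), method='lm')        # Levenberg-Marquardt
print(result.x)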

15.
A chaos control method based on a stability criterion for the error linear system is used to control the chaotic attitude motion, in a circular orbit, of a magnetic rigid spacecraft with internal structural damping under the joint action of the gravitational and magnetic fields. The influence of several parameters in the attitude motion equations on the attitude behavior is discussed, and the routes by which these parameters lead to chaos through period-doubling or inverse period-doubling bifurcations are given. When the parameters drive the system into chaotic attitude motion, the method is used to control the chaotic motion onto a period-4 orbit, and flexible switching among period-1, period-2, and period-4 orbits is achieved. In addition, the effect of the control parameters on the control performance is analyzed, and the ranges of the input disturbance and of the control parameters for steering the system onto the different orbits are given. Simulation results show that the method can flexibly steer the chaotic attitude motion onto the prescribed periodic orbits with a small input disturbance, fast convergence, and high accuracy, verifying its effectiveness for controlling the chaotic attitude motion of spacecraft.

16.
17.
A new method for detecting moving objects in image sequences of dynamic scenes
When detecting moving objects in image sequences of dynamic scenes, a difficult problem that must be solved is how to eliminate the effect of the global inter-frame motion caused by camera motion, so that the static background and the moving objects can be separated. Addressing the characteristics of dynamic-scene image sequences with complex backgrounds, this paper presents a new method for identifying the image background, based on recovering the 3D positions of scene reference points, together with a corresponding moving object detection method. First, a hierarchical motion model of the image sequence and a motion segmentation method based on it are introduced. Then, the estimated projection matrices are used to compute the 3D positions of the reference points of each motion layer in the sequence; according to how the recovered 3D position of the same scene element's reference point varies across frames, the motion layers corresponding to the static background and those corresponding to moving objects are distinguished, thereby separating the static background from the moving objects. Finally, a detailed algorithm for detecting moving objects in dynamic-scene image sequences is given. Experimental results show that the new algorithm effectively solves the problem of detecting moving objects in dynamic-scene image sequences with multiple sets of global inter-frame motion parameters, and considerably improves the effectiveness and robustness of moving object tracking.
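The geometric core of the method, recovering a 3D reference point from its projections under the estimated projection matrices and checking whether it stays fixed across frames, can be illustrated with standard linear (DLT) triangulation. The interface and the stationarity threshold below are assumptions for illustration.

import numpy as np

def triangulate(Ps, xs):
    # Linear (DLT) triangulation of a 3D point from projections xs[i] = (u, v)
    # under the 3x4 projection matrices Ps[i].
    A = []
    for P, (u, v) in zip(Ps, xs):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]

def is_static_layer(ref_points_per_frame, thresh=0.05):
    # A motion layer is labelled static background if the recovered 3D position
    # of its reference point varies little across frames.
    pts = np.asarray(ref_points_per_frame)
    return np.linalg.norm(pts - pts.mean(axis=0), axis=1).max() < thresh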

18.
19.
Two approaches are described that improve the efficiency of optical flow computation without incurring loss of accuracy. The first approach segments images into regions of moving objects. The method is based on a previously defined Galerkin finite element method on a triangular mesh combined with a multiresolution segmentation approach for object flow computation. Images are automatically segmented into subdomains of moving objects by an algorithm that employs a hierarchy of mesh coarseness for the flow computation, and these subdomains are reconstructed over a finer mesh on which to recompute flow more accurately. The second approach uses an adaptive mesh in which the resolution increases where motion is found to occur. Optical flow is computed over a reasonably coarse mesh, and this is used to construct an optimal adaptive mesh in a way that is different from the gradient methods reported in the literature. The finite element mesh facilitates a reduction in computational effort by enabling processing to focus on particular objects of interest in a scene (i.e. those areas where motion is detected). The proposed methods were tested on real and synthetic image sequences, and promising results are reported.

20.
The necessary and sufficient conditions that an object should satisfy so that its motion can be uniquely determined by a direct method are discussed. This direct method, based on the temporal-spatial gradient scheme, can estimate the three-dimensional (3-D) motion parameters of a rigid moving object from an image sequence by utilizing depth information of the object. It is shown that the 3-D motion cannot be uniquely determined only for eight kinds of objects with special geometric structure and surface pattern.
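A direct gradient-based scheme of this kind combines the brightness constancy equation Ix*u + Iy*v + It = 0 with the rigid motion field, which is linear in the six motion parameters when the depth Z is known, so each pixel contributes one linear equation. The sketch below uses the same perspective motion-field equations (and the same assumed sign convention) as the sketch under item 1 and is an illustration of the general scheme, not the paper's specific method.

import numpy as np

def direct_rigid_motion(Ix, Iy, It, x, y, Z, f=1.0):
    # Least-squares (Tx, Ty, Tz, wx, wy, wz) from spatio-temporal image
    # gradients and known depth, via Ix*u + Iy*v + It = 0 with the
    # perspective motion-field equations (assumed sign convention).
    u_coef = np.stack([-(f / Z) * np.ones_like(x), np.zeros_like(x), x / Z,
                       x * y / f, -(f + x**2 / f), y], axis=-1)
    v_coef = np.stack([np.zeros_like(x), -(f / Z) * np.ones_like(x), y / Z,
                       f + y**2 / f, -x * y / f, -x], axis=-1)
    A = (Ix[..., None] * u_coef + Iy[..., None] * v_coef).reshape(-1, 6)
    b = -It.reshape(-1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol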
