1.
Ambiguity in Structure from Motion: Sphere versus Plane
If 3D rigid motion can be correctly estimated from image sequences, the structure of the scene can be correctly derived using the equations for image formation. However, an error in the estimation of 3D motion will result in the computation of a distorted version of the scene structure. Of computational interest are those regions in space where the distortions are such that the depths become negative, because in order for the scene to be visible it has to lie in front of the image, and thus the corresponding depth estimates have to be positive. The stability analysis for the structure from motion problem presented in this paper investigates the optimal relationship between the errors in the estimated translational and rotational parameters of a rigid motion that results in the estimation of a minimum number of negative depth values. The input used is the value of the flow along some direction, which is more general than optic flow or correspondence. For a planar retina it is shown that the optimal configuration is achieved when the projections of the translational and rotational errors on the image plane are perpendicular. Furthermore, the projections of the actual and the estimated translation lie on a line through the image center. For a spherical retina, given a rotational error, the optimal translation is the correct one; given a translational error, the optimal rotational error depends, both in direction and magnitude, on the actual and estimated translation as well as the scene in view. The proofs, besides illuminating the confounding of translation and rotation in structure from motion, have an important application to ecological optics. The same analysis provides a computational explanation of why it is easier to estimate self-motion in the case of a spherical retina and why shape can be estimated easily in the case of a planar retina, thus suggesting that nature's design of compound eyes (or panoramic vision) for flying systems and camera-type eyes for primates (and other systems that perform manipulation) is optimal.
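The negative-depth argument can be made concrete with a small numerical experiment. The sketch below is illustrative only: it assumes the standard perspective motion-field equations with unit focal length, and the function names, field of view, and particular motion errors are our own assumptions rather than the paper's setup.

```python
import numpy as np

def rotational_flow(x, y, w):
    """Rotational component of the motion field at image point (x, y), unit focal length."""
    wx, wy, wz = w
    u = x * y * wx - (1.0 + x * x) * wy + y * wz
    v = (1.0 + y * y) * wx - x * y * wy - x * wz
    return u, v

def translational_flow_dir(x, y, t):
    """Translational flow direction at (x, y); scaled by inverse depth it gives the flow."""
    tx, ty, tz = t
    return -tx + x * tz, -ty + y * tz

def negative_depth_fraction(t, w, t_hat, w_hat, depths, xs, ys):
    """Fraction of points whose depth, recovered with the wrong motion, comes out negative."""
    neg = total = 0
    for x, y, z in zip(xs, ys, depths):
        # motion field generated by the true motion (t, w) and the true depth z
        a_u, a_v = translational_flow_dir(x, y, t)
        r_u, r_v = rotational_flow(x, y, w)
        u, v = a_u / z + r_u, a_v / z + r_v
        # inverse depth recovered under the erroneous estimate (t_hat, w_hat)
        ah_u, ah_v = translational_flow_dir(x, y, t_hat)
        rh_u, rh_v = rotational_flow(x, y, w_hat)
        norm2 = ah_u * ah_u + ah_v * ah_v
        if norm2 < 1e-12:
            continue
        inv_depth_hat = ((u - rh_u) * ah_u + (v - rh_v) * ah_v) / norm2
        total += 1
        neg += inv_depth_hat < 0
    return neg / max(total, 1)

rng = np.random.default_rng(0)
xs, ys = rng.uniform(-0.5, 0.5, 2000), rng.uniform(-0.5, 0.5, 2000)   # planar retina, limited field of view
depths = rng.uniform(2.0, 10.0, 2000)                                 # scene lies in front of the camera
t, w = np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.01, 0.0])          # true motion (illustrative values)
t_hat, w_hat = np.array([0.15, 0.02, 1.0]), np.array([0.005, 0.012, 0.0])  # erroneous estimate
print(negative_depth_fraction(t, w, t_hat, w_hat, depths, xs, ys))
```

Sweeping the estimated parameters over a grid and recording this fraction reproduces, in numerical form, the kind of optimal-configuration analysis described in the abstract.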
2.
The problem of estimating 3D motion in an optimal manner using correspondences of features in two views is analyzed. The importance of having an optimal estimator is twofold: first, for the estimation itself and, second, for the bound it offers on how much sensitivity one can expect from a two-frame, point-based motion algorithm. The optimal estimator turns out to be nonlinear, and for that reason, techniques that provide very good initial guesses for the iterative computation of the optimal estimator are developed.
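As a point of reference for what "optimal" means here, the following is a generic two-view maximum-likelihood formulation, not necessarily the estimator derived in the paper: under Gaussian noise on the correspondences, motion and structure are obtained by minimizing reprojection error, and the nonlinearity of this objective is why a good initial guess (e.g., from a linear essential-matrix solution) matters for the iterative computation.

```latex
% Generic two-view maximum-likelihood objective (illustrative; notation assumed).
% Correspondences (x_i, x'_i) are in calibrated (normalized) image coordinates,
% \pi(X) = (X_1/X_3, X_2/X_3) is perspective projection, R is the rotation and
% t the translation (defined up to scale) between the two views, and the X_i
% are the 3D points.
\[
  \min_{R,\,t,\,\{X_i\}} \; \sum_{i=1}^{N}
    \bigl\| x_i - \pi(X_i) \bigr\|^{2} \;+\;
    \bigl\| x'_i - \pi\!\bigl(R\,X_i + t\bigr) \bigr\|^{2},
  \qquad \|t\| = 1 .
\]
```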
3.
Perceptual processes responsible for computing shape from several cues, including shading, texture, contour, and stereo, are examined. It is noted that these computational problems, as well as that of computing shape from motion, are ill-posed in the sense of Hadamard. It is suggested that regularization theory can be used along with a priori knowledge to restrict the space of possible solutions, and thus restore the problem's well-posedness. Some alternative methods are outlined, and the idea of active vision is explored briefly in connection with the problem.
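For readers unfamiliar with the regularization machinery referred to here, the standard Tikhonov form is sketched below in our own notation, not necessarily the article's: an ill-posed inverse problem A z = d is replaced by a well-posed variational one in which a stabilizing term encodes the a priori knowledge.

```latex
% Tikhonov-style regularization (illustrative notation): A maps the unknown
% scene property z (e.g., depth or surface orientation) to the data d (e.g.,
% image intensities), P is a stabilizing operator (often a derivative, encoding
% smoothness), and \lambda > 0 balances data fidelity against the prior.
\[
  \hat{z} \;=\; \arg\min_{z} \; \| A z - d \|^{2} \;+\; \lambda \, \| P z \|^{2}
\]
```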
4.
The findings of a workshop, the goals of which were to identify applications, research problems, and designs of high performance computing and communications (HPCC) systems for supporting applications, are discussed. In computer vision, the main scientific issues are machine learning, surface reconstruction, inverse optics and integration, model acquisition, and perception and action. In speech and natural language processing (SNLP), the issues identified were statistical analysis in corpus-based speech and language understanding, search strategies for language analysis, auditory and vocal-tract modeling, integration of multiple levels of speech and language analyses, and connectionist systems. In AI, important issues that need immediate attention include the development of efficient machine learning and heuristic search methods that can adapt to different architectural configurations, and the design and construction of scalable and verifiable knowledge bases, active memories, and artificial neural networks.
5.
Observability of 3D Motion
This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the epipolar constraint, applied to motion fields, and the other is the positive depth constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors in the 3D motion parameters at the places representing the minima of the functions. For conventional video cameras possessing a restricted field of view, the analysis shows that for algorithms in both classes which estimate all motion parameters simultaneously, the obtained solution has an error such that the projections of the translational and rotational errors on the image plane are perpendicular to each other. Furthermore, the estimated projection of the translation on the image lies on a line through the origin and the projection of the real translation. The situation is different for a camera with a full (360 degree) field of view (achieved by a panoramic sensor or by a system of conventional cameras). In this case, at the locations of the minima of the above two functions, either the translational or the rotational error becomes zero, while in the case of a restricted field of view both errors are non-zero. Although some ambiguities still remain in the full field of view case, the implication is that visual navigation tasks, such as visual servoing, involving 3D motion estimation are easier to solve by employing panoramic vision. Also, the analysis makes it possible to compare properties of algorithms that first estimate the translation and on the basis of the translational result estimate the rotation, algorithms that do the opposite, and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion. Finally, the introduced framework points to new avenues for studying the stability of image-based servoing schemes.
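The two constraint classes contrasted above can be written compactly; the forms below use our own notation for a planar retina with unit focal length (the spherical case is analogous), so they should be read as an illustration rather than the paper's exact functions. With A(p)t the translational flow direction and B(p)ω the rotational flow at image point p, the epipolar constraint requires the derotated flow to be parallel to the translational direction, and the positive depth constraint requires every normal flow measurement to yield a non-negative inverse depth.

```latex
% Illustrative forms of the two constraint functions (notation assumed):
% u(p) is the measured flow, u_n(p) the normal flow along unit direction n(p),
% A(p)t the translational flow direction, B(p)\omega the rotational flow, and
% \wedge the scalar cross product of 2D vectors.
\[
  E_{\mathrm{epi}}(t,\omega) \;=\; \sum_{p}
    \Bigl[ \bigl( u(p) - B(p)\,\omega \bigr) \wedge \bigl( A(p)\,t \bigr) \Bigr]^{2},
\]
\[
  E_{\mathrm{depth}}(t,\omega) \;=\; \sum_{p}
    \mathbf{1}\Bigl[ \bigl( u_n(p) - n(p)^{\!\top} B(p)\,\omega \bigr)\,
                     \bigl( n(p)^{\!\top} A(p)\,t \bigr) < 0 \Bigr].
\]
```

3D motion estimation then amounts to minimizing one of these functions over (t, ω); the statistical analysis in the abstract concerns the errors at their minima for restricted and full fields of view.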
6.
The classic approach to structure from motion entails a clear separation between motion estimation and structure estimation and between two-dimensional (2D) and three-dimensional (3D) information. For the recovery of the rigid transformation between different views only 2D image measurements are used. To have available enough information, most existing techniques are based on the intermediate computation of optical flow which, however, poses a problem at the locations of depth discontinuities. If we knew where depth discontinuities were, we could (using a multitude of approaches based on smoothness constraints) accurately estimate flow values for image patches corresponding to smooth scene patches; but to know the discontinuities requires solving the structure from motion problem first. This paper introduces a novel approach to structure from motion which addresses the processes of smoothing, 3D motion and structure estimation in a synergistic manner. It provides an algorithm for estimating the transformation between two views obtained by either a calibrated or uncalibrated camera. The results of the estimation are then utilized to perform a reconstruction of the scene from a short sequence of images. The technique is based on constraints on image derivatives which involve the 3D motion and shape of the scene, leading to a geometric and statistical estimation problem. The interaction between 3D motion and shape allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion, however, is such that the worse the motion estimate, the more likely we are to obtain depth estimates that vary locally more than the correct ones. Since local variability of depth is due either to the existence of a discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases provides the correct motion, which yields the least varying estimated depth as well as the image locations of scene discontinuities. We analyze the new constraints, show their relationship to the minimization of the epipolar constraint, and present experimental results using real image sequences that indicate the robustness of the method.
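A minimal sketch of the selection principle described above follows; it is not the paper's estimator, only an illustration of how local depth variability can rank candidate motions. The candidate generator and the routine that recovers inverse depth from image derivatives are assumed to be supplied.

```python
import numpy as np

def local_variability(inv_depth):
    """Sum of squared differences between horizontally and vertically adjacent estimates."""
    d = np.asarray(inv_depth, dtype=float)
    return np.sum((d[1:, :] - d[:-1, :]) ** 2) + np.sum((d[:, 1:] - d[:, :-1]) ** 2)

def select_motion(candidates, recover_inv_depth):
    """Return the candidate (t, w) whose recovered inverse-depth map varies least locally.

    `candidates` is an iterable of (t, w) hypotheses; `recover_inv_depth(t, w)`
    is a user-supplied routine (assumed here) that computes an inverse-depth map
    from the image derivatives under that motion hypothesis.
    """
    scored = [(local_variability(recover_inv_depth(t, w)), t, w) for t, w in candidates]
    _, t_best, w_best = min(scored, key=lambda s: s[0])
    return t_best, w_best
```

Image locations where the winning depth map still varies strongly are then natural candidates for scene discontinuities, which is the segmentation side of the synergy described in the abstract.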
7.
8.
Active vision
We investigate several basic problems in vision under the assumption that the observer is active. An observer is called active when engaged in some kind of activity whose purpose is to control the geometric parameters of the sensory apparatus. The purpose of the activity is to manipulate the constraints underlying the observed phenomena in order to improve the quality of the perceptual results. A monocular observer that moves with a known or unknown motion and a binocular observer that can rotate its eyes and track environmental objects are two examples of what we call an active observer. We prove that an active observer can solve basic vision problems in a much more efficient way than a passive one. Problems that are ill-posed and nonlinear for a passive observer become well-posed and linear for an active observer. In particular, the problems of shape from shading and depth computation, shape from contour, shape from texture, and structure from motion are shown to be much easier for an active observer than for a passive one. It has to be emphasized that correspondence is not used in our approach, i.e., active vision is not correspondence of features from multiple viewpoints. Finally, active vision here does not mean active sensing, and this paper introduces a general methodology, a general framework in which we believe low-level vision problems should be addressed.
9.
Due to the aperture problem, the only motion measurement in images, whose computation does not require any assumptions about the scene in view, is normal flow—the projection of image motion on the gradient direction. In this paper we show how a monocular observer can estimate its 3D motion relative to the scene by using normal flow measurements in a global and qualitative way. The problem is addressed through a search technique. By checking constraints imposed by 3D motion parameters on the normal flow field, the possible space of solutions is gradually reduced. In the four modules that comprise the solution, constraints of increasing restriction are considered, culminating in testing every single normal flow value for its consistency with a set of motion parameters. The fact that motion is rigid defines geometric relations between certain values of the normal flow field. The selected values form patterns in the image plane that are dependent on only some of the motion parameters. These patterns, which are determined by the signs of the normal flow values, are searched for in order to find the axes of translation and rotation. The third rotational component is computed from normal flow vectors that are only due to rotational motion. Finally, by looking at the complete data set, all solutions that cannot give rise to the given normal flow field are discarded from the solution space. Research supported in part by NSF (Grant IRI-90-57934), ONR (Contract N00014-93-1-0257) and ARPA (Order No. 8459).
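For concreteness, the sketch below shows the standard way normal flow is obtained from two frames via the brightness-constancy equation; the function name and the simple two-frame temporal derivative are our assumptions, not the paper's implementation. Only the component of image motion along the gradient is recovered, which is exactly the aperture-problem limitation discussed above.

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-6):
    """Gradient direction and signed normal-flow magnitude from two grey-level frames.

    Uses the brightness-constancy equation I_x u + I_y v + I_t = 0: only the
    flow component along the gradient n = grad(I)/|grad(I)| is recoverable,
    with signed magnitude u_n = -I_t / |grad(I)|.
    """
    I0 = frame0.astype(float)
    I1 = frame1.astype(float)
    Iy, Ix = np.gradient(I0)          # spatial derivatives (rows ~ y, columns ~ x)
    It = I1 - I0                      # crude two-frame temporal derivative
    mag = np.sqrt(Ix ** 2 + Iy ** 2) + eps
    nx, ny = Ix / mag, Iy / mag       # unit gradient direction
    un = -It / mag                    # normal flow along (nx, ny)
    return nx, ny, un
```

The sign of un at each pixel is the quantity that the pattern-based search described above operates on when locating the axes of translation and rotation.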
10.
Our work on active vision has recently focused on the computational modelling of navigational tasks, where our investigations were guided by the idea of approaching vision for behavioural systems in the form of modules that are directly related to perceptual tasks. These studies led us to branch in various directions and inquire into the problems that have to be addressed in order to obtain an overall understanding of perceptual systems. In this paper, we present our views about the architecture of vision systems, about how to tackle the design and analysis of perceptual systems, and about promising future research directions. Our suggested approach for understanding behavioural vision and for realizing the relationship between perception and action builds on two earlier approaches, the Medusa philosophy [1] and the Synthetic approach [2]. The resulting framework calls for synthesizing an artificial vision system by studying vision competences of increasing complexity and, at the same time, pursuing the integration of the perceptual components with action and learning modules. We expect that computer vision research in the future will progress in tight collaboration with many other disciplines that are concerned with empirical approaches to vision, i.e. the understanding of biological vision. Throughout the paper, we describe biological findings that motivate computational arguments which we believe will influence studies of computer vision in the near future.