97 results found (search time: 78 ms)
2.
This paper presents a VLSI embodiment of an optical tracking computational sensor which focuses attention on a salient target in its field of view. Using both low-latency massively parallel processing and top-down sensory adaptation, the sensor suppresses interference from features irrelevant to the task at hand, and tracks a target of interest at speeds of up to 7000 pixels/s. The sensor locks onto the target to continuously provide control for the execution of a perceptually guided activity. The sensor prototype, a 24×24 array of cells, is built in 2-μm CMOS technology. Each cell occupies 62 μm×62 μm of silicon, and contains a photodetector and processing electronics.
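The abstract above describes hardware, but the top-down attentional suppression it mentions can be illustrated in software. The sketch below is a hypothetical Python analogue, not the paper's circuit: features far from the previously locked position are suppressed by a Gaussian attention window (an assumed form of the adaptation) before the strongest remaining feature is selected.

```python
import numpy as np

def track_step(frame, prev_pos, sigma=3.0):
    """One attentional tracking step: multiply the frame by a Gaussian
    attention window centered on the previous target position (top-down
    suppression of distant features), then pick the brightest survivor."""
    h, w = frame.shape
    y, x = np.mgrid[0:h, 0:w]
    window = np.exp(-((y - prev_pos[0])**2 + (x - prev_pos[1])**2) / (2 * sigma**2))
    salience = frame * window
    return np.unravel_index(np.argmax(salience), frame.shape)

# A bright distractor far from the current lock is ignored.
frame = np.zeros((24, 24))
frame[10, 10] = 1.0   # target, near the previous lock at (11, 11)
frame[2, 20] = 5.0    # stronger but distant distractor
print(track_step(frame, (11, 11)))  # (10, 10)
```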
3.
The complete set of measurements that could ever be used by a passive 3D vision algorithm is the plenoptic function or light-field. We give a concise characterization of when the light-field of a Lambertian scene uniquely determines its shape and, conversely, when the shape is inherently ambiguous. In particular, we show that stereo computed from the light-field is ambiguous if and only if the scene is radiating light of a constant intensity (and color, etc.) over an extended region.
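The ambiguity result above can be made concrete with a toy 1-D rendering experiment. This is an illustrative sketch under assumed pinhole geometry, not the paper's proof: with constant radiance, every camera sees the same image regardless of the hypothesized depth, so the light-field cannot pin the shape down; with texture, depth hypotheses predict different images.

```python
import numpy as np

def render_1d(texture, depth, baseline, focal=1.0, n_pix=32):
    """Render a 1-D fronto-parallel Lambertian surface at `depth` into a
    pinhole camera shifted by `baseline`: pixel u sees the surface point
    at x = depth * u / focal + baseline."""
    u = np.linspace(-1.0, 1.0, n_pix)
    return texture(depth * u / focal + baseline)

flat = lambda x: np.ones_like(x)      # constant radiance over the surface
stripes = lambda x: np.sin(5.0 * x)   # textured surface

# Constant radiance: depths 1 and 2 produce identical images in both cameras,
# so the light-field is ambiguous about the shape.
for cam in (0.0, 0.5):
    print(np.allclose(render_1d(flat, 1.0, cam), render_1d(flat, 2.0, cam)))  # True

# With texture, the two depth hypotheses predict different images.
print(np.allclose(render_1d(stripes, 1.0, 0.5), render_1d(stripes, 2.0, 0.5)))  # False
```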
4.
Vision and navigation for the Carnegie-Mellon Navlab   (cited 7 times: 0 self-citations, 7 by others)
A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools. Results are described for vision and navigation tests using a mobile testbed whose perception and navigation capabilities are based on two types of vision algorithms: color vision for road following, and 3-D vision for obstacle detection and avoidance. The perception modules are integrated into a system that allows the vehicle to drive continuously in an actual outdoor environment. The resulting system is able to navigate continuously on roads while avoiding obstacles.
5.
Digitally recording dynamic events, such as sporting events, for experiencing in a spatio-temporally distant and arbitrary setting requires 4D capture: three dimensions for their geometry and appearance over the fourth dimension of time. Today's computer vision techniques make 4D capture possible. The virtualized reality system serves as an example of the general problem of digitizing dynamic events. In this article, we present the virtualized reality system's details from a historical perspective.
6.
In this paper, we describe the explicit application of articulation constraints for estimating the motion of a system of articulated planes. We relate articulations to the relative homography between planes and show that these articulations translate into linearized equality constraints on a linear least-squares system, which can be solved efficiently using a Karush-Kuhn-Tucker system. The articulation constraints can be applied for both gradient-based and feature-based motion estimation algorithms and to illustrate this, we describe a gradient-based motion estimation algorithm for an affine camera and a feature-based motion estimation algorithm for a projective camera that explicitly enforces articulation constraints. We show that explicit application of articulation constraints leads to numerically stable estimates of motion. The simultaneous computation of motion estimates for all of the articulated planes in a scene allows us to handle scene areas where there is limited texture information and areas that leave the field of view. Our results demonstrate the wide applicability of the algorithm in a variety of challenging real-world cases such as human body tracking, motion estimation of rigid, piecewise planar scenes, and motion estimation of triangulated meshes.
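The core numerical machinery mentioned above, solving a linear least-squares problem with linearized equality constraints via a Karush-Kuhn-Tucker (KKT) system, can be sketched generically. This is a minimal sketch of the standard KKT construction, not the paper's motion-estimation code; the toy matrices are assumptions.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Solve min ||Ax - b||^2 subject to Cx = d via the KKT system.

    Stationarity of the Lagrangian gives the block linear system
        [ A^T A  C^T ] [ x   ]   [ A^T b ]
        [ C      0   ] [ lam ] = [ d     ]
    which is solved directly for the primal variables x."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

# Toy problem: least squares with the linear constraint x[0] + x[1] = 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)
C = np.array([[1.0, 1.0, 0.0]])
d = np.array([1.0])
x = constrained_lstsq(A, b, C, d)
print(x[0] + x[1])  # constraint holds: 1.0 up to round-off
```

In the paper's setting, the rows of C would come from linearizing the articulation constraints between plane homographies; here they are a stand-in.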
7.
We present an appearance-based virtual view generation method that allows viewers to fly through a real dynamic scene. The scene is captured by multiple synchronized cameras. Arbitrary views are generated by interpolating two original camera views near the given viewpoint. The quality of the generated synthetic view is determined by the precision, consistency and density of correspondences between the two images. Most previous work that uses interpolation extracts the correspondences from these two images alone. However, not only is it difficult to do so reliably (the task requires a good stereo algorithm), but the two images alone sometimes do not have enough information, due to problems such as occlusion. Instead, we take advantage of the fact that we have many views, from which we can extract much more reliable and comprehensive 3D geometry of the scene as a 3D model. Dense and precise correspondences between the two images, to be used for interpolation, are obtained using this constructed 3D model.
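The correspondence-from-model idea above can be sketched in a few lines: project the reconstructed 3-D points into both real views to get exact correspondences, then blend. This is a simplified geometric sketch with assumed toy cameras, not the paper's rendering pipeline.

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3-D points X (4 x N) with a 3 x 4 camera matrix P."""
    x = P @ X
    return x[:2] / x[2]

def interpolate_view(P1, P2, X, alpha):
    """Dense correspondences come from projecting the reconstructed 3-D
    model into both real camera views; the virtual view places each point
    at a linear blend of its two projections (alpha in [0, 1])."""
    return (1 - alpha) * project(P1, X) + alpha * project(P2, X)

# Two toy cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([[0.0, 1.0],
              [0.0, 1.0],
              [4.0, 5.0],
              [1.0, 1.0]])  # two homogeneous scene points
mid = interpolate_view(P1, P2, X, 0.5)  # point positions for the halfway view
```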
8.
The authors present experimental results from an array of cells, each of which contains a photodiode and the analog signal-processing circuitry needed for light-stripe range finding. Prototype circuits were fabricated through MOSIS in a 2-μm CMOS p-well, double-metal, double-poly process. This design builds on ideas developed for ICs that integrate signal-processing circuitry with photosensors. In the case of light-stripe range finding, the increase in cell complexity from sensing only to sensing and processing makes a modification of the operational principle of range finding practical, which in turn results in a dramatic improvement in performance. The IC, an array of photosensor and analog signal-processor cells, acquires 1000 frames of light-stripe range data per second, two orders of magnitude faster than conventional light-stripe range-finding methods. The highly parallel range-finding algorithm requires that the output of each photosensor site be continuously monitored. Prototype high-speed range-finding systems have been built using a 5×5 array and a 28×32 array of these sensing elements.
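The modified operating principle, each cell recording only the instant the stripe's peak crosses it, reduces range recovery to per-cell triangulation. The sketch below uses an assumed simplified geometry (camera at the origin, projector offset by a baseline, constant-rate angular sweep) and is not the paper's circuit or calibration model.

```python
import numpy as np

def stripe_depth(peak_time, sweep_rate, baseline, focal, u):
    """Each cell records only the time its photocurrent peaks; the stripe
    plane's angle at that instant, combined with the cell's pixel
    coordinate u, gives depth by ray/plane triangulation."""
    theta = sweep_rate * peak_time  # stripe angle when it crossed this cell
    return baseline / (u / focal + np.tan(theta))

# Round trip under the assumed geometry: synthesize the peak time for a
# known depth, then recover that depth from the timestamp alone.
baseline, focal, u, sweep_rate = 0.1, 1.0, 0.2, 1.0
z_true = 0.25
t_peak = np.arctan(baseline / z_true - u / focal) / sweep_rate
print(stripe_depth(t_peak, sweep_rate, baseline, focal, u))  # ~0.25
```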
9.
A Unified Gradient-Based Approach for Combining ASM into AAM   (cited 2 times: 0 self-citations, 2 by others)
The Active Appearance Model (AAM) framework is a very useful method that can fit the shape and appearance model to the input image for various image analysis and synthesis problems. However, since the goal of the AAM fitting algorithm is to minimize the residual error between the model appearance and the input image, it often fails to converge accurately to the landmark points of the input image. To alleviate this weakness, we have combined Active Shape Models (ASM) into AAMs, in which ASMs try to find correct landmark points using the local profile model. Since the original objective function of the ASM search is not appropriate for combining these methods, we derive a gradient-based iterative method by modifying the objective function of the ASM search. Then, we propose a new fitting method that combines the objective functions of both ASM and AAM into a single objective function in a gradient-based optimization framework. Experimental results show that the proposed fitting method reduces the average fitting error when compared with existing fitting methods such as ASM, AAM, and Texture-Constrained ASM (TC-ASM), and significantly improves the performance of facial expression recognition.
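The structure of the combined fit, one gradient descent on a single weighted objective rather than alternating ASM and AAM searches, can be sketched abstractly. The quadratic stand-ins below are assumptions for illustration; the paper's actual residuals are the AAM appearance error and a modified ASM profile error.

```python
import numpy as np

def combined_fit(p0, grad_aam, grad_asm, w=0.5, lr=0.1, iters=200):
    """Gradient descent on the single objective E(p) = E_AAM(p) + w * E_ASM(p):
    the appearance residual and the (modified, differentiable) shape-profile
    residual contribute additively to one update direction."""
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(iters):
        p -= lr * (grad_aam(p) + w * grad_asm(p))
    return p

# Toy stand-ins: quadratic residuals pulling the parameter toward
# a = 0 (appearance term) and b = 2 (shape term).
grad_aam = lambda p: 2.0 * (p - 0.0)
grad_asm = lambda p: 2.0 * (p - 2.0)
p = combined_fit([5.0], grad_aam, grad_asm, w=0.5)
print(p)  # converges to the weighted compromise (a + w*b)/(1 + w) = 2/3
```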
10.
A compiler optimization is sound if the optimized program that it produces is semantically equivalent to the input program. The proofs of semantic equivalence are usually tedious. To reduce the efforts required, we identify a set of common transformation primitives that can be composed sequentially to obtain specifications of optimizing transformations. We also identify the conditions under which the transformation primitives preserve semantics and prove their sufficiency. Consequently, proving the soundness of an optimization reduces to showing that the soundness conditions of the underlying transformation primitives are satisfied. The program analysis required for optimization is defined over the input program, whereas the soundness conditions of a transformation primitive need to be shown on the version of the program on which it is applied. We express both in a temporal logic. We also develop a logic called temporal transformation logic to correlate temporal properties over a program (seen as a Kripke structure) and its transformation. An interesting possibility created by this approach is a novel scheme for validating optimizer implementations. An optimizer can be instrumented to generate a trace of its transformations in terms of the transformation primitives. Conformance of the trace with the optimizer can be checked through simulation. If the soundness conditions of the underlying primitives are satisfied by the trace, then it preserves semantics.
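The trace-validation scheme above can be sketched as a replay loop: each primitive's soundness condition is checked on the exact program version it was applied to. The toy IR and the two primitives below are illustrative assumptions, not the paper's primitive set or logic.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Primitive:
    """A transformation primitive: a soundness condition plus the rewrite."""
    name: str
    condition: Callable  # checked on the program version it is applied to
    apply: Callable      # the transformation itself

def validate_trace(program, trace):
    """Replay the optimizer's trace. If every primitive's soundness
    condition holds on the version it transformed, the composed
    optimization preserves semantics."""
    for prim in trace:
        if not prim.condition(program):
            return False, program  # unsound step found
        program = prim.apply(program)
    return True, program

# Toy IR: a list of (dest, value) assignments, where a value is either a
# constant or the name of an earlier destination.
prog = [("t", 5), ("r", "t")]

# Copy propagation of t is sound here only if t currently holds a constant.
copy_prop = Primitive(
    "copy-prop",
    condition=lambda p: isinstance(dict(p)["t"], int),
    apply=lambda p: [(d, dict(p)["t"] if v == "t" else v) for d, v in p],
)
# Dead-code elimination of t is sound only if no remaining use reads t.
dce = Primitive(
    "dce",
    condition=lambda p: all(v != "t" for _, v in p),
    apply=lambda p: [(d, v) for d, v in p if d != "t"],
)

ok, optimized = validate_trace(prog, [copy_prop, dce])
print(ok, optimized)  # True [('r', 5)]
```

Note that running `dce` first would fail its condition (the assignment to `r` still reads `t`), which is exactly the kind of misordered step the replay check catches.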