Similar Documents
20 similar documents found (search time: 156 ms)
1.
《数码摄影》2012,(11):108-109
Our New Products column in the previous issue covered this camera, the industry's first full-frame single-lens translucent (SLT) digital camera. Beyond its full-frame Exmor CMOS sensor with 24.3 effective megapixels, the α99's most distinctive features are its dual autofocus system and its video capabilities. Sony's SLT family keeps growing, from amateur and semi-professional bodies with APS-C sensors to this new full-frame flagship, the α99.

2.
《数码摄影》2013,(10):162-165
Not long ago, Canon released the landmark APS-C digital SLR EOS 70D, whose biggest highlight is the newly introduced Dual Pixel CMOS AF technology. Autofocus (AF) is undoubtedly one of the most important functions of a modern camera. As early as the 1960s and 1970s, Nikon and Canon had built autofocus prototypes, and in 1977 the world's first autofocus camera, the Konica C35 AF, was born. AF technology has since made great strides in both speed and accuracy. Here we survey the focusing technologies used in today's digital cameras. Contrast-detection AF: common in consumer compact cameras and early mirrorless models, it focuses by continuously measuring the contrast of the image captured by the sensor. This approach is relatively slow, but it is cheap and needs no separate AF module.
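As a toy illustration of contrast detection (a hedged sketch, not any camera vendor's actual implementation; all names here are mine), a gradient-based sharpness score drops as synthetic defocus blur grows, and a contrast-detection AF loop simply seeks the lens position that maximizes such a score:

```python
import numpy as np

def focus_score(img):
    """Contrast-detection metric: variance of a Laplacian response.
    Sharp images retain high-frequency content, so the score is higher."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def gaussian_blur(img, sigma):
    """Separable Gaussian blur as a toy model of defocus."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                      # synthetic textured scene
sharp = focus_score(scene)
blurred = focus_score(gaussian_blur(scene, 2.0))  # defocused: lower score
```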

3.
Depth Recovery from a Single Natural Scene Image
Depth from defocus (DFD) is a widely used approach to recovering scene depth. Traditional DFD algorithms require several defocused images to be captured, which greatly restricts their practical use. This paper proposes a depth-recovery algorithm for a single defocused image based on local blur estimation. Under the assumption of locally consistent blur, a simple and effective two-step procedure recovers the depth of the input image: 1) a sparse blur map at edge locations is obtained from the gradient ratio between the input defocused image and a version re-blurred with a known Gaussian kernel; 2) the edge blur values are propagated to the whole image, recovering the complete relative depth map. To obtain accurate scene depth, geometric constraints and a sky-region extraction strategy are added to remove ambiguities caused by color, texture and the focal plane. Comparative experiments on various types of images show that the algorithm recovers depth while effectively suppressing these ambiguities.
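To make the gradient-ratio step concrete, here is a hedged one-dimensional sketch (the propagation step and the geometric/sky constraints are omitted; the sigma values are illustrative). Re-blurring with a known Gaussian adds blur scales in quadrature, so the ratio of edge-gradient magnitudes reveals the unknown blur:

```python
import numpy as np
from math import erf, sqrt

def blurred_edge(x, sigma):
    """Ideal step edge convolved with a Gaussian of scale sigma."""
    return np.array([0.5 * (1 + erf(xi / (sigma * sqrt(2)))) for xi in x])

x = np.linspace(-20, 20, 4001)
sigma_true, sigma0 = 1.5, 1.0        # unknown edge blur, known re-blur kernel
edge = blurred_edge(x, sigma_true)
# Re-blurring a Gaussian-blurred edge adds the scales in quadrature.
reblur = blurred_edge(x, sqrt(sigma_true**2 + sigma0**2))

g1 = np.abs(np.gradient(edge, x)).max()     # gradient magnitude at the edge
g2 = np.abs(np.gradient(reblur, x)).max()
R = g1 / g2                                 # = sqrt(sigma^2 + sigma0^2) / sigma
sigma_est = sigma0 / sqrt(R**2 - 1)         # recovered blur, close to 1.5
```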

4.
杨晓洁  杨字红  王慈 《计算机工程》2011,37(18):211-213
A restoration algorithm for multi-focus defocused images is proposed. Based on the point spread function (PSF) degradation model of imaging, a relatively sharp image of a scene is obtained from any two images of that scene (including foreground and background) focused at different depths. At the same time, the correspondence between the PSF blur radius and object depth is used to estimate the depth of objects. Experimental results demonstrate the effectiveness of the algorithm.

5.
A Fast Automatic Registration Method for 3D Scan Data
杨棽  齐越  沈旭昆  赵沁平 《软件学报》2010,21(6):1438-1450
The automatic registration of two and of multiple range images is studied. For registering two range images, 2D texture images are used to assist the registration. The process is as follows. First, texture images are extracted from the scan data; in particular, for scan data containing no texture image, a method is proposed to generate one directly from the range image. Next, interest pixels are extracted from the texture images using SIFT (scale-invariant feature transform) features, and a candidate set of matching pixel pairs is found by pre-filtering and cross-checking the interest pixels. Then the RANSAC (random sample consensus) algorithm, constrained by 3D geometric information, selects the correct matching pixel pairs and the corresponding matching vertex pairs from the candidate set, from which the rigid transformation between the two range images is computed. Finally, an improved ICP (iterative closest point) algorithm refines the result. For registering multiple range images, a fast method of building the model graph is proposed that avoids registering every pair of range images and thus speeds up registration. The method has been successfully applied to realistic 3D modelling of a variety of cultural relics.
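The RANSAC stage of such a pipeline can be sketched as follows (a minimal stand-in under stated assumptions: Kabsch/SVD rigid fitting inside a RANSAC loop over synthetic 3D correspondences; SIFT matching and ICP refinement are omitted, and all names are illustrative):

```python
import numpy as np

def rigid_from_pairs(P, Q):
    """Least-squares rigid transform (R, t) mapping P -> Q (Kabsch/SVD)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_rigid(P, Q, iters=200, tol=0.05, seed=1):
    """Fit on minimal 3-point samples; keep the transform with most inliers."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        R, t = rigid_from_pairs(P[idx], Q[idx])
        inliers = np.linalg.norm((P @ R.T + t) - Q, axis=1) < tol
        if inliers.sum() > best[2]:
            best = (R, t, int(inliers.sum()))
    return best

# Synthetic test: rotate/translate 50 points, corrupt 10 correspondences.
rng = np.random.default_rng(0)
P = rng.random((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 0.1])
Q = P @ R_true.T + t_true
Q[:10] += rng.random((10, 3))            # outlier matches
R, t, n_inliers = ransac_rigid(P, Q)     # recovers R_true, t_true
```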

6.
Recovering 3D depth information from 2D defocused images is an important research direction in computer vision. However, capturing images with different degrees of defocus normally requires changing camera parameters, such as the focal length, image distance or aperture. In applications that require observation at high magnification, the depth of field of the high-precision cameras used is extremely small, and any change of camera parameters can have destructive consequences; this greatly limits the applicability of many existing depth-from-defocus algorithms. A new method is therefore proposed that recovers global scene depth through changes in object distance. First, two images with different degrees of defocus are captured by varying the object distance; then a blurred-imaging model is built from the relative blur and the heat-diffusion equation; finally, the depth computation is cast as a dynamic optimization problem whose solution yields the global depth map. The method requires no change of camera parameters and no computation of a sharp reference image, and is simple to operate. Simulation experiments and error analysis show that it recovers depth with high accuracy, making it suitable for micro/nano manipulation, high-precision rapid inspection and other settings that are sensitive to changes in camera parameters.

7.
To improve the accuracy of single-image depth prediction with deep neural networks, a self-supervised convolutional neural network method for monocular depth estimation is proposed. First, the depth-estimation network is improved by introducing residual blocks, dense connections and skip connections into the encoder-decoder structure, which improves learning efficiency and performance and speeds up convergence. Second, a more effective loss function is designed by combining gray-level similarity, disparity smoothness and left-right disparity consistency terms, which reduces the influence of illumination, suppresses spurious depth discontinuities and enforces consistency between the left and right disparities, improving the robustness of depth estimation. Finally, stereo image pairs are used as training data, so no ground-truth depth supervision is needed and depth is estimated end to end from a single image. Experiments on the KITTI and Cityscapes datasets under the TensorFlow framework show that, compared with current mainstream methods, the method achieves a clear improvement in depth-prediction accuracy.
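Two of the loss terms named above can be sketched in isolation (hedged, dependency-free stand-ins, not the paper's exact formulation): edge-aware disparity smoothness, and left-right consistency via a crude nearest-neighbour warp:

```python
import numpy as np

def smoothness_loss(disp, img):
    """Edge-aware smoothness: penalize disparity gradients, but down-weight
    the penalty where the image itself has strong gradients (true edges)."""
    dx_d = np.abs(np.diff(disp, axis=1))
    dy_d = np.abs(np.diff(disp, axis=0))
    wx = np.exp(-np.abs(np.diff(img, axis=1)))
    wy = np.exp(-np.abs(np.diff(img, axis=0)))
    return (dx_d * wx).mean() + (dy_d * wy).mean()

def lr_consistency_loss(disp_l, disp_r):
    """Left disparity should match the right disparity sampled at the
    disparity-shifted column; a nearest-neighbour warp keeps this simple."""
    h, w = disp_l.shape
    cols = np.clip(np.arange(w)[None, :] - np.round(disp_l).astype(int), 0, w - 1)
    return np.abs(disp_l - np.take_along_axis(disp_r, cols, axis=1)).mean()

disp = np.full((4, 8), 3.0)             # constant disparity field
img = np.tile(np.arange(8.0), (4, 1))
s = smoothness_loss(disp, img)          # 0: no disparity gradients
c = lr_consistency_loss(disp, disp)     # 0: left and right views agree
```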

8.
Strip Texture Mapping
李奎宇  王文成  吴恩华 《软件学报》2004,15(Z1):179-189
A new image-based rendering method is presented that exploits graphics hardware for acceleration, avoids the troublesome hole-filling computation during rendering, and reproduces the 3D relief details and parallax of object surfaces with high quality. The method first generates a strip texture for each pixel of the source depth image: according to the depth values of a pixel and its neighbours, multiple points with different depths are interpolated along the depth direction, which together form that pixel's strip texture. At rendering time, these strip textures are processed directly by hardware texture mapping. Since the collection of per-pixel strip textures forms an approximately continuous 3D representation of the source scene, the ability of the source depth image to express 3D model details is greatly enhanced, and no hole-filling computation is needed during rendering. Compared with existing methods of the same kind, the new method requires less memory, renders faster and achieves high image quality.

9.
To address the long computation time and low accuracy of existing light-field depth-estimation methods, an encoder-decoder light-field depth-estimation method that fuses light-field structural features is proposed. Based on a convolutional neural network and computed end to end, it obtains scene disparity from a single input light-field image, with far less computation than traditional methods and greatly reduced runtime. To improve accuracy, the network takes a multi-directional epipolar plane image volume (EPI-volume) of the light-field image as input: a multi-path encoding module first extracts features from the input, and an encoder-decoder with skip connections then aggregates them, so that per-pixel disparity estimation can fuse contextual information from the neighbourhood of the target pixel. In addition, convolutional blocks of different depths extract structural features of the scene from the central view, and these features are injected into the corresponding skip connections, providing extra edge cues for disparity prediction and further improving accuracy. Experiments on the HCI-4D light-field benchmark show that the proposed method reduces the bad-pixel rate (BadPix) by 31.2% and the mean squared error (MSE) by 54.6% relative to the compared methods, with an average depth-estimation time of 1.2 s per light-field image, far faster than the compared methods.

10.
林忠  黄陈蓉  卢阿丽 《计算机应用》2015,35(10):2969-2973
To alleviate the reduced accuracy and increased error of hill-climbing search in scenes where the focus-evaluation function is not unimodal, an autofocus method based on a qualitative measure of defocus difference is designed. First, a spatial-domain convolution/deconvolution transform computes the defocus difference at corresponding points of two images taken at different focusing positions; next, a voting strategy yields a qualitative measure of the defocus difference between the two images; then this measure determines the focus-search direction; finally, a variable-step strategy progressively narrows the search range and step size until the in-focus position is found at a step size of 1. Comparisons against two typical hill-climbing autofocus methods based on focus-evaluation functions, on three image sequences captured with an 18x optical-zoom surveillance camera, show that the proposed method retains the speed and short travel of hill-climbing search while clearly improving accuracy and reducing error in scenes where the focus-evaluation function is poorly unimodal, effectively overcoming the influence of local extrema on hill-climbing search.
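The variable-step search can be sketched generically (a hedged skeleton of shrinking-step hill climbing; the paper's defocus-difference voting is replaced here by direct score comparisons, and all names are mine):

```python
def autofocus(score, pos, lo, hi, step=64):
    """Variable-step hill climb: probe one step in each direction, move
    toward the higher focus score, then halve the step until it reaches 1."""
    while step >= 1:
        nxt = max(lo, min(hi, pos + step))
        prv = max(lo, min(hi, pos - step))
        if score(nxt) >= score(pos) and score(nxt) >= score(prv):
            pos = nxt
        elif score(prv) > score(pos):
            pos = prv
        step //= 2
    return pos

# Unimodal toy focus curve peaking at position 300.
peak = autofocus(lambda p: -(p - 300) ** 2, pos=0, lo=0, hi=1023, step=512)
```

This converges only when the score is unimodal over the search range, which is exactly the assumption the paper's qualitative defocus measure is designed to relax.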

11.
A new sense for depth of field
This paper examines a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems. Previously, autofocus schemes have used depth of field to measure depth by searching for the lens setting that gives the best focus, repeating this search separately for each image point. This search is unnecessary, for there is a smooth gradient of focus as a function of depth. By measuring the amount of defocus, therefore, we can estimate depth simultaneously at all points, using only one or two images. It is proved that this source of information can be used to make reliable depth maps of useful accuracy with relatively minimal computation. Experiments with realistic imagery show that measurement of these optical gradients can provide depth information roughly comparable to stereo disparity or motion parallax, while avoiding image-to-image matching problems.
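The "focal gradient" follows from thin-lens geometry: the blur-circle radius is zero at the focused depth and grows smoothly as an object moves away from it. A small sketch under ideal thin-lens assumptions (parameter names are illustrative):

```python
def blur_radius(depth, focal, aperture, focus_dist):
    """Blur-circle radius on the sensor for a thin lens of focal length
    `focal` and aperture diameter `aperture`, focused at `focus_dist`,
    imaging a point at `depth` (all in the same length unit)."""
    v_sensor = 1 / (1 / focal - 1 / focus_dist)  # sensor plane position
    v_object = 1 / (1 / focal - 1 / depth)       # where `depth` would focus
    return 0.5 * aperture * abs(v_object - v_sensor) / v_object

# Blur vanishes at the focused depth and grows away from it, which is the
# smooth focal gradient that makes defocus usable as a depth cue.
r_focused = blur_radius(2.0, 0.05, 0.025, 2.0)
r_near = blur_radius(1.0, 0.05, 0.025, 2.0)
r_far = blur_radius(4.0, 0.05, 0.025, 2.0)
```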

12.
In recent years, stereo matching based on dynamic programming (DP) has been widely studied and various tree structures have been proposed to improve matching accuracy. However, previous DP-based algorithms do not incorporate all the smoothness functions determined by the edges between adjacent pixels in the image, which usually leads to lower matching accuracy. In this paper, we propose a novel stereo matching algorithm based on weighted dynamic programming on a single-direction four-connected (SDFC) tree. The SDFC tree is a new tree structure that includes all the edges in the image, so the disparity of a pixel can be affected by every edge in the image. In the SDFC tree, however, conventional DP-based algorithms let pixels far from the root node contribute more energy than nearby pixels, which decreases matching accuracy. A weighted dynamic programming approach is therefore proposed to optimize the energy function on the new tree structure, treating all pixels in the SDFC tree equivalently. Running dynamic programming separately on the SDFC tree of every pixel in the image is very time-consuming, so a fast DP optimization method is designed for the SDFC tree, which reduces the computational complexity of the proposed weighted DP algorithm to 12 times that of a conventional DP-based algorithm. Experiments show that our algorithm not only produces smooth and reasonable disparity maps close to state-of-the-art results, but can also be implemented quite efficiently. Performance evaluations on the Middlebury data set show that our method ranks first among DP-based stereo matching algorithms, even better than algorithms that apply segmentation techniques. Experimental results on an unmanned ground vehicle (UGV) test bed show that our algorithm obtains very good matching results in different outdoor conditions, even on asphalt roads, which are considered textureless. This illustrates the robustness of our algorithm.
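The core of any DP-based stereo matcher, tree-structured or not, is a Viterbi-style minimization of data cost plus a smoothness penalty along a path. A minimal single-scanline sketch (illustrative only, not the SDFC-tree algorithm itself):

```python
import numpy as np

def data_cost(left, right, x, d):
    """Absolute intensity difference for disparity d at column x."""
    return abs(left[x] - right[x - d]) if x - d >= 0 else 1e9

def scanline_dp(left, right, max_d=4, lam=1.0):
    """Viterbi-style DP along one scanline: minimize |L[x]-R[x-d]| plus a
    linear smoothness penalty lam*|d-d'| between neighbouring pixels."""
    w, D = len(left), max_d + 1
    cost = np.zeros((w, D))
    back = np.zeros((w, D), dtype=int)
    for d in range(D):
        cost[0, d] = data_cost(left, right, 0, d)
    for x in range(1, w):
        for d in range(D):
            trans = cost[x - 1] + lam * np.abs(np.arange(D) - d)
            back[x, d] = np.argmin(trans)
            cost[x, d] = data_cost(left, right, x, d) + trans[back[x, d]]
    disp = np.zeros(w, dtype=int)
    disp[-1] = int(np.argmin(cost[-1]))
    for x in range(w - 2, -1, -1):       # backtrace the optimal path
        disp[x] = back[x + 1, disp[x + 1]]
    return disp

right = np.array([1., 4., 2., 8., 5., 7., 3., 6.])
left = right.copy()
left[2:] = right[:-2]                     # true disparity 2 for x >= 2
disp = scanline_dp(left, right, max_d=3, lam=0.2)
```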

13.
In stereo vision, a pair of two-dimensional (2D) stereo images is given and the goal is to find the depth (disparity) of regions of the image relative to the background, so that the 3D structure of the scene can be reconstructed from the given pair of 2D stereo images. Using a Bayesian framework, we implemented the Forward-Backward algorithm to recover the disparity (depth) of a pair of stereo images. The results shown are very reasonable, but we note that there is room for improvement in the graph structure used.

14.
Generation of Stereoscopic Image Pairs
Acquiring a stereoscopic image pair of the same scene is a key problem in binocular stereoscopic imaging. A method is proposed for generating stereo pairs when the 3D scene has already been built. Based on the principles of binocular stereo vision, camera objects in 3DS MAX apply coordinate transforms and perspective-projection transforms to the objects in the scene, rendering the left-eye and right-eye views separately. Experimental results show that the positions of the two target cameras relative to the 3D model and the baseline length are the main factors affecting the stereo effect; changing the positions of the target cameras relative to the 3D model produces stereo pairs with positive or negative parallax, and the stereo effect is best when the ratio parameter of AB to CO is 0.05.

15.
Reducing the defocus blur that arises from the finite aperture size and short exposure time is an essential problem in computational photography. It is very challenging because the blur kernel is spatially varying and difficult to estimate by traditional methods. Following their great breakthroughs in low-level tasks, convolutional neural networks (CNNs) have been introduced to the defocus deblurring problem and have achieved significant progress. However, previous methods apply the same learned kernel to different regions of the defocus-blurred image, making it difficult to handle nonuniformly blurred images. To this end, this study designs a novel blur-aware multi-branch network (BaMBNet), in which different regions are treated differently. In particular, we estimate the blur amounts of different regions from the internal geometric constraint of the dual-pixel (DP) data, which measures the defocus disparity between the left and right views. Based on the assumption that image regions with different blur amounts pose different deblurring difficulties, we leverage networks with different capacities to treat different image regions. Moreover, we introduce a meta-learning defocus mask generation algorithm to assign each pixel to a proper branch. In this way, we can expect to preserve the information of the clear regions well while recovering the missing details of the blurred regions. Both quantitative and qualitative experiments demonstrate that our BaMBNet outperforms the state-of-the-art (SOTA) methods. On the dual-pixel defocus deblurring (DPD)-blur dataset, the proposed BaMBNet achieves a 1.20 dB gain over the previous SOTA method in terms of peak signal-to-noise ratio (PSNR) while reducing learnable parameters by 85%. The code and dataset are available at https://github.com/junjun-jiang/BaMBNet.

16.
Depth Discontinuities by Pixel-to-Pixel Stereo
An algorithm to detect depth discontinuities from a stereo pair of images is presented. The algorithm matches individual pixels in corresponding scanline pairs, while allowing occluded pixels to remain unmatched, then propagates the information between scanlines by means of a fast postprocessor. The algorithm handles large untextured regions, uses a measure of pixel dissimilarity that is insensitive to image sampling, and prunes bad search nodes to increase the speed of dynamic programming. The computation is relatively fast, taking about 600 nanoseconds per pixel per disparity on a personal computer. Approximate disparity maps and precise depth discontinuities (along both horizontal and vertical boundaries) are shown for several stereo image pairs containing textured, untextured, fronto-parallel, and slanted objects in indoor and outdoor scenes.  相似文献   

17.
A Stereo Image Matching Method Based on a Pairwise Sequence Alignment Algorithm
Building on an analysis of existing stereo matching methods, a stereo image matching method based on a pairwise sequence alignment algorithm is proposed. The gray values of the pixels along corresponding epipolar lines in a stereo pair are treated as a pair of character sequences, which are matched with a dynamic-programming pairwise sequence alignment algorithm to obtain the stereo disparity. To verify the feasibility and applicability of the method, experiments are conducted on stereo image pairs of human faces. The results show that the method produces smooth, dense disparity maps, demonstrating that dynamic-programming sequence alignment can effectively solve the stereo matching problem and thus provides a practical and effective approach to stereo matching.
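The alignment step can be sketched with a classic Needleman–Wunsch recurrence (a hedged stand-in for the paper's pairwise alignment; costs and names are illustrative), where gaps model occluded pixels:

```python
import numpy as np

def align(seq_a, seq_b, gap=4):
    """Needleman-Wunsch alignment of two intensity rows; the substitution
    cost is the absolute gray-level difference, gaps model occlusions.
    Returns the matched index pairs (i, j); disparity at pixel i is j - i."""
    n, m = len(seq_a), len(seq_b)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = np.arange(n + 1) * gap
    dp[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = min(dp[i-1, j-1] + abs(seq_a[i-1] - seq_b[j-1]),
                           dp[i-1, j] + gap,      # gap in seq_b (occlusion)
                           dp[i, j-1] + gap)      # gap in seq_a
    i, j, pairs = n, m, []
    while i > 0 and j > 0:                        # backtrace matched columns
        if dp[i, j] == dp[i-1, j-1] + abs(seq_a[i-1] - seq_b[j-1]):
            pairs.append((i - 1, j - 1)); i, j = i - 1, j - 1
        elif dp[i, j] == dp[i-1, j] + gap:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]

pairs = align([10, 50, 90, 130], [10, 50, 90, 130])  # identical rows
```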

18.
Depth from defocus is an important topic in computer vision. The blur of a point in a defocused image varies with the depth of the object, so defocused images can be used to estimate depth. This approach avoids the correspondence-matching problems of stereo vision and motion vision and has good application prospects. A depth-estimation algorithm based on a defocus-image space is studied: defocused imaging is modelled as a heat-diffusion process, two defocused images are expanded into a defocus space by means of a deformation function, the deformation parameters are estimated, and the depth of the object is then recovered. Experiments validate the effectiveness of the algorithm.
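The heat-diffusion view of defocus can be demonstrated directly (a toy 1D sketch, not the paper's deformation-function construction): running the explicit heat equation spreads an impulse into a Gaussian-like kernel whose variance grows linearly with diffusion time (sigma^2 = 2t):

```python
import numpy as np

def diffuse(signal, t, dt=0.1):
    """Explicit finite differences for the heat equation u_t = u_xx on a
    unit grid (dt <= 0.5 for stability); diffusing to time t approximates
    Gaussian blur with sigma = sqrt(2 * t)."""
    u = signal.astype(float).copy()
    for _ in range(int(round(t / dt))):
        u[1:-1] += dt * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

x = np.arange(-20, 21)
impulse = (x == 0).astype(float)
u = diffuse(impulse, t=2.0)     # spreads into a kernel with variance 2*t = 4
mass = u.sum()                  # diffusion conserves total intensity
var = (x ** 2 * u).sum()        # variance grows linearly with time
```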

19.
We propose a 3D environment modelling method using multiple pairs of high-resolution spherical images. Spherical images of a scene are captured using a rotating line-scan camera. Reconstruction is based on stereo image pairs with a vertical displacement between camera views. A 3D mesh model for each pair of spherical images is reconstructed by stereo matching. For accurate surface reconstruction, we propose a PDE-based disparity estimation method which produces continuous depth fields with sharp depth discontinuities even in occluded and highly textured regions. A full environment model is constructed by fusing partial reconstructions from spherical stereo pairs at multiple widely spaced locations. To avoid camera calibration steps for all camera locations, we calculate 3D rigid transforms between capture points using feature matching and register all meshes into a unified coordinate system. Finally, a complete 3D model of the environment is generated by selecting the most reliable observations among overlapping surface measurements, considering surface visibility, orientation and distance from the camera. We analyse the characteristics and behaviour of errors for spherical stereo imaging. Performance of the proposed algorithm is evaluated against ground truth from the Middlebury stereo test bed and LIDAR scans. Results are also compared with conventional structure-from-motion algorithms. The final composite model is rendered from a wide range of viewpoints with high-quality textures.

20.
In stereo vision, disparity indirectly reflects the depth of objects, and disparity computation is the basis of depth computation. Most research on disparity computation targets binocular stereo vision, whereas the disparity distribution of bifocal monocular stereo vision differs from binocular disparity in that it radiates along the epipolar lines. A monocular disparity-computation method tailored to the characteristics of bifocal monocular stereo is proposed. In the initial disparity map, disparity points are classified as correctly matched points or mismatched points. Using the mean-shift algorithm, the disparities of mismatched points are re-estimated from the matched points and an image segmentation, finally yielding a dense and accurate disparity map. Experiments show that this method efficiently obtains scene disparity maps from bifocal stereo image pairs.
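The mean-shift re-estimation step can be sketched in one dimension (a generic Gaussian-kernel mean shift with illustrative names; the paper's use of segmentation cues is omitted):

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=1.0, iters=50):
    """Gaussian-kernel mean shift: repeatedly move the estimate toward the
    kernel-weighted mean of its neighbours until it converges at a mode."""
    x = float(start)
    for _ in range(iters):
        w = np.exp(-0.5 * ((points - x) / bandwidth) ** 2)
        x_new = float((w * points).sum() / w.sum())
        if abs(x_new - x) < 1e-6:
            break
        x = x_new
    return x

# Disparities of trusted neighbours cluster near 2.0; a distant cluster at
# 10.0 is ignored because its kernel weights vanish.
pts = np.array([1.9, 2.0, 2.1, 9.9, 10.0, 10.1])
mode = mean_shift_mode(pts, start=2.5, bandwidth=0.5)
```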


Copyright © Beijing Qinyun Technology Development Co., Ltd.    京ICP备09084417号-23

京公网安备 11010802026262号