Similar Documents
20 similar documents found (search time: 250 ms)
1.
To address the drawback that the number of decoding iterations for Turbo codes is not fixed in advance, a new improved algorithm is proposed that uses a distance measure between the additional information (the decoder's a priori information) from successive iterations as the criterion for terminating the iteration. Simulation results show that, while preserving decoding accuracy, the improved algorithm avoids a large amount of pointless computation and speeds up decoding, with the benefit being especially pronounced at high signal-to-noise ratios.
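As a rough illustration of the stopping rule described above (not the paper's exact criterion), the sketch below halts the Turbo iterations once the a priori information changes little between passes; decode_half_iteration is a hypothetical component-decoder callback, and the distance measure and tolerance are assumptions.

```python
import numpy as np

def decode_with_early_stop(llr_in, decode_half_iteration, max_iters=8, tol=1e-2):
    """Stop iterating once the a priori information has stabilized."""
    apriori = np.zeros_like(llr_in)
    for it in range(max_iters):
        extrinsic = decode_half_iteration(llr_in, apriori)   # hypothetical SISO decoder
        # distance between successive a priori vectors as the termination metric
        dist = np.linalg.norm(extrinsic - apriori) / (np.linalg.norm(apriori) + 1e-12)
        apriori = extrinsic
        if dist < tol:        # information has converged: skip the remaining passes
            break
    return np.sign(llr_in + apriori), it + 1   # hard decisions, iterations used
```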

2.
Block-matching and 3D filtering (BM3D) is one of the best-performing denoising algorithms available, but its high time complexity and its need for an accurate image noise-level parameter as input greatly limit its wide application. This paper therefore first adopts a grid-based block-matching strategy to obtain a fast BM3D (FBM3D) algorithm. It then proposes an iterative blind estimator of the image noise level, in which an SVM learner determines the initial value of the iteration and an image-quality measure decides when to terminate it. Experiments show that, compared with the original BM3D, the proposed algorithm clearly improves computational efficiency, perceptual quality and quantitative scores.
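The following sketch shows only one building block of that pipeline, under the assumption that a robust high-pass residual statistic is an acceptable stand-in for the paper's SVM-initialized iterative estimator: a blind estimate of the noise level that could replace the user-supplied sigma BM3D normally requires.

```python
import numpy as np
from scipy.ndimage import convolve

def estimate_sigma(img):
    """Blind, robust noise-level estimate from a high-pass residual (a sketch)."""
    hp = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)
    resid = convolve(img.astype(float), hp, mode="reflect")
    # median absolute deviation, rescaled by the kernel's L2 norm (here 6)
    return np.median(np.abs(resid)) / (0.6745 * np.linalg.norm(hp))
```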

3.
Dynamic time warping (DTW) is a widely used similarity measure for time series and plays a crucial role in data-mining tasks. To address the high time complexity and only moderate accuracy of existing DTW algorithms, an early-termination lower-bound algorithm for DTW (LB_ESDTW) is proposed. Early termination is introduced to improve execution efficiency, and this idea is then combined with a DTW lower-bound function to form LB_ESDTW, which improves measurement accuracy while retaining high runtime efficiency. Experimental results show that LB_ESDTW adapts well to the vast majority of time-series datasets and measures similarity well across different categories of time series.
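A minimal sketch of the two ingredients named above, assuming an LB_Keogh-style envelope bound and row-wise early abandoning; the exact bound and termination test in LB_ESDTW may differ.

```python
import numpy as np

def lb_keogh(q, c, r):
    """Cheap lower bound on DTW(q, c): if it already exceeds the best distance
    found so far, the full DTW computation can be skipped."""
    lb, n = 0.0, len(c)
    for i, qi in enumerate(q):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        u, l = c[lo:hi].max(), c[lo:hi].min()
        if qi > u:
            lb += (qi - u) ** 2
        elif qi < l:
            lb += (qi - l) ** 2
    return lb

def dtw_early_abandon(q, c, best_so_far):
    """Full DTW that abandons as soon as no warping path can beat best_so_far."""
    n, m = len(q), len(c)
    prev = np.full(m + 1, np.inf)
    prev[0] = 0.0
    for i in range(1, n + 1):
        cur = np.full(m + 1, np.inf)
        for j in range(1, m + 1):
            cost = (q[i - 1] - c[j - 1]) ** 2
            cur[j] = cost + min(prev[j], cur[j - 1], prev[j - 1])
        if cur[1:].min() > best_so_far:   # every partial path is already too expensive
            return np.inf
        prev = cur
    return prev[m]
```

In a nearest-neighbour search one would call lb_keogh first and run dtw_early_abandon only when the bound falls below the current best distance.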

4.
The traditional spatial masking filter leaves residual noise when an image is heavily contaminated, so an iterative spatial masking filter is proposed. The noisy image is filtered repeatedly to remove as much noise as possible, and, to counter the over-smoothing of useful information that appears as the iteration count grows, a least-squares criterion is used as the stopping condition to reach the optimal number of iterations. Experiments show that, compared with the traditional filter, the iterative version removes noise more thoroughly, and the least-squares criterion stops it at the optimal iteration count, avoiding over-filtering.
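A minimal sketch of the repeat-and-stop idea, assuming a 3x3 median filter as the spatial filter and a simple mean-squared-change threshold in place of the paper's least-squares criterion.

```python
import numpy as np
from scipy.ndimage import median_filter

def iterative_spatial_filter(noisy, max_iters=10, tol=1e-4):
    """Re-filter until the least-squares change between passes is negligible."""
    cur = noisy.astype(float)
    for _ in range(max_iters):
        nxt = median_filter(cur, size=3)
        if np.mean((nxt - cur) ** 2) < tol:   # little noise left: stop before over-smoothing
            return nxt
        cur = nxt
    return cur
```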

5.
A new method for deciding whether such linear loop programs terminate is proposed. By analysing the state of the loop variables after each iteration, the method shows that whether the loop condition holds depends only on the variables' initial values and the iteration count, from which termination of the loop can be decided. With this method one can not only decide termination for this class of programs, but also, for programs that do not terminate on all inputs, give the input conditions under which they do terminate.
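A hedged worked example of the underlying observation, for the hypothetical linear loop "while x < N: x = a*x + b": the k-th iterate has a closed form, so whether the guard still holds depends only on the initial value and the iteration count.

```python
import sympy as sp

a, b, x0, k = sp.symbols('a b x0 k', positive=True)
# closed form of the k-th iterate of x <- a*x + b (assuming a != 1)
x_k = a**k * x0 + b * (a**k - 1) / (a - 1)
# e.g. a = 2, b = 1, x0 = 1 gives 2*2**k - 1, i.e. 2**(k+1) - 1,
# so the guard x_k < N fails as soon as 2**(k+1) - 1 >= N
print(sp.simplify(x_k.subs({a: 2, b: 1, x0: 1})))
```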

6.
Research on an optimized image stitching algorithm based on SIFT feature detection
For image stitching in complex scenes, when the proportion of mismatched points is large, traditional match-optimization algorithms are inefficient and the composite image is prone to ghosting. Building on SIFT, a new clustering method is used to pre-screen candidate feature-point pairs, after which RANSAC refines them precisely, reducing the number of iterations. An improved fusion method that combines a feature-point-based optimal seam line with multiresolution spline blending is also proposed to raise the quality of the fused image. Experiments show that with these two improvements the algorithm is considerably more efficient and effectively removes ghosting.
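A minimal OpenCV sketch of the match-then-prune stage only; Lowe's ratio test stands in here for the paper's clustering-based pre-screen, and the RANSAC threshold is an assumption.

```python
import cv2
import numpy as np

def estimate_homography(img1, img2):
    """SIFT matches, a coarse pre-filter, then RANSAC refinement."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # pre-screen
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)    # precise pruning
    return H, inlier_mask
```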

7.
The polyphase-matrix factorization coefficients of the integer lifting wavelet are not unique; there are many ways to choose them and the computation is heavy. First, a filter-iteration-count selection algorithm derives an optimized number of iterations from the input signal-to-noise ratio (SNR); then, using a nonlinear iterative comparison algorithm as the decision criterion together with this iteration count, optimized factorization coefficients that meet the parameter requirements are obtained. Because the iteration count is derived from the data under test, the optimized coefficients process that data well and satisfy the requirements for selecting polyphase factorization coefficients. The iterative comparison algorithm is convergent: by checking whether the filter's impulse and step responses stay within a preset error bound, it reduces the number of iterations and selects the optimized wavelet coefficients quickly and accurately. Experimental analysis shows that this fast selection algorithm meets the data-processing requirements, reduces the computation needed for the data under test, and improves processing efficiency.

8.
Collaborative filtering plays an important role in recommender systems, but it suffers from low execution efficiency and ranking precision. The alternating least squares (ALS) algorithm can be parallelized and thus improves efficiency, yet it takes a long time to load data and to converge. The nonlinear conjugate gradient (NCG) method is therefore combined with ALS to form an ALS-NCG algorithm that accelerates ALS. The algorithm was evaluated in the Spark distributed data-processing environment; experiments show that ALS-NCG needs fewer iterations and less time than ALS to reach high-precision recommendation rankings.
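For reference, a minimal PySpark sketch of the plain ALS baseline that the paper accelerates; the NCG step is not part of Spark's API and is omitted, and the input file and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-baseline").getOrCreate()
# hypothetical ratings file with userId, movieId, rating columns
ratings = spark.read.csv("ratings.csv", header=True, inferSchema=True)
als = ALS(userCol="userId", itemCol="movieId", ratingCol="rating",
          rank=10, maxIter=10, regParam=0.1, coldStartStrategy="drop")
model = als.fit(ratings)   # the ALS-NCG variant would accelerate this step
```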

9.
邹青志, 黄山. 《计算机科学》, 2017, 44(3): 278-282
Mean Shift struggles to track fast-moving targets and requires many iterations and much time. A fast moving-target detection method based on Mean Shift is therefore proposed: it combines frame differencing with background information to detect the moving target quickly, and introduces a new similarity measure for a preliminary detection stage that rejects distractors and rapidly selects candidates meeting the criterion for Mean Shift matching, from which the best target is found. The method reduces the number of iterations and the runtime of the traditional approach, and tracking experiments show good results and improved robustness.
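A rough OpenCV sketch of the coarse-to-fine idea: frame differencing re-seeds the search window near the motion, so Mean Shift needs only a few iterations. The back-projection image is assumed to be computed elsewhere (e.g. from a colour histogram), and the thresholds are assumptions.

```python
import cv2
import numpy as np

def detect_and_track(prev_gray, cur_gray, back_proj, win):
    """Frame differencing for fast detection, then Mean Shift refinement."""
    diff = cv2.absdiff(cur_gray, prev_gray)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    ys, xs = np.where(motion > 0)
    if len(xs):                                   # re-seed the window at the motion blob
        x, y, w, h = win
        win = (int(xs.mean()) - w // 2, int(ys.mean()) - h // 2, w, h)
    crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, win = cv2.meanShift(back_proj, win, crit)  # few iterations needed after re-seeding
    return win
```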

10.
The data handled by support vector machines are almost always crisp values; when the training samples contain fuzzy information, a standard SVM is powerless. For classification problems whose inputs are fuzzy numbers, a fuzzy support vector machine with a defuzzification function (FSVM*) is therefore proposed. The algorithm uses the distance between fuzzy numbers as the defuzzification measure, constructs a defuzzification function that converts fuzzy values into crisp ones, and combines this function with the fuzzy SVM to classify the fuzzy data. Numerical results show that the algorithm is more effective than the FSVDD* algorithm proposed by Forghani.
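A hedged sketch of the defuzzify-then-classify idea only: each feature is a triangular fuzzy number (l, m, u), a centroid-style mapping stands in for the paper's distance-based defuzzification function, and a standard SVM classifies the resulting crisp values.

```python
import numpy as np
from sklearn.svm import SVC

def defuzzify(fuzzy_X):                      # shape (n_samples, n_features, 3)
    """Map each triangular fuzzy number (l, m, u) to a crisp value."""
    l, m, u = fuzzy_X[..., 0], fuzzy_X[..., 1], fuzzy_X[..., 2]
    return (l + 2 * m + u) / 4.0

rng = np.random.default_rng(0)
fuzzy_X = np.sort(rng.random((100, 5, 3)), axis=-1)   # toy data with l <= m <= u
y = rng.integers(0, 2, 100)
clf = SVC(kernel="rbf").fit(defuzzify(fuzzy_X), y)    # classify the crisp values
```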

11.
王峰, 蔡立志, 张娟. 《计算机应用研究》, 2021, 38(11): 3478-3483
Super-resolving a low-resolution blurry image often produces heavy artefacts and indistinct edges and textures. A dual-branch fusion, feedback iterative pyramid algorithm is proposed. First, separate branch modules extract the latent deblurring features and the super-resolution features of the low-resolution blurry image; then an adaptive fusion mechanism matches these two kinds of features so that the network pays more attention to blurred regions in both the deblurring and the super-resolution modules; next, an iterative pyramid reconstruction module progressively reconstructs the low-resolution blurry image into a sharp super-resolved image approaching the true distribution; finally, the reconstructed image is passed through a branch feedback module to generate a sharp low-resolution image, providing feedback supervision. Comparative experiments against existing algorithms on the GOPRO dataset show that the proposed algorithm produces super-resolved images with sharper texture detail.
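A minimal PyTorch sketch of the adaptive fusion step only, assuming a learned per-pixel gate between the deblurring-branch and super-resolution-branch features; the actual fusion mechanism in the paper may differ.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Gate two feature maps of equal shape with a learned per-pixel weight."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 3, padding=1),
                                  nn.Sigmoid())
    def forward(self, feat_deblur, feat_sr):
        g = self.gate(torch.cat([feat_deblur, feat_sr], dim=1))
        return g * feat_deblur + (1 - g) * feat_sr
```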

12.
Jun Xia, Yue Shi, Hanchun Yin. Displays, 2009, 30(1): 27-31
Motion blur on a liquid crystal display (LCD) was modeled as the original image convolved with a point spread function (PSF). An intensity-independent PSF was first derived by statistical approximation. Based on this PSF, a motion-adaptive deblurring filter was proposed to restore the original image. Simulations of motion-blurred and deblurred natural images are presented. The results indicate that the proposed deblurring filter significantly reduces the visible blurring artifact on LCDs; it has a simple one-dimensional structure and can be integrated into other video-processing algorithms, e.g. frame-rate doubling, to further improve image quality.
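A minimal sketch of the hold-type blur model underlying such a PSF, assuming horizontal motion: each row is convolved with a 1-D box whose length follows the motion speed in pixels per frame. A deblurring filter of the kind proposed would approximately invert this operation.

```python
import numpy as np
from scipy.ndimage import convolve1d

def simulate_lcd_motion_blur(img, speed_px_per_frame):
    """Hold-type LCD blur as convolution with a 1-D box PSF along the motion axis."""
    psf = np.ones(speed_px_per_frame) / speed_px_per_frame
    return convolve1d(img.astype(float), psf, axis=1, mode="nearest")
```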

13.
Dexterous legged robots can move on variable terrain at high speeds. The locomotion of these legged platforms on such terrain causes severe oscillations of the robot body, depending on the surface and locomotion speed. Camera sensors mounted on such platforms experience the same disturbances, resulting in motion blur. This is a particular corruption of the image that causes information loss, which in turn degrades or destroys important image features. Although motion blur is a significant problem for legged mobile robots, it is of more general interest since it is present in many other handheld/mobile camera applications. Deblurring methods exist in the literature to compensate for blur, but most proposed performance metrics focus on the visual quality of the compensated images. From the perspective of computer vision algorithms, feature detection performance is an essential factor that determines vision performance. In this study, we claim that existing image-quality-based metrics are not suitable for assessing the performance of deblurring algorithms when the output is used for computer vision in general and legged robotics in particular. For comparatively evaluating deblurring algorithms, we define a novel performance metric based on feature detection accuracy on sharp and deblurred images. We rank these algorithms according to the new metric as well as image-quality-based metrics from the literature, and experimentally demonstrate that existing metrics may not be good indicators of algorithm performance, and hence not good selection criteria for computer vision applications. Additionally, noting that a suitable data set to evaluate the effects of motion blur and its compensation for legged platforms is lacking in the literature, we develop a comprehensive multi-sensor data set for that purpose. The data set consists of monocular image sequences collected in synchronization with a low-cost MEMS gyroscope, an accurate fiber-optic gyroscope and externally measured ground-truth motion data. We make use of this data set for an extensive benchmarking of prominent motion deblurring methods from the literature in terms of both the existing metrics and the proposed feature-based metric.
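In the spirit of the proposed metric (not its exact definition), the sketch below scores a deblurred image by how many features of the sharp reference it recovers and matches; ORB is used here purely as an illustrative detector.

```python
import cv2

def feature_recovery_score(sharp, deblurred):
    """Fraction of reference features that find a match in the deblurred image."""
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(sharp, None)
    k2, d2 = orb.detectAndCompute(deblurred, None)
    if d1 is None or d2 is None:
        return 0.0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(matcher.match(d1, d2)) / max(len(k1), 1)
```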

14.
Deep-learning-based motion deblurring has attracted wide attention in recent years, yet deblurring a single defocused image has rarely been studied. To address single-image defocus blur specifically, a recurrent-neural-network-based defocus deblurring algorithm is proposed. Two residual networks are cascaded to perform defocus-map estimation and image deblurring respectively; then, so that the deep features of the defocus map and the sharp image can propagate better across stages and interact within a stage, LSTM (long short-term memory) recurrent layers are introduced into the residual networks; finally, the whole residual network is iterated several times, with network parameters shared across iterations. To train the network, a synthetic defocus dataset was built in which every defocused image has a corresponding sharp image and defocus map. Experiments show that, compared with the baselines, the algorithm has clear advantages in both subjective and objective image-quality evaluation, with sharper edges and clearer details in the restored results. On the real dual-pixel defocus-blur dataset DPD, it improves peak signal-to-noise ratio (PSNR) by 0.77 dB and structural similarity (SSIM) by 5.6% over DPDNet-Single, showing that the method handles real-scene defocus blur effectively.

15.
Image blur means that, during capture or transmission, factors such as lens or camera motion and lighting conditions cause an image to lose sharpness and detail, degrading its quality and usability. Image deblurring techniques were developed to remove this degradation: their goal is to build computational models of the blur and thereby automatically predict the sharp, deblurred image. Progress in image deblurring benefits other computer-vision tasks as well as everyday applications such as security surveillance. This survey: 1) reviews the development of the field and discusses and analyses influential algorithms for both blind and non-blind image deblurring; 2) discusses common causes of image blur and methods for assessing the quality of deblurred images; 3) explains the basic ideas of traditional and deep-learning-based methods and reviews the literature on non-blind and blind deblurring, where the deep-learning methods include those based on convolutional neural networks, recurrent neural networks, generative adversarial networks and Transformers; 4) briefly introduces the datasets commonly used in the field and compares the performance of some representative deblurring algorithms; 5) discusses the challenges facing image deblurring and offers an outlook on future research.

16.
Removing non-uniform blur caused by camera shake is troublesome because of its high computational cost. We analyze the efficiency bottlenecks of a non-uniform deblurring algorithm and propose an efficient optical-computation deblurring framework that implements the time-consuming and repeatedly required modules, i.e., non-uniform convolution and perspective warping, by light transportation. Specifically, the non-uniform convolution and perspective warping are optically computed by a hybrid system composed of an off-the-shelf projector and a camera mounted on a programmable motion platform. Benefitting from the high speed and parallelism of optical computation, our system has the potential to accelerate existing non-uniform motion deblurring algorithms significantly. To validate the effectiveness of the proposed approach, we also develop a prototype system that is incorporated into an iterative deblurring framework to effectively address the image blur of planar scenes caused by 3D camera rotation around the x-, y- and z-axes. The results show that the proposed approach is highly efficient while achieving promising accuracy, and that it generalizes well to more complex camera motions.

17.
Iterative hard-thresholding algorithms for compressed sensing (CS) need many iterations and long reconstruction times. A hybrid-gradient hard thresholding pursuit (HGHTP) algorithm is therefore proposed. At each iteration, the gradient and the conjugate gradient at the current iterate are computed, and the union of the support sets selected in the gradient and conjugate-gradient domains is taken as the candidate support for the next iteration, fully exploiting the information the conjugate gradient carries for support selection and thereby optimizing the support-selection strategy. Least squares is then used as a second screening of the candidate support, locating the correct support quickly and precisely and updating the sparse coefficients. Reconstruction experiments on one-dimensional random signals show that HGHTP needs fewer iterations than comparable iterative hard-thresholding algorithms while maintaining the reconstruction success rate. Experiments on two-dimensional images show that HGHTP outperforms comparable algorithms in reconstruction accuracy and noise robustness, and reduces reconstruction time by more than 32% at the same accuracy.
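A hedged NumPy sketch of the support-selection idea described above: the top-k entries in the gradient domain and in a conjugate-gradient-style direction are merged into a candidate support, which least squares then screens down to k entries. The conjugate update, step handling and stopping test are assumptions, not the paper's exact formulation.

```python
import numpy as np

def hghtp(A, y, k, max_iters=50, tol=1e-6):
    """Hybrid-gradient hard thresholding pursuit (rough sketch)."""
    n = A.shape[1]
    x = np.zeros(n)
    g_prev, d = None, None
    for _ in range(max_iters):
        g = A.T @ (y - A @ x)                           # gradient of 0.5*||y - Ax||^2
        if d is None:
            d = g
        else:
            beta = (g @ g) / (g_prev @ g_prev + 1e-12)  # Fletcher-Reeves-style update
            d = g + beta * d
        g_prev = g
        cand = np.union1d(np.argsort(-np.abs(g))[:k],   # top-k in gradient domain
                          np.argsort(-np.abs(d))[:k])   # top-k in conjugate domain
        x_cand = np.zeros(n)
        x_cand[cand] = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        supp = np.argsort(-np.abs(x_cand))[:k]          # hard-threshold to k entries
        x_new = np.zeros(n)
        x_new[supp] = np.linalg.lstsq(A[:, supp], y, rcond=None)[0]
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```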

18.
Deblurring Shaken and Partially Saturated Images
We address the problem of deblurring images degraded by camera shake blur and saturated (over-exposed) pixels. Saturated pixels violate the common assumption that the image-formation process is linear, and often cause ringing in deblurred outputs. We provide an analysis of ringing in general, and show that in order to prevent ringing, it is insufficient to simply discard saturated pixels. We show that even when saturated pixels are removed, ringing is caused by attempting to estimate the values of latent pixels that are brighter than the sensor’s maximum output. Estimating these latent pixels is likely to cause large errors, and these errors propagate across the rest of the image in the form of ringing. We propose a new deblurring algorithm that locates these error-prone bright pixels in the latent sharp image, and by decoupling them from the remainder of the latent image, greatly reduces ringing. In addition, we propose an approximate forward model for saturated images, which allows us to estimate these error-prone pixels separately without causing artefacts. Results are shown for non-blind deblurring of real photographs containing saturated regions, demonstrating improved deblurred image quality compared to previous work.
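A simplified illustration of one ingredient analysed above, assuming the input is normalized to [0, 1]: saturated pixels are excluded from the data-fit term of a Richardson-Lucy-style deconvolution so their clipped values cannot drive ringing. This is a stand-in, not the paper's forward model or algorithm.

```python
import numpy as np
from scipy.signal import fftconvolve

def masked_richardson_lucy(blurred, psf, n_iter=30, sat_level=0.98):
    """Richardson-Lucy deconvolution that ignores (near-)saturated observations."""
    mask = blurred < sat_level                      # True = trustworthy pixel
    est = np.maximum(blurred.astype(float), 1e-3)   # initialize with the blurred image
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = np.where(mask, blurred / np.maximum(conv, 1e-8), 1.0)
        est = np.clip(est * fftconvolve(ratio, psf_flip, mode="same"), 0, None)
    return est
```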

19.
Multi-scale convolutional neural networks are widely used for image deblurring, but setting the network parameters independently at each scale makes training difficult and leads to an oversized parameter count, reduced stability and an unconstrained solution space. To address these problems, an image deblurring algorithm is proposed that shares network weights across scales and incorporates DenseNet. The model uses an encoder-decoder structure improved with dense blocks, forming a distinctive encoder-decoder dense network that captures deep feature information as fully as possible. A cross-scale weight-sharing scheme lets the parameters be shared as the scales are iterated, which both markedly lowers the training difficulty and clearly improves stability. The trained model was evaluated on the large-scale motion deblurring dataset GOPRO and the blind deblurring dataset Kohler; the results show that it clearly outperforms existing methods both qualitatively and quantitatively, in subjective visual quality as well as in the measured scores. Compared with other recent methods in this area, it has a simpler network structure, fewer parameters and is easier to train. The algorithm performs well under both subjective and objective evaluation, handles a variety of blur kernels, is robust, and can be applied to motion deblurring.
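A minimal PyTorch sketch of the cross-scale weight-sharing idea only: one sub-network (for instance an encoder-decoder with dense blocks) is reused with a single set of parameters at every pyramid level, coarsest first, with a simple additive fusion standing in for the model's actual inter-scale connection.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedScaleDeblur(nn.Module):
    """Apply one shared sub-network at every scale of a coarse-to-fine pyramid."""
    def __init__(self, subnet, n_scales=3):
        super().__init__()
        self.subnet = subnet              # same parameters reused at all scales
        self.n_scales = n_scales
    def forward(self, blurred):
        out = None
        for s in reversed(range(self.n_scales)):          # coarsest scale first
            x = F.interpolate(blurred, scale_factor=1 / 2 ** s,
                              mode="bilinear", align_corners=False)
            if out is not None:                           # feed the coarser result upward
                x = x + F.interpolate(out, size=x.shape[-2:],
                                      mode="bilinear", align_corners=False)
            out = self.subnet(x)
        return out
```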

20.
We study the convergence of two iterative Shape from Shading methods: the methods of Strat and of Smith. We try to determine the spectral radius of the Jacobian matrix of each iteration at any possible fixed point. We show that the method of Strat diverges for any image containing at least four pixels forming a square, any reflectance map and any relative weight between the irradiance term and the integrability term. An example is provided in which divergence occurs after a large number of iterations, even though the reconstructed surface approaches the real surface after only a few iterations. We then show in a similar way that the method of Smith diverges for any image containing at least nine pixels forming a square, any reflectance map and any relative weight between the irradiance term and the smoothing term.
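A worked illustration of the convergence test behind this analysis: a fixed-point iteration x ← G(x) converges locally only if the spectral radius of the Jacobian of G at the fixed point is below 1. The Jacobian below is a toy example, not one taken from either method.

```python
import numpy as np

def spectral_radius(J):
    """Largest eigenvalue magnitude of a Jacobian matrix."""
    return np.max(np.abs(np.linalg.eigvals(J)))

J = np.array([[0.2, 0.9],
              [0.0, 1.1]])                 # toy Jacobian at a fixed point
print(spectral_radius(J) < 1)              # False: the iteration diverges locally
```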
