Similar Documents
1.
This paper proposes a maximum-probability-matching vector quantization coding algorithm. Each codeword in the codebook is given a counter that records how often the codeword is used while encoding images, and the codewords are sorted by this frequency. When quantizing a vector, candidate codewords are examined in descending order of frequency, so the most frequently used codewords are tried first. The algorithm can be combined with existing prediction methods to form a joint prediction-plus-maximum-probability-matching vector quantization coding algorithm. Experiments show that the joint algorithm is efficient: the best-matching codeword is hit with high probability within the first few candidate searches.
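A minimal NumPy sketch of the frequency-ordered candidate search described above; the `accept_thresh` early-exit rule is an illustrative stand-in for the paper's stopping criterion, and all names are hypothetical:

```python
import numpy as np

def encode_max_probability(vectors, codebook, accept_thresh=1e-2):
    """Sketch of maximum-probability-matching VQ encoding: each codeword
    carries a usage counter, candidates are examined in descending order
    of frequency, and the search exits early once a candidate's distortion
    falls below accept_thresh (an assumed early-exit rule)."""
    counts = np.zeros(len(codebook), dtype=np.int64)  # one counter per codeword
    indices = []
    for x in vectors:
        order = np.argsort(-counts, kind="stable")    # frequent codewords first
        best, best_d = order[0], np.inf
        for ci in order:
            d = float(np.sum((codebook[ci] - x) ** 2))
            if d < best_d:
                best, best_d = ci, d
                if best_d <= accept_thresh:           # early hit on a frequent codeword
                    break
        counts[best] += 1                             # update the frequency statistics
        indices.append(int(best))
    return indices
```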

2.
Existing improvements to the traditional LBG algorithm generally trade extra running time for quality. This paper proposes a new improvement for vector quantization codebook design: maximizing the separation between initial codewords. All initial codewords are drawn from the input training vectors, and each new initial codeword is chosen to lie as far as possible from those already selected. Experimental results show that the algorithm completely eliminates empty cells, avoids local optima more effectively, yields higher-quality codebooks, and converges quickly with low time cost. It outperforms both the traditional LBG algorithm and improved designs such as artificial-ant-colony-based codebook design in running time as well as codebook quality.
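The selection rule maps naturally onto a farthest-point sketch; the following assumes squared Euclidean distance and is not the paper's exact procedure:

```python
import numpy as np

def init_codebook_max_separation(training, k, rng=np.random.default_rng(0)):
    """Pick k initial codewords from the training set, each new codeword
    as far as possible from those already chosen (farthest-point rule)."""
    first = rng.integers(len(training))
    chosen = [int(first)]
    # distance of every training vector to its nearest chosen codeword
    d = np.sum((training - training[first]) ** 2, axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))          # training vector farthest from the set
        chosen.append(nxt)
        d = np.minimum(d, np.sum((training - training[nxt]) ** 2, axis=1))
    return training[chosen].copy()
```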

3.
This paper proposes an equal-sum block-expansion nearest-neighbor codeword search algorithm for vector quantization. The codebook is sorted by component sum and partitioned into blocks, and the sum of the codeword at (or near) the middle of each block is taken as the block's characteristic sum. During encoding, the block whose characteristic sum is closest to the input vector's sum is located and used as the initial matching block; the search then expands to neighboring blocks above and below to find the codeword closest to the input vector. The algorithm involves no complex arithmetic and is easy to implement with VLSI technology. Simulation results show that it is an effective codeword search algorithm.
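A sketch of the block partition and expansion step; the fixed `expand` width is an assumption (the paper expands adaptively), and mapping results back to original codeword indices uses the returned `order`:

```python
import numpy as np

def build_sum_blocks(codebook, block_size=8):
    """Sort the codebook by component sum and record each block's
    characteristic sum (the sum of its middle codeword)."""
    order = np.argsort(codebook.sum(axis=1))
    sorted_cb = codebook[order]
    sums = sorted_cb.sum(axis=1)
    starts = np.arange(0, len(sorted_cb), block_size)
    char_sums = np.array([sums[min(s + block_size // 2, len(sums) - 1)]
                          for s in starts])
    return sorted_cb, order, starts, char_sums

def search(x, sorted_cb, starts, char_sums, block_size=8, expand=1):
    """Locate the block whose characteristic sum is nearest to sum(x),
    then search that block plus `expand` neighboring blocks on each side."""
    b = int(np.argmin(np.abs(char_sums - x.sum())))
    lo = starts[max(b - expand, 0)]
    hi = min(starts[min(b + expand, len(starts) - 1)] + block_size, len(sorted_cb))
    cand = sorted_cb[lo:hi]
    d = np.sum((cand - x) ** 2, axis=1)
    return lo + int(np.argmin(d))        # index into the sum-sorted codebook
```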

4.
To address the complexity of vector quantization image coding, a novel fast nearest-neighbor codeword search algorithm is proposed. The algorithm first computes the Hadamard transform of each codeword and of the input vector, then selects as the initial match the codeword whose norm distance to the input vector is smallest. Using a multi-control-point triangle inequality and two effective codeword rejection criteria, non-matching codewords are eliminated, and the codeword that best matches the input vector is finally selected. Experimental results show that, compared with other algorithms, the new algorithm markedly reduces both search time and computational cost while preserving coding quality.
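A simplified Hadamard-domain sketch (requires SciPy; vector dimension must be a power of two). It uses a single first-coefficient rejection bound rather than the paper's multi-control-point triangle inequality and paired rejection criteria:

```python
import numpy as np
from scipy.linalg import hadamard

def fast_search_hadamard(x, codebook):
    """Order codewords by their first Hadamard coefficient (proportional to
    the component sum) and reject codewords that cannot beat the current
    best match, since ||x - c||^2 >= (h1(x) - h1(c))^2 / n."""
    n = codebook.shape[1]                 # must be a power of two
    H = hadamard(n)                       # H @ H.T == n * I (Parseval up to n)
    hx, hc = H @ x, codebook @ H.T
    order = np.argsort(np.abs(hc[:, 0] - hx[0]))      # closest first coefficient first
    best = int(order[0])
    best_d = float(np.sum((hc[best] - hx) ** 2)) / n  # equals ||x - c||^2
    for ci in order[1:]:
        if (hc[ci, 0] - hx[0]) ** 2 / n >= best_d:
            break                         # all later codewords are even farther in h1
        d = float(np.sum((hc[ci] - hx) ** 2)) / n
        if d < best_d:
            best, best_d = int(ci), d
    return best, best_d
```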

5.
Exploiting the fact that a vector quantization codebook is an optimal representative set of data classification patterns, this paper introduces a codebook-based notion of outlier and demonstrates its intrinsic connection with the classical statistical definition of outliers. An outlier detection algorithm is then constructed on top of a learning-based codebook generation algorithm and a nearest-neighbor codeword search algorithm. Experimental results confirm the soundness of the proposed outlier definition and the effectiveness of the algorithm.
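A minimal sketch of the detection step: points whose distance to the nearest codeword is unusually large are flagged. The quantile threshold is an illustrative choice, not the criterion derived in the paper:

```python
import numpy as np

def codebook_outliers(data, codebook, quantile=0.99):
    """Flag points far from every codeword of a trained VQ codebook."""
    # squared distance from every point to its nearest codeword
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.min(axis=1)
    thresh = np.quantile(nearest, quantile)   # assumed thresholding rule
    return np.nonzero(nearest > thresh)[0], nearest
```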

6.
To overcome the empty-cell phenomenon of the traditional LBG algorithm, a new empty-cell strategy based on maximizing inter-codeword distance is proposed. The input vector farthest from the current codebook is used to replace the codeword of an empty cell, so as to distribute codewords more sensibly and reduce the mean distortion of vector quantization. Experimental results show that the strategy effectively eliminates empty cells and yields better codebooks, improving peak signal-to-noise ratio by 3 dB over the traditional LBG algorithm.
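A sketch of the replacement rule as one step inside an LBG iteration; re-partitioning after the repair is left to the surrounding loop:

```python
import numpy as np

def fix_empty_cells(training, codebook, assign):
    """After an LBG partition step, replace the codeword of every empty
    cell with the training vector currently farthest from the codebook."""
    d2 = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest_d = d2.min(axis=1)               # distance to nearest codeword
    for cell in range(len(codebook)):
        if not np.any(assign == cell):       # empty cell detected
            far = int(np.argmax(nearest_d))  # vector farthest from all codewords
            codebook[cell] = training[far]
            nearest_d[far] = 0.0             # don't reuse the same vector
    return codebook
```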

7.
Vector quantization is an effective data compression technique; its simple algorithm and high compression ratio have made it widely used in data compression coding. Based on a study of the gray-level characteristics of image blocks, this paper proposes a hybrid mean/vector-quantization coding algorithm that distinguishes smooth blocks from non-smooth ones: smooth blocks are encoded by their mean, while non-smooth blocks are vector quantized. This saves the storage of smooth codewords, improves codebook storage efficiency, and greatly speeds up encoding. A codeword rotation-and-inversion (2R) compression algorithm further reduces codebook storage to 1/8, and the search is optimized with the expanded-block nearest-neighbor search algorithm (EBNNS). With image quality preserved, the overall system encodes images about 7.7 times faster on average than full-search plain vector quantization.
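A sketch of the smooth/non-smooth split; the variance test `smooth_var` is an illustrative smoothness criterion, and the 2R codebook compression and EBNNS search are omitted:

```python
import numpy as np

def hybrid_encode(blocks, codebook, smooth_var=10.0):
    """Encode each flattened gray-level block either by its mean (smooth)
    or by a VQ index (non-smooth)."""
    codes = []
    for b in blocks:
        if b.var() <= smooth_var:                     # assumed smoothness test
            codes.append(("mean", float(b.mean())))
        else:
            d = np.sum((codebook - b) ** 2, axis=1)   # full search for brevity
            codes.append(("vq", int(np.argmin(d))))
    return codes

def hybrid_decode(codes, codebook, block_len):
    out = []
    for kind, v in codes:
        out.append(np.full(block_len, v) if kind == "mean" else codebook[v])
    return np.vstack(out)
```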

8.
郭艳菊  陈雷  陈国鹰 《计算机应用》2013,33(9):2573-2576
To further improve codebook quality for image vector quantization, a new codebook design algorithm for image compression is proposed. The algorithm takes mean squared error (MSE) as the fitness function of codebook design and optimizes it with an improved artificial bee colony algorithm, which strengthens self-organization and convergence and greatly reduces the chance of getting trapped in local convergence. A fast codeword search idea based on sum characteristics is incorporated into the codebook design, markedly lowering the computational load. Simulation results show that the algorithm is fast to compute and quick to converge, and that it produces high-quality, stable codebooks.

9.
A Text-Dependent Speaker Recognition Method Based on MFCC and LPCC
于明  袁玉倩  董浩  王哲 《计算机应用》2006,26(4):883-885
In modeling for speaker recognition, a variance component is added to each codeword of the traditional vector quantization model, forming a new vector quantization model with continuous codeword distributions. Mel-frequency cepstral coefficients and their deltas, combined with linear prediction cepstral coefficients and their deltas, are used as the recognition features for text-dependent speaker recognition. Comparison with dynamic time warping and traditional vector quantization shows that the model improves the recognition rate without noticeably increasing system response time.

10.
A Vector Quantizer Design Algorithm Based on Robust Statistics
The LBG algorithm is the classic baseline for vector quantization, but training images always contain a small number of outlier vectors that distort the codeword distribution during codebook training and degrade compression performance, so the full potential of vector quantization is not realized. Designing the vector quantizer with robust statistics reduces the outlier vectors in the codebook and strengthens the weight of central vectors, thereby cutting codebook redundancy and substantially improving compression performance. Experimental results show that codebooks designed with the robust-statistics method compress considerably better than those of the traditional LBG algorithm, and the reconstructed images are subjectively and objectively satisfactory.
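A sketch of a robust replacement for the LBG centroid step; a trimmed mean stands in here for whatever robust estimator the paper actually uses:

```python
import numpy as np

def robust_centroid(cell_vectors, trim=0.1):
    """Drop the fraction `trim` of vectors farthest from the cell median
    before averaging, so outlier training vectors stop pulling the
    codeword away from the cell's center."""
    med = np.median(cell_vectors, axis=0)      # robust center of the cell
    d = np.sum((cell_vectors - med) ** 2, axis=1)
    keep = d <= np.quantile(d, 1.0 - trim)     # discard the farthest vectors
    return cell_vectors[keep].mean(axis=0)
```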

11.
In this paper an adaptive hierarchical vector quantization algorithm for image coding is proposed. First the basic codebook is generated adaptively; then the codes are coded into higher-level codes by creating an index codebook that exploits the redundancy present in the codes. This hierarchical scheme lowers the bit rate significantly while adding little computation and no additional distortion compared with the single-layer adaptive VQ algorithm used to create the basic codebook.

12.
This paper presents a novel approach for action recognition, localization and video matching based on a hierarchical codebook model of local spatio-temporal video volumes. Given a single example of an activity as a query video, the proposed method finds similar videos to the query in a target video dataset. The method is based on the bag of video words (BOV) representation and does not require prior knowledge about actions, background subtraction, motion estimation or tracking. It is also robust to spatial and temporal scale changes, as well as some deformations. The hierarchical algorithm codes a video as a compact set of spatio-temporal volumes, while considering their spatio-temporal compositions in order to account for spatial and temporal contextual information. This hierarchy is achieved by first constructing a codebook of spatio-temporal video volumes. Then a large contextual volume containing many spatio-temporal volumes (ensemble of volumes) is considered. These ensembles are used to construct a probabilistic model of video volumes and their spatio-temporal compositions. The algorithm was applied to three available video datasets for action recognition with different complexities (KTH, Weizmann, and MSR II) and the results were superior to other approaches, especially in the case of a single training example and cross-dataset action recognition.

13.
Existing handwriting identification methods impose strict layout requirements, need time-consuming training, and perform poorly on small, content-unconstrained samples. To address these problems, a text-independent handwriting identification algorithm based on a hybrid codebook and factor analysis is proposed. The algorithm extracts sub-images commonly used in writing and labels these "codes" with descriptors to build a "codebook". At the feature extraction layer, a weighted directional index histogram and a distance transform are used to compute feature distances between codes sharing the same descriptor. The factors affecting feature distance are divided into a writer factor and a character factor, and a two-factor analysis of variance is performed on each writing pattern in the codebook. Experiments on the IAM and Firemaker standard datasets show that, compared with current state-of-the-art methods at home and abroad, the proposed algorithm has advantages in accuracy and speed, generalizes well, and is suited to multilingual handwriting identification.

14.
This paper tackles the optimization of non-unitary linear precoding design for orthogonal space-time block codes (OSTBCs). We explore the transmission potential through an eigen-space analysis based on the unique structure of OSTBCs. The proposed precoding form is proven to be theoretically optimal. Compared with the classical unitary Grassmannian codebook design, the non-unitary codebook further improves the overall performance of practical systems. The constraint on codebook size to g...

15.
The index map produced by vector quantizing an image is strongly statistically correlated, so the indices of neighboring blocks are equal or differ by a small offset with high probability. Sorting the codebook by a suitable criterion effectively strengthens the correlation between indices. This paper proposes a new codebook ordering based on squared Euclidean distance. Compared with traditional orderings by mean, variance, or energy, distance ordering greatly increases the correlation of the index map and concentrates the offsets between indices toward small values. Applying the distance-sorted codebook to the AICS (adaptive index coding scheme) algorithm achieves better compression performance.
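A sketch of the ordering idea; using the codebook mean as the distance reference is an assumption, since the abstract does not spell out the exact reference the paper sorts against:

```python
import numpy as np

def sort_codebook_by_distance(codebook, ref=None):
    """Reorder the codebook by squared Euclidean distance to a reference
    vector so that similar codewords receive nearby indices, tightening
    the offsets between neighboring blocks' indices."""
    if ref is None:
        ref = codebook.mean(axis=0)            # assumed reference vector
    order = np.argsort(np.sum((codebook - ref) ** 2, axis=1))
    return codebook[order], order

def index_offsets(index_map):
    """Horizontal index offsets; a tighter distribution of small offsets
    is what benefits index coding schemes such as AICS."""
    return np.abs(np.diff(index_map, axis=1)).ravel()
```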

16.
Although the distance between binary codes can be computed quickly in Hamming space, linear search is impractical for large-scale datasets. Attention has therefore turned to efficient approximate nearest neighbor search, in which hierarchical clustering trees (HCT) are widely used. However, HCT selects cluster centers randomly and builds indexes over the entire binary code, which degrades search performance. In this paper, we first propose a new clustering algorithm that chooses cluster centers on the basis of relative distances and partitions the dataset more homogeneously than HCT when building the hierarchical clustering trees. We then present an algorithm that compresses binary codes by extracting distinctive bits according to the standard deviation of each bit. Finally, a new index using the compressed binary codes is proposed, based on hierarchical decomposition of binary spaces. Experiments conducted on reference datasets and a dataset of one billion binary codes demonstrate the effectiveness and efficiency of our method.
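A sketch of the bit-selection step: bits near a 50/50 split have the highest standard deviation and carry the most discriminative power, so only those positions are kept. Function and parameter names are illustrative:

```python
import numpy as np

def distinctive_bits(codes, m):
    """Keep the m bit positions with the highest per-bit standard
    deviation across the dataset of binary codes."""
    std = codes.std(axis=0)              # per-bit standard deviation
    keep = np.argsort(-std)[:m]          # m most distinctive bits
    return codes[:, keep], keep

# toy usage: 1000 random 64-bit codes compressed to 16 distinctive bits
codes = (np.random.default_rng(0).random((1000, 64)) < 0.5).astype(np.uint8)
compressed, keep = distinctive_bits(codes, 16)
```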

17.
18.
This paper proposes a codebook design method using a Hopfield network. The characteristics of the LBG algorithm and of discrete Hopfield networks are analyzed, a clustering table is constructed accordingly, and the network is run in the serial mode of a discrete Hopfield neural network to obtain the final codeword set. Experiments show that, for the same codebook size, peak signal-to-noise ratio improves by 2.742 to 3.825 dB, and the generated codebook is more effective than that of the traditional LBG algorithm.

19.
In this paper, we present a fast codebook re-quantization algorithm (FCRA) using codewords of a codebook being re-quantized as the training vectors to generate the re-quantized codebook. Our method is different from the available approach, which uses the original training set to generate a re-quantized codebook. Compared to the traditional approach, our method can reduce the computing time dramatically, since the number of codewords of a codebook being re-quantized is usually much smaller than the number of original training vectors. Our method first classifies codewords of a re-quantized codebook into static and active groups. This approach uses the information of codeword displacements between successive partitions to reject impossible candidates in the partition process of codebook re-quantization. By implementing a fast search algorithm used for vector quantization encoding (MFAUPI) in the partition step of FCRA, the computational complexity of codebook re-quantization can be further reduced significantly. Using MFAUPI, the computing time of FCRA can be reduced by a factor of 1.55–3.78. Compared with the available approach OARC (optimization algorithm for re-quantization codebook), our proposed method can reduce the codebook re-quantization time by a factor of about 8005 using a training set of six real images. This reduction factor is increased when the re-quantized codebook size and/or training set size are increased. It is noted that our proposed algorithm can generate the same re-quantized codebook as that produced by the OARC.
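A minimal sketch of the core FCRA idea, running plain LBG-style iterations with the old codewords themselves as the training set; the static/active grouping, displacement-based candidate rejection, and MFAUPI speedups are omitted:

```python
import numpy as np

def requantize_codebook(codebook, new_size, iters=20, seed=0):
    """Generate a re-quantized (smaller) codebook by clustering the old
    codewords instead of the original training vectors."""
    rng = np.random.default_rng(seed)
    new_cb = codebook[rng.choice(len(codebook), new_size, replace=False)].copy()
    for _ in range(iters):
        d2 = ((codebook[:, None, :] - new_cb[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)           # partition the old codewords
        for k in range(new_size):            # centroid update per cell
            members = codebook[assign == k]
            if len(members):
                new_cb[k] = members.mean(axis=0)
    return new_cb
```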

20.
In this paper, we present an approach to efficiently hide sensitive data in vector quantization (VQ) indices and reversibly extract the sensitive data from the encrypted code stream. The approach uses two patterns to compress VQ indices. When an index equals its upper neighbor's index or left neighbor's index, it is encoded by the corresponding equivalent index; otherwise, it is encoded by a modified VQ codebook mapping called hierarchical state codebook mapping (HSCM). In the proposed scheme, the HSCM is the main coding pattern and is generated according to the side-match distortion method (SMD). With these two patterns, the size of the original code stream is reduced and more storage space becomes available to embed sensitive data. The experimental results indicate that the proposed scheme achieves a higher embedding capacity than previous state-of-the-art VQ-index-based data hiding methods.
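A sketch of the two coding patterns on an index map; the "RAW" symbol stands in for the HSCM-mapped index, since building the hierarchical state codebook requires the side-match distortion machinery omitted here:

```python
import numpy as np

def two_pattern_encode(index_map):
    """Replace an index equal to its upper or left neighbor with a
    1-symbol flag; keep other indices for codebook-mapped coding."""
    h, w = index_map.shape
    out = []
    for i in range(h):
        for j in range(w):
            v = index_map[i, j]
            if i > 0 and index_map[i - 1, j] == v:
                out.append(("UP",))          # matches upper neighbor
            elif j > 0 and index_map[i, j - 1] == v:
                out.append(("LEFT",))        # matches left neighbor
            else:
                out.append(("RAW", int(v)))  # would be HSCM-coded in the paper
    return out
```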
