Similar Documents
1.
Research on Secondary Design Methods for Waveform Codebooks   (cited by 3: 0 self-citations, 3 by others)
张雪英  张刚 《通信学报》2000,21(4):80-83
A practical vector quantization codebook should be compact and highly representative. This paper proposes two methods for the secondary design of an existing waveform codebook: one based on codeword usage frequency and the other based on codeword energy. Both reduce codebook complexity while yielding high-quality synthesized speech. Further analysis reveals the connection between the two methods.
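
To make the frequency-based variant concrete, here is a minimal NumPy sketch of pruning a codebook by codeword usage frequency. The function name and the top-K selection rule are our own illustration; the paper's actual procedure may differ.

```python
import numpy as np

def prune_codebook_by_usage(codebook, training_vectors, keep):
    """Keep the `keep` codewords used most often by the training set."""
    # Assign each training vector to its nearest codeword (squared Euclidean).
    dists = ((training_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(dists.argmin(axis=1), minlength=len(codebook))
    # Retain the most frequently used codewords; the rest are dropped.
    return codebook[np.argsort(counts)[::-1][:keep]]
```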

2.
蒋刚毅  郑义 《电子学报》1995,23(11):55-59
This paper studies vector quantization codebooks for speech signals. Using the codeword sum values and codeword difference values as characteristic variables of a codebook, it analyzes the distributions of the vector quantization codebooks of different speech signals and presents basic results.

3.
田斌  易克初  孙民贵 《电子学报》2000,28(10):12-16
This paper proposes a new vector compression coding method, the projection-on-line method. An input vector is approximated by its projection onto a line in space, and the code consists of the indices of the two reference points that determine the line plus a scale factor giving the projection's position relative to them. Since a vector quantization codebook of size N determines N(N-1)/2 lines, the method achieves high coding precision with a small codebook. Theoretical analysis and experimental results show that the coding precision of the projection-on-line method with a codebook of size N is comparable to that of vector quantization with a codebook of size N^2, and clearly better than two-stage vector quantization built from two codebooks of size N, while the computational complexity of both its codebook generation and its encoding is far lower. It promises to be a powerful tool for high-precision vector compression coding.
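
A minimal sketch of the projection-on-line idea as the abstract describes it: encode an input vector as two codeword indices plus a scale factor, and decode as a point on the line through the two codewords. The brute-force pair search below is illustrative only; practical use would need the paper's faster codebook generation and search.

```python
import numpy as np
from itertools import combinations

def pol_encode(x, codebook):
    """Encode x as (i, j, t): project x onto the line through codewords
    i and j; t locates the projection relative to the two reference points."""
    best = None
    for i, j in combinations(range(len(codebook)), 2):
        d = codebook[j] - codebook[i]
        t = np.dot(x - codebook[i], d) / np.dot(d, d)
        err = np.sum((x - (codebook[i] + t * d)) ** 2)
        if best is None or err < best[0]:
            best = (err, i, j, t)
    return best[1:]

def pol_decode(i, j, t, codebook):
    """Reconstruct the vector from the two indices and the scale factor."""
    return codebook[i] + t * (codebook[j] - codebook[i])
```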

4.
This paper proposes an efficient DCT-domain coding method for color video images based on vector quantization. To remove the correlation among the color components, the image is first converted from RGB to YUV space; the chrominance signals U and V are then subsampled and averaged according to human visual system (HVS) characteristics, while the luminance signal Y undergoes block DCT. Block vectors in the transform domain are adaptively classified according to HVS characteristics, and code vectors are constructed and a global codebook designed for each vector class. The proposed global codebook design scheme automatically updates and replaces codebook contents according to inter-frame correlation and codeword usage frequency, adapting to changes in scene content. Experimental results show that, while preserving reconstruction quality, the proposed method achieves high compression efficiency and is well suited to applications such as video conferencing and underwater video observation.
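
The RGB-to-YUV conversion and the "subsample and average" step for U/V can be sketched as follows. BT.601 coefficients are assumed; the abstract does not specify the exact matrix.

```python
import numpy as np

def rgb_to_yuv(img):
    """BT.601-style RGB -> YUV conversion (img: H x W x 3, float)."""
    m = np.array([[ 0.299,  0.587,  0.114],
                  [-0.147, -0.289,  0.436],
                  [ 0.615, -0.515, -0.100]])
    return img @ m.T

def subsample_chroma(c):
    """Average each 2x2 chroma block (the 'subsample and average' step)."""
    h, w = c.shape
    return c[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```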

5.
A Fast Codeword Search Algorithm Based on the Hadamard Transform and Adaptive Sequential Search   (cited by 2: 2 self-citations, 0 by others)
An improved fast codeword search algorithm in the Hadamard domain is proposed. In a codebook sorted offline by the first component of each codeword, the algorithm first finds the L initial candidate codewords whose first components are closest to that of the input vector and computes the corresponding L Chebyshev distances. It then searches adaptively up and down beyond these L codewords, updating the candidate set whenever a codeword with a smaller Chebyshev distance is found, so as to obtain the L codewords in the whole codebook with the smallest Chebyshev distances. Finally, the partial distortion search (PDS) algorithm selects, among these L final candidates, the codeword with the smallest Euclidean distance as the best match. Experiments show that, compared with the baseline search algorithm, the proposed algorithm markedly reduces the computational load and effectively speeds up encoding, with no degradation in PSNR.
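
A simplified sketch of the final stages, assuming the codebook is already Hadamard-transformed and sorted by its first component: pick candidates near the input's first component, then run a partial distortion search (PDS) with early exit. The adaptive Chebyshev-distance update of the candidate set is omitted for brevity.

```python
import numpy as np

def best_match(x_h, codebook_h, L=8):
    """Candidate selection on the first Hadamard coefficient, then PDS.
    codebook_h is assumed sorted offline by its first component."""
    pos = np.searchsorted(codebook_h[:, 0], x_h[0])
    lo, hi = max(0, pos - L // 2), min(len(codebook_h), pos + L // 2)
    best_i, best_d = None, np.inf
    for i in range(lo, hi):
        d = 0.0
        for xk, ck in zip(x_h, codebook_h[i]):   # PDS: accumulate distortion
            d += (xk - ck) ** 2
            if d >= best_d:                      # early exit: already worse
                break
        else:
            best_i, best_d = i, d
    return best_i
```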

6.
The equal-sum block expansion nearest neighbor search (EBNNS) algorithm is a fast vector quantization codeword search algorithm. It first sorts the codebook by codeword sum value and partitions it into blocks. During encoding, it locates the middle codeword of the codebook block whose sum value is closest to that of the input vector and takes it as the initial match; it then expands the search up and down around this codeword for the neighboring codeword nearest to the input vector, and finally outputs the address of the best-matching codeword in the codebook. An FPGA design of the algorithm is also presented, using a serial-parallel hybrid, pipelined architecture that trades off hardware area against speed. For the target device, a Xilinx xc2v1000, the overall system reaches a maximum clock frequency of 88.36 MHz and an image processing rate of about 2.2 Mpixel/s.
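
A rough software model of the EBNNS search (the paper implements it in FPGA hardware). It assumes a codebook sorted by codeword sum whose size is a multiple of the block size; the fixed expansion range here is a simplification of the paper's scheme.

```python
import numpy as np

def ebnns_search(x, codebook, block_size=8):
    """Sum-sorted block search: start at the middle codeword of the block
    whose sum is nearest to the input's sum, then expand up and down."""
    sums = codebook.sum(axis=1)            # codebook assumed sum-sorted
    blk = min(np.searchsorted(sums, x.sum()) // block_size,
              len(codebook) // block_size - 1)
    start = blk * block_size + block_size // 2   # middle codeword of block
    best_i, best_d = start, np.sum((x - codebook[start]) ** 2)
    for step in range(1, block_size):            # expand around the start
        for i in (start - step, start + step):
            if 0 <= i < len(codebook):
                d = np.sum((x - codebook[i]) ** 2)
                if d < best_d:
                    best_i, best_d = i, d
    return best_i
```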

7.
姜来  许文焕  纪震  张基宏 《电子学报》2006,34(9):1738-1741
This paper presents a new optimization method for designing image vector quantization codebooks. Traditional vector quantization methods consider only the attraction between codewords and training vectors, which constrains the search space for the optimal solution. This paper proposes a new learning mechanism, fuzzy reinforcement learning, which introduces a repulsion factor alongside the traditional attraction factor, greatly relaxing the constraint that attraction alone places on the search space. Rather than avoiding locally optimal codebooks by injecting random perturbations, the mechanism uses the combined force of attraction and repulsion to determine fairly accurately the best direction in which to move each codeword, steering the whole codebook toward the global optimum. Experimental results show that the vector quantization algorithm based on fuzzy reinforcement learning consistently and markedly outperforms the fuzzy K-means algorithm, largely resolving the problems that codebook design in vector quantization easily falls into local minima and that the result depends on the initial codebook.
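
The abstract gives no formulas, but the attraction/repulsion idea can be loosely illustrated as below: fuzzy-membership-weighted attraction toward training vectors plus a push away from the nearest competing codeword. The specific weights and the repulsion rule are our guesses, not the paper's formulation.

```python
import numpy as np

def fuzzy_attract_repel_step(codebook, data, eta=0.05, rho=0.01, m=2.0):
    """One illustrative update: attraction from fuzzy memberships (FCM-style),
    repulsion from the nearest competing codeword."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1) + 1e-12
    u = (1.0 / d) ** (1.0 / (m - 1))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
    for i in range(len(codebook)):
        w = u[:, i] ** m
        attract = (w[:, None] * (data - codebook[i])).sum(0) / w.sum()
        dcc = ((codebook - codebook[i]) ** 2).sum(-1)
        dcc[i] = np.inf
        repel = codebook[i] - codebook[dcc.argmin()]   # away from nearest rival
        codebook[i] += eta * attract + rho * repel
    return codebook
```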

8.
A Fast Algorithm for Vector Quantization Codebook Search   (cited by 6: 2 self-citations, 4 by others)
This paper proposes a fast algorithm for vector quantization codebook search under the mean squared error (MSE) measure. Before each iteration of codebook design, the algorithm precomputes the sum value of each codeword (the sum of a vector's components) and stores it with the codebook. During iteration, properties relating the input vector's sum value, the codewords' sum values, and the MSE are exploited to eliminate most candidate codewords, avoiding many MSE computations. Test results show that, relative to exhaustive search, the computational load drops markedly, with computation time reduced by about 90%, while only a small amount of precomputation and extra storage is required.
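
The sum-value elimination rests on the Cauchy-Schwarz bound (S_x - S_c)^2 <= k * ||x - c||^2 for k-dimensional vectors: a codeword whose sum differs too much from the input's sum cannot beat the current best match. A minimal sketch, with precomputed codeword sums and candidates visited in order of sum distance so the search can stop early (details of the paper's full test set are omitted):

```python
import numpy as np

def sum_based_search(x, codebook, sums):
    """Nearest codeword under MSE, rejecting candidates with the
    bound ||x - c||^2 >= (S_x - S_c)^2 / k."""
    k = len(x)
    sx = x.sum()
    order = np.argsort(np.abs(sums - sx))        # closest sums first
    best_i, best_d = order[0], np.sum((x - codebook[order[0]]) ** 2)
    for i in order[1:]:
        if (sx - sums[i]) ** 2 >= k * best_d:
            break      # all remaining codewords have an even larger bound
        d = np.sum((x - codebook[i]) ** 2)
        if d < best_d:
            best_i, best_d = i, d
    return best_i
```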

9.
A Fast Fuzzy Vector Quantization Algorithm for Image Coding   (cited by 5: 3 self-citations, 2 by others)
张基宏  谢维信 《电子学报》1999,27(2):106-108
Building on learning vector quantization and fuzzy vector quantization algorithms, this paper designs a new training-vector hypersphere shrinking scheme and codebook learning formula, yielding a fast fuzzy vector quantization (FFVQ) algorithm. The algorithm depends little on the choice of initial codebook, does not fall into local minima, and has low computational cost. Experiments show that, compared with the FVQ algorithm, image codebooks designed by FFVQ train much faster and also improve peak signal-to-noise ratio.

10.
We introduce a universal quantization scheme based on random coding, and we analyze its performance. This scheme consists of a source-independent random codebook (typically mismatched to the source distribution), followed by optimal entropy coding that is matched to the quantized codeword distribution. A single-letter formula is derived for the rate achieved by this scheme at a given distortion, in the limit of large codebook dimension. The rate reduction due to entropy coding is quantified, and it is shown that it can be arbitrarily large. In the special case of "almost uniform" codebooks (e.g., an independent and identically distributed (i.i.d.) Gaussian codebook with large variance) and difference distortion measures, a novel connection is drawn between the compression achieved by the present scheme and the performance of "universal" entropy-coded dithered lattice quantizers. This connection generalizes the "half-a-bit" bound on the redundancy of dithered lattice quantizers. Moreover, it demonstrates a strong notion of universality where a single "almost uniform" codebook is near optimal for any source and any difference distortion measure. The proofs are based on the fact that the limiting empirical distribution of the first matching codeword in a random codebook can be precisely identified. This is done using elaborate large-deviations techniques that allow the derivation of a new "almost sure" version of the conditional limit theorem.

11.
A new low-power image and video encoding/decoding algorithm is proposed. The algorithm is based on vector quantization and rests on the observation that both the quality and the power consumption of a coder depend on codebook size: by adopting a small codebook, the amount of memory the algorithm needs, and hence its power consumption, is reduced. During encoding, the isometric transformations of fractal theory are used to compute a virtual codebook, compensating for the image-quality loss caused by the small codebook and making the encoding process less dependent on codebook memory. Compared with full-search vector quantization, the algorithm greatly reduces encoding and decoding power consumption with no loss of image quality.
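
The "virtual codebook" built from fractal-style isometric transformations can be sketched as follows, using the eight standard block isometries (four rotations and their mirror images); the abstract does not specify the paper's exact transform set.

```python
import numpy as np

def virtual_codebook(codebook_blocks):
    """Expand a small codebook with the 8 block isometries
    (rotations and flips) familiar from fractal coding."""
    virtual = []
    for b in codebook_blocks:            # each b is an n x n block
        for k in range(4):
            r = np.rot90(b, k)           # rotation by k * 90 degrees
            virtual.append(r)
            virtual.append(np.fliplr(r)) # and its mirror image
    return np.stack(virtual)
```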

12.
Embedded Algebraic Vector Quantization (EAVQ) is a fast and efficient Lattice Vector Quantization (LVQ) scheme used in low-bitrate audio coding. However, a defect of EAVQ is overload distortion, which causes unpleasant noise in audio coding. To solve this problem, specific base codebook extension schemes should be carefully considered. In this letter, we present a novel EAVQ codebook extension scheme, Split Table Extension (STE), which splits a vector into two smaller vectors: one in the base codebook and the other in the split table. The base codebook and the split table are designed according to the appearance probability of quantized vectors in audio segments. Experiments on encoding multiple audio and speech sequences show that, compared with the existing Voronoi Extension scheme, STE greatly reduces computation complexity and storage requirements while achieving similar coding quality.

13.
In this paper, we address the coding problem for adaptive coding and modulation indicators in communication systems where users are divided into several classes according to their channel quality. Two novel methods are described to construct codebooks with variable-length codewords for such an application. The proposed constructions satisfy all constraints of the system model, showing considerable gain in both the maximal and average codeword length with respect to the current state of the art. The methodology includes a systematic way of constructing variable-length codebooks whose codewords are not uniformly distributed in the space, so that some codewords are more protected than others. The proposed construction can easily be adapted, by zero padding, to obtain a fixed block-length code, with length equal to the maximal length of the designed variable-length code but still smaller than that of the best state-of-the-art code.

14.
In this paper, a novel algorithm for low-power image coding and decoding is presented and the various inherent trade-offs are described and investigated in detail. The algorithm reduces the memory requirements of vector quantization, i.e., the size of memory required for the codebook and the number of memory accesses, by using small codebooks. This significantly reduces the memory-related power consumption, which is an important part of the total power budget. To compensate for the loss of quality introduced by the small codebook size, simple transformations are applied on the codewords during coding. Thus, small codebooks are extended through computations and the main coding task becomes computation-based rather than memory-based. Each image block is encoded by a codeword index and a set of transformation parameters. The algorithm leads to power savings of a factor of 10 in coding and of a factor of 3 in decoding, at least in comparison to classical full-search vector quantization. In terms of SNR, the image quality is better than or comparable to that corresponding to full-search vector quantization, depending on the size of the codebook that is used. The main disadvantage of the proposed algorithm is the decrease of the compression ratio in comparison to vector quantization. The trade-off between image quality and power consumption is dominant in this algorithm and is mainly determined by the size of the codebook.
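
One common way to "extend a small codebook through computations", as the abstract describes, is to fit a simple transform to each codeword, e.g. a gain/offset pair, and encode the index together with the transform parameters. The least-squares fit below is an illustrative guess at such a transformation, not the paper's exact one.

```python
import numpy as np

def encode_block(x, codebook):
    """Encode x as (index, gain, offset): best codeword under the
    affine model x ~= a * c + m, fitted per codeword by least squares."""
    best = None
    for i, c in enumerate(codebook):
        cz = c - c.mean()
        a = np.dot(x - x.mean(), cz) / max(np.dot(cz, cz), 1e-12)
        m = x.mean() - a * c.mean()
        err = np.sum((x - (a * c + m)) ** 2)
        if best is None or err < best[0]:
            best = (err, i, a, m)
    return best[1:]
```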

15.

Hiding a secret message in a cover image with the least possible statistical detectability is the objective of steganography. This is generally formulated as a problem of minimal-distortion embedding and practically implemented by incorporating an efficient coding method and a well-designed distortion metric. In this paper, we construct a new distortion metric for JPEG steganography, which employs a local linear predictor to generate both the intra- and inter-block prediction errors of a quantized DCT coefficient, and then accumulates them to form the distortion metric for this coefficient. This distortion metric is then integrated into the minimal-distortion framework using syndrome-trellis codes (STC) to give rise to the proposed JPEG steganographic scheme. The scheme exploits the proposed distortion metric to guide the STC to hide the secret message in those quantized DCT coefficients with minimal distortion cost. Consequently, the average changes of both first- and second-order statistics of quantized DCT coefficients, and thus the statistical detectability, decrease significantly. Compared with prior art, experimental results demonstrate the effectiveness of the proposed scheme in terms of secure embedding capacity against steganalysis.


16.
A new on-line universal lossy data compression algorithm is presented. For finite memoryless sources with unknown statistics, its performance asymptotically approaches the fundamental rate-distortion limit. The codebook is generated on the fly, and continuously adapted by simple rules. There is no separate codebook training or codebook transmission. Candidate codewords are randomly generated according to an arbitrary and possibly suboptimal distribution. Through a carefully designed "gold washing" or "information-theoretic sieve" mechanism, good codewords and only good codewords are promoted to permanent status with high probability. We also determine the rate at which our algorithm approaches the fundamental limit.

17.
In earlier publications, we have presented two coding schemes which take into account the conditional statistics of input signals. In the first scheme, the codewords are assigned in such a way as to provide a signal with long runs of zeros and ones. In the second scheme, each picture element is coded by variable-length codewords according to the values of previously transmitted PELs. In this paper, by providing further results, we examine these coding schemes in greater detail. The performance of both schemes in terms of entropy and bit rate is compared with an optimum predictive coder. The simulation results indicate that these schemes have a significant advantage over standard predictive encoders. Methods to reduce the storage requirement for the encoder and decoder codebooks are also discussed.

18.
The Linde-Buzo-Gray (LBG) algorithm is usually used to design a codebook for encoding images in vector quantization. In each iteration of this algorithm, one must search the full codebook in order to assign the training vectors to their corresponding codewords. Therefore, the LBG algorithm needs a large computational effort to obtain a good codebook from the training set. The authors propose a finite-state LBG (FSLBG) algorithm for reducing the computation time. Instead of searching the entire codebook, they search only those codewords that are close to the codeword for a training vector in the previous iteration. In general, the number of these possible codewords can be made very small without sacrificing performance. By searching only a small part of the codebook, the computation time is reduced. In experiments, the performance of the FSLBG algorithm in terms of signal-to-noise ratio is very close to that of the LBG algorithm. However, the computation time of the FSLBG algorithm is about 10% of the time required by the LBG algorithm.
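
A minimal sketch of one FSLBG-style iteration, assuming the retained state is each training vector's previous codeword assignment: distances are computed only against the few codewords nearest that previous codeword, followed by the usual centroid update. Function names and the neighbor-table size are our own choices.

```python
import numpy as np

def fslbg_iteration(codebook, data, prev_assign, n_neighbors=8):
    """One FSLBG-style iteration: match each training vector only against
    codewords close to its previously assigned codeword."""
    # Neighbor table: for each codeword, its n_neighbors nearest codewords
    # (each codeword's own index appears first, at distance zero).
    d_cc = ((codebook[:, None] - codebook[None, :]) ** 2).sum(-1)
    nbrs = np.argsort(d_cc, axis=1)[:, :n_neighbors]
    assign = np.empty(len(data), dtype=int)
    for n, x in enumerate(data):
        cand = nbrs[prev_assign[n]]
        d = ((codebook[cand] - x) ** 2).sum(-1)
        assign[n] = cand[d.argmin()]
    # Standard LBG centroid update over the new partition.
    for i in range(len(codebook)):
        pts = data[assign == i]
        if len(pts):
            codebook[i] = pts.mean(axis=0)
    return codebook, assign
```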
