Similar Documents
20 similar documents found.
1.
Quantization is the main approach to compressing convolutional neural networks and accelerating their inference. Most existing quantization methods quantize every layer to the same bit width, whereas mixed-precision quantization achieves higher accuracy at the same compression ratio; finding a mixed-precision quantization policy, however, is difficult. To address this, a reinforcement-learning-based mixed truncation quantization method for convolutional neural networks is proposed: reinforcement learning searches for a mixed-precision quantization policy, and, following the policy found, the weight data are truncated in a mixed fashion before being quantized, which further improves the accuracy of the quantized network. The Top-1 accuracy of ResNet18/50 and MobileNet-V2 before and after quantization with this method was measured on ImageNet, and the mAP of YOLOV3 before and after quantization was measured on COCO. Compared with HAQ and ZeroQ, the Top-1 accuracy of MobileNet-V2 quantized to 4 bits improves by 2.7% and 0.3%, respectively; compared with layer-wise quantization, the mAP of YOLOV3 quantized to 6 bits improves by 2.6%.
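To make the truncate-then-quantize idea concrete, here is a minimal NumPy sketch of symmetric uniform quantization after magnitude truncation. The bit width `bits` and truncation ratio `trunc_ratio` are hypothetical stand-ins for the per-layer values an RL agent would search for; the paper's search procedure and quantizer details are not reproduced.

```python
import numpy as np

def truncate_quantize(w, bits, trunc_ratio):
    """Clip weights to a fraction of their max magnitude (truncation),
    then quantize symmetrically and uniformly to the given bit width."""
    t = trunc_ratio * np.abs(w).max()        # truncation threshold
    w_clip = np.clip(w, -t, t)
    levels = 2 ** (bits - 1) - 1             # e.g. 7 for signed 4-bit
    scale = t / levels
    return np.round(w_clip / scale) * scale  # dequantized view of the weights

w = np.random.randn(64, 64).astype(np.float32)
w4 = truncate_quantize(w, bits=4, trunc_ratio=0.8)
print(float(np.abs(w - w4).mean()))          # quantization error
```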

2.
Recurrent neural networks have proved to be an effective method for statistical language modeling. However, in practice their memory and run-time complexity are usually too large to be implemented in real-time offline mobile applications. In this paper we consider several compression techniques for recurrent neural networks, including Long Short-Term Memory models. We pay particular attention to the high-dimensional output problem caused by very large vocabulary sizes. We focus on effective compression methods in the context of their exploitation on devices: pruning, quantization, and matrix decomposition approaches (low-rank factorization and tensor train decomposition, in particular). For each model we investigate the trade-off between its size, suitability for fast inference, and perplexity. We propose a general pipeline for applying the most suitable methods to compress recurrent neural networks for language modeling. An experimental study with the Penn Treebank (PTB) dataset shows that the most efficient results in terms of speed and compression–perplexity balance are obtained by matrix decomposition techniques.
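As an illustration of the matrix decomposition approach the study found most effective, the sketch below low-rank-factorizes a weight matrix with a truncated SVD. The matrix shape and rank are invented for illustration; tensor-train decomposition, pruning, and quantization are not shown.

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as A @ B, A: m x r, B: r x n, via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]           # fold singular values into A
    B = Vt[:rank, :]
    return A, B

W = np.random.randn(512, 10000)          # e.g. a projection onto a large vocabulary
A, B = low_rank_factorize(W, rank=64)
print(W.size / (A.size + B.size))        # parameter reduction, roughly 8x here
```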

3.
Document images, as a class of images, are used ever more widely in everyday life, yet they differ from both ordinary text documents and natural images: they consist mainly of character strings of varying shapes that carry specific meanings, their local pixel values change sharply, and their high-frequency content is comparatively rich, so conventional compression schemes struggle to reach high compression ratios on them. Common compression methods ignore these peculiarities and therefore perform poorly on document images. This paper compresses document images by block matching: the whole image is partitioned according to specific rules, and the resulting blocks are classified and encoded, which removes the redundancy of the document image in two-dimensional space and achieves compression ratios far above those of conventional lossless methods. The block-matching algorithm is described, and its performance is analyzed theoretically and evaluated by simulation.
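A minimal sketch of the partition-and-classify idea, assuming the simplest possible matching rule (exact bit-for-bit equality of binary blocks); the paper's actual partitioning rules and matching criteria are more elaborate.

```python
import numpy as np

def block_match_encode(img, bs=8):
    """Partition a binary image into bs x bs blocks and replace duplicate
    blocks with an index into a dictionary of distinct blocks."""
    h, w = img.shape
    dictionary, indices = {}, []
    for y in range(0, h - h % bs, bs):
        for x in range(0, w - w % bs, bs):
            key = img[y:y+bs, x:x+bs].tobytes()
            indices.append(dictionary.setdefault(key, len(dictionary)))
    return dictionary, indices

img = (np.random.rand(64, 64) > 0.9).astype(np.uint8)  # sparse "ink"
d, idx = block_match_encode(img)
print(len(idx), "blocks,", len(d), "distinct")          # repeats share one entry
```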

4.
Semistatic byte-oriented word-based compression codes have been shown to be an attractive alternative for compressing natural language text databases, because of the combination of speed, effectiveness, and direct searchability they offer. In particular, our recently proposed family of dense compression codes has been shown to be superior to the more traditional byte-oriented word-based Huffman codes in most aspects. In this paper, we focus on the problem of transmitting texts among peers that do not share the vocabulary. This is the typical scenario for adaptive compression methods. We design adaptive variants of our semistatic dense codes, showing that they are much simpler and faster than dynamic Huffman codes and reach almost the same compression effectiveness. We show that our variants have a very compelling trade-off between compression/decompression speed, compression ratio, and search speed compared with most of the state-of-the-art general compressors. Copyright © 2008 John Wiley & Sons, Ltd.
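For concreteness, here is a sketch of an End-Tagged Dense Code encoder, one member of the dense-code family; it assumes words have already been ranked by frequency (rank 0 = most frequent). The self-delimiting byte codes are what make the compressed text directly searchable.

```python
def etdc_encode(rank: int) -> bytes:
    """End-Tagged Dense Code: the last byte of a codeword has its high bit
    set, continuation bytes keep it clear, so codewords are self-delimiting
    and the compressed text can be searched directly."""
    out = [0x80 | (rank % 128)]   # terminal byte carries the end tag
    rank //= 128
    while rank > 0:
        rank -= 1                 # dense: all 128 values of each byte are used
        out.append(rank % 128)
        rank //= 128
    return bytes(reversed(out))

assert etdc_encode(0) == b"\x80"
assert len(etdc_encode(127)) == 1 and len(etdc_encode(128)) == 2
```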

5.
Progressive Compression of Out-of-Core Models Based on Marching Cubes Refitting
刘迎, 蔡康颖, 王文成, 吴恩华. 《计算机学报》, 2004, 27(11): 1457-1463
An out-of-core model is a massive model whose size far exceeds main-memory capacity. To make its storage, transmission, and display efficient, progressive compression of out-of-core models is essential, yet existing compression algorithms for such models are all single-resolution and cannot compress progressively. This paper proposes a progressive compression method for out-of-core models that compresses them efficiently and supports multiresolution transmission and display. The bounding-box space of the model is first partitioned and organized hierarchically as an octree, so that the local model inside each finest-level cube fits entirely into memory for processing; the local model in each cube is then refitted in the manner of Marching Cubes, and a local adaptive octree is built on the result; finally, based on these local adaptive octrees, the nodes of the global adaptive octree are traversed progressively from coarse to fine and compressed with an advanced method that efficiently and progressively encodes in-core models. Experiments show that the method reaches compression ratios on out-of-core models similar to those achieved on in-core models, exceeding current out-of-core compression methods, and that it is the first method able to compress out-of-core models progressively.
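A minimal sketch of the first step only: recursive octree partitioning of a point set until each leaf cube is small enough to process in memory. The per-leaf point budget and the assumption of sufficiently distinct points are simplifications; the Marching Cubes refitting and progressive encoding stages are not shown.

```python
import numpy as np

def octree_partition(points, max_pts, lo=None, hi=None):
    """Recursively split points into octants until each leaf cube holds at
    most max_pts points (so each cube's sub-model fits in memory)."""
    if lo is None:
        lo, hi = points.min(0), points.max(0)
    if len(points) <= max_pts:
        return [(lo, hi, points)]
    mid = (lo + hi) / 2.0
    leaves = []
    for octant in range(8):
        sel = np.ones(len(points), dtype=bool)
        nlo, nhi = lo.copy(), hi.copy()
        for axis in range(3):
            if (octant >> axis) & 1:
                sel &= points[:, axis] >= mid[axis]
                nlo[axis] = mid[axis]
            else:
                sel &= points[:, axis] < mid[axis]
                nhi[axis] = mid[axis]
        if sel.any():
            leaves += octree_partition(points[sel], max_pts, nlo, nhi)
    return leaves

pts = np.random.rand(10000, 3)
print(len(octree_partition(pts, max_pts=512)), "leaf cubes")
```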

6.
William H. Hsu, Amy E. Zwarico. 《Software》, 1995, 25(10): 1097-1116
We present a compression technique for heterogeneous files: files which contain multiple types of data such as text, images, binary, audio, or animation. The system uses statistical methods to determine the best algorithm to use in compressing each block of data in a file (possibly a different algorithm for each block). The file is then compressed by applying the appropriate algorithm to each block. We obtain better savings than are possible using a single algorithm to compress the whole file. The implementation of a working version of this heterogeneous compressor is described, along with examples of its value toward improving compression in both theoretical and applied contexts. We compare our results with those obtained using four commercially available compression programs, PKZIP, Unix compress, StuffIt, and Compact Pro, and show that our system provides better space savings.
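A toy version of per-block algorithm selection using three stdlib codecs. Unlike the paper, which picks a codec from statistical properties of each block, this sketch simply tries every codec and keeps the smallest output; tags and framing are invented.

```python
import bz2, lzma, zlib

CODECS = {b"Z": zlib.compress, b"B": bz2.compress, b"X": lzma.compress}

def compress_blocks(data: bytes, block_size=16 * 1024) -> bytes:
    """Compress each block with whichever codec shrinks it most, prefixing
    it with a 1-byte codec tag and a 4-byte compressed length."""
    out = bytearray()
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        tag, best = min(((t, c(block)) for t, c in CODECS.items()),
                        key=lambda pair: len(pair[1]))
        out += tag + len(best).to_bytes(4, "big") + best
    return bytes(out)

payload = b"text " * 5000 + bytes(range(256)) * 100   # heterogeneous content
print(len(payload), "->", len(compress_blocks(payload)))
```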

7.
Achieving a high compression ratio when storing large numbers of Chinese characters is a problem in applications that use them. In this paper, we present a data compression software system which can reduce the storage requirement of Chinese characters and binary images. The system is composed of a Chinese font compressing subsystem and a binary image compressing subsystem. In the Chinese font compressing subsystem we have improved the traditional methods of feature extraction and classification so as to achieve a high compression ratio for Chinese characters. Building on the techniques of this subsystem, we have combined them with a block segmentation technique to compress binary images. In addition to achieving a good compression ratio, the encoding-decoding process in our system is computationally very efficient. We also show that the quality and accuracy of the reconstructed patterns and images are very close to those of the originals.

8.
A coding compression method resembling dictionary indexing is proposed: test-data blocks that are compatible with a reference block are marked with a single "1", compressing the test data, and the decompression architecture needs only a finite state machine plus a cyclic-scan shift register as long as one data block. Compared with the cyclic-scan shift register as long as an entire test vector required by Golomb and FDR codes, the hardware overhead of this method is small. Experimental results show that the method compresses test data effectively and outperforms Golomb and FDR codes.
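A minimal sketch of compatibility-based encoding, assuming test blocks with don't-care bits ('x') and a fixed reference block: compatible blocks collapse to a single '1' bit, the rest are emitted literally after a '0'. How the reference block is chosen and updated follows the paper and is not modeled here.

```python
def compatible(block, ref):
    """A block matches the reference if every specified bit agrees;
    'x' positions are don't-cares and may take either value."""
    return all(b in ("x", r) for b, r in zip(block, ref))

def encode(blocks, ref):
    """Emit '1' for a compatible block, '0' plus the literal block otherwise."""
    out = []
    for blk in blocks:
        out.append("1" if compatible(blk, ref) else "0" + blk.replace("x", "0"))
    return "".join(out)

blocks = ["10x1", "1001", "0110", "1xx1"]
print(encode(blocks, ref="1001"))   # -> '1' + '1' + '00110' + '1'
```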

9.
卓越. 《计算机应用研究》, 2021, 38(5): 1463-1467
How to deploy neural networks on mobile or embedded devices with limited computing and storage capacity is a problem the development of neural networks must face. To shrink model size and ease the computational burden, a hybrid neural-network compression scheme based on information bottleneck theory is proposed: information bottleneck analysis locates the redundant information between adjacent network layers, the redundant neurons are pruned on that basis, and the remaining neurons are then quantized to ternary values, further reducing the memory needed to store the model. Experiments on the MNIST and CIFAR-10 datasets show that, compared with similar algorithms, the proposed method achieves a higher compression ratio and a lower computational load.
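As a sketch of the ternary-quantization step, the code below maps weights to {-a, 0, +a} using the common threshold heuristic delta = 0.7 · mean(|w|); the threshold rule and scale are illustrative assumptions, and the information-bottleneck pruning stage is not shown.

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Threshold-based ternary quantization: weights become {-a, 0, +a}."""
    delta = delta_scale * np.abs(w).mean()              # heuristic threshold
    mask = np.abs(w) > delta
    a = np.abs(w[mask]).mean() if mask.any() else 0.0   # per-tensor scale
    return a * np.sign(w) * mask

w = np.random.randn(256, 256).astype(np.float32)
wt = ternarize(w)
print(np.unique(wt).size, "distinct values")            # 3 (or fewer)
```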

10.
11.
We present a novel deep learning-based method for fast encoding of textures into current texture compression formats. Our approach uses state-of-the-art neural network methods to compute the appropriate encoding configurations for fast compression. A key bottleneck in current encoding algorithms is the search step, and we reduce that computation to a classification problem. We use a trained neural network approximation to quickly compute the encoding configuration for a given texture. We have evaluated our approach for compressing textures into the widely used adaptive scalable texture compression (ASTC) format and evaluated the performance for block sizes of 4 × 4, 6 × 6 and 8 × 8. Overall, our method (TexNN) speeds up the encoding computation by up to an order of magnitude compared to prior compression algorithms with very little or no loss in visual quality.
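A toy of the search-to-classification reduction: one forward pass through a small network picks one of a fixed set of encoding configurations for a block. The weights here are random stand-ins for a trained network, and the 16-way configuration space is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(48, 64)), np.zeros(64)   # stand-in trained weights
W2, b2 = rng.normal(size=(64, 16)), np.zeros(16)   # 16 hypothetical configs

def predict_config(block):
    """One forward pass replaces the per-block search over encoding
    configurations; block is a flattened 4x4 RGB tile (48 values)."""
    h = np.maximum(block @ W1 + b1, 0.0)            # ReLU hidden layer
    return int(np.argmax(h @ W2 + b2))              # index of the chosen config

tile = rng.random(48)
print(predict_config(tile))
```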

12.
Application of Second-Generation Wavelets to Lossless Compression of Medical Images
This paper studies second-generation wavelet methods for medical images, implements the corresponding algorithms in software, and reports experimental results together with comparisons against other compression methods, which show that this new approach has good potential and broad application prospects for lossless compression of medical images.
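Second-generation wavelets are built with the lifting scheme; the sketch below implements one level of the integer 5/3 lifting transform (the kind used for lossless image coding) and checks that it round-trips exactly, which is what makes lossless compression possible. The specific wavelet and boundary handling are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def lift53_forward(x):
    """One level of the integer 5/3 lifting wavelet: predict odd samples
    from their even neighbours, then update the evens. Integer arithmetic
    makes the transform exactly invertible."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]                 # assumes even-length signal
    even_r = np.append(even[1:], even[-1])       # symmetric-style extension
    d = odd - (even + even_r) // 2               # predict step (high-pass)
    d_l = np.insert(d[:-1], 0, d[0])
    s = even + (d_l + d + 2) // 4                # update step (low-pass)
    return s, d

def lift53_inverse(s, d):
    d_l = np.insert(d[:-1], 0, d[0])
    even = s - (d_l + d + 2) // 4                # undo update
    even_r = np.append(even[1:], even[-1])
    odd = d + (even + even_r) // 2               # undo predict
    x = np.empty(2 * len(s), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.random.randint(0, 4096, size=512)         # 12-bit medical-style samples
s, d = lift53_forward(x)
assert np.array_equal(lift53_inverse(s, d), x)   # lossless round trip
```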

13.
Most existing encryption-then-compression (ETC) methods achieve only a limited set of fixed compression ratios rather than the arbitrary ratios real applications require. To address this, a lossy compression algorithm for encrypted color images with arbitrary compression ratios is proposed: the encrypted image is compressed by combining uniform downsampling with random downsampling, which yields any desired ratio. After receiving the compressed sequence, the receiver decompresses and decrypts it to obtain the decrypted image; lossy reconstruction of the original image from the decrypted image is then cast as an optimization problem constrained by the downsampling pattern, and a convolutional-neural-network reconstruction model for the lossy ETC system, ETRN (ETC-oriented reconstruction network), is designed to solve it. ETRN comprises a shallow feature extraction (SFE) layer, residual-in-residual (RIR) blocks, a residual content supplementation (RCS) module, and a down-sampling constraint (DC) module. Simulation results show that the proposed algorithm achieves excellent encryption, compression, and reconstruction performance, demonstrating its feasibility and effectiveness.
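A sketch of the combined uniform-plus-random downsampling that enables arbitrary ratios: a coarse uniform grid is topped up with randomly chosen extra samples until the requested keep ratio is met. The split between the two parts and the seeding are illustrative assumptions; encryption and ETRN reconstruction are not modeled.

```python
import numpy as np

def sample_mask(h, w, ratio, uniform_frac=0.5, seed=0):
    """Boolean keep-mask mixing uniform grid sampling with a random top-up
    so that any overall keep ratio can be met."""
    rng = np.random.default_rng(seed)
    target = int(round(ratio * h * w))
    mask = np.zeros((h, w), dtype=bool)
    step = max(1, int(round(1 / np.sqrt(max(ratio * uniform_frac, 1e-9)))))
    mask[::step, ::step] = True                    # uniform part
    remaining = target - int(mask.sum())
    if remaining > 0:                              # random top-up to the target
        idx = np.flatnonzero(~mask.ravel())
        pick = rng.choice(idx, size=remaining, replace=False)
        mask.ravel()[pick] = True
    return mask

m = sample_mask(256, 256, ratio=0.37)
print(m.mean())                                    # ~0.37: an arbitrary ratio
```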

14.
An XML Data Stream Compression Method for XPath Evaluation
Because XML (extensible markup language) is self-describing, XML data streams carry large amounts of redundant structural information. How to compress an XML stream so as to cut network transmission cost while still supporting query processing over the compressed stream has become a new research area. Existing XML compression techniques either need to scan the data several times or do not support real-time query processing over streams. This paper proposes XSC (XML stream compression), which compresses and decompresses XML streams in real time. XSC dynamically builds a dictionary of XML element event sequences and outputs the corresponding indexes; given the DTD the stream conforms to, it derives an XML element event-sequence graph, producing better structural sequence codes before the compression scan. The compressed XML stream can be decompressed directly for XPath evaluation. Experiments show that, in an XML stream environment, XSC beats traditional algorithms in both compression ratio and compression time, while the cost of query execution over the compressed data remains acceptable.
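A much-simplified sketch of the structural-dictionary idea: parse the stream once, map each (event, tag) pair to a small integer from a dynamically built dictionary, and carry text content separately. XSC's event-sequence graph derived from the DTD, and its XPath execution over compressed data, are beyond this toy.

```python
from io import BytesIO
from xml.etree.ElementTree import iterparse

def encode_structure(xml_bytes):
    """Replace each structural event (start/end + tag) with a small integer
    from a dynamically built dictionary; text nodes pass through separately."""
    dictionary, codes, texts = {}, [], []
    for event, elem in iterparse(BytesIO(xml_bytes), events=("start", "end")):
        key = (event, elem.tag)
        codes.append(dictionary.setdefault(key, len(dictionary)))
        if event == "end" and elem.text and elem.text.strip():
            texts.append(elem.text.strip())
    return dictionary, codes, texts

doc = b"<a><b>1</b><b>2</b><c/></a>"
d, codes, texts = encode_structure(doc)
print(codes, texts)   # repeated <b> events reuse the same small codes
```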

15.
The demand for more effective compression, storage, and transmission of video data is ever increasing. To make the most effective use of bandwidth and memory, motion-compensated methods rely heavily on fast and accurate motion estimation from image sequences to compress not the full complement of frames, but rather a sequence of reference frames, along with the differences between these frames that result from estimated frame-to-frame motion. Motivated by the need for fast and accurate motion estimation for compression, storage, and transmission of video, as well as other applications of motion estimation, we present algorithms for estimating affine motion from video image sequences. Our methods utilize properties of the Radon transform to estimate image motion in a multiscale framework and achieve very accurate results. We develop statistical and computational models that motivate the use of such methods, and demonstrate that it is possible to reduce the computational burden of motion estimation by more than an order of magnitude, while maintaining the degree of accuracy afforded by the more direct, and less efficient, 2-D methods.
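A toy of the projection idea for the special case of pure translation: projecting each frame onto the two axes (Radon projections at 0° and 90°) reduces 2-D motion estimation to two 1-D correlations. The paper handles full affine motion in a multiscale framework; this sketch only recovers integer shifts.

```python
import numpy as np

def shift_1d(p, q):
    """Integer shift between two 1-D projections via FFT cross-correlation."""
    c = np.fft.irfft(np.fft.rfft(q) * np.conj(np.fft.rfft(p)), n=len(p))
    k = int(np.argmax(c))
    return k if k <= len(p) // 2 else k - len(p)

def estimate_translation(f, g):
    """Project each frame onto the axes, then solve two 1-D problems."""
    return (shift_1d(f.sum(axis=1), g.sum(axis=1)),   # vertical shift
            shift_1d(f.sum(axis=0), g.sum(axis=0)))   # horizontal shift

f = np.zeros((64, 64))
f[20:30, 12:22] = 1.0
g = np.roll(np.roll(f, 3, axis=0), -5, axis=1)        # known (3, -5) motion
print(estimate_translation(f, g))                     # -> (3, -5)
```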

16.
Indexing highly repetitive collections has become a relevant problem with the emergence of large repositories of versioned documents, among other applications. These collections may reach huge sizes, but are formed mostly of documents that are near-copies of others. Traditional techniques for indexing these collections fail to properly exploit their regularities in order to reduce space. We introduce new techniques for compressing inverted indexes that exploit this near-copy regularity. They are based on run-length, Lempel–Ziv, or grammar compression of the differential inverted lists, instead of the usual practice of gap-encoding them. We show that, in this highly repetitive setting, our compression methods significantly reduce the space obtained with classical techniques, at the price of moderate slowdowns. Moreover, our best methods are universal, that is, they do not need to know the versioning structure of the collection, nor that a clear versioning structure even exists. We also introduce compressed self-indexes in the comparison. These are designed for general strings (not only natural language texts) and represent the text collection plus the index structure (not an inverted index) in integrated form. We show that these techniques can compress much further, using a small fraction of the space required by our new inverted indexes. Yet, they are orders of magnitude slower.
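A small illustration of why run-length coding of differential (gap-encoded) lists pays off on near-copy collections: versioned documents produce long runs of gap 1, which plain gap encoding cannot exploit. The postings list is invented for illustration.

```python
import numpy as np

def gaps(postings):
    """Classical gap encoding of a sorted postings list."""
    return np.diff(np.asarray(postings), prepend=0)

def run_length(gs):
    """Run-length encode the gap stream as (gap, run) pairs."""
    out, prev, run = [], None, 0
    for g in map(int, gs):
        if g == prev:
            run += 1
        else:
            if prev is not None:
                out.append((prev, run))
            prev, run = g, 1
    out.append((prev, run))
    return out

postings = [3, 4, 5, 6, 7, 50, 51, 52, 53]            # versioned-docs pattern
print(run_length(gaps(postings)))                     # [(3,1), (1,4), (43,1), (1,3)]
```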

17.
The use of 3D data in mobile robotics provides valuable information about the robot's environment. However, the huge amount of 3D information is usually difficult to manage because the robot's storage and computing capabilities are insufficient. A data compression method is therefore necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information, yet no consistent public benchmark exists for comparing the results (compression level, reconstruction distance error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with different structure and texture variability to evaluate the results obtained from 3D data compression methods. We also provide useful tools for comparing compression methods, using as a baseline the results obtained by existing relevant compression methods.

18.
External sorting of large files of records involves use of disk space to store temporary files, processing time for sorting, and transfer time between CPU, cache, memory, and disk. Compression can reduce disk and transfer costs and, in the case of external sorts, cut merge costs by reducing the number of runs. It is therefore plausible that the overall cost of external sorting could be reduced through compression. In this paper, we propose new compression techniques for data consisting of sets of records. The best of these techniques, based on building a trie of variable-length common strings, provides fast compression and decompression and allows random access to individual records. We show experimentally that our trie-based compression leads to a significant reduction in sorting costs; that is, it is faster to compress the data, sort it, and then decompress it than to sort the uncompressed data. While the degree of compression is not quite as great as can be obtained with adaptive techniques such as Lempel-Ziv methods, these cannot be applied to sorting. Our experiments show that, in comparison to approaches such as Huffman coding of fixed-length substrings, our novel trie-based method is faster and provides greater size reductions. Preliminary versions of parts of this paper, not including the work on "vargram" compression, appeared previously [41].
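The trie construction itself is involved, so as a stand-in the sketch below shows the property that matters for sorting: per-record compression against a shared dictionary of common strings, so each record stays independently decompressible (random access preserved). It uses zlib's preset-dictionary feature rather than the paper's trie; the dictionary contents are invented.

```python
import zlib

# A dictionary of strings common across records keeps each record
# independently decompressible, preserving the random access sorting needs.
SHARED = b"GET /index.html HTTP/1.1 Host: example.com User-Agent: "

def pack(record: bytes) -> bytes:
    c = zlib.compressobj(zdict=SHARED)
    return c.compress(record) + c.flush()

def unpack(blob: bytes) -> bytes:
    d = zlib.decompressobj(zdict=SHARED)
    return d.decompress(blob) + d.flush()

rec = b"GET /index.html HTTP/1.1 Host: example.com User-Agent: sortbench"
blob = pack(rec)
assert unpack(blob) == rec
print(len(rec), "->", len(blob))      # each record shrinks on its own
```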

19.
This paper presents a principal component analysis (PCA) based data compression method for image-based relighting (IBL), which requires a tremendous number of reference images to produce high-quality rendering. The method contains two main steps: eigen-image based representation and eigen-image compression. We extract eigen-images by cascade recursive least squares (CRLS) network based PCA, owing to the large data dimension. By keeping only a few important eigen-images, which are enough to describe the IBL data set, the data size can be drastically reduced. To further reduce the data size, we use the embedded zerotree wavelet (EZW) approach to compress the retained eigen-images, and uniform quantization plus arithmetic coding to compress the representation coefficients. Simulation results demonstrate that our approach is superior to compressing the reference images separately with JPEG or EZW.
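A batch-SVD sketch of the eigen-image step: each reference image is reduced to k coefficients over a shared basis of k eigen-images. The paper instead extracts eigen-images incrementally with CRLS networks (because of the data's dimensionality) and further compresses the eigen-images with EZW, neither of which is modeled here.

```python
import numpy as np

def pca_compress(images, k):
    """Keep k eigen-images of an image stack; each image is then stored
    as k coefficients plus the shared basis."""
    X = images.reshape(len(images), -1).astype(np.float64)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                        # k eigen-images
    coeffs = (X - mean) @ basis.T         # k numbers per reference image
    return mean, basis, coeffs

def pca_reconstruct(mean, basis, coeffs, shape):
    return (coeffs @ basis + mean).reshape(-1, *shape)

imgs = np.random.rand(100, 32, 32)        # stand-in for IBL reference images
mean, basis, coeffs = pca_compress(imgs, k=10)
rec = pca_reconstruct(mean, basis, coeffs, (32, 32))
print(rec.shape, coeffs.shape)            # (100, 32, 32) (100, 10)
```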

20.
A lossless-compression optimization algorithm for ARGB data and its FPGA implementation are presented. To avoid decompressing and recompressing entire files, the image is compressed and decompressed block by block with methods derived from the Deflate algorithm, which greatly improves memory-access efficiency. Applying Deflate to small blocks exploits its LZ77 and Huffman coding stages to optimize compression. The algorithm was implemented as an FPGA circuit with VIVADO HLS and applied to a number of images, confirming its effectiveness; its power consumption and timing are also analyzed.
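A software sketch of the block-wise idea: each tile is Deflate-compressed independently, so a region can be read back without touching the rest of the image. zlib's DEFLATE stands in for the FPGA pipeline, and the tile size is an arbitrary choice.

```python
import zlib
import numpy as np

def compress_tiles(argb, tile=64):
    """Deflate each tile independently so a region can be updated or read
    without decompressing the whole image."""
    h, w, _ = argb.shape
    tiles = {}
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = np.ascontiguousarray(argb[y:y+tile, x:x+tile])
            tiles[(y, x)] = zlib.compress(block.tobytes(), level=6)
    return tiles

img = np.zeros((256, 256, 4), dtype=np.uint8)   # flat ARGB compresses well
t = compress_tiles(img)
print(sum(map(len, t.values())), "bytes for", img.nbytes, "raw")
```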
