Similar Literature
A total of 13 similar documents were retrieved (search time: 0 ms)
1.
This paper presents a modified block truncation coding scheme for the compression of images. We first design a set of binary edge patterns, which are visually significant, to approximate the bit plane of an image block. An interblock coding scheme, utilizing the spatial correlation between neighboring blocks, is then developed for coding of the sample mean and standard deviation of a block. Simulation results indicate that the bit rate is significantly reduced without introducing noticeable degradation in the reconstructed images.
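A minimal sketch of the basic block truncation coding step underlying this scheme (plain BTC only; the paper's edge-pattern approximation and interblock coding are not reproduced, and the 4x4 block size is an assumption):

```python
import numpy as np

def btc_encode_block(block):
    """Encode one grayscale block with basic BTC: keep the sample mean,
    the standard deviation, and a bit plane thresholded at the mean."""
    mean, std = block.mean(), block.std()
    bitplane = (block >= mean).astype(np.uint8)
    return mean, std, bitplane

def btc_decode_block(mean, std, bitplane):
    """Reconstruct with two levels chosen so that the block mean and
    standard deviation are preserved (standard BTC reconstruction)."""
    n, q = bitplane.size, int(bitplane.sum())   # q = pixels at or above the mean
    if q in (0, n):                             # flat block: single level
        return np.full(bitplane.shape, mean)
    low = mean - std * np.sqrt(q / (n - q))
    high = mean + std * np.sqrt((n - q) / q)
    return np.where(bitplane == 1, high, low)

block = np.random.randint(0, 256, (4, 4)).astype(float)
mean, std, bits = btc_encode_block(block)
print(btc_decode_block(mean, std, bits).round(1))
```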

2.
This paper describes a color image compression technique based on block truncation coding using pattern fitting (BTC-PF). The high degree of correlation between the RGB planes of a color image is reduced by transforming them to O1O2O3 planes. Each Oi plane (1 ≤ i ≤ 3) is then encoded using the BTC-PF method. The size of the pattern book and the block size are selected based on the information content of the corresponding plane. The results of the proposed method are compared with those of several BTC-based methods, and the former is found superior. Though the proposed method is a spatial-domain technique, it is also compared with JPEG compression, one of the most popular frequency-domain techniques. The performance of the proposed method is found to be slightly inferior to that of JPEG in terms of the quality of the reconstructed image. Decoding time is another important criterion when the compressed image must be decoded frequently for various purposes; as the proposed method requires negligible decoding time compared to JPEG, it is preferred over JPEG in those cases.
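A minimal illustration of the pattern-fitting idea: the block's bit plane is replaced by the closest pattern (in Hamming distance) from a small pattern book, so only the pattern index needs to be stored. The pattern book below is a made-up example, not the one used in the paper:

```python
import numpy as np

def fit_pattern(bitplane, pattern_book):
    """Index of the pattern closest to the bit plane in Hamming distance."""
    distances = [(np.count_nonzero(bitplane != p), idx)
                 for idx, p in enumerate(pattern_book)]
    return min(distances)[1]

# Hypothetical 4x4 pattern book: all-ones, lower-triangular,
# upper-triangular, and diagonal patterns.
book = [
    np.ones((4, 4), np.uint8),
    np.tril(np.ones((4, 4), np.uint8)),
    np.triu(np.ones((4, 4), np.uint8)),
    np.eye(4, dtype=np.uint8),
]
bitplane = (np.random.rand(4, 4) > 0.5).astype(np.uint8)
print("best pattern:", fit_pattern(bitplane, book))
```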

3.
Miguel  Hafida  Gonzalo  Francisco  Francisco 《Neurocomputing》2007,70(16-18):2828
The aim of this contribution is to implement a hardware module that performs parametric identification of dynamical systems. The design is based on the methodology of optimization with Hopfield neural networks, leading to an adapted version of these networks. An outstanding feature of this modified Hopfield network is the existence of weights that vary with time. Since the weights can no longer be stored in read-only memories, these dynamic weights constitute a significant challenge for digital circuits, in addition to the usual issues of area occupation, fixed-point arithmetic, and nonlinear function computation. The implementation, which is accomplished on FPGA circuits, achieves modularity and flexibility through the use of parametric VHDL to describe the network. In contrast to software simulations, the natural parallelism of neural networks is preserved, at a limited cost in circuitry and processing time. Functional simulation and synthesis show the viability of the design. In particular, the FPGA implementation exhibits a reasonably fast convergence, which is required to produce accurate parameter estimations. Current research is oriented towards integrating the estimator within an embedded adaptive controller for autonomous systems.
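The abstract does not give the network equations. As a rough, software-only sketch, Hopfield-style optimization for parametric identification can be viewed as a gradient flow that minimizes the squared prediction error of a linear-in-parameters model y = Φθ; the quantities −ΦᵀΦ and Φᵀy play the role of the network weights and biases, and they change whenever new samples arrive, which is the "time-varying weights" issue mentioned above. The step size, iteration count, and toy data are assumptions:

```python
import numpy as np

def hopfield_identify(phi, y, steps=3000, lr=0.005):
    """Gradient-flow (Hopfield-like) estimation of theta for y = phi @ theta."""
    theta = np.zeros(phi.shape[1])
    w = -phi.T @ phi            # plays the role of the network weight matrix
    b = phi.T @ y               # plays the role of the bias vector
    for _ in range(steps):
        theta += lr * (w @ theta + b)   # discretized network dynamics
    return theta

rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.5])
phi = rng.normal(size=(100, 2))
y = phi @ true_theta + 0.01 * rng.normal(size=100)
print(hopfield_identify(phi, y))        # should be close to [2.0, -1.5]
```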

4.
In this paper, we propose a multi-factor correlation (MFC) descriptor for images, composed of structure element correlation (SEC), gradient value correlation (GVC), and gradient direction correlation (GDC). First, the RGB color-space image is converted into a bitmap image and a mean color-component image using block truncation coding (BTC). The three correlations are then used to extract the image feature. The structure elements effectively represent the bitmap generated by BTC, and SEC captures the bitmap's structure and the correlation between blocks in the bitmap. GVC and GDC capture the gradient relations, which are computed from the mean color-component image. Formed by SEC, GVC, and GDC, the image feature vectors effectively represent the image. Finally, experimental results demonstrate that the method performs better than the other image retrieval methods used in the experiments.
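The abstract does not define SEC, GVC, and GDC precisely. As an illustrative sketch only, the per-pixel gradient value and direction on which GVC and GDC are built can be computed from the BTC mean-component image as below; np.gradient and the 8-bin quantization are stand-ins for whatever operators the paper actually uses:

```python
import numpy as np

def gradient_value_and_direction(mean_image, bins=8):
    """Per-pixel gradient magnitude and quantized direction of the
    BTC mean-component image."""
    gy, gx = np.gradient(mean_image.astype(float))   # axis 0 = rows (y)
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                        # range [-pi, pi]
    direction = ((angle + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    return magnitude, direction

mean_image = np.random.randint(0, 256, (8, 8))
mag, direc = gradient_value_and_direction(mean_image)
print(mag.shape, direc.min(), direc.max())
```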

5.
This paper presents an effective compression method suitable for transmitting still images over the public switched telephone network (PSTN). Since compression algorithms reduce the number of pixels or the number of gray levels of a source picture, they reduce the amount of memory needed to store the source information or the time needed to transmit it over a channel with limited bandwidth. We first review some current standards and choose the lossy DCT-based JPEG compression method; according to our studies, it is one of the most suitable. However, it is not directly applicable to image transmission on ordinary telephone lines (PSTN), so it must be modified considerably for our purposes. From Shannon's information theory we know that, for a given information source such as an image, there is a coding technique that allows the source to be coded with an average code length as close to the entropy of the source as desired. We have therefore modified the Huffman coding technique and obtained a new optimized version that is fast and easily implemented. We then applied the DCT and the FDCT for data compression. We have analyzed and written C++ programs for image compression/decompression that achieve a very high compression ratio (50:1 or more) with an excellent SNR. In this paper, we present the necessary modifications to the Huffman coding algorithm and the results of simulations on typical images.
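A compact sketch of the standard Huffman code construction over symbol frequencies, the building block the authors modify; their optimized variant is not reproduced here, and Python is used instead of the paper's C++:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix code from symbol frequencies (basic Huffman)."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)        # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged])
        tie += 1
    return heap[0][2]

text = "this is an example of a huffman tree"
codes = huffman_codes(text)
encoded = "".join(codes[c] for c in text)
print(len(encoded), "bits instead of", 8 * len(text), "bits uncoded")
```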

6.
F.  A.  R.  R.   《Journal of Systems Architecture》2009,55(5-6):310-316
A high-performance configurable multi-channel counter is presented. The system has been implemented on a small-size, low-cost Commercial-Off-The-Shelf (COTS) FPGA/DSP-based board and features 64 input channels, a maximum counting rate of 45 MHz, and a minimum integration window (time resolution) of 24 μs with a 23 b counting depth. The time resolution depends on both the selected counting bit depth and the number of acquisition channels: with an 8 b counting depth, the time resolution reaches 8 μs if all 64 input channels are enabled, whereas it improves to 378 ns if only 2 channels are used. Thanks to its flexible architecture and performance, the system is suitable for highly demanding photon counting applications based on SPAD arrays, as well as for many other scientific experiments. The collected counting results are both processed in real time and transmitted over a high-speed IEEE 1394 serial link. The same link is used to remotely set up and control the entire acquisition process, giving the system an even higher degree of flexibility. Finally, a general-purpose theoretical model that directly predicts the overall system performance is described and validated against the reported experimental results.
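As a purely illustrative toy model (not the paper's theoretical model), the scaling of the minimum integration window with channel count and counting depth can be expressed as a serial read-out budget; the read-out rate below is a hypothetical figure, not taken from the paper:

```python
def min_integration_window(channels, depth_bits, readout_bits_per_s):
    """Smallest window for which all counters can be read out serially
    before the next window starts (toy proportionality model)."""
    return channels * depth_bits / readout_bits_per_s

READOUT = 64e6  # hypothetical 64 Mbit/s serial read-out budget
for ch, depth in [(64, 23), (64, 8), (2, 8)]:
    t = min_integration_window(ch, depth, READOUT)
    print(f"{ch} channels, {depth} b depth -> {t * 1e6:.3f} us window")
```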

7.
An adaptive image interpolation algorithm for image/video processing
Image interpolation is one of the key technologies in image/video processing. In this study, a new adaptive image interpolation algorithm is proposed. The objective of the proposed approach is to recover up-sampled image frames from the corresponding decimated (low-resolution) image frames. Within each iteration, two proposed nonlinear filters are used to iteratively regenerate the high-frequency components lost in the decimation procedure. Finally, a post-processing procedure is adopted to reduce blocking artifacts within the interpolated images. Based on the experimental results obtained in this study, in terms of the average PSNR (peak signal-to-noise ratio) in dB and a subjective measure of the quality of the interpolated images, the interpolation results of the proposed approach are better than those of the three existing interpolation approaches used for comparison.
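The objective figure of merit quoted above, PSNR in dB, can be computed as follows; this is the standard definition, not the authors' evaluation code:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 2), "dB")
```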

8.
This report describes the design of a modular, massively parallel, neural-network (NN)-based vector quantizer for real-time video coding. The NN is a self-organizing map (SOM) that can operate in the training phase only (for codebook generation), in the recall phase only (for real-time image coding), or in both phases (for adaptive applications). The network can be trained in batch or adaptive mode and is controlled by an on-chip, hard-wired, finite-state-machine-based controller. The SOM is described in VHDL and implemented on both electrically programmable (FPGA) and mask-programmable (standard-cell) devices.
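A small software sketch of SOM codebook training for vector quantization, the algorithm the hardware design above maps to VHDL; the learning-rate and neighborhood schedules here are assumptions:

```python
import numpy as np

def train_som_codebook(vectors, codebook_size=16, epochs=20, lr0=0.5):
    """1-D SOM: the nearest codeword and its neighbors move toward each sample."""
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        radius = max(1, int(codebook_size / 2 * (1 - epoch / epochs)))
        for v in vectors:
            winner = int(np.argmin(np.linalg.norm(codebook - v, axis=1)))
            lo, hi = max(0, winner - radius), min(codebook_size, winner + radius + 1)
            codebook[lo:hi] += lr * (v - codebook[lo:hi])
    return codebook

# 4x4 image blocks flattened to 16-dimensional training vectors.
blocks = np.random.randint(0, 256, (500, 16)).astype(float)
codebook = train_som_codebook(blocks)
indices = np.argmin(np.linalg.norm(blocks[:, None] - codebook[None], axis=2), axis=1)
print(codebook.shape, indices[:10])   # recall phase: index of nearest codeword
```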

9.
The trade-off between image fidelity and coding rate is addressed by several techniques, but all of them require an ability to measure distortion. The problem is that finding a sufficiently general measure of perceptual quality has proven to be an elusive goal. Here, we propose a novel technique for deriving an optimal compression ratio for lossy coding based on the relationship between information theory and the problem of testing hypotheses. The best achievable compression ratio determines a boundary between achievable and non-achievable regions in the trade-off between source fidelity and coding rate. The resulting performance bound is operational in that it is directly attainable by a constructive procedure, as stated in a theorem relating the best achievable compression ratio to the Kullback-Leibler information gain. As an example of the proposed technique, we analyze the effects of lossy compression at the best achievable compression ratio on the identification of breast cancer microcalcifications.
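The Kullback-Leibler information gain referred to above is shown below as a minimal numerical example for two discrete distributions; the paper's compression-ratio bound itself is not reproduced, and the example distributions are made up:

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) in bits for discrete distributions with matching support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

original = [0.5, 0.25, 0.15, 0.10]     # e.g. symbol statistics of the source
compressed = [0.45, 0.30, 0.15, 0.10]  # statistics after lossy coding
print(round(kl_divergence(original, compressed), 4), "bits")
```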

10.
In this article, the researcher introduces a hybrid chain code for shape encoding, as well as lossless and lossy bi-level image compression. The lossless and lossy mechanisms primarily depend on agent movements in a virtual world and are inspired by many agent-based models, including the Paths model, the Bacteria Food Hunt model, the Kermack–McKendrick model, and the Ant Colony model. These models influence the present technique in three main ways: the movements of agents in a virtual world, the directions of movements, and the paths where agents walk. The agent movements are designed, tracked, and analyzed to take advantage of the arithmetic coding algorithm used to compress the series of movements encountered by the agents in the system. For the lossless mechanism, seven movements are designed to capture all the possible directions of an agent and to provide more space savings after being encoded using the arithmetic coding method. The lossy mechanism incorporates the seven movements in the lossless algorithm along with extra modes, which allow certain agent movements to provide further reduction. Additionally, two extra movements that lead to more substitutions are employed in the lossless and lossy mechanisms. The empirical outcomes show that the present approach for bi-level image compression is robust and that compression ratios are much higher than those obtained by other methods, including JBIG1 and JBIG2, which are international standards in bi-level image compression. Additionally, a series of paired-samples t-tests reveals that the differences between the current algorithms’ results and the outcomes from all the other approaches are statistically significant.
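The arithmetic coder itself is not reproduced here; the sketch below only estimates the ideal code length of a stream of agent-movement symbols from its empirical entropy, which is the bound an arithmetic coder approaches. The seven-symbol alphabet and the example stream are assumptions based on the description above:

```python
import math
from collections import Counter

def ideal_code_length_bits(moves):
    """Zeroth-order empirical-entropy estimate of the arithmetic-coded
    size of a movement sequence."""
    counts = Counter(moves)
    total = len(moves)
    entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
    return entropy * total

# Hypothetical stream over a 7-movement alphabet (0..6), heavily skewed.
moves = [0, 0, 1, 0, 2, 0, 0, 3, 0, 1, 0, 0, 4, 0, 0, 5, 0, 6, 0, 0]
print(round(ideal_code_length_bits(moves), 1), "bits vs",
      3 * len(moves), "bits for a fixed 3-bit code")
```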

11.
Low-latency transmission of high-resolution video such as HD, 2K, or 4K over both the Internet and private IP networks has found a foothold in many interactive applications, ranging from collaborative environments in science and medicine to the arts and entertainment industry. In this paper we demonstrate how the power of commodity graphics processing units can be used for efficient implementation of JPEG and DXT compression. We propose an approach to fine-grained parallelization of JPEG compression and the use of auxiliary indexes for efficient decompression, which are backward compatible with the JPEG standard. An in-depth performance analysis covers various aspects of the proposed parallelization, including its dependence on image content and on various settings of the compression algorithm, as well as the impact of compression on interactive applications in terms of end-to-end latency. The applicability of these compression schemes in medicine and cinematography has also been evaluated using double-blind ABX tests against uncompressed video. We describe selected successful real-world deployments based on our open-source implementation within the UltraGrid framework, such as trans-Atlantic 4K interactive video streaming during the CineGrid 2011 workshop. As discussed in the paper, the proposed approaches to parallelization provide sufficient performance even for the future generation of 8K video processing systems, currently limited by the availability of camera and display hardware.
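A toy illustration of the block-level independence that fine-grained GPU parallelization of JPEG exploits: every 8x8 block is transformed independently, so blocks map naturally to parallel threads. This NumPy version only demonstrates the data decomposition, not the CUDA kernels or the auxiliary decompression indexes described in the paper:

```python
import numpy as np

def dct2_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1 / np.sqrt(2)
    return basis * np.sqrt(2 / n)

def blockwise_dct(image, n=8):
    """Apply the 2-D DCT to each independent n x n block (the unit of parallelism)."""
    d = dct2_matrix(n)
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(0, h, n):           # on a GPU, each block becomes one thread block
        for x in range(0, w, n):
            out[y:y+n, x:x+n] = d @ image[y:y+n, x:x+n] @ d.T
    return out

image = np.random.randint(0, 256, (32, 32)).astype(float)
print(blockwise_dct(image).shape)
```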

12.
段玉红  高岳林 《计算机应用》2008,28(6):1559-1562
The artificial Hopfield neural network algorithm, which has a very strong local search capability, is integrated into the search process of particle swarm optimization, yielding a neural-network-fused hybrid particle swarm optimization algorithm for solving a class of 0/1 optimization problems. In this algorithm, the swarm's current global best individual is taken as the initial state to activate the neural network, which produces a locally optimal state; this locally optimal state then replaces the swarm's current global best individual, strengthening the algorithm's local search capability. Numerical experiments demonstrate that the algorithm is effective.
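A rough sketch of the hybrid scheme described above: a binary PSO runs as usual, and after each iteration the current global best is handed to a local refinement step whose result replaces the global best. Here a simple bit-flip hill climb stands in for the Hopfield network, and the toy 0/1 problem and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.integers(1, 20, 20)      # toy 0/1 problem: maximize selected weight
capacity = 80                          # subject to a knapsack-style budget

def fitness(x):
    total = int(np.dot(weights, x))
    return total if total <= capacity else -1

def local_refine(x):
    """Stand-in for the Hopfield step: greedy single-bit flips until no gain."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x.copy(); y[i] ^= 1
            if fitness(y) > fitness(x):
                x, improved = y, True
    return x

n, swarm_size = 20, 15
pos = rng.integers(0, 2, (swarm_size, n))
vel = rng.normal(0, 1, (swarm_size, n))
pbest = pos.copy()
gbest = max(pos, key=fitness).copy()
for _ in range(50):
    r1, r2 = rng.random((2, swarm_size, n))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((swarm_size, n)) < 1 / (1 + np.exp(-vel))).astype(int)
    better = np.array([fitness(p) > fitness(b) for p, b in zip(pos, pbest)])
    pbest[better] = pos[better]
    cand = max(pbest, key=fitness)
    gbest = local_refine(cand.copy())  # the refined state replaces the global best
print(fitness(gbest), gbest)
```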

13.
Data compression techniques have long helped make effective use of disk, network, and other resources. Most compression utilities require explicit user action to compress and decompress file data. However, there are some systems in which compression and decompression of file data are done transparently by the operating system. A compressed file requires fewer sectors for storage on disk, so incorporating data compression techniques into a file system gives the advantage of a larger effective disk space. At the same time, the additional time needed for compression and decompression of file data is largely offset by the time saved through fewer disk accesses. In this paper we describe the design and implementation of a file system for the Linux kernel with the feature of on-the-fly data compression and decompression in a fashion that is transparent to the user. We also present experimental results showing that the performance of our file system is comparable to that of Ext2fs, the native file system for Linux.
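The kernel file system itself cannot be shown in a few lines; the sketch below only illustrates the transparency idea at user level: data is compressed on write and decompressed on read without the caller handling either step. The file name and the use of zlib are illustrative choices, not the paper's design:

```python
import zlib

class TransparentFile:
    """Minimal read/write wrapper that compresses data on the way to disk
    and decompresses it on the way back, invisibly to the caller."""
    def __init__(self, path):
        self.path = path

    def write(self, data: bytes) -> int:
        compressed = zlib.compress(data)
        with open(self.path, "wb") as f:
            f.write(compressed)
        return len(compressed)      # fewer stored bytes -> fewer disk sectors

    def read(self) -> bytes:
        with open(self.path, "rb") as f:
            return zlib.decompress(f.read())

f = TransparentFile("demo.z")
payload = b"the same byte pattern repeated " * 100
stored = f.write(payload)
print(len(payload), "bytes written,", stored, "bytes stored, round-trip ok:",
      f.read() == payload)
```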
