Similar Documents
 20 similar documents found (search time: 15 ms)
1.
A highly promising approach to assessing the quality of an image is to compare the perceptually important structural information in that image with the corresponding information in its reference image. Extracting this structural information is, however, a challenging task. This paper employs a sparse-representation-based approach to extract such structural information and proposes a new metric, the sparse representation-based quality (SPARQ) index, that measures the visual quality of an image. The approach learns the inherent structures of the reference image as a set of basis vectors, obtained such that any structure in the image can be efficiently represented by a linear combination of only a few of them. Such a sparse strategy is known to generate basis vectors qualitatively similar to the receptive fields of the simple cells in the mammalian primary visual cortex. To estimate the visual quality of the distorted image, structures in its visually important areas are compared with those in the reference image in terms of the learnt basis vectors. The approach is evaluated on six publicly available subject-rated image quality assessment datasets. The proposed SPARQ index consistently exhibits high correlation with the subjective ratings on all datasets and, overall, performs better than a number of popular image quality metrics.
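The core operation the SPARQ index relies on, representing a signal as a combination of only a few learnt basis vectors, can be sketched with orthogonal matching pursuit. This is a generic illustration, not the paper's implementation: the dictionary here is random rather than learnt from a reference image.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal Matching Pursuit: approximate x as a k-sparse
    combination of the columns (atoms) of dictionary D."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit the coefficients on the selected atoms by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
true = np.zeros(128)
true[[3, 40, 99]] = [1.5, -2.0, 0.7]
x = D @ true                              # an exactly 3-sparse signal
c = omp(D, x, 3)
print(np.linalg.norm(x - D @ c))          # residual is (near) zero
```

With a well-conditioned dictionary and a genuinely sparse signal, the few selected atoms reconstruct the input almost exactly, which is what makes the sparse code a compact description of image structure.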

2.
In this paper, we propose a feature-discovering method incorporating a wavelet-like pattern decomposition strategy to address the image classification problem. At each level, we design a discriminative feature discovering dictionary learning (DFDDL) model to exploit the representative visual samples from each class and simultaneously decompose the commonality and individuality visual patterns. The representative samples reflect the discriminative visual cues of each class, which benefit the classification task. The commonality visual elements capture the communal visual patterns across all classes, while the class-specific discriminative information is collected by the learned individuality visual elements. To discover more discriminative feature information from each class, we then integrate the DFDDL into a wavelet-like hierarchical architecture; this hierarchical strategy strengthens the discriminative power of the feature representation. Experiments verify the effectiveness of the proposed method on challenging public datasets.

3.
Application of the MATLAB-based DCT to JPEG image compression   Cited: 3 (self: 2, others: 1)
This paper introduces the JPEG image compression algorithm and, working experimentally in the MATLAB environment, gives a fairly intuitive account of how the DCT is applied in JPEG image compression. Simulation shows that implementing DCT-based image compression in MATLAB is simple, fast, and low-error, greatly improving the efficiency and accuracy of image compression.
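The JPEG pipeline described above applies a 2-D DCT to 8x8 blocks; the lossy step is the quantisation of the resulting coefficients, while the transform itself is invertible. A minimal sketch in Python (numpy standing in for MATLAB) of the forward and inverse 8x8 block DCT:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used on 8x8 blocks in JPEG."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 image block
coeffs = C @ block @ C.T        # forward 2-D DCT
restored = C.T @ coeffs @ C     # inverse 2-D DCT
print(np.allclose(restored, block))  # → True: the transform alone is lossless
```

In a real JPEG coder, `coeffs` would be divided element-wise by a quantisation table and rounded before entropy coding; that rounding is where the compression (and the error) comes from.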

4.
DCT-based image compression and its MATLAB implementation   Cited: 1 (self: 0, others: 1)
罗晨, 《电子设计工程》 (Electronic Design Engineering), 2011, 19(18): 168-170
This paper introduces the JPEG image compression algorithm and, working experimentally in the MATLAB environment, gives a fairly intuitive account of how the DCT is applied in JPEG image compression. Simulation shows that implementing DCT-based image compression in MATLAB is simple, fast, and low-error, greatly improving the efficiency and accuracy of image compression.

5.
Sparse-representation-based classification has attracted wide attention for its simplicity and effectiveness, yet how to link dictionary atoms to class information remains an important open problem; moreover, most sparse-representation classifiers must solve norm-constrained optimisation problems, which makes classification computationally expensive. To address these issues, this paper proposes a new Fisher-constrained dictionary-pair learning method. The method jointly learns a structured synthesis dictionary and a structured analysis dictionary, so the sparse coefficient matrix is obtained directly by projecting the samples onto the analysis dictionary; at the same time, a Fisher discriminant criterion is imposed on the coding coefficients to make them discriminative. Applied to image classification, the new method improves classification accuracy while greatly reducing computational complexity, outperforming existing methods.
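The key computational point in the abstract above is that coding with an analysis dictionary is a single matrix multiply rather than an iterative norm-constrained solve. A toy sketch (not the paper's learning algorithm: the analysis dictionary here is simply the pseudo-inverse of a random synthesis dictionary):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthesis dictionary D (reconstructs samples from codes)
D = rng.standard_normal((20, 10))
# Analysis dictionary P: coding a sample becomes one matrix multiply
# instead of an l1-regularised optimisation problem.
P = np.linalg.pinv(D)

X = D @ rng.standard_normal((10, 5))   # samples lying in the span of D
A = P @ X                              # coding step: no iterative solver
print(np.allclose(D @ A, X))           # → True on this toy data
```

The learned dictionary pair in the paper additionally structures D and P by class and regularises the codes with a Fisher criterion, but the cost profile is the same: coding is a projection.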

6.
L1 tracking is robust to moderate occlusion but suffers from slow speed and a tendency toward model drift. To address both problems, this paper first proposes a robust representation model with a sparse-dense structure: l2-norm regularisation on the target-template coefficients and l1-norm regularisation on the trivial-template coefficients improve robustness to outlier templates. To speed up tracking, a fast algorithm for this model is built from ridge regression and the soft-thresholding operation, based on the principle of block-coordinate optimisation. Second, to reduce model drift, an online robust dictionary learning algorithm is proposed for template updating. Within a particle-filter framework, the representation model and the dictionary learning algorithm together yield a robust and fast tracking method. Experiments on several challenging image sequences show that the proposed tracker outperforms existing tracking methods.
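The two building blocks named above, ridge regression and soft thresholding, both have closed forms, which is why the block-coordinate scheme is fast. A minimal sketch of the two operators (generic forms, not the paper's full tracker):

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the l1 norm: shrinks each entry toward zero.
    This is the closed-form update for the l1-regularised coefficients."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ridge(A, y, lam):
    """Closed-form ridge regression: argmin ||A w - y||^2 + lam ||w||^2.
    This is the update for the l2-regularised target-template coefficients."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(soft_threshold(np.array([3.0, -0.5, 1.0]), 1.0))  # entries shrink to 2, 0, 0
print(ridge(np.eye(3), np.array([1.0, 2.0, 3.0]), 0.5)) # = y / 1.5
```

Alternating these two closed-form updates over the coefficient blocks avoids a general-purpose l1 solver at every particle, which is where the speedup comes from.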

7.
Research on digital image compression based on DCT and SPIHT   Cited: 3 (self: 0, others: 3)
郭海静, 葛万成, 《信息技术》 (Information Technology), 2005, 29(12): 10-11, 15
This paper studies digital image compression based on the discrete cosine transform (DCT) and the set partitioning in hierarchical trees (SPIHT) algorithm. Experimental results show that the technique achieves compression performance close to, and in some cases better than, the JPEG standard.

8.
Research on image compression based on the DCT   Cited: 1 (self: 0, others: 1)
沈洁, 杜宇人, 殷玲玲, 王慧, 《信息技术》 (Information Technology), 2006, 30(10): 133-135
The DCT is an important technique in image compression, and compressing images accurately and quickly has long been a research focus both in China and abroad. This paper outlines the concept and properties of the image DCT, studies DCT-based image compression algorithms, and simulates them in MATLAB, obtaining fairly good results.

9.
In this paper, an entropy-constrained dictionary learning algorithm (ECDLA) is introduced for efficient compression of synthetic aperture radar (SAR) complex images. ECDLA-RI encodes the real and imaginary parts of the images using ECDLA and sparse representation, and ECDLA-AP encodes the amplitude and phase parts, respectively. Compared with compression based on the traditional dictionary learning algorithm (DLA), ECDLA-RI improves the signal-to-noise ratio (SNR) by up to 0.66 dB and reduces the mean phase error (MPE) by up to 0.0735 relative to DLA-RI. At the same MPE, ECDLA-AP outperforms DLA-AP by up to 0.87 dB in SNR. Furthermore, the proposed method is also suitable for real-time applications.

10.
Most digital cameras are overlaid with color filter arrays (CFA) on their electronic sensors, and thus only one particular color value would be captured at every pixel location. When producing the output image, one needs to recover the full color image from such incomplete color samples, and this process is known as demosaicking. In this paper, we propose a novel context-constrained demosaicking algorithm via sparse-representation based joint dictionary learning. Given a single mosaicked image with incomplete color samples, we perform color and texture constrained image segmentation and learn a dictionary with different context categories. A joint sparse representation is employed on different image components for predicting the missing color information in the resulting high-resolution image. During the dictionary learning and sparse coding processes, we advocate a locality constraint in our algorithm, which allows us to locate most relevant image data and thus achieve improved demosaicking performance. Experimental results show that the proposed method outperforms several existing or state-of-the-art techniques in terms of both subjective and objective evaluations.

11.
Targeting copy-move forgery, one of the most common forms of digital image tampering, this paper proposes an image forgery detection method based on the ridgelet transform. Exploiting the fact that the ridgelet transform is a wavelet transform applied to slices of the Radon transform, the algorithm achieves robust identification of copy-move forgeries. Simulation results on eleven categories of images show that the algorithm is robust to rotation, JPEG compression, and added noise, and also performs well against the newer JPEG2000 standard.

12.
Sparse representation has been successfully applied to speaker recognition, where a well-constructed dictionary plays an important role. This paper introduces Fisher-criterion structured dictionary learning into a speaker recognition system. During discriminative dictionary learning, each sub-dictionary is associated with one class label, so the reconstruction error for training samples of the same class is small; at the same time, the sparse coding coefficients are constrained to have minimal within-class scatter and maximal between-class scatter. On the NIST SRE 2003 database, experiments show that the proposed algorithm achieves an equal error rate (EER) of 7.62%, versus 6.7% for an i-vector system with cosine-distance scoring; fusing the two systems yields an EER of 5.07%.
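The class-labelled sub-dictionary idea above leads to a simple decision rule shared by sparse-representation classifiers: represent the test sample with each class's sub-dictionary and pick the class with the smallest reconstruction residual. A toy sketch (least squares stands in for the sparse solver, and the sub-dictionaries are random rather than learnt):

```python
import numpy as np

def src_classify(x, dicts):
    """Classify x by minimum reconstruction residual over the
    per-class sub-dictionaries."""
    residuals = []
    for D in dicts:
        c, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals.append(np.linalg.norm(x - D @ c))
    return int(np.argmin(residuals))

rng = np.random.default_rng(2)
dicts = [rng.standard_normal((30, 5)) for _ in range(3)]  # one per class
x = dicts[1] @ rng.standard_normal(5)   # sample generated from class 1
print(src_classify(x, dicts))           # → 1
```

Fisher-style dictionary learning sharpens exactly this rule: it makes same-class residuals small and the coding coefficients discriminative, so the argmin separates classes more reliably.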

13.
Recent deep learning models outperform standard lossy image compression codecs. However, applying these models on a patch-by-patch basis requires that each image patch be encoded and decoded independently. The influence from adjacent patches is therefore lost, leading to block artefacts at low bitrates. We propose the Binary Inpainting Network (BINet), an autoencoder framework which incorporates binary inpainting to reinstate interdependencies between adjacent patches, for improved patch-based compression of still images. When decoding a patch, BINet additionally uses the binarised encodings from surrounding patches to guide its reconstruction. In contrast to sequential inpainting methods where patches are decoded based on previous reconstructions, BINet operates directly on the binary codes of surrounding patches without access to the original or reconstructed image data. Encoding and decoding can therefore be performed in parallel. We demonstrate that BINet improves the compression quality of a competitive deep image codec across a range of compression levels.

14.
A denoising method for hyperspectral remote sensing images based on principal component analysis and dictionary learning   Cited: 3 (self: 0, others: 3)
In the transform domain of a hyperspectral image, the noise intensity differs from band to band and each band has its own structure. Exploiting these characteristics, this paper proposes a new denoising method for hyperspectral remote sensing images based on principal component analysis (PCA) and dictionary learning. First, PCA is applied to the hyperspectral data to obtain a set of principal component images. Then, for the principal components carrying little information, spatial-domain noise is removed by sparse representation over an adaptive dictionary and spectral-domain noise by the dual-tree complex wavelet transform. Finally, the inverse PCA transform yields the denoised data. By combining the strengths of PCA and dictionary learning, the method adapts to hyperspectral images better than traditional methods, effectively suppressing blocking artefacts while preserving detail. Experiments on both simulated and real hyperspectral remote sensing images verify the effectiveness of the method.
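The PCA forward and inverse transforms that bracket the denoising pipeline above can be sketched directly from the band covariance matrix. This toy example (random data standing in for a hyperspectral cube) shows the round trip; in the method itself, the low-information components would be denoised between the two steps:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy "hyperspectral" data: 100 pixels x 8 spectral bands
X = rng.standard_normal((100, 8))
mean = X.mean(axis=0)

# Principal components: eigenvectors of the band covariance matrix
_, vecs = np.linalg.eigh(np.cov((X - mean).T))
pcs = (X - mean) @ vecs           # forward PCA: decorrelated component images
restored = pcs @ vecs.T + mean    # inverse PCA (here with no denoising applied)
print(np.allclose(restored, X))   # → True when nothing is altered
```

Because the transform is orthogonal, any energy removed from a component image by the denoisers is removed from the reconstruction too; PCA just concentrates the signal so that the noisy, low-variance components can be treated separately.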

15.
Sparse representation based classification (SRC) has been successfully applied in many applications. But how to determine appropriate features that can best work with SRC remains an open question. Dictionary learning (DL) has played an important role in the success of sparse representation, while SRC treats the entire training set as a structured dictionary. In addition, as a linear algorithm, SRC cannot handle data with a highly nonlinear distribution. Motivated by these concerns, in this paper we propose a novel feature learning method (termed kernel dictionary learning based discriminant analysis, KDL-DA). The proposed algorithm aims at learning a projection matrix and a kernel dictionary simultaneously such that in the reduced space the sparse representation of the data can be easily obtained and the reconstruction residual can be further reduced. Thus, KDL-DA can achieve better performance in the projected space. Extensive experimental results show that our method outperforms many state-of-the-art methods.

16.
Image fusion can integrate the complementary information of multiple images. However, when the images to be fused are damaged, the existing fusion methods cannot recover the lost information. Matrix completion, on the other hand, can be used to recover the missing information of the image. Therefore, the step-by-step operation of image fusion and completion can fuse the damaged images, but it will cause artifact propagation. In view of this, we develop a unified framework for image fusion and completion. Within this framework, we first assume that the image is superimposed by low-rank and sparse components. To obtain the separation of different components to fuse and restore them separately, we propose a low-rank and sparse dictionary learning model. Specifically, we impose low-rank and sparse constraints on the low-rank dictionary and sparse component respectively to improve the discrimination of learned dictionaries and introduce the condition constraints of low-rank and sparse components to promote the separation of different components. Furthermore, we integrate the low-rank characteristic of the image into the decomposition model. Based on this design, the lost information can be recovered with the decomposition of the image without using any additional algorithm. Finally, the maximum l1-norm fusion scheme is adopted to merge the coding coefficients of different components. The proposed method can achieve image fusion and completion simultaneously in the unified framework. Experimental results show that this method can well preserve the brightness and details of images, and is superior to the compared methods according to the performance evaluation.

17.
Traditional quality measures for image coding, such as the peak signal-to-noise ratio, assume that the preservation of the original image is the desired goal. However, pre-processing images prior to encoding, designed to remove noise or unimportant detail, can improve the overall performance of an image coder. Objective image quality metrics obtained from the difference between the original and coded images cannot properly assess this improved performance. This paper proposes a new methodology for quality metrics that differentially weighs the changes in the image due to pre-processing and encoding. These new quality measures establish the value of pre-processing for image coding and quantitatively determine the performance improvement that can be thus achieved by JPEG and wavelet coders.
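For reference, the baseline metric the abstract above argues against is straightforward to compute. A minimal PSNR sketch (images flattened to lists for simplicity):

```python
import math

def psnr(orig, coded, peak=255.0):
    """Peak signal-to-noise ratio between two equal-sized images,
    given here as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, coded)) / len(orig)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

img = [10, 20, 30, 40]
coded = [11, 21, 31, 41]           # every pixel off by exactly 1, so MSE = 1
print(round(psnr(img, coded), 2))  # → 48.13 for 8-bit images
```

Note how PSNR penalises every deviation from the original equally; it cannot credit a pre-processing step that deliberately discards noise, which is precisely the limitation the proposed differential weighting addresses.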

18.
As imaging spectrometers move toward higher spectral and spatial resolution, the data volume of hyperspectral images grows geometrically, and the limits of transmission and storage capacity make effective compression essential. This paper first analyses the correlation structure of hyperspectral images in depth, finding appreciable spatial correlation and very strong inter-band spectral correlation, and hence strong compressibility. It then modifies DPCM to work with JPEG2000 and proposes a lossless compression scheme combining first-order linear prediction with JPEG2000. Finally, the scheme is implemented on a software platform and achieves good compression results. The results show that the scheme can effectively perform lossless compression of hyperspectral images, verifying its feasibility and providing a theoretical basis for a hardware implementation.
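The first-order linear prediction (DPCM) step described above transmits each sample's difference from its predecessor; the residuals cluster near zero and so compress well, while decoding remains exactly lossless. A minimal sketch (generic DPCM, not the paper's JPEG2000 integration):

```python
def dpcm_encode(samples):
    """First-order prediction: each sample is predicted by the previous
    one, and only the prediction residual is kept."""
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def dpcm_decode(residuals):
    """Exact inverse: accumulate the residuals to rebuild the samples."""
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out

band = [100, 102, 101, 105, 104]                # toy run of correlated pixels
print(dpcm_encode(band))                        # small residuals after the first
print(dpcm_decode(dpcm_encode(band)) == band)   # → True: fully lossless
```

In the proposed scheme the prediction exploits the strong inter-band correlation, and the residual planes are then handed to JPEG2000's lossless path.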

19.
To further speed up JPEG2000 compression, this paper studies the JPEG2000 standard and observes that the independence between parts of the data in its core algorithm, the discrete wavelet transform (DWT), makes it well suited to parallel processing. NVIDIA's CUDA (Compute Unified Device Architecture) is a hardware and software development platform well suited to large-scale data-parallel computation. The DWT is parallelised with CUDA on a general-purpose graphics processing unit (GPGPU) and optimised for the characteristics of the GPGPU memory hierarchy. Experimental results show that the CUDA-parallelised method effectively accelerates the DWT computation.
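The data independence that makes the DWT CUDA-friendly is easy to see in the transform itself: each row (and then each column) is filtered independently of the others, so rows map naturally onto parallel threads. A one-level Haar DWT sketch in Python (the Haar wavelet is used here for brevity; JPEG2000 itself uses the 5/3 or 9/7 filters):

```python
import numpy as np

def haar_1d(row):
    """One level of the 1-D Haar DWT: pairwise averages (approximation)
    and differences (detail)."""
    a = (row[0::2] + row[1::2]) / 2.0
    d = (row[0::2] - row[1::2]) / 2.0
    return a, d

def inverse_haar_1d(a, d):
    """Exact inverse of haar_1d."""
    out = np.empty(2 * a.size)
    out[0::2] = a + d
    out[1::2] = a - d
    return out

# Each image row transforms independently of every other row, which is
# what makes the row pass a natural one-thread-per-row CUDA kernel.
row = np.array([9.0, 7.0, 3.0, 5.0])
a, d = haar_1d(row)
print(a, d)                                      # [8. 4.] [ 1. -1.]
print(np.allclose(inverse_haar_1d(a, d), row))   # → True
```

A 2-D DWT is this 1-D pass applied to all rows and then all columns, so both passes are embarrassingly parallel; the GPGPU-specific work in the paper is chiefly about staging the data through fast on-chip memory.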

20.
Research on an ADV212-based spectral data compression system   Cited: 1 (self: 0, others: 1)
To resolve the conflict between existing storage media and transmission bandwidth on the one hand and the enormous volume of spectral data produced by an imaging spectrometer on the other, this paper pairs an FPGA with the ADV212, a dedicated JPEG2000 image compression chip. A hardware system for spectral data compression is designed using the programmable embedded development platform provided by the Xilinx Embedded Development Kit and the Xilinx MicroBlaze soft-core processor. Because a dedicated compression chip is used, no large effort need be spent optimising the JPEG2000 algorithm; the data throughput is high, the quality of the reconstructed images is good, and the design is simple to implement, with mature, reliable, and stable technology. The system was validated on a DMD-based Hadamard-transform imaging spectrometer, and the results show that it meets the requirement of real-time compression of spectral data.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)
