Similar Documents
20 similar documents found (search time: 31 ms)
1.
Image quality assessment (IQA) is of great importance to numerous image processing applications, and various methods have been proposed for it. In this paper, a Multi-Level Similarity (MLSIM) index for full-reference IQA is proposed. The metric builds on the fact that the human visual system (HVS) judges the quality of an image mainly from the details conveyed by low-level gradient information. The Prewitt operator is first applied to obtain the gradient information of both the reference and distorted images; the gradient of the reference image is then segmented into three levels (3LSIM) or two levels (2LSIM), and the gradient of the distorted image is segmented using the corresponding regions of the reference image, yielding multi-level information for both images. The Riesz transform is used to extract features at each level, and the corresponding 1st-order and 2nd-order coefficients are combined through regional mutual information (RMI) and weighted to obtain a single quality score. Experimental results demonstrate that the proposed metric is highly consistent with human subjective evaluations and achieves good performance.
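The gradient-level segmentation step described above can be sketched roughly as follows (NumPy/SciPy); the level thresholds and the per-level similarity pooling are hypothetical placeholders, and the Riesz-transform features and RMI weighting of the actual MLSIM metric are not shown.

```python
import numpy as np
from scipy import ndimage

def prewitt_magnitude(img):
    """Prewitt gradient magnitude of a grayscale image."""
    g = img.astype(float)
    return np.hypot(ndimage.prewitt(g, axis=1), ndimage.prewitt(g, axis=0))

def three_level_masks(grad, t1=0.3, t2=0.6):
    """Split the reference gradient map into three levels (hypothetical thresholds)."""
    g = grad / (grad.max() + 1e-12)
    return g < t1, (g >= t1) & (g < t2), g >= t2

def mlsim_like_score(ref, dist, c=1e-3):
    g_ref, g_dist = prewitt_magnitude(ref), prewitt_magnitude(dist)
    scores = []
    for mask in three_level_masks(g_ref):      # levels are defined by the reference image
        if mask.any():
            # per-level gradient similarity, a placeholder for the Riesz/RMI features
            sim = (2 * g_ref[mask] * g_dist[mask] + c) / (g_ref[mask] ** 2 + g_dist[mask] ** 2 + c)
            scores.append(sim.mean())
    return float(np.mean(scores))
```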

2.
A blind image quality assessment method in the complementary-color wavelet domain
陈扬, 李旦, 张建秋. 《电子学报》, 2019, 47(4): 775-783
The RGB channels of an image's color space are closely related, and changes in image quality alter this relationship. However, most traditional image quality assessment methods are based on the statistical properties of grayscale images and ignore the relationship information between color channels. To make full use of color information, this paper proposes a blind image quality assessment method based on the recently proposed complementary-color wavelet transform. Models of natural scene statistics and of multi-scale, directional energy distributions in the image's complementary-color domain are established. Analysis shows that these models not only cover the information that traditional grayscale methods can describe, but also use complementary colors to effectively represent the relationships between the channels of a color image, providing an efficient set of features that characterize image quality. Based on these features, the proposed blind method effectively extracts the distortion statistics of an image and yields assessment results that are highly consistent with human subjective evaluations, outperform blind methods reported in the literature, and are comparable to non-blind (full-reference) methods.

3.
Most traditional image quality assessment algorithms are designed for grayscale images: they build mathematical models from the grayscale errors of corresponding pixels and therefore cannot evaluate the quality of color images. This paper proposes a color image quality assessment method based on edge features and color luminance information. First, the Sobel operator is used to extract image edges; then edge and luminance similarity coefficients are defined to analyze their influence on the image; finally, both factors are combined into a mathematical model that evaluates the color image. Extensive experiments show that the method is suitable for evaluating color images and that its results agree well with human subjective perception.
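A minimal sketch of the edge/luminance similarity idea, using OpenCV; the similarity constant and the weighting between the two coefficients are hypothetical choices, not the model fitted in the paper.

```python
import cv2
import numpy as np

def sobel_magnitude(gray):
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.hypot(gx, gy)

def edge_luma_quality(ref_bgr, dist_bgr, alpha=0.5, c=1e-3):
    """Combine an edge-similarity and a luminance-similarity coefficient (alpha is assumed)."""
    ref_y = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2GRAY).astype(float)
    dist_y = cv2.cvtColor(dist_bgr, cv2.COLOR_BGR2GRAY).astype(float)
    e_ref, e_dist = sobel_magnitude(ref_y), sobel_magnitude(dist_y)
    edge_sim = ((2 * e_ref * e_dist + c) / (e_ref ** 2 + e_dist ** 2 + c)).mean()
    luma_sim = ((2 * ref_y * dist_y + c) / (ref_y ** 2 + dist_y ** 2 + c)).mean()
    return alpha * edge_sim + (1 - alpha) * luma_sim
```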

4.
黄虹, 张建秋. 《电子学报》, 2014, 42(7): 1419-1423
This paper proposes a statistical measure for blind image quality assessment. The measure first blindly estimates the distribution parameters of the image's wavelet coefficients from the statistical properties of natural images and a model of the distorted image; it then uses the estimated parameters to compute the mutual information between the distorted and reference images, quantifying the fidelity of the distorted image to the reference and thereby assessing image quality. The proposed measure avoids dependence on the reference image and removes the reliance of existing blind methods on feature selection and extraction, machine learning, and similar processes. Overall results on the LIVE image quality assessment database show that the proposed blind statistical measure is highly consistent with the database's subjective scores and outperforms blind measures reported in the literature.

5.
This paper reviews the principle of the PCNN model and proposes a dual-channel adaptive PCNN algorithm for fusing multispectral and panchromatic images. The algorithm first converts the multispectral image from the RGB space to the HSV color space, then feeds the gray values of the achromatic channel (the V channel) and the gray values of the panchromatic image into PCNN-1 and PCNN-2, respectively. Directional information is used as the adaptive linking-strength coefficient to adaptively decompose the achromatic-channel image and the panchromatic image; the firing-time sequences are passed through a decision factor to obtain a new achromatic-channel image; finally, the H and S components of the original multispectral image and the new V component are inverse-transformed from HSV space to obtain the fused image. Experimental results show that the algorithm not only solves the problem of setting the linking-strength coefficient automatically but also fully accounts for image edge and directional features; it outperforms IHS, PCA, wavelet fusion, and other fusion algorithms in both subjective visual quality and objective evaluation criteria, while reducing computational complexity.

6.
A multispectral and panchromatic image fusion algorithm using regional mutual information
To improve the quality of multispectral and panchromatic image fusion, a fusion algorithm based on regional mutual information is proposed. The multispectral image is first transformed into the HSV color space, and the V component is segmented into regions using watershed segmentation and region merging, with the Euclidean spectral distance as the merging criterion, yielding a region segmentation map. The V component of the multispectral image and the panchromatic image are then decomposed at multiple resolutions using the nonsubsampled contourlet transform (NSCT); the segmentation result is mapped onto the panchromatic image, and the multi-resolution decomposition coefficients are fused by computing the mutual information between corresponding regions to obtain the coefficients of the fused image. Finally, the fused image is reconstructed by the inverse NSCT. Comparative fusion experiments show that the proposed algorithm preserves the spectral information of the multispectral image while injecting as much detail from the panchromatic image as possible, effectively enhancing the edge features of the multispectral image.
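The per-region mutual information step can be illustrated with a joint-histogram estimate, as below; the label map is assumed to come from the watershed/region-merging segmentation, and the NSCT decomposition itself is not shown.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """MI between two corresponding regions, estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def regional_mi(coeff_ms, coeff_pan, labels):
    """Mutual information between multispectral and panchromatic coefficients, per region."""
    return {int(r): mutual_information(coeff_ms[labels == r], coeff_pan[labels == r])
            for r in np.unique(labels)}
```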

7.
8.
The goal of image quality assessment research is to model how the human visual system perceives image quality and to construct objective metrics that agree as closely as possible with subjective evaluations. Many existing algorithms are designed around local structural similarity, but human perception of an image is a high-level, semantic process, and semantic information is inherently non-local; image quality assessment should therefore take non-local information into account. This paper moves beyond the classical framework built on local information and proposes a framework based on non-local information, within which a non-local gradient-based image quality metric is constructed; it predicts image quality by measuring the similarity between the non-local gradients of the reference and distorted images. Numerical experiments on the public TID2008, LIVE, and CSIQ databases show that the algorithm achieves good evaluation performance.

9.
Stereoscopic imaging is widely used in many fields. In many scenarios, stereo image quality can be affected by various degradations, such as asymmetric distortion. Accordingly, to guarantee the best quality of experience, robust and accurate reference-less metrics are required for quality assessment of stereoscopic content. Most existing no-reference stereo Image Quality Assessment (IQA) models do not handle asymmetric distortions consistently. This paper presents a new no-reference stereoscopic image quality assessment metric based on human visual system (HVS) modeling and an advanced machine-learning algorithm. The proposed approach consists of two stages. In the first stage, a cyclopean image is constructed that accounts for binocular rivalry, so that the asymmetrically distorted part is covered. In the second stage, the gradient magnitude, relative gradient magnitude, and gradient orientation are extracted and used as a predictive source of information for quality. To obtain the best overall performance across different databases, the Adaptive Boosting (AdaBoost) ensemble idea from machine learning is combined with an artificial neural network model. The benchmark LIVE 3D Phase-I, Phase-II, and IRCCyN/IVC 3D databases are used to evaluate the performance of the proposed approach. Experimental results demonstrate that the proposed metric achieves high consistency with subjective assessment and outperforms existing blind stereo IQA methods over various types of distortion.
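A rough sketch of the three gradient features named above, using OpenCV; "relative" gradient magnitude is interpreted here as the deviation from a local box-filtered mean, which is an assumption about the paper's definition, and the cyclopean-image construction and AdaBoost/neural-network regression are omitted.

```python
import cv2
import numpy as np

def gradient_features(gray, win=7):
    g = gray.astype(np.float32)
    gx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    gm = cv2.magnitude(gx, gy)                        # gradient magnitude
    go = cv2.phase(gx, gy)                            # gradient orientation (radians)
    gm_rel = gm - cv2.boxFilter(gm, -1, (win, win))   # "relative" magnitude: deviation from local mean (assumed)
    return gm, gm_rel, go
```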

10.
Image quality assessment (IQA) attempts to quantify the quality-aware visual attributes perceived by humans. IQA methods can be divided into subjective and objective approaches. Subjective IQA relies on human judgment of image quality, with human visual perception as the dominant factor; however, it cannot be widely applied in practice because of its heavy reliance on individual observers. Motivated by the fact that objective IQA largely depends on image structural information, we propose a structural-cues-based full-reference IPTV IQA algorithm. More specifically, we first design a grid-based object detection module to extract multiple structural cues from both the reference IPTV image (i.e., video frame) and the test one. We then propose a structure-preserving deep neural network to generate a deep representation for each IPTV image. Subsequently, a new distance metric is proposed to measure the similarity between the reference image and the evaluated image; a test IPTV image with a small calculated distance is considered to be of high quality. A comprehensive comparative study with state-of-the-art IQA algorithms shows that our method is accurate and robust.

11.
12.
No-reference image quality assessment is of great importance to numerous image processing applications, and various methods have been widely studied with promising results. These methods exploit handcrafted features in the transform or spatial domain that are discriminative for image degradations; however, extracting such handcrafted features requires abundant a priori knowledge. The convolutional neural network (CNN) has recently been introduced into no-reference image quality assessment, integrating feature learning and regression into one optimization process, so that the network structure yields an effective model for estimating image quality. However, the image quality score obtained by the CNN is the mean of all image patch scores, without considering properties of the human visual system such as image edges and contours. In this paper, we combine the CNN with the Prewitt magnitude of segmented images and obtain the image quality score as the mean of the products of the image patch scores and weights derived from the segmentation result. Experimental results on various image distortion types demonstrate that the proposed algorithm achieves good performance.
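The weighted pooling rule can be sketched as follows; `patch_scores` is assumed to be the per-patch quality output of a trained CNN (not shown), and weighting each patch by its Prewitt gradient energy is a simplified stand-in for the paper's segmentation-based weights.

```python
import numpy as np
from scipy import ndimage

def gradient_weighted_score(gray, patch_scores, patch=32):
    """Pool per-patch CNN scores with weights from each patch's Prewitt gradient energy."""
    g = gray.astype(float)
    gm = np.hypot(ndimage.prewitt(g, axis=1), ndimage.prewitt(g, axis=0))
    weights = []
    h, w = g.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            weights.append(gm[i:i + patch, j:j + patch].mean())
    weights = np.asarray(weights[:len(patch_scores)]) + 1e-6
    scores = np.asarray(patch_scores[:len(weights)])
    return float(np.sum(weights * scores) / weights.sum())
```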

13.
To address the relatively poor discernibility of dynamic color images captured under low illumination, a contrast-resolution compensation algorithm based on a human visual perception model is proposed. First, the image is converted from the RGB space to the HSV space and the H component is kept unchanged. Image feature parameters are extracted from the V component, which then receives contrast-resolution compensation to increase brightness, while the S component is linearly stretched to restore the color information. Finally, the processed V and S components are combined with the H component and inverse-transformed to produce a new RGB image. Experimental results show that the algorithm enhances dynamic low-illumination color images well, preserves image detail, and achieves good visual quality.
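A minimal sketch of the HSV processing flow, where adaptive gamma correction stands in for the contrast-resolution compensation of the perceptual model and the S channel receives a plain linear stretch; the rule for choosing gamma is an assumption.

```python
import cv2
import numpy as np

def enhance_low_light_hsv(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)                     # H is left unchanged
    v_norm = v.astype(np.float32) / 255.0
    gamma = 0.4 + 0.6 * v_norm.mean()            # darker image -> stronger lift (assumed rule)
    v_new = (np.power(v_norm, gamma) * 255.0).astype(np.uint8)
    s_f = s.astype(np.float32)
    s_new = np.clip((s_f - s_f.min()) / (s_f.max() - s_f.min() + 1e-6) * 255.0, 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([h, s_new, v_new]), cv2.COLOR_HSV2BGR)
```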

14.
We develop an efficient general-purpose no-reference (NR) image quality assessment (IQA) model that utilizes local spatial and spectral entropy features on distorted images. Using a two-stage framework of distortion classification followed by quality assessment, we employ a support vector machine (SVM) to train an image distortion and quality prediction engine. The resulting algorithm, dubbed the Spatial-Spectral Entropy-based Quality (SSEQ) index, is capable of assessing the quality of a distorted image across multiple distortion categories. We explain the entropy features used and their relevance to perception, and thoroughly evaluate the algorithm on the LIVE IQA database. We find that SSEQ matches well with human subjective opinions of image quality and is statistically superior to the full-reference (FR) IQA algorithm SSIM and several top-performing NR IQA methods: BIQI, DIIVINE, and BLIINDS-II. SSEQ has considerably low complexity. We also tested SSEQ on the TID2008 database to ascertain whether its performance is database independent.
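The two entropy features behind SSEQ can be sketched per block as follows; the block size, histogram binning, and pooling by a plain mean are illustrative, and the multi-scale pooling and SVM stages are not shown.

```python
import numpy as np
from scipy.fftpack import dct

def sseq_like_features(gray, block=8):
    """Mean spatial entropy and mean spectral entropy over non-overlapping blocks."""
    g = gray.astype(float)
    spatial, spectral = [], []
    h, w = g.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = g[i:i + block, j:j + block]
            # spatial entropy of the block's intensity histogram
            p, _ = np.histogram(b, bins=64, range=(0, 255))
            p = p / max(p.sum(), 1)
            spatial.append(-np.sum(p[p > 0] * np.log2(p[p > 0])))
            # spectral entropy of the block's normalized 2-D DCT power (DC excluded)
            c = dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')
            power = c ** 2
            power[0, 0] = 0.0
            q = power / (power.sum() + 1e-12)
            spectral.append(-np.sum(q[q > 0] * np.log2(q[q > 0])))
    return float(np.mean(spatial)), float(np.mean(spectral))
```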

15.
To establish a general and objective method for evaluating fused image quality, general expressions for image quality assessment and fused-image quality assessment are derived after analyzing the relationship between the two. Based on information theory and structural similarity, four objective evaluation indices were constructed and compared in a subjective evaluation experiment on 36 fused images produced by four fusion methods. Statistical analysis shows that objective methods incorporating the human visual system outperform indices such as entropy and mutual information, yet still fall short of high subjective-objective consistency, indicating that constructing a general, efficient fused-image quality index with good subjective-objective agreement remains difficult; possible reasons for this are also analyzed.

16.
To address the efficiency of manual pricing at self-service cafeteria checkouts, a color-feature-based food category recognition algorithm is proposed. The algorithm extracts the target region by edge projection, segments the food image by clustering based on the Lab color model, obtains the color features of each sub-region using the HSV color model, and recognizes the food category from the regional colors. Simulation experiments and statistical analyses were carried out on 30 images each for the single-class and three-class food cases. The results show that the recognition accuracy reaches 95.6% and the fastest recognition takes only 0.119 s.
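A minimal sketch of the color pipeline: k-means clustering in the Lab space followed by an HSV hue histogram per cluster; the cluster count, histogram size, and the preceding edge-projection step are assumptions, and the final matching against class templates is not shown.

```python
import cv2
import numpy as np

def food_color_features(bgr, n_clusters=4, hue_bins=16):
    """Cluster pixels in Lab space, then describe each cluster by an HSV hue histogram."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(lab, n_clusters, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3)[:, 0]
    feats = []
    for k in range(n_clusters):
        vals = hue[labels.ravel() == k]
        hist = np.zeros(hue_bins) if vals.size == 0 else \
            np.histogram(vals, bins=hue_bins, range=(0, 180), density=True)[0]
        feats.append(hist)
    return np.concatenate(feats)   # matched against per-class color templates for recognition
```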

17.
This paper proposes a new image quality assessment framework based on a color perceptual model. After analyzing the shortcomings of existing image quality assessment methods, a general framework for color image quality assessment based on the S-CIELAB color space is presented. The S-CIELAB color model, a spatial extension of CIELAB, performs well at mimicking the perceptual processing of human color vision. The paper combines this color perceptual model with geometric distortion measurement to assess image quality. First, the reference and distorted images are transformed into the S-CIELAB perceptual space, and the transformed images are evaluated with an existing metric in the three perceptual color channels; the fidelity factors of the three channels are then weighted to obtain the image quality. Experimental results on the LIVE database II show that the proposed method is in good agreement with human subjective assessment results.
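A rough sketch of the per-channel weighting idea; plain CIELAB with Gaussian smoothing of the chromatic channels is used here as a stand-in for S-CIELAB's spatial filtering, SSIM plays the role of the "existing metric", and the channel weights are hypothetical.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import gaussian
from skimage.metrics import structural_similarity as ssim

def scielab_like_quality(ref_rgb, dist_rgb, weights=(0.6, 0.2, 0.2)):
    """Weighted per-channel fidelity in a (roughly) perceptual color space."""
    ref_lab, dist_lab = rgb2lab(ref_rgb), rgb2lab(dist_rgb)
    score = 0.0
    for ch, w in enumerate(weights):
        r, d = ref_lab[..., ch], dist_lab[..., ch]
        if ch > 0:   # smooth chromatic channels, loosely mimicking S-CIELAB's spatial filtering
            r, d = gaussian(r, sigma=2), gaussian(d, sigma=2)
        rng = max(r.max() - r.min(), d.max() - d.min(), 1e-6)
        score += w * ssim(r, d, data_range=rng)
    return score
```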

18.
To effectively handle image quality assessment (IQA) of images with sophisticated characteristics, we propose a deep clustering-based ensemble approach for IQA of diverse images. Our approach is built on a convolutional denoising autoencoder (DAE) deep architecture. Using layer-by-layer pre-training, the deep feature clustering architecture first extracts a fixed number of high-level features; it then splits the image samples into clusters with the fuzzy C-means algorithm applied to these deep features. For each cluster, we fit a specific function that maps each assessed image's PSNR, SSIM, and VIF scores to differential mean opinion scores. Comprehensive experimental results on the TID2008, TID2013, and LIVE databases demonstrate that, compared with state-of-the-art counterparts, the proposed IQA method reflects the subjective quality of images more accurately by seamlessly integrating the advantages of the three existing IQA methods.

19.
杨媛, 高勇, 房继军, 乔世杰, 韩超. 《电子学报》, 2012, 40(8): 1655-1658
Because video content is highly diverse, no existing enhancement method adapts well to video picture-quality enhancement. This paper proposes an improved digital video picture-quality enhancement algorithm and designs the corresponding hardware circuit. Unlike traditional histogram-equalization-based methods, the input image is first classified and contrast-adjusted in the YUV color space; the adjusted image then undergoes dynamic-range adjustment in the RGB color space, followed by the necessary brightness correction and saturation compensation in the HSV color space. Each module of the algorithm was implemented in Verilog and verified on an FPGA video verification platform. Experimental results show that the proposed enhancement algorithm adapts to images of various scenes, producing output that is bright, clear, and color-faithful.

20.
In order to improve the visibility and contrast of low-light images and better preserve image edges and details, a new low-light color image enhancement algorithm is proposed in this paper. The steps of the proposed algorithm are as follows. First, the image is converted from the red, green, and blue (RGB) color space to the hue, saturation, and value (HSV) color space, and histogram equalization (HE) is performed on the value component. Next, the nonsubsampled shearlet transform (NSST) is applied to the value component to decompose it into a low-frequency sub-band and several high-frequency sub-bands. Then, the low-frequency sub-band and the high-frequency sub-bands are enhanced by Gamma correction and improved guided image filtering (IGIF), respectively, and the enhanced value component is formed by the inverse NSST. Finally, the image is converted back to the RGB color space to obtain the enhanced image. Experimental results show that the proposed method not only significantly improves visibility and contrast, but also better preserves image edges and details.
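A minimal sketch of the flow, with a single-level discrete wavelet transform (PyWavelets) standing in for the NSST decomposition and a plain detail gain standing in for the improved guided filtering; the gamma value and detail gain are illustrative.

```python
import cv2
import numpy as np
import pywt

def enhance_low_light(bgr, gamma=0.7, detail_gain=1.5):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v_eq = cv2.equalizeHist(v).astype(np.float32) / 255.0        # HE on the value component
    low, (lh, hl, hh) = pywt.dwt2(v_eq, 'db2')                   # stand-in for the NSST decomposition
    low_max = float(low.max()) + 1e-6
    low = np.power(np.clip(low, 0, None) / low_max, gamma) * low_max   # Gamma on the low-frequency band
    details = tuple(detail_gain * d for d in (lh, hl, hh))       # stand-in for IGIF detail enhancement
    v_new = pywt.idwt2((low, details), 'db2')[:v.shape[0], :v.shape[1]]
    v_new = np.clip(v_new * 255.0, 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([h, s, v_new]), cv2.COLOR_HSV2BGR)
```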
