Similar Documents
20 similar documents retrieved.
1.
With the rapid development of computer technology, the requirements for software quality have become ever higher; software quality measurement is a means of assessing software quality. This paper analyzes software quality measurement models, establishes a software quality measurement framework, and presents commonly used measurement methods.

3.
冯欣  杨丹  张凌 《自动化学报》2011,37(11):1322-1331
A full-reference objective quality assessment method based on changes in visual attention is proposed for video impaired by network packet loss. Building on the application of visual saliency detection to video data, the method examines the spatial and temporal changes in visual attention caused by packet-loss distortion relative to the undistorted reference, and derives a set of objective quality measures from the corresponding spatial and temporal differences in visual saliency. Seventeen packet-loss-impaired video sequences were used for testing, and a subjective evaluation experiment was conducted to serve as the benchmark. Compared with traditional quality assessment methods that do not account for visual saliency, as well as current mainstream methods that weight distorted pixels by visually salient regions or regions of interest, the experimental results show that the attention-change-based method correlates better with subjective quality scores and assesses the quality of packet-loss-impaired video more effectively.
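As a rough illustration of the idea (not the authors' implementation), the following Python sketch approximates per-frame saliency with a simple centre-surround operator and pools the spatial and temporal saliency differences between the reference and the impaired video into a single score; the saliency model, the pooling rule and the function names are assumptions made here for illustration.

```python
# A minimal sketch of an attention-change-driven full-reference score.
# Saliency is approximated by a crude difference-of-Gaussians operator;
# the paper's actual saliency model and pooling are more elaborate.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(frame):
    """Crude centre-surround saliency proxy for a grayscale frame in [0, 1]."""
    fine = gaussian_filter(frame, sigma=1.0)
    coarse = gaussian_filter(frame, sigma=8.0)
    s = np.abs(fine - coarse)
    return s / (s.max() + 1e-8)

def attention_change_score(ref_frames, dist_frames):
    """Pool spatial and temporal saliency differences into one index.

    Lower values mean the distortion changed visual attention less,
    i.e. higher predicted quality.
    """
    spatial, temporal = [], []
    prev_ref_sal = prev_dist_sal = None
    for ref, dist in zip(ref_frames, dist_frames):
        ref_sal, dist_sal = saliency_map(ref), saliency_map(dist)
        spatial.append(np.mean(np.abs(ref_sal - dist_sal)))
        if prev_ref_sal is not None:
            # Temporal term: how differently attention *moves* between frames.
            ref_motion = ref_sal - prev_ref_sal
            dist_motion = dist_sal - prev_dist_sal
            temporal.append(np.mean(np.abs(ref_motion - dist_motion)))
        prev_ref_sal, prev_dist_sal = ref_sal, dist_sal
    return float(np.mean(spatial) + np.mean(temporal))

# Example with synthetic frames:
rng = np.random.default_rng(0)
ref = [rng.random((64, 64)) for _ in range(5)]
dist = [f + 0.1 * rng.random((64, 64)) for f in ref]
print(attention_change_score(ref, dist))
```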

4.
Temporal error concealment for H.263 coded video streams
When an H.263 coded video stream is transmitted over the Internet, it is highly susceptible to channel errors that cause data loss. The lost data affects not only the current frame but also propagates to subsequent decoded frames, severely degrading image quality, so measures must be taken to mitigate this effect. The most common error concealment approach is temporal concealment, which recovers the damaged image data of the current frame from a reference frame but is computationally expensive. This paper therefore proposes a temporal concealment algorithm based on block matching, and replaces full search with a three-step search to reduce computational complexity. Simulation results show that the algorithm obtains good image quality within a very short processing time and is therefore suitable for real-time applications such as video conferencing.
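A minimal sketch of the temporal concealment step described above, assuming float grayscale frames and a lost block that lies safely inside the frame (with room for the search); the boundary-based cost and the function names are illustrative rather than the paper's exact formulation.

```python
# Temporal error concealment: replace a lost BxB block with the best match in
# the reference frame, found by a three-step search (steps 4 -> 2 -> 1) instead
# of full search. The match is scored on the one-pixel ring around the lost
# block, since the block's own pixels are unavailable. Frames are float arrays.
import numpy as np

def boundary_sad(cur, ref, top, left, dy, dx, B=16):
    """SAD between the ring of the lost block in `cur` and the ring around the
    candidate block displaced by (dy, dx) in `ref`."""
    sad = 0.0
    for r in (top - 1, top + B):             # row above and row below the block
        sad += np.abs(cur[r, left:left+B] - ref[r+dy, left+dx:left+dx+B]).sum()
    for c in (left - 1, left + B):           # column left and column right
        sad += np.abs(cur[top:top+B, c] - ref[top+dy:top+dy+B, c+dx]).sum()
    return sad

def conceal_block_three_step(cur, ref, top, left, B=16):
    """Conceal the lost BxB block at (top, left) in `cur` using `ref`."""
    best = (0, 0)
    best_cost = boundary_sad(cur, ref, top, left, 0, 0, B)
    step = 4
    while step >= 1:
        cy, cx = best
        for dy in (cy - step, cy, cy + step):
            for dx in (cx - step, cx, cx + step):
                cost = boundary_sad(cur, ref, top, left, dy, dx, B)
                if cost < best_cost:
                    best, best_cost = (dy, dx), cost
        step //= 2
    dy, dx = best
    cur[top:top+B, left:left+B] = ref[top+dy:top+dy+B, left+dx:left+dx+B]
    return best
```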

5.
In software engineering, the design and development of agent systems has attracted growing attention. Traditional software metrics and object-oriented software metrics, however, are not suitable for analyzing agent systems, whose quality is determined mainly by complexity and knowledge capability. Taking the ZEUS agent system as an example, this paper uses the FSM framework to define metrics for agent complexity and knowledge capability, together with the corresponding measurement agents.

6.
Objective video quality assessment is of great importance in a variety of video processing applications. Most existing video quality metrics either focus primarily on capturing spatial artifacts in the video signal, or are designed to assess only grayscale video, thereby ignoring important chrominance information. In this paper, on the basis of the top-down visual analysis of cognitive understanding and video features, we propose and develop a novel full-reference perceptual video assessment technique that accepts visual information inputs in the form of a quaternion consisting of contour, color and temporal information. Because of the more important role of chrominance information in the “border-to-surface” mechanism at early stages of cognitive visual processing, our new metric takes into account the chrominance information rather than the luminance information utilized in conventional video quality assessment. Our perceptual quaternion model employs singular value decomposition (SVD) and utilizes human visual psychological features for SVD block weighting to better reflect perceptual focus and interest. Our major contributions include: a new perceptual quaternion that takes chrominance as one spatial feature, and temporal information to model motion or changes across adjacent frames; a three-level video quality measure to reflect visual psychology; and two weighting methods based on entropy and frame correlation. Our experimental validation on the Video Quality Experts Group (VQEG) Phase I FR-TV test dataset demonstrated that our new assessment metric outperforms PSNR, SSIM and PVQM (P8), and has high correlation with perceived video quality.
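To make the SVD block-weighting idea concrete, here is a much-simplified single-channel sketch (not the published quaternion metric): it compares the singular values of corresponding 8x8 blocks of a reference and a distorted frame and weights each block by the entropy of the reference block, assuming 8-bit pixel values.

```python
# Simplified SVD-based block distortion with entropy weighting. The published
# metric instead builds a quaternion of contour, color and temporal features;
# this sketch only illustrates the per-block SVD comparison and weighting step.
import numpy as np

def block_svd_distortion(ref, dist, B=8):
    """Weighted mean difference of singular values over BxB blocks (0-255 pixels)."""
    H, W = ref.shape
    scores, weights = [], []
    for y in range(0, H - B + 1, B):
        for x in range(0, W - B + 1, B):
            r = ref[y:y+B, x:x+B].astype(float)
            d = dist[y:y+B, x:x+B].astype(float)
            sr = np.linalg.svd(r, compute_uv=False)
            sd = np.linalg.svd(d, compute_uv=False)
            scores.append(np.sqrt(np.mean((sr - sd) ** 2)))
            # Entropy of the reference block as a crude perceptual weight.
            hist, _ = np.histogram(r, bins=16, range=(0, 255))
            p = hist[hist > 0] / hist.sum()
            weights.append(float(-(p * np.log2(p)).sum()))
    weights = np.asarray(weights)
    weights = weights / (weights.sum() + 1e-12)
    return float(np.dot(weights, np.asarray(scores)))
```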

7.
Defect detection and restoration of degraded videos is an important topic in media content management systems. Frame pixel-shift is a common form of severe defect in videos, caused by the loss of consecutive pixels in the video transmission system: a small amount of lost image data causes a large number of pixels to be displaced one after another. The damaged region in the affected frame is usually quite large, causing serious degradation of visual quality. This paper addresses the issue of how to automatically detect and restore frame pixel-shift in videos. Pixel-shift frame detection relies on spatio-temporal information and motion estimation. Accurate measurement of the pixel shift is achieved through analysis of temporal frequency information, and restoration is accomplished by reversing the pixel shift and applying spatio-temporal interpolation. Performance evaluation using real video sequences demonstrates the good performance of our algorithm.
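The detection-and-reversal idea can be sketched as follows, under the simplifying assumptions that the shift is global and that the previous frame is intact; phase correlation stands in here for the paper's temporal-frequency analysis, and the localisation and interpolation steps are omitted.

```python
# Estimate and reverse a global pixel shift in a damaged frame using phase
# correlation against the previous frame (a rough stand-in for the method).
import numpy as np

def estimate_shift(prev, damaged):
    """Return (dy, dx) by which `damaged` must be rolled to align with `prev`."""
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(damaged)
    cross = F1 * np.conj(F2)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map large positive offsets to negative shifts (circular correlation).
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return int(dy), int(dx)

def restore(prev, damaged):
    dy, dx = estimate_shift(prev, damaged)
    return np.roll(damaged, shift=(dy, dx), axis=(0, 1))

# Synthetic check: shift a frame by (-3, 5) and recover the correction (3, -5).
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(-3, 5), axis=(0, 1))
print(estimate_shift(frame, shifted))          # -> (3, -5)
assert np.allclose(restore(frame, shifted), frame)
```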

8.
Mobile video quality assessment plays an essential role in multimedia systems and services. In the case of scalable video coding, which enables dynamic adaptation to terminal capabilities and heterogeneous networks, variable resolution is one of the most prominent types of video distortion. In this paper, we propose a new hybrid spatial and temporal distortion metric for evaluating video streaming quality with variable spatio-temporal resolution. The key idea is to project the video sequence into a feature domain and calculate the distortion of content information from the projected principal component matrix and its eigenvectors. The metric can measure the degree of content information degradation, especially in spatio-temporally resolution-scalable video. The performance of the proposed metric is evaluated and compared to state-of-the-art quality evaluation metrics in the literature. Our results show that the proposed metric achieves good correlation with the subjective evaluations of the EPFL scalable video database.
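A loose sketch of the projection idea, under the assumption that a short segment of grayscale frames is available for both versions; the way singular-value and eigenvector differences are pooled below is illustrative, not the published metric.

```python
# Compare the principal structure (PCA/SVD) of a reference and a distorted
# video segment as a proxy for content-information degradation.
import numpy as np

def content_matrix(frames):
    """Stack downsampled grayscale frames as rows of a data matrix."""
    return np.stack([f[::4, ::4].ravel().astype(float) for f in frames])

def content_degradation(ref_frames, dist_frames, k=3):
    """Assumes at least k frames per segment; returns a distortion index."""
    R = content_matrix(ref_frames)
    D = content_matrix(dist_frames)
    R -= R.mean(axis=0)
    D -= D.mean(axis=0)
    _, Sr, Vr = np.linalg.svd(R, full_matrices=False)
    _, Sd, Vd = np.linalg.svd(D, full_matrices=False)
    # Energy difference along the k leading components ...
    energy_term = float(np.abs(Sr[:k] - Sd[:k]).sum() / (Sr[:k].sum() + 1e-12))
    # ... plus how far the leading eigenvectors (spatial patterns) have rotated.
    align = np.abs(np.sum(Vr[:k] * Vd[:k], axis=1))   # |cosine| per component
    direction_term = float(np.mean(1.0 - align))
    return energy_term + direction_term
```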

9.
10.
Error Concealment for Frame Losses in MDC

11.
Effective quality-of-service (QoS) metrics must relate to end-user experience, and for multimedia services they should focus on phenomena that are observable by the end user. Once a congestion event occurs in the network it tends to persist, resulting in long bursts of consecutive packet loss that are observable to the network customer. It has become increasingly apparent that the temporal characteristics of such congestion events have the dominant effect on user-perceived QoS, so a better understanding of them is needed. A rigorous definition of the time between congestion events is given here, together with an associated prediction methodology. The inter-congestion-event time, or equivalently the rate of congestion events per unit time, provides a network quality metric that is easily understood by network users and is conveniently predicted and measured. The contribution of this paper is the definition of a metric that characterizes congestion events and the development of an analytic methodology to predict the expected number of congestion events per unit time. The proposed methodology is evaluated for a variety of traffic models.
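A toy sketch of the empirical side of such a metric, assuming a packet trace with per-packet loss flags and timestamps, and an illustrative burst-length threshold for what counts as a congestion event; the paper's analytic prediction model is not reproduced here.

```python
# Count congestion events (runs of >= burst_len consecutive lost packets),
# their rate per unit time, and the inter-event times, from a loss trace.
from itertools import groupby

def congestion_events(loss_trace, timestamps, burst_len=3):
    """loss_trace: 0/1 per packet (1 = lost); timestamps: per-packet times."""
    events = []
    idx = 0
    for lost, group in groupby(loss_trace):
        run = list(group)
        if lost == 1 and len(run) >= burst_len:
            events.append(timestamps[idx])       # event start time
        idx += len(run)
    return events

def congestion_event_rate(loss_trace, timestamps, burst_len=3):
    events = congestion_events(loss_trace, timestamps, burst_len)
    duration = timestamps[-1] - timestamps[0]
    rate = len(events) / duration if duration > 0 else float("nan")
    inter_event = [b - a for a, b in zip(events, events[1:])]
    return rate, inter_event

# Example: one event (the 3-packet burst) over a 10-second trace -> rate 0.1/s.
trace = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0]
times = [i * 1.0 for i in range(len(trace))]
print(congestion_event_rate(trace, times))
```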

12.
This paper presents the results of a study which investigated the impact of cognitive styles on perceptual multimedia quality. More specifically, we examine the different preferences demonstrated by verbalizers and imagers when viewing multimedia content presented at different quality of service (QoS) levels pertaining to frame rate and color depth. Recognizing multimedia’s infotainment duality, we used the quality of perception (QoP) metric to characterize perceived quality. Results showed that, for both low- and high-dynamism clips, the frame rate at which multimedia content is displayed influences the level of information assimilated by Imagers. While black-and-white presentations were shown to benefit information assimilation for both Bimodals and Imagers, Imagers were found to enjoy presentations more in full 24-bit colour.

13.

Video compression makes the encoded video stream more vulnerable to channel errors, so the quality of the received video is exposed to severe degradation when the compressed video is transmitted over error-prone environments. It is therefore necessary to apply error concealment (EC) techniques in the decoder to improve the quality of the received video. In this regard, an Adaptive Content-based EC Approach (ACBECA) is proposed in this paper, which exploits both the spatial and temporal correlations within the video sequence for EC purposes. The proposed approach adaptively employs two EC techniques, a new spatial-temporal error concealment (STEC) technique and a temporal error concealment (TEC) technique, to recover the lost regions of a frame. The STEC technique proposed in this paper builds on the non-local means concept and recovers each lost macroblock (MB) as the weighted average of similar MBs in the reference frame, whereas the TEC technique recovers the motion vector of the lost MB adaptively by analyzing the behavior of the MB in the frame. The decision to reconstruct a degraded frame temporally or spatially is made dynamically according to the content of the degraded frame (i.e., structure or texture), the type of error, and the block loss rate (BLR). Compared with state-of-the-art EC techniques, the simulation results indicate the superiority of the ACBECA in terms of both objective and subjective quality assessment.
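The non-local-means flavour of the STEC step can be sketched as follows, assuming float grayscale frames and a lost macroblock whose one-pixel surround (and search window) lies inside the frame; parameter names and the weighting kernel are illustrative, not the ACBECA implementation.

```python
# Rebuild a lost macroblock as a weighted average of candidate blocks from the
# reference frame; weights come from how well each candidate's surrounding ring
# matches the ring still available around the lost block (NLM-style weighting).
import numpy as np

def ring(img, top, left, B):
    """Concatenate the one-pixel boundary around a BxB block."""
    return np.concatenate([img[top - 1, left:left + B], img[top + B, left:left + B],
                           img[top:top + B, left - 1], img[top:top + B, left + B]])

def nlm_conceal(cur, ref, top, left, B=16, search=8, h=10.0):
    """Fill the lost BxB block in `cur` at (top, left) from `ref` (float frames)."""
    target_ring = ring(cur, top, left, B).astype(float)
    acc = np.zeros((B, B))
    wsum = 0.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            cand_ring = ring(ref, y, x, B).astype(float)
            dist2 = np.mean((target_ring - cand_ring) ** 2)
            w = np.exp(-dist2 / (h * h))          # NLM-style similarity weight
            acc += w * ref[y:y + B, x:x + B]
            wsum += w
    cur[top:top + B, left:left + B] = acc / wsum
    return cur
```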


14.
Eye tracking methods usually focus on obtaining the highest possible spatial precision, locating the centre of the pupil and the point of gaze for a series of frames. However, for the analysis of eye movements such as saccades or fixations, the temporal precision needs to be optimised as well: the results should not only be precise, but also stable. Eye tracking using low-cost hardware such as webcams brings a new series of challenges that have to be specifically taken into account. Noise, low resolution and low frame rates are some of these challenges, and they are ultimately the cause of temporal instabilities that negatively affect the results. This paper proposes a measure of the temporal stability of pupil detection algorithms, applied to video streams obtained from webcams. The aim of this metric is to compare and evaluate the temporal stability of different algorithms (following a multi-layered approach to pupil detection), in order to identify which one is best suited to eye-movement detection with low-cost hardware. The obtained results show how the temporal stability of different algorithms is affected by several factors.
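One simple way such a stability measure could look, stated purely as an assumption-laden sketch rather than the paper's definition: run each detector on the same sequence and quantify the frame-to-frame jitter of the reported pupil centre during quiet (fixation-like) periods.

```python
# Temporal-stability proxy for a pupil detector: RMS frame-to-frame displacement
# of the detected centre, counted only over small ("quiet") transitions so that
# genuine saccades do not penalise the detector. Threshold is illustrative.
import numpy as np

def temporal_stability(centres, fix_threshold=2.0):
    """centres: (N, 2) array of per-frame pupil-centre estimates in pixels."""
    centres = np.asarray(centres, dtype=float)
    steps = np.linalg.norm(np.diff(centres, axis=0), axis=1)
    quiet = steps[steps < fix_threshold]
    return float(np.sqrt(np.mean(quiet ** 2))) if quiet.size else float("nan")

# Two hypothetical detectors run on the same still sequence: lower = more stable.
rng = np.random.default_rng(2)
truth = np.zeros((100, 2))
detector_a = truth + 0.3 * rng.standard_normal((100, 2))
detector_b = truth + 1.2 * rng.standard_normal((100, 2))
print(temporal_stability(detector_a), temporal_stability(detector_b))
```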

15.

In real-time rendering, a 3D scene is modelled with meshes of triangles that the GPU projects to the screen. The triangles are discretized by sampling them at regular spatial intervals to generate fragments, to which a shader program then adds texture and lighting effects. Realistic scenes require detailed geometric models, complex shaders, high-resolution displays and high screen refresh rates, all of which come at a great cost in compute time and energy. This cost is often dominated by the fragment shader, which runs for each sampled fragment. Conventional GPUs sample the triangles once per pixel; however, many screen regions contain little variation, produce identical fragments, and could be sampled at lower than pixel rate with no loss in quality. Additionally, because temporal frame coherence makes consecutive frames very similar, such regions tend to persist from frame to frame. This work proposes Dynamic Sampling Rate (DSR), a novel hardware mechanism to reduce redundancy and improve energy efficiency in graphics applications. DSR analyzes the spatial frequencies of the scene once it has been rendered. It then leverages the temporal coherence of consecutive frames to decide, for each region of the screen, the lowest sampling rate to employ in the next frame that maintains image quality. We evaluate the performance of a state-of-the-art mobile GPU architecture extended with DSR for a wide variety of applications. Experimental results show that DSR is able to remove most of the redundancy inherent in the color computations at fragment granularity, which brings average speedups of 1.68x and energy savings of 40%.
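As a toy software analogue of the decision logic only (DSR itself is a hardware mechanism), the sketch below picks a per-tile sampling rate for the next frame from the tile's high-frequency content and how much it changed since the previous frame; tile size, thresholds and the rate levels are arbitrary assumptions.

```python
# Per-tile sampling-rate decision: keep full pixel-rate sampling for detailed or
# changing tiles, and coarsen flat, temporally stable tiles. Assumes frame
# dimensions are multiples of the tile size and pixel values in 0-255.
import numpy as np

def tile_sampling_rates(prev_frame, frame, tile=32,
                        freq_thresh=4.0, change_thresh=2.0):
    H, W = frame.shape
    rates = np.ones((H // tile, W // tile), dtype=int)   # 1 = one sample per pixel
    for ty in range(H // tile):
        for tx in range(W // tile):
            sl = (slice(ty * tile, (ty + 1) * tile),
                  slice(tx * tile, (tx + 1) * tile))
            cur = frame[sl].astype(float)
            prev = prev_frame[sl].astype(float)
            # High-frequency content: mean absolute gradient inside the tile.
            gy, gx = np.gradient(cur)
            detail = np.mean(np.abs(gy)) + np.mean(np.abs(gx))
            # Temporal coherence: only lower the rate if the tile barely changed.
            changed = np.mean(np.abs(cur - prev))
            if detail < freq_thresh and changed < change_thresh:
                rates[ty, tx] = 4          # one sample per 4x4 pixels
            elif detail < 2 * freq_thresh and changed < change_thresh:
                rates[ty, tx] = 2          # one sample per 2x2 pixels
    return rates
```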


16.
This paper deals with monitoring user perception of multimedia presentations in a Universal Multimedia Access (UMA) enabled system using objective no-reference (NR) metrics. These NR metrics are designed, within a novel architecture, for a multimedia viewer in a UMA-enabled system. The first metric measures block-edge impairments in a video frame at the receiver end, based on the observation that they occur in regions with low spatial activity. The second metric evaluates the quality of a reconstructed video frame in the event of packet loss; here, the structure of the artifact itself is exploited for the evaluation. Both metrics have low computational complexity and are feasible for real-time monitoring of streaming video in a multimedia communication scenario. Further, in rate-adaptive video streaming, these metrics could serve as feedback parameters to dynamically adapt bit rates in response to network congestion.
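A compact sketch of the first metric's premise, assuming float grayscale frames aligned to an 8x8 coding grid; the activity threshold and pooling are illustrative, not the authors' formulation.

```python
# No-reference blockiness estimate: measure discontinuities along the 8x8
# coding-block grid, counting them only where local spatial activity is low,
# which is where block edges are actually visible.
import numpy as np

def block_edge_impairment(frame, block=8, activity_thresh=4.0):
    f = frame.astype(float)
    score, count = 0.0, 0
    # Vertical block boundaries.
    for x in range(block, f.shape[1], block):
        edge = np.abs(f[:, x] - f[:, x - 1])                 # step across boundary
        activity = (np.abs(f[:, x + 1] - f[:, x])
                    if x + 1 < f.shape[1] else edge * 0)     # flatness just inside
        mask = activity < activity_thresh
        score += edge[mask].sum()
        count += int(mask.sum())
    # Horizontal block boundaries.
    for y in range(block, f.shape[0], block):
        edge = np.abs(f[y, :] - f[y - 1, :])
        activity = (np.abs(f[y + 1, :] - f[y, :])
                    if y + 1 < f.shape[0] else edge * 0)
        mask = activity < activity_thresh
        score += edge[mask].sum()
        count += int(mask.sum())
    return score / count if count else 0.0
```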

17.
Packet loss is of great importance as a metric that characterizes network performance, and is crucial for video applications, congestion control and routing. Most existing measurement tools report the packet loss of network links rather than the actual packet loss experienced by an individual application. Moreover, because packet loss is relatively rare and short-lived, active measurement methods need to inject a large number of packets and run for a long time to report accurate estimates, which introduces additional intrusiveness into the network and perturbs user traffic. In this paper, we present a new packet loss estimation technique that makes use of the user_data field of the video stream; it is less intrusive because it does not affect video playback and does not inject an extra probing stream. It can also provide detailed packet-loss information for I, P and B frames. The accuracy of the algorithm has been evaluated with both simulations and experiments over real-world Internet paths. In addition, we analyze the video quality distortion caused by packet loss in different frame types, and build a real-time video quality monitoring system.
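Schematically, the measurement could work as in the sketch below, which assumes the sender embeds a running sequence number and frame type for each frame (the actual user_data bitstream syntax is not reproduced) and the receiver infers per-type loss from the gaps in what it receives.

```python
# Per-frame-type loss estimation from sequence tags carried with the video:
# compare what was sent with what arrived, grouped by frame type.
from collections import Counter

def packet_loss_by_frame_type(sent_tags, received_tags):
    """sent_tags / received_tags: lists of (seq_no, frame_type) tuples,
    where frame_type is 'I', 'P' or 'B'. Returns the loss ratio per type."""
    sent = Counter(t for _, t in sent_tags)
    got = Counter(t for _, t in received_tags)
    return {ftype: 1.0 - got.get(ftype, 0) / n for ftype, n in sent.items()}

# Example: one P frame and one B frame dropped out of six frames.
sent = [(0, 'I'), (1, 'P'), (2, 'B'), (3, 'B'), (4, 'P'), (5, 'I')]
received = [(0, 'I'), (2, 'B'), (4, 'P'), (5, 'I')]
print(packet_loss_by_frame_type(sent, received))
# -> {'I': 0.0, 'P': 0.5, 'B': 0.5}
```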

18.
When a compressed video stream is transmitted over a channel, limited bandwidth or channel instability can easily corrupt or lose data. Such errors affect not only the current video frame but also propagate to subsequent frames, so techniques are needed to reduce their impact. To address this problem, and building on a study of the latest video compression standard H.264, this paper improves existing error concealment algorithms within the H.264 framework and proposes a temporal sub-block matching error concealment algorithm suited to the H.264 coding standard. The algorithm first uses 8×8 sub-blocks instead of 16×16 macroblocks as the concealment unit, then applies the boundary matching algorithm with different boundary pixels for different sub-blocks, and finds the best matching block in the reference frame using an improved quarter-pixel-accuracy diamond search. Experimental results show that, because the algorithm makes effective use of the information in the H.264 compressed stream, it recovers the corrupted signal better than traditional temporal error concealment algorithms.

19.
Environment sampling is a popular technique for rendering scenes with distant environment illumination. However, the temporal consistency of animations synthesized under dynamic environment sequences has not been fully studied. This paper addresses this problem and proposes a novel method, namely spatiotemporal sampling, to fully exploit both the temporal and spatial coherence of environment sequences. Our method treats an environment sequence as a spatiotemporal volume and samples the sequence by stratifying the volume adaptively. For this purpose, we first present a new metric to measure the importance of each stratified volume. A stratification algorithm is then proposed to adaptively suppress the abrupt temporal and spatial changes in the generated sampling patterns. The proposed method is able to automatically adjust the number of samples for each environment frame and produce temporally coherent sampling patterns. Comparative experiments demonstrate the capability of our method to produce smooth and consistent animations under dynamic environment sequences.

20.
We consider the time-dependent demands for data movement that a parallel program makes on the architecture that executes it. The result is an architecture-independent metric that represents the temporal behavior of data-movement requirements. Programs are described as series of computations and data movements, and while message passing is not ruled out, we focus on explicit parallel programs using a fixed number of processes in a distributed shared-memory environment. Operations are assumed to be explicitly allocated to processors when the metric is applied, which might correspond to intermediate code in a parallelizing compiler. The metric is called the interprocess read (IR) temporal metric. A key to developing an architecture-independent temporal metric is modeling program execution time in an architecture-independent way. This is possible because well-synchronized parallel programs make coordinated progress above a certain level of granularity. Our execution time characterization takes into account barrier synchronization and critical sections. We illustrate the metric using instruction counts, first on simple code fragments and then on multiprocessor program traces (Splash benchmarks). Results of running the benchmarks on simulated network architectures show that the IR metric at the time scale of network response predicts performance better than whole-program measures.
