20 similar documents found; search took 171 ms.
2.
IPTV is an emerging Internet television service that offers an engaging interactive experience and readily integrates television, entertainment, e-commerce, and other multimedia services. As the business develops rapidly, patent applications in this field grow by the day. Companies and individuals who intend to protect IPTV-related inventions through patent applications should first search the patent filings of other companies and individuals in the industry, both to maximize the value of their exclusive rights and to avoid conflicts with the interests of others. Patent classification codes are a powerful tool for searching patent literature. Starting from the International Patent Classification (IPC) and the European Classification (ECLA), this article describes how classification codes related to the IPTV field are distributed in the two systems and how patent documents with different inventive points are assigned to those codes, offering interested readers a reference for retrieving IPTV-related patent literature by classification code.
6.
Building on an analysis of edge description with Freeman chain codes, this paper proposes three descriptors: chain-code entropy for the statistical characteristics of a chain code, chain-code spatial-distribution entropy for its spatial distribution, and chain-code correlation entropy for the correlation between chain codes, and combines the three for image retrieval. Because the method accounts for the statistical, spatial-distribution, and correlation characteristics of the chain code at the same time, and all three descriptors are invariant to scale, rotation, and translation and independent of the chain-code starting point, it achieves better retrieval results than traditional methods; experiments confirm the effectiveness of the algorithm.
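The first of the three descriptors, chain-code entropy, reduces to the Shannon entropy of the code's direction histogram. A minimal sketch (the function name and the example chain are ours, not the paper's):

```python
import math
from collections import Counter

def chain_code_entropy(chain):
    """Shannon entropy of the direction histogram of an 8-connected
    Freeman chain code (direction symbols 0-7)."""
    counts = Counter(chain)
    n = len(chain)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A square traced clockwise: two steps in each of the four even directions.
code = [0, 0, 6, 6, 4, 4, 2, 2]
print(round(chain_code_entropy(code), 3))  # uniform over 4 directions -> 2.0
```

The spatial-distribution and correlation entropies would be computed analogously over positional and pairwise statistics of the code rather than over the plain direction histogram.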
8.
With the Internet in widespread use, image data keep growing, and quickly retrieving images of interest from a massive collection has become difficult. This paper proposes a Hadoop-based image retrieval method. SURF feature points are first extracted from each image and turned into a feature matrix by K-Means clustering and PCA dimensionality reduction; locality-sensitive hashing (LSH) then yields a fixed-length hash code, and HBase stores the images and hash values; at query time, similarity is computed with Euclidean distance. Image retrieval experiments on the MirFlickr dataset show that the method greatly improves retrieval efficiency and can meet the needs of massive-scale image retrieval.
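The LSH step can be sketched with random hyperplanes, a standard LSH family for cosine/Euclidean similarity; the abstract does not say which family the authors use, so this choice is an assumption:

```python
import random

def make_lsh(dim, n_bits, seed=0):
    """Random-hyperplane LSH: bit i is the sign of the projection of the
    feature vector onto a random Gaussian direction. (The LSH family is
    our assumption; the paper does not specify one.)"""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def hash_code(vec):
        return ''.join(
            '1' if sum(p * v for p, v in zip(plane, vec)) >= 0 else '0'
            for plane in planes)
    return hash_code

h = make_lsh(dim=4, n_bits=16)          # 16-bit code per feature vector
code = h([0.9, 0.1, 0.3, 0.5])
print(len(code))  # 16
```

In the paper's pipeline the input vector would be a PCA-reduced SURF feature matrix row, and the resulting fixed-length codes would be stored in HBase as retrieval keys.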
11.
Fractal image coding features a high compression ratio, fast decoding, and good reconstructed image quality, but the enormous domain search performed during encoding makes its computational complexity high and its encoding time long, which hinders its practicality and wider adoption. To address this, the paper proposes an encoding algorithm based on a four-line-sum feature. Using the relationship between the root-mean-square matching error and the four-line-sum feature, the algorithm turns the global search into a local (nearest-neighbor) search, restricting the search space and reducing the number of domain blocks examined, and thus speeds up encoding. Simulation results show that the decoded image quality is objectively better than that of the 1-norm feature algorithm; compared with baseline fractal coding, the four-line-sum algorithm leaves the subjective quality of the reconstructed image unchanged while greatly increasing encoding speed.
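The abstract does not define the four line sums, so the sketch below assumes one plausible choice (middle row, middle column, and the two diagonals of a block) purely to illustrate how a low-dimensional feature turns the global domain search into a neighborhood search:

```python
def line_sum_feature(block):
    """Hypothetical four-line-sum feature of a square block: sums along
    the middle row, middle column, and the two diagonals (the abstract
    does not define the four lines, so this choice is an assumption)."""
    n = len(block)
    mid = n // 2
    row = sum(block[mid])
    col = sum(block[i][mid] for i in range(n))
    diag = sum(block[i][i] for i in range(n))
    anti = sum(block[i][n - 1 - i] for i in range(n))
    return (row, col, diag, anti)

def neighbour_search(range_feat, domain_feats, radius):
    """Keep only domain blocks whose feature lies within `radius` of the
    range block's feature (city-block distance) instead of searching all."""
    return [i for i, f in enumerate(domain_feats)
            if sum(abs(a - b) for a, b in zip(range_feat, f)) <= radius]

domains = [[[1, 2], [3, 4]], [[9, 9], [9, 9]], [[1, 2], [3, 5]]]
feats = [line_sum_feature(d) for d in domains]
print(neighbour_search(line_sum_feature([[1, 2], [3, 4]]), feats, radius=4))
```

Only the surviving candidates would then be compared against the range block with the full root-mean-square error.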
12.
To improve the accuracy of generating textual descriptions from images, this paper proposes a method that builds on the traditional encoder-decoder framework and adds visual attention at both ends: the encoder combines spatial attention with channel-wise attention over the image, and the decoder uses adaptive visual attention, adding an extra "visual sentinel" module to the traditional decoder. During caption generation, the method automatically decides whether to rely on image features or on semantic features and routes the decision to the corresponding attention mechanism. Experiments show that, compared with a single visual attention mechanism, the method achieves higher caption accuracy and better image description performance.
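The adaptive step can be illustrated by scoring a visual sentinel alongside the image regions in a single softmax; the sentinel's weight beta measures how much the decoder relies on its language state rather than on visual features. All names and numbers below are illustrative, not the paper's:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def adaptive_context(region_scores, sentinel_score, regions, sentinel):
    """Adaptive attention with a visual sentinel (sketch): the sentinel is
    scored together with the image regions; its softmax weight beta is how
    much the decoder leans on language state instead of visual features."""
    weights = softmax(region_scores + [sentinel_score])
    beta = weights[-1]
    dim = len(sentinel)
    visual = [sum(w * r[d] for w, r in zip(weights[:-1], regions))
              for d in range(dim)]
    return [v + beta * s for v, s in zip(visual, sentinel)], beta

regions = [[1.0, 0.0], [0.0, 1.0]]
ctx, beta = adaptive_context([2.0, 0.5], 3.0, regions, sentinel=[0.5, 0.5])
print(round(beta, 2))  # high beta: the decoder leans on the sentinel here
```

A word like "the" would typically yield a high beta (little visual grounding needed), while an object word would drive beta toward zero.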
13.
Signal Processing: Image Communication, 2001, 16(7): 643-656
Iterated transformation theory (ITT) coding, also known as fractal coding, in its original form allows fast decoding but suffers from long encoding times. During the encoding step, a large number of block best-matching searches have to be performed, which makes the process computationally expensive. Because of that, most research efforts in this field focus on speeding up the encoding algorithm. Many different methods and algorithms have been proposed, from simple classification methods to multi-dimensional nearest-key search. We present in this paper a new method that significantly reduces the computational load of ITT-based image coding. Both domain and range blocks of the image are transformed into the frequency domain (which has proven to be more appropriate for ITT coding). The domain blocks are then used to train a two-dimensional Kohonen neural network (KNN), forming a codebook similar to that of vector quantization coding. The property of the KNN (and of self-organizing feature maps in general) of preserving the topology of the input space (the transformed domain blocks) makes it possible to perform a neighborhood search to find the piecewise transformation between domain and range blocks.
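A minimal 2-D Kohonen map that could serve as such a codebook might look like the following (a generic SOM sketch, not the authors' implementation; the frequency-domain transform of the blocks is omitted):

```python
import math
import random

def train_som(data, rows, cols, dim, epochs=20, seed=0):
    """Minimal 2-D Kohonen map (generic sketch): for each sample, move the
    best-matching unit and its grid neighbours toward the sample, with a
    learning rate and neighbourhood radius that shrink over time."""
    rng = random.Random(seed)
    grid = {(r, c): [rng.random() for _ in range(dim)]
            for r in range(rows) for c in range(cols)}
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)
        radius = max(1.0, (max(rows, cols) / 2) * (1 - t / epochs))
        for x in data:
            bmu = min(grid, key=lambda k: sum((w - v) ** 2
                                              for w, v in zip(grid[k], x)))
            for k, w in grid.items():
                d2 = (k[0] - bmu[0]) ** 2 + (k[1] - bmu[1]) ** 2
                if d2 <= radius ** 2:
                    h = lr * math.exp(-d2 / (2 * radius ** 2))
                    grid[k] = [wi + h * (xi - wi) for wi, xi in zip(w, x)]
    return grid

# toy "domain blocks": two clusters in [0, 1]^2
data = [[0.1, 0.1], [0.9, 0.9], [0.05, 0.15], [0.95, 0.85]]
som = train_som(data, rows=2, cols=2, dim=2)
bmu = min(som, key=lambda k: sum((w - v) ** 2
                                 for w, v in zip(som[k], [0.1, 0.1])))
print(bmu)  # grid coordinate of the unit closest to the query block
```

Because the trained map preserves input-space topology, candidates near a range block can be found by searching around the best-matching unit's grid neighbours rather than the whole codebook.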
14.
The feature-vector approach to fast fractal image coding is among the most innovative and promising, but it has several drawbacks, notably the high dimensionality of the feature vectors. To address this, the paper proposes an alternative feature for reducing fractal encoding time. As an application, it first defines a new feature of an image block, the cross trace, and then presents a fast fractal algorithm based on it. The algorithm converts the range-domain block matching problem into a neighborhood search in the sense of the cross trace. Experiments on the 256×256 Lena image show that, compared with the baseline full-search fractal algorithm and depending on the size of the search neighborhood, the algorithm achieves a speed-up of more than 3× at the same peak signal-to-noise ratio, or more than 100× at the cost of some loss in subjective quality.
15.
Tak-Shing Wong, C. A. Bouman, I. Pollak, Zhigang Fan. IEEE Transactions on Image Processing, 2009, 18(11): 2518-2535
The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm designed to produce substantially higher quality images from the same standard JPEG encodings. The method incorporates into the decoding process a document image model that accounts for the wide variety of content in modern complex color documents. It first segments the JPEG-encoded document into regions corresponding to background, text, and picture content; the regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model that accounts for the spatial characteristics of text and graphics. Our experimental comparisons to baseline JPEG decoding as well as to three other decoding schemes demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.
16.
Research on k-best sphere decoding algorithms for MIMO systems
Starting from an analysis of the layer-ordered k-best sphere decoding algorithm, a breadth-first strategy with constant complexity, this paper proposes a layer-ordered k-best sphere decoder that keeps a variable number of expanded nodes per surviving node (k-best SDA II). Under a real-valued SDA model with 64-QAM modulation and 8 surviving nodes per layer, simulations show that the constant-expansion k-best SDA (k-best SDA I) loses essentially no performance once at least 2 expanded nodes are kept per node. The improved k-best SDA II strikes a better trade-off between performance and complexity than k-best SDA I, reducing computational complexity by about 28% with essentially negligible performance loss.
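The breadth-first k-best principle, keeping only the k lowest-metric partial paths at each layer, can be sketched as follows. For simplicity the per-layer costs here are path-independent, whereas a real sphere decoder recomputes each branch metric per path after interference cancellation:

```python
def k_best_search(levels, k):
    """Breadth-first k-best tree search (sketch): `levels` gives, per
    detection layer, the branch cost of each candidate symbol; at every
    layer only the k lowest-cost partial paths survive."""
    paths = [([], 0.0)]                       # (symbol path, accumulated metric)
    for costs in levels:
        expanded = [(path + [s], metric + c)
                    for path, metric in paths
                    for s, c in enumerate(costs)]
        paths = sorted(expanded, key=lambda p: p[1])[:k]   # prune to k best
    return paths[0]

# three layers, two candidate symbols per layer
levels = [[0.3, 0.1], [0.2, 0.4], [0.5, 0.0]]
best_path, best_metric = k_best_search(levels, k=2)
print(best_path, round(best_metric, 2))
```

The paper's variant makes the number of children expanded per surviving node variable rather than constant, which is what yields the complexity reduction.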
17.
Better OPM/L Text Compression
An OPM/L data compression scheme suggested by Ziv and Lempel, LZ77, is applied to text compression. A slightly modified version suggested by Storer and Szymanski, LZSS, is found to achieve compression ratios as good as most existing schemes for a wide range of texts. LZSS decoding is very fast, and comparatively little memory is required for encoding and decoding. Although the time complexity of LZ77 and LZSS encoding is O(M) for a text of M characters, straightforward implementations are very slow. The time-consuming step of these algorithms is the search for the longest string match. Here a binary search tree is used to find the longest string match, and experiments show that this results in a dramatic increase in encoding speed. The binary tree algorithm can be used to speed up other OPM/L schemes and other applications where a longest string match is required. Although the LZSS scheme imposes a limit on the length of a match, the binary tree algorithm will work without any limit.
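The tree-based longest-match search can be sketched as follows: a binary search tree keyed on window suffixes is walked while tracking the best common prefix with the lookahead buffer. This is an illustrative reconstruction, not the paper's code; a real encoder would update the tree incrementally as the window slides instead of rebuilding it per match:

```python
class Node:
    __slots__ = ("pos", "left", "right")
    def __init__(self, pos):
        self.pos, self.left, self.right = pos, None, None

def longest_match(text, window_start, cur, max_len=16):
    """Find the longest match for text[cur:] within the window
    text[window_start:cur] using a BST over window suffixes (sketch)."""
    def common(a):
        n = 0
        while (n < max_len and cur + n < len(text) and a + n < len(text)
               and text[a + n] == text[cur + n]):
            n += 1
        return n

    root = None                       # build the tree over window positions
    for pos in range(window_start, cur):
        if root is None:
            root = Node(pos)
            continue
        node = root
        while True:
            if text[pos:pos + max_len] < text[node.pos:node.pos + max_len]:
                if node.left is None:
                    node.left = Node(pos); break
                node = node.left
            else:
                if node.right is None:
                    node.right = Node(pos); break
                node = node.right

    best_pos, best_len = -1, 0        # walk the tree toward the lookahead
    node = root
    while node is not None:
        n = common(node.pos)
        if n > best_len:
            best_pos, best_len = node.pos, n
        if text[cur:cur + max_len] < text[node.pos:node.pos + max_len]:
            node = node.left
        else:
            node = node.right
    return best_pos, best_len

text = "abracadabra abracadabra"
print(longest_match(text, 0, 12))  # the second "abracadabra" matches the first
```

Each comparison discards one subtree, so the search visits a path of nodes rather than every window position, which is the source of the speed-up the paper reports.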
18.
Image compression by B-tree triangular coding
This paper describes an algorithm for still image compression called B-tree triangular coding (BTTC). The coding scheme is based on the recursive decomposition of the image domain into right-angled triangles arranged in a binary tree. The method is attractive because of its fast encoding, O(n log n), and decoding, Θ(n), where n is the number of pixels, and because it is easy to implement and to parallelize. Experimental studies indicate that BTTC produces images of satisfactory quality from both a subjective and an objective point of view. One advantage of BTTC over JPEG is its shorter execution time.
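The recursive decomposition can be sketched as follows: a right triangle becomes a leaf when barycentric interpolation of its three vertex values predicts every interior pixel within a tolerance, and is otherwise split at the midpoint of its hypotenuse (an illustrative sketch, not the authors' code):

```python
def interp(tri, vals, x, y):
    """Barycentric interpolation of the three vertex values at (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    a = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    b = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    c = 1.0 - a - b
    return a, b, c, a * vals[0] + b * vals[1] + c * vals[2]

def decompose(img, tri, tol, leaves):
    """Recursive B-tree triangular decomposition (sketch): keep `tri` as a
    leaf if its vertices predict every interior pixel within `tol`, else
    split it at the midpoint of the hypotenuse into two right triangles."""
    (x1, y1), (x2, y2), (x3, y3) = tri   # (x1,y1)-(x2,y2) is the hypotenuse
    vals = [img[y1][x1], img[y2][x2], img[y3][x3]]
    ok = True
    for y in range(min(y1, y2, y3), max(y1, y2, y3) + 1):
        for x in range(min(x1, x2, x3), max(x1, x2, x3) + 1):
            a, b, c, pred = interp(tri, vals, x, y)
            if min(a, b, c) < -1e-9:     # pixel lies outside the triangle
                continue
            if abs(pred - img[y][x]) > tol:
                ok = False
    mx, my = (x1 + x2) // 2, (y1 + y2) // 2
    if ok or (mx, my) in ((x1, y1), (x2, y2)):
        leaves.append(tri)               # leaf of the binary tree
    else:                                # midpoint becomes the new right angle
        decompose(img, ((x1, y1), (x3, y3), (mx, my)), tol, leaves)
        decompose(img, ((x2, y2), (x3, y3), (mx, my)), tol, leaves)

# a linear ramp is reproduced exactly, so the initial triangle stays whole
img = [[x + y for x in range(5)] for y in range(5)]
leaves = []
decompose(img, ((0, 0), (4, 4), (4, 0)), tol=0.5, leaves=leaves)
print(len(leaves))  # 1
```

The encoder then stores only the tree shape and the vertex values; decoding re-runs the interpolation once per pixel, which is where the Θ(n) decoding time comes from.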