Similar Documents
Found 20 similar documents (search time: 46 ms)
1.
In this paper, we propose a simple and flexible spell checker using efficient associative matching in the AURA modular neural system. Our approach aims to provide a pre-processor for an information retrieval (IR) system, allowing the user's query to be checked against a lexicon and any spelling errors corrected to prevent wasted searching. IR searching is so computationally intensive that preventing futile searches yields a real saving in computational cost. We evaluate our approach against several commonly used spell-checking techniques for memory use, retrieval speed, and recall accuracy. The proposed methodology has low memory use, high speed for word-presence checking, reasonable speed for spell checking, and a high recall rate.
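A minimal sketch of the pre-processor idea described above, using a plain Python set for the presence check and edit distance for correction. The AURA correlation matrix memories themselves are not reproduced here; the lexicon, the max_dist threshold, and the function names are illustrative assumptions.

```python
# Sketch of a spell-checking pre-processor for an IR front end: check each
# query term against the lexicon, repair or drop unknown terms before the
# (expensive) search runs. Not the AURA associative-matching implementation.

def edit_distance(a, b):
    """Classic dynamic-programming (Levenshtein) distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def preprocess_query(query, lexicon, max_dist=2):
    """Replace unknown terms with the closest lexicon word, if close enough."""
    repaired = []
    for term in query.lower().split():
        if term in lexicon:                              # fast presence check
            repaired.append(term)
            continue
        best = min(lexicon, key=lambda w: edit_distance(term, w))
        if edit_distance(term, best) <= max_dist:
            repaired.append(best)                        # correct before searching
        # else: drop the term to avoid a futile search
    return " ".join(repaired)

lexicon = {"information", "retrieval", "neural", "network", "query"}
print(preprocess_query("informaton retreival", lexicon))
# -> "information retrieval"
```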

2.
An Approximate Matching Algorithm for Large-Scale Dictionaries Based on a Multiple-Index Model (total citations: 1; self-citations: 0; citations by others: 1)
Approximate dictionary matching algorithms are used in editor spelling correction, search engine query correction, optical character recognition result checking, and related fields. A traditional single-index scheme can rarely guarantee high recall while maintaining high performance, and the problem worsens as the dictionary grows. This paper proposes a multiple-index model for approximate matching over large-scale dictionaries. The background dictionary is first partitioned into sub-dictionaries by word length, and one or more of unigram, bigram, trigram, and quadgram indexes are built for each sub-dictionary according to a chosen strategy. To find approximate matches for a user pattern P, the appropriate n-gram index chains are probed to obtain a candidate set C; computing the edit distance between P and each word W in C then yields the final result set R of all matches for P. Experiments show that approximate dictionary matching based on the multiple-index model greatly reduces the number of candidate matches and thereby speeds up approximate dictionary matching.
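A simplified sketch of the multiple-index scheme just described, with one bigram index per word-length partition. The index-selection strategy, the distance bound k, and all names are illustrative assumptions rather than the paper's exact design.

```python
# Partition the dictionary by length, build a bigram inverted index per
# partition, collect candidates C for a pattern P from the indexes of
# nearby lengths, then verify each candidate with an exact edit distance.

from collections import defaultdict

def bigrams(w):
    return {w[i:i + 2] for i in range(len(w) - 1)}

def build_index(dictionary):
    index = defaultdict(lambda: defaultdict(set))  # length -> bigram -> words
    for w in dictionary:
        for g in bigrams(w):
            index[len(w)][g].add(w)
    return index

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lookup(p, index, k=1):
    candidates = set()
    for length in range(len(p) - k, len(p) + k + 1):   # only nearby lengths
        bucket = index.get(length, {})
        for g in bigrams(p):
            candidates |= bucket.get(g, set())
    return sorted(w for w in candidates if edit_distance(p, w) <= k)

idx = build_index(["match", "watch", "patch", "catch", "matcher"])
print(lookup("natch", idx))   # -> ['catch', 'match', 'patch', 'watch']
```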

3.
Spelling speech recognition can be applied for several purposes, including enhancement of speech recognition systems and implementation of name retrieval systems. This paper presents a Thai spelling analysis used to develop a Thai spelling speech recognizer. The Thai phonetic characteristics, alphabet system, and spelling methods have been analyzed. As a training resource, two alternative corpora, a small spelling speech corpus and an existing large continuous speech corpus, are used to train hidden Markov models (HMMs), and their recognition results are compared with each other. To solve the problem of the difference in utterance speed between spelling utterances and continuous speech utterances, adjustment of utterance speed has been taken into account. Two alternative language models, bigram and trigram, are used to investigate the performance of spelling speech recognition. Our approach achieves up to a 98.0% letter correction rate, 97.9% letter accuracy, and an 82.8% utterance correction rate when the language model is trained on trigrams and the acoustic model is trained from the small spelling speech corpus with eight Gaussian mixtures.

4.
In this paper, we propose a simple, flexible, and efficient hybrid spell-checking methodology based upon phonetic matching, supervised learning, and associative matching in the AURA neural system. We integrate Hamming distance and n-gram algorithms that have high recall for typing errors with a phonetic spell-checking algorithm in a single novel architecture. Our approach is suitable for any spell-checking application, though it is aimed toward isolated word error correction, particularly spell checking user queries in a search engine. We use a novel scoring scheme to integrate the retrieved words from each spelling approach and calculate an overall score for each matched word. From the overall scores, we can rank the possible matches. We evaluate our approach against several benchmark spell-checking algorithms for recall accuracy. Our proposed hybrid methodology has the highest recall rate of the techniques evaluated, combining a high recall rate with low computational cost.
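The paper's novel scoring scheme is not reproduced here, but the general score-fusion pattern it describes can be sketched as follows. Soundex stands in for the phonetic component, difflib's similarity ratio for the distance-based component, and the weights are illustrative assumptions.

```python
# Run several cheap matchers (bigram overlap, a phonetic code, a sequence
# similarity ratio) and combine their scores into a single ranking.

import difflib

def soundex(word):
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    out, last = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            out += code
        if ch not in "hw":            # h and w do not separate equal codes
            last = code
    return (out + "000")[:4]

def ngram_overlap(a, b, n=2):
    A = {a[i:i + n] for i in range(len(a) - n + 1)}
    B = {b[i:i + n] for i in range(len(b) - n + 1)}
    return len(A & B) / max(len(A | B), 1)          # Jaccard on bigrams

def hybrid_rank(query, lexicon):
    def score(w):
        return (0.4 * ngram_overlap(query, w)
                + 0.3 * (soundex(query) == soundex(w))   # phonetic match
                + 0.3 * difflib.SequenceMatcher(None, query, w).ratio())
    return sorted(lexicon, key=score, reverse=True)

print(hybrid_rank("recieve", ["receive", "recipe", "relieve", "deceive"]))
# 'receive' is ranked first
```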

5.
International Journal of Computer Mathematics, 2012, 89(3-4): 149-153
The Aho-Corasick algorithm is a well-known method of determining the occurrences of one of several given pattern strings in a given text string. We address the question of augmenting the pattern matching machine constructed by this algorithm with a new pattern string, both on-line and off-line. We show that augmenting a machine of N nodes with a new pattern string of length m takes Θ(mN) time on-line and Θ(N) time off-line.
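For reference, a compact Python sketch of the pattern matching machine being augmented. Adding a new pattern invalidates existing failure links, which is why this naive version simply rebuilds the machine; the paper's Θ(N) off-line and Θ(mN) on-line augmentation procedures are not reproduced.

```python
# Aho-Corasick: phase 1 builds a trie of the patterns, phase 2 computes
# failure links by breadth-first search, and search scans the text once.

from collections import deque

def build_machine(patterns):
    goto, fail, out = [{}], [0], [[]]          # one entry per state
    for pat in patterns:                       # phase 1: build the trie
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append([])
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].append(pat)
    queue = deque(goto[0].values())            # phase 2: BFS failure links
    while queue:
        r = queue.popleft()
        for ch, s in goto[r].items():
            queue.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] += out[fail[s]]             # inherit matches via failure
    return goto, fail, out

def search(text, machine):
    goto, fail, out = machine
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[s]]
    return hits

m = build_machine(["he", "she", "his", "hers"])
print(search("ushers", m))   # -> [(1, 'she'), (2, 'he'), (2, 'hers')]
```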

6.
张扬, 何丕廉, 向伟, 李沐. Journal of Software, 2008, 19(3): 557-564
This paper proposes a spelling correction method based on a discriminative model. It reranks the output of the existing spelling correction system Aspell, using the discriminative Ranking SVM model to improve its performance. Mature spelling correction techniques, including edit distance, letter-based n-grams, phonetic similarity, and the noisy channel model, are integrated into the model as features, significantly improving the initial ranking quality of the Aspell baseline and outperforming the spelling correction modules of some commercial systems such as Microsoft Word 2003. In addition, the paper proposes a method for automatically extracting spelling correction training pairs from search engine query log chains. A model trained on pairs extracted this way achieves performance close to one trained on manually annotated data; the two models reduce the baseline system's error rate by 32.2% and 32.6%, respectively.

7.
Approximate pattern matching algorithms have become an important tool in computer-assisted music analysis and information retrieval. The number of different problem formulations has greatly expanded in recent years, not least because of the subjective nature of measuring musical similarity. From an algorithmic perspective, the complexity of each problem depends crucially on the exact definition of the difference between two strings. We present an overview of advances in approximate string matching in this field, focusing on new measures of approximation.

8.
A Two-Stage Framework for Polygon Retrieval (total citations: 4; self-citations: 0; citations by others: 4)

9.
Error Detection and Correction for Uyghur Words Based on Minimum Edit Distance (total citations: 3; self-citations: 1; citations by others: 2)
Detecting spelling errors and selecting candidate corrections is an important technical problem in text analysis. Drawing on the phonetic and word-structure characteristics of Uyghur, this paper catalogues the spelling error types commonly found in text and analyzes solutions in detail, implementing error detection and correction for Uyghur text with the minimum edit distance algorithm. On this basis, Uyghur word-formation rules are applied to further improve the accuracy and speed of candidate suggestions. The algorithm has been successfully applied to automatic proofreading of Uyghur text, multilingual text retrieval, and related areas. In tests on a corpus of Xinjiang university journals, the word error detection and correction rate exceeds 85%.
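A sketch of the minimum edit distance computation at the heart of such an approach, used to rank correction candidates from a lexicon. The adjacent-transposition case and the cheap length pre-filter are common extensions included here as assumptions; the Uyghur-specific word-formation rules are not reproduced.

```python
# Optimal-string-alignment (Damerau-Levenshtein) distance plus a simple
# candidate suggester: keep lexicon words within distance k, ranked.

def osa_distance(a, b):
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1): d[i][0] = i
    for j in range(len(b) + 1): d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = a[i - 1] != b[j - 1]
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[-1][-1]

def suggest(word, lexicon, k=2):
    """Rank lexicon words within distance k as correction candidates."""
    scored = [(osa_distance(word, w), w) for w in lexicon
              if abs(len(w) - len(word)) <= k]      # cheap length filter
    return [w for dist, w in sorted(scored) if dist <= k]

print(suggest("fromat", ["format", "from", "forma", "fermat"]))
# 'format' (distance 1, one transposition) is ranked first
```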

10.
A Survey of String Matching Techniques (total citations: 17; self-citations: 0; citations by others: 17)
This paper reviews progress in string matching algorithms, analyzing the Knuth-Morris-Pratt and Boyer-Moore algorithms together with several improved algorithms, such as Horspool, Wu & Manber, and Aho-Corasick, proposed in response to Boyer-Moore. Several of these algorithms are benchmarked on Snort 2.0, an open-source network intrusion detection system (NIDS) used in network security applications. The paper identifies the key issues and solutions for applying string matching in practice and discusses directions for developing NIDS based on string matching techniques.
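As a concrete example of the Boyer-Moore simplifications the survey covers, here is a sketch of the Horspool variant, which keeps only the bad-character shift table.

```python
# Boyer-Moore-Horspool: compare the window against the pattern, then slide
# by the shift assigned to the window's last character. Preprocessing is
# trivial and average-case behavior stays sublinear.

def horspool(pattern, text):
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Shift table: distance from the last occurrence of each character
    # (excluding the final position) to the end of the pattern.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        i += shift.get(text[i + m - 1], m)   # slide by the bad-character rule
    return hits

print(horspool("corasick", "aho-corasick and boyer-moore"))  # -> [4]
```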

11.
An approach to designing very fast algorithms for approximate string matching in a dictionary is proposed. Multiple spelling errors corresponding to insert, delete, change, and transpose operations on character strings are considered in the fault model. The design of very fast approximate string matching algorithms through a four-step reduction procedure is described. The final and most effective step uses hashing techniques to avoid comparing the given word with words at large distances. The technique has been applied to a library book catalog textbase. The experiments show that performing approximate string matching for a large dictionary in real time on an ordinary sequential computer under our multiple fault model is feasible.
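The paper's four-step reduction procedure is not reproduced here. The following sketch shows one well-known way hashing can avoid comparisons with words at large distances: index every dictionary word under each of its single-character deletions, so a lookup only ever touches near neighbors. This SymSpell-style scheme is an assumption for illustration, not the paper's method.

```python
# Deletion-neighborhood hashing: dictionary words and queries meet in the
# hash table through shared deletion variants, so words far away in edit
# distance are never even considered.

from collections import defaultdict

def deletions(word):
    yield word
    for i in range(len(word)):
        yield word[:i] + word[i + 1:]

def build(dictionary):
    index = defaultdict(set)
    for w in dictionary:
        for d in deletions(w):
            index[d].add(w)            # hashing keeps each lookup O(1)
    return index

def candidates(query, index):
    found = set()
    for d in deletions(query):
        found |= index.get(d, set())
    return found                       # still needs edit-distance verification

idx = build(["catalog", "catalogue", "dialog", "analog"])
print(candidates("catalg", idx))       # -> {'catalog'}
```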

12.
In this paper we describe a tagger-lemmatizer for fourteenth-century Dutch charters (as found in the corpus van Reenen/Mulder), with a special focus on the treatment of the extensive orthographic variation in this material. We show that, despite the difficulties caused by the variation, we are still able to reach about 95% accuracy in a tenfold cross-validation experiment for both tagging and lemmatization. We can deal effectively with the variation in tokenization (as applied by the authors) through pre-normalization (retokenization). For variation in spelling, however, we choose to expand our lexicon with predicted spelling variants. For those forms which cannot be found even in this expanded lexicon, we first derive the word class and subsequently search for the most similar lexicon word. Interestingly, our techniques for recognizing spelling variants turn out to be vital for lemmatization accuracy, but much less important for tagging accuracy.

13.
A Chain-Code-Based Matching Algorithm for 3D Cardiovascular Images (total citations: 3; self-citations: 0; citations by others: 3)
To match 3D cardiovascular images quickly and accurately, and thereby help physicians treat cardiovascular disease more precisely, this paper proposes a chain-code-based method for matching the vessel centerlines of 3D cardiovascular images. The 2D Freeman code is first extended to 3D space and then used to encode the acquired 3D vessels, so that the centerlines of 3D cardiovascular images captured at different times can be matched. The paper also gives a brief introduction to chain-code string matching in pattern recognition and discusses its difficulties, including the encoding, the cost function, and the normalized inter-chain distance. To validate the algorithm, two ways of constructing the substitution cost function were tested on 3D vessels, and the results were evaluated with standard formulas. The experiments show that both cost functions can achieve a match, but the quality of the match differs considerably, with the second cost function yielding a distinctly more satisfactory result.
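A sketch of the core encoding step, extending the Freeman code to the 26 unit moves of 3D space, together with a weighted edit distance between chain codes. The angle-based substitution cost below is an illustrative assumption; the paper's two actual cost functions and evaluation formulas are not reproduced.

```python
# Encode a 3D centerline as a 26-direction chain code, then compare two
# chain codes with an edit distance whose substitution cost grows with the
# angle between the corresponding direction vectors.

import math
from itertools import product

DIRS = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]  # 26 moves

def encode(points):
    """Chain-code a 3D polyline whose consecutive voxels are 26-adjacent."""
    return [DIRS.index((x1 - x0, y1 - y0, z1 - z0))
            for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:])]

def sub_cost(i, j):
    """Substitution cost = angle between the two direction vectors."""
    a, b = DIRS[i], DIRS[j]
    dot = sum(p * q for p, q in zip(a, b))
    na = math.sqrt(sum(p * p for p in a))
    nb = math.sqrt(sum(q * q for q in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def chain_distance(c1, c2, indel=math.pi):
    """Weighted edit distance between chain codes, normalized by length."""
    d = [[0.0] * (len(c2) + 1) for _ in range(len(c1) + 1)]
    for i in range(1, len(c1) + 1): d[i][0] = i * indel
    for j in range(1, len(c2) + 1): d[0][j] = j * indel
    for i in range(1, len(c1) + 1):
        for j in range(1, len(c2) + 1):
            d[i][j] = min(d[i - 1][j] + indel, d[i][j - 1] + indel,
                          d[i - 1][j - 1] + sub_cost(c1[i - 1], c2[j - 1]))
    return d[-1][-1] / max(len(c1), len(c2))

a = encode([(0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 1, 1)])
b = encode([(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 1, 1)])
print(chain_distance(a, b))   # small value: the two centerlines are similar
```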

14.
Approximate string matching is an important research direction in pattern matching. The compressed suffix array is an index structure widely used in string matching, data compression, and related fields, offering fast retrieval and broad applicability. Using compressed suffix arrays, this paper proposes a data structure suited to approximate string matching search algorithms and, on that basis, a matching search algorithm. Experimental results show that, compared with existing algorithms, the proposed algorithm has a computational advantage for small alphabets.
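The compressed suffix array itself is too involved for a short sketch; the following shows the plain suffix-array search it accelerates, with a naive construction kept deliberately simple.

```python
# Sort all suffixes once, then locate every occurrence of a pattern with
# two binary searches over the sorted suffixes.

import bisect

def suffix_array(text):
    # Naive O(n^2 log n) build; linear-time constructions exist.
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_all(text, sa, pattern):
    suffixes = [text[i:] for i in sa]      # materialized for clarity only
    lo = bisect.bisect_left(suffixes, pattern)
    hi = bisect.bisect_right(suffixes, pattern + "\uffff")
    return sorted(sa[lo:hi])               # start positions of all matches

text = "abracadabra"
sa = suffix_array(text)
print(find_all(text, sa, "abra"))   # -> [0, 7]
```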

15.
The personal-name disambiguation task of CLP2010 (CIPS-SIGHAN Joint Conference on Chinese Language Processing) is a clustering problem: for a given set of documents, cluster them according to which person a specified query name appearing in the documents refers to. Since all documents containing the query name were extracted from the Xinhua News Agency corpus by character-string matching, for this...

16.
This paper studies the problem of matching two unsynchronized video sequences of the same dynamic scene, recorded by different stationary uncalibrated video cameras. The matching is done both in time and in space, where the spatial matching can be modeled by a homography (for 2D scenarios) or by a fundamental matrix (for 3D scenarios). Our approach is based on matching space-time trajectories of moving objects, in contrast to matching interest points (e.g., corners), as done in regular feature-based image-to-image matching techniques. The sequences are matched in space and time by enforcing consistent matching of all points along corresponding space-time trajectories. By exploiting the dynamic properties of these space-time trajectories, we obtain sub-frame temporal correspondence (synchronization) between the two video sequences. Furthermore, using trajectories rather than feature points significantly reduces the combinatorial complexity of the spatial point-matching problem when the search space is large. This benefit allows for matching information across sensors in situations which are extremely difficult when only image-to-image matching is used, including: (a) matching under large scale (zoom) differences, (b) very wide-baseline matching, and (c) matching across different sensing modalities (e.g., IR and visible-light cameras). We show examples of recovering homographies and fundamental matrices under such conditions.

17.
M. Farach, M. Thorup. Algorithmica, 1998, 20(4): 388-404
String matching and compression are two widely studied areas of computer science. The theory of string matching has a long association with compression algorithms. Data structures from string matching can be used to derive fast implementations of many important compression schemes, most notably the Lempel-Ziv (LZ77) algorithm. Intuitively, once a string has been compressed, and its repetitive nature thereby elucidated, one might be tempted to exploit this knowledge to speed up string matching. The Compressed Matching Problem is that of performing string matching in a compressed text without uncompressing it. More formally, let T be a text, let Z be the compressed string representing T, and let P be a pattern. The Compressed Matching Problem is that of deciding if P occurs in T, given only P and Z. Compressed matching algorithms have been given for several compression schemes such as LZW. In this paper we give the first nontrivial compressed matching algorithm for the classic adaptive compression scheme, the LZ77 algorithm. In practice, the LZ77 algorithm is known to compress more than other dictionary compression schemes, such as LZ78 and LZW, though for strings with constant per-bit entropy, all these schemes compress optimally in the limit. However, for strings with o(1) per-bit entropy, while it was recently shown that LZ77 gives compression to within a constant factor of optimal, schemes such as LZ78 and LZW may deviate from optimality by an exponential factor. Asymptotically, compressed matching is only relevant if |Z| = o(|T|), i.e., if the compression ratio |T|/|Z| is more than a constant. These results show that LZ77 is the appropriate compression method in such settings. We present an LZ77 compressed matching algorithm which runs in time O(n log²(u/n) + p), where n = |Z|, u = |T|, and p = |P|. Compare with the naive "decompression" algorithm, which takes time Θ(u + p) to decide if P occurs in T. Writing u + p as (n · u/n) + p, we see that we have improved the complexity, replacing the compression factor u/n by a factor log²(u/n). Our algorithm is competitive in the sense that O(n log²(u/n) + p) = O(u + p), and opportunistic in the sense that O(n log²(u/n) + p) = o(u + p) if n = o(u) and p = o(u). Received December 20, 1995; revised October 29, 1996, and February 6, 1997.
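To make the LZ77 triples the algorithm operates on concrete, here is a deliberately naive coder/decoder sketch; the window search is brute force, and the compressed matching algorithm itself is not reproduced.

```python
# Naive LZ77: at each position emit (offset, length, next_char), where the
# (offset, length) pair points back into the sliding window and next_char
# is the first literal after the copied run.

def lz77_compress(text, window=4096):
    i, out = 0, []
    while i < len(text):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):       # scan the window
            k = 0
            while i + k < len(text) - 1 and text[j + k] == text[i + k]:
                k += 1                               # overlap is allowed
            if k > best_len:
                best_len, best_off = k, i - j
        out.append((best_off, best_len, text[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(triples):
    s = []
    for off, length, ch in triples:
        for _ in range(length):
            s.append(s[-off])                        # copy from the window
        s.append(ch)
    return "".join(s)

z = lz77_compress("abababababx")
print(z, lz77_decompress(z) == "abababababx")
# -> [(0, 0, 'a'), (0, 0, 'b'), (2, 8, 'x')] True
```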

18.
Approximate string matching is a common and frequently repeated task in information retrieval and bioinformatics. This paper proposes a generic design of a programmable array processor architecture for a wide variety of approximate string matching algorithms, to gain high performance at low cost. We describe the architecture of the array and the architecture of the cell in detail, so that both the preprocessing and searching phases of most string matching algorithms can be implemented efficiently. The architecture also performs approximate string matching for complex patterns that contain don't-care, complement, and class symbols. We simulate and evaluate the proposed architecture on a field programmable gate array (FPGA) device, using the JHDL tool for synthesis and the Xilinx Foundation tools for mapping, placement, and routing. Our programmable implementation achieves about 8-340 times faster execution than a desktop computer with a 3.5 GHz Pentium 4 for all algorithms when the length of the pattern is 1024.
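As a software analogue of the per-position comparators such an array implements in hardware, here is a bit-parallel Shift-And matcher supporting the don't-care, class, and complement symbols mentioned above. The pattern syntax ('?', '[...]', '[^...]') is an illustrative assumption; the FPGA design is not reproduced.

```python
# Shift-And: each alphabet symbol gets a bitmask of the pattern positions
# it may occupy; the state register advances one bit per text character.

def parse(pattern):
    """Return one matcher per pattern position."""
    sets, i = [], 0
    while i < len(pattern):
        if pattern[i] == "?":                   # don't care: matches anything
            sets.append(None)
            i += 1
        elif pattern[i] == "[":
            j = pattern.index("]", i)
            body = pattern[i + 1:j]
            if body.startswith("^"):
                sets.append(("^", set(body[1:])))   # complement class
            else:
                sets.append(("=", set(body)))       # ordinary class
            i = j + 1
        else:
            sets.append(("=", {pattern[i]}))
            i += 1
    return sets

def shift_and(pattern, text, alphabet):
    sets = parse(pattern)
    m, masks = len(sets), {}
    for c in alphabet:
        bits = 0
        for k, s in enumerate(sets):
            if (s is None or (s[0] == "=" and c in s[1])
                    or (s[0] == "^" and c not in s[1])):
                bits |= 1 << k
        masks[c] = bits
    d, hits = 0, []
    for i, c in enumerate(text):
        d = ((d << 1) | 1) & masks.get(c, 0)
        if d & (1 << (m - 1)):                  # final position reached
            hits.append(i - m + 1)
    return hits

print(shift_and("a?c[^x]", "aacbabcx", set("abcx")))   # -> [0]
```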

19.
This paper proposes XMatch, an algorithm for fast string matching in XML documents. For pattern strings carrying path information, the structured nature of XML means that traditional string matching algorithms cannot be applied directly and effectively, while most existing XML content filtering methods are event-driven processes built on SAX parsing and are generally inefficient. XMatch analyzes the structure (schema) of the XML document and, combining it with the path information of the pattern strings, builds a finite state automaton for scanning. The algorithm also supports pattern strings whose path information contains cyclic references. XMatch is easy to extend and can support string matching over ordinary structured text. Experimental results show that the algorithm's efficiency is clearly higher than that of SAX event-driven methods.

20.
We present a new approach based on anagram hashing to handle lexical variation globally in large and noisy text collections. The lexical variation addressed by spelling correction systems is primarily typographical variation. This is typically handled in a local fashion: given one particular text string, some system of retrieving near-neighbors is applied, where near-neighbors are other text strings that differ from the particular string by a given number of characters. The difference in characters between the original string and one of its retrieved near-neighbors constitutes a particular character confusion. We present a global way of performing this action: for all possible particular character confusions given a particular edit distance, we sequentially identify all the pairs of text strings in the text collection that display a particular confusion. We work on large digitized corpora, which contain lexical variation due to both the OCR process and typographical or typesetting error, and show that all these types of variation can be handled equally well in the framework we present. The character-confusion-based prototype of Text-Induced Corpus Clean-up (TICCL) is compared to its focus-word-based counterpart and evaluated on 6 years' worth of digitized Dutch Parliamentary documents. The character confusion approach is shown to gain an order of magnitude in speed over its word-based counterpart on large corpora. Insights gained about the useful contribution of global corpus variation statistics are shown to also benefit the more traditional word-based approach to spelling correction. Final tests on a held-out set comprising the 1918 edition of the Dutch daily newspaper 'Het Volk' show that the system is not sensitive to domain variation.
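A minimal sketch of the anagram-key idea: hash every word to an order-independent value, so that one particular character confusion corresponds to one fixed key difference that can be probed globally across the whole vocabulary. The key function (sum of character values raised to the fifth power) follows one published formulation of TICCL, but the helper names and the verification step are assumptions.

```python
# Anagram hashing: key(w) is invariant under reordering of w's characters,
# so replacing character x by y anywhere in a word shifts the key by the
# same fixed delta. One hash probe per key then finds all pairs showing a
# given confusion, instead of comparing every word with every other word.

from collections import defaultdict

POWER = 5

def anagram_key(word):
    return sum(ord(c) ** POWER for c in word)    # order-independent hash

def build_key_index(vocabulary):
    index = defaultdict(set)
    for w in vocabulary:
        index[anagram_key(w)].add(w)
    return index

def confusion_pairs(vocabulary, wrong, right):
    """All vocabulary pairs differing exactly by the confusion wrong->right."""
    index = build_key_index(vocabulary)
    delta = ord(right) ** POWER - ord(wrong) ** POWER
    pairs = []
    for key, words in index.items():
        for v in index.get(key + delta, ()):
            for w in words:
                # Verify: v must be an anagram of w with one wrong->right swap
                # (equal key sums alone do not strictly guarantee this).
                if sorted(w.replace(wrong, right, 1)) == sorted(v):
                    pairs.append((w, v))
    return pairs

vocab = {"spel", "spell", "spoll", "hello", "hollo"}
print(confusion_pairs(vocab, "e", "o"))
# e->o confusions, e.g. ('spell', 'spoll') and ('hello', 'hollo')
```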
