Similar Documents
20 similar documents found.
1.
Color and strokes are the salient features of text regions in an image. In this work, we use both of these features as cues and introduce a novel energy function to formulate the text binarization problem. The minimum of this energy function corresponds to the optimal binarization. We minimize the energy function with an iterative graph cut-based algorithm. Our model is robust to variations in foreground and background because we learn Gaussian mixture models for color and strokes in each iteration of the graph cut. We show results on word images from the challenging ICDAR 2003/2011, born-digital image and street view text datasets, as well as full scene images containing text from the ICDAR 2013 datasets, and compare our performance with state-of-the-art methods. Our approach shows significant improvements under a variety of performance measures commonly used to assess text binarization schemes. In addition, our method adapts to diverse document images, such as text in videos and handwritten text images.
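For orientation, graph cut-based binarization of this kind typically minimizes a GrabCut-style energy; a minimal sketch (not the authors' exact formulation, with illustrative weights lambda and beta) is:

```latex
E(\mathbf{x}) = \sum_{p} \Big( -\log P_{\mathrm{color}}(c_p \mid x_p) - \log P_{\mathrm{stroke}}(s_p \mid x_p) \Big)
              + \lambda \sum_{(p,q) \in \mathcal{N}} [x_p \neq x_q]\, e^{-\beta \lVert c_p - c_q \rVert^2}
```

Here x_p labels pixel p as text or background, c_p and s_p are its color and stroke features, and the two Gaussian mixture models are re-estimated after each graph cut iteration.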

2.
Document images often suffer from different types of degradation that render document image binarization a challenging task. This paper presents a document image binarization technique that accurately segments the text from badly degraded document images. The proposed technique is based on the observations that text documents usually have a background of uniform color and texture and that the document text has a different intensity level from the surrounding background. Given a document image, the proposed technique first estimates a document background surface through an iterative polynomial smoothing procedure. Different types of document degradation are then compensated by using the estimated background surface. Text stroke edges are further detected from the compensated document image by using the L1-norm image gradient. Finally, the document text is segmented by a local threshold that is estimated from the detected text stroke edges. The proposed technique was submitted to the recent Document Image Binarization Contest (DIBCO) held under the framework of ICDAR 2009 and achieved the top performance among the 43 algorithms submitted by 35 international research groups.
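A minimal sketch of the final edge-guided local-thresholding step, assuming a stroke-edge map has already been detected (the window size and the min_edges guard are illustrative, not values from the paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def threshold_from_stroke_edges(gray, edge_mask, win=31, min_edges=10):
    """Label a pixel as text if it is darker than the mean intensity of the
    stroke-edge pixels inside its local window (rough sketch only)."""
    g = gray.astype(np.float64)
    e = edge_mask.astype(np.float64)
    # Windowed sums of edge-pixel intensities and of edge-pixel counts.
    edge_sum = uniform_filter(g * e, size=win) * win * win
    edge_cnt = uniform_filter(e, size=win) * win * win
    local_t = edge_sum / np.maximum(edge_cnt, 1.0)
    return (g <= local_t) & (edge_cnt >= min_edges)
```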

3.
Research on a binarization algorithm for ID card images based on an iterative threshold method
Scanned ID card images are strongly affected by the shadow grid lines of the laser anti-counterfeiting pattern. Addressing this characteristic, this work analyzes image binarization methods in depth and proposes an iterative threshold method based on pixel neighborhood features. The algorithm's model is simple and easy to implement; it filters out noise and segments character strokes clearly from the background, yielding good binarization results.
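The abstract does not spell out the neighborhood-feature variant, but the classic iterative threshold it builds on can be sketched as follows (a rough sketch, not the paper's exact algorithm):

```python
import numpy as np

def iterative_threshold(gray, eps=0.5, max_iter=100):
    """Classic iterative (isodata) threshold selection: split pixels at t,
    then move t to the midpoint of the two class means until convergence."""
    t = float(gray.mean())
    for _ in range(max_iter):
        fg = gray[gray <= t]        # darker pixels, assumed character strokes
        bg = gray[gray > t]
        if fg.size == 0 or bg.size == 0:
            break
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
    return t

# binary = gray <= iterative_threshold(gray)
```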

4.
Image binarization algorithms separate text from background by removing background noise from document images. For images of ancient documents, a binarization algorithm based on local contrast and phase-preserving denoising is proposed. The local contrast of the document image is constructed from the normalized local maxima and minima, and the image is simultaneously denoised with phase-preserving denoising. The local contrast image and the denoised image are then combined to detect text stroke pixels. A local threshold estimated from the stroke pixels detected within a local window is used to compute a background inpainting mask, and the document background is estimated with an image inpainting algorithm and a morphological closing operation. The estimated background is used to enhance the document image, and Howe's algorithm is applied to the enhanced image to obtain the final binarization. Experimental results on the DIBCO 2016, DIBCO 2017 and DIBCO 2018 datasets show that the algorithm outperforms other binarization algorithms.
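A minimal sketch of the local-contrast construction from the normalized local maximum and minimum (the window size is illustrative; the phase-preserving denoising and Howe binarization steps are not shown):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def local_contrast(gray, win=3, eps=1e-6):
    """Local contrast C = (max - min) / (max + min + eps), computed over a
    small window; it responds strongly around text stroke boundaries."""
    g = gray.astype(np.float64)
    lmax = maximum_filter(g, size=win)
    lmin = minimum_filter(g, size=win)
    return (lmax - lmin) / (lmax + lmin + eps)
```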

5.
Document binarization is an important technique in document image analysis and recognition. In general, binarization methods are ineffective for degraded images: several methods have been proposed, but none of them are effective for historical and degraded document images. In this paper, a new binarization method is proposed for degraded document images. The proposed method is based on the variance of pixel contrast and consists of four stages: pre-processing, geometrical feature extraction, feature selection, and post-processing. The method was evaluated in several visual and statistical experiments conducted on five International Document Image Binarization Contest benchmark datasets specialized for binarization testing. The results were compared with five adaptive binarization methods: Niblack, Sauvola thresholding, the Sauvola compound algorithm, NICK, and Bataineh. They show that the proposed method performs better than the other methods in all binarization cases.

6.
Objective: The best classification method for image-text data differs across application scenarios, yet most existing semantic-level fusion algorithms assume that the image and text data are classified with the same method. When different classifiers are used, their decision criteria are not commensurable, so the fused classification result is unsatisfactory and fusion performance drops sharply. To address this problem, a fusion classification method based on weighted KNN is proposed. Method: First, images and texts are classified with a softmax multi-class classifier and a multi-class support vector machine (SVM), respectively. KNN models for the image and text modalities are built from the classification decision values of correctly classified training instances, weighted by the per-class classification accuracy on the training set. These models are then used to predict the image and text decision values of a test instance, and the classification probability of the test instance is determined from the number of its k nearest neighbors belonging to each class, which places the image and text decisions on a common basis. Finally, the numbers of correctly classified image and text instances in the training set determine the coefficients used to fuse the image and text classification probabilities of a test instance, achieving image-text fusion under a unified decision criterion. Results: Experiments on image-text pairs from the Attribute Discovery dataset and comparison with the baseline method show that the classification accuracy of the proposed fusion algorithm exceeds the individual accuracies of the image and text classifiers, its average classification accuracy is 4.45% higher than that of the baseline, and its average ability to integrate image and text information is 4.19% higher than the baseline. Conclusion: The algorithm unifies the decision criteria of different image and text classification methods, achieves effective fusion of image-text data, and shows strong information-integration ability and good fusion classification performance.
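A much simplified sketch of the fusion idea, assuming the softmax and SVM decision values have already been computed (function and variable names are illustrative; the per-class accuracy weighting of the training decision values is omitted):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fuse_image_text(img_dec_train, txt_dec_train, y_train,
                    img_dec_test, txt_dec_test, acc_img, acc_txt, k=5):
    """KNN models over each modality's decision values turn them into
    comparable class probabilities, which are fused with weights proportional
    to each modality's accuracy on the training set."""
    knn_img = KNeighborsClassifier(n_neighbors=k).fit(img_dec_train, y_train)
    knn_txt = KNeighborsClassifier(n_neighbors=k).fit(txt_dec_train, y_train)
    p_img = knn_img.predict_proba(img_dec_test)   # both share knn.classes_ order
    p_txt = knn_txt.predict_proba(txt_dec_test)
    w = acc_img / (acc_img + acc_txt)
    fused = w * p_img + (1.0 - w) * p_txt
    return knn_img.classes_[fused.argmax(axis=1)]
```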

7.
Aviv Segev, Expert Systems, 2010, 27(4): 247-258
The analysis of medical documents necessitates context recognition for diverse purposes such as classification, performance analysis and decision making. Traditional methods of context recognition have focused on the textual part of documents. Images, however, provide a rich source of information that can support the context recognition process. A method is proposed for integrating computer vision into context recognition using the web as a knowledge base. The method is applied to medical case studies to determine the main symptoms or reach possible diagnoses. In experiments, the method integrating computer vision into context recognition achieves better results than term frequency-inverse document frequency (TF-IDF) and than context recognition alone. The proposed method can serve as the basis for an image- and text-based decision support system to assist the physician in reviewing medical records.

8.
In this paper, we propose a novel binarization method for document images produced by cameras. Such images often have varying degrees of brightness and require more careful treatment than merely applying a statistical method to obtain a threshold value. To resolve the problem, the proposed method divides an image into several regions and decides how to binarize each region. The decision rules are derived from a learning process that takes training images as input. Tests on images produced under normal and inadequate illumination conditions show that our method yields better visual quality and better OCR performance than three global binarization methods and four locally adaptive binarization methods.

9.
Building on the classical Niblack method, an improved binarization method for degraded text images is proposed. The method computes local thresholds only within a small neighborhood around text regions, which greatly reduces computation and, at the same time, avoids the large amount of background noise that the Niblack method tends to produce. Compared with the Sauvola method, which is likewise based on Niblack, it adapts better to low-contrast degraded text images.
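For reference, the Niblack and Sauvola local thresholds discussed here can be sketched as follows (window size, k and R are commonly used defaults, not values from this paper):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _local_mean_std(gray, win):
    g = gray.astype(np.float64)
    m = uniform_filter(g, size=win)
    m2 = uniform_filter(g * g, size=win)
    return m, np.sqrt(np.maximum(m2 - m * m, 0.0))

def niblack_binarize(gray, win=25, k=-0.2):
    # Niblack: T = m + k * s, with k typically negative for dark text.
    m, s = _local_mean_std(gray, win)
    return gray <= m + k * s

def sauvola_binarize(gray, win=25, k=0.2, R=128.0):
    # Sauvola: T = m * (1 + k * (s / R - 1)); R is the dynamic range of s.
    m, s = _local_mean_std(gray, win)
    return gray <= m * (1.0 + k * (s / R - 1.0))
```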

10.
Binarization plays an important role in document image processing, especially in degraded documents. For degraded document images, adaptive binarization methods often incorporate local information to determine the binarization threshold for each individual pixel in the document image. We propose a two-stage parameter-free window-based method to binarize the degraded document images. In the first stage, an incremental scheme is used to determine a proper window size beyond which no substantial increase in the local variation of pixel intensities is observed. In the second stage, based on the determined window size, a noise-suppressing scheme delivers the final binarized image by contrasting two binarized images which are produced by two adaptive thresholding schemes which incorporate the local mean gray and gradient values. Empirical results demonstrate that the proposed method is competitive when compared to the existing adaptive binarization methods and achieves better performance in precision, accuracy, and F-measure.

11.
Binary image representation is an essential format for document analysis. In general, different binarization techniques are applied to different types of binarization problems. Most binarization techniques are complex and are compounded from filters and existing operations. However, the few simple thresholding methods available cannot be applied to many binarization problems. In this paper, we propose a local binarization method based on a simple, novel thresholding method with dynamic and flexible windows. The proposed method is tested on selected samples from the DIBCO 2009 benchmark dataset using evaluation techniques specialized for binarization processes. To evaluate the performance of our proposed method, we compared it with the Niblack, Sauvola and NICK methods. The results of the experiments show that the proposed method adapts well to all types of binarization challenges, can deal with a wider range of binarization problems, and boosts the overall performance of the binarization.

12.
Binarization of document images with poor contrast, strong noise, complex patterns, and variable modalities in the gray-scale histograms is a challenging problem. A new binarization algorithm has been developed to address this problem for personal cheque images. The main contribution of this approach is optimizing the binarization of a part of the document image that suffers from noise interference, referred to as the Target Sub-Image (TSI), using information easily extracted from another noise-free part of the same image, referred to as the Model Sub-Image (MSI). Simple spatial features extracted from the MSI are used as a model for handwriting strokes. This model captures the underlying characteristics of the writing strokes and is invariant to the handwriting style or content. It is then used to guide the binarization in the TSI. Another contribution is a new technique for the structural analysis of document images, which we call Wavelet Partial Reconstruction (WPR). The algorithm was tested on 4,200 cheque images and the results show significant improvement in binarization quality in comparison with other well-established algorithms.

13.
In this paper, we propose a new algorithm for the binarization of degraded document images. We map the image into a 2D feature space in which the text and background pixels are separable, and then we partition this feature space into small regions. These regions are labeled as text or background using the result of a basic binarization algorithm applied on the original image. Finally, each pixel of the image is classified as either text or background based on the label of its corresponding region in the feature space. Our algorithm splits the feature space into text and background regions without using any training dataset. In addition, this algorithm does not need any parameter setting by the user and is appropriate for various types of degraded document images. The proposed algorithm demonstrated superior performance against six well-known algorithms on three datasets.

14.
Document image binarization converts gray-level images into binary images, a capability that has become important in recent years for many portable devices, including PDAs and mobile camera phones. Given the limited memory and computational power of portable devices, reducing the computational complexity of an embedded system is a priority. This work presents an efficient document image binarization algorithm with low computational complexity and high performance. Integrating the advantages of global and local methods, the proposed algorithm divides the document image into several regions. A threshold surface is then constructed based on the diversity and intensity of each region to derive the binary image. Experimental results demonstrate the effectiveness of the proposed method in providing a promising binarization outcome at a low computational cost.

15.
Text image binarization is a key step in optical character recognition, but low-quality text images have complex background noise, and their global context and deep abstract features are hard to capture; as a result, text regions in the binarization output are segmented imprecisely and features such as character shape and contour are poorly represented, leading to poor binarization. To address this, a binarization method for low-quality text images based on an improved U-Net is proposed. U-Net, a segmentation network suited to small datasets, is used as the backbone, and a pretrained VGG16 is chosen as the U-Net encoder to strengthen feature extraction. A lightweight global context block is fused into the U-Net bottleneck to model the global context of the feature maps, and residual skip connections are fused into each upsampling block of the U-Net decoder to improve feature reconstruction. Improving U-Net in these three places (encoder, bottleneck, and decoder) yields more accurate text image binarization. Experiments on the DIBCO 2016-2018 datasets show that, compared with Otsu, Sauvola, and other methods, the proposed method denoises better and its binarization results preserve more detail, with more accurate and clearer character shapes and contours.
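A minimal PyTorch sketch of a U-Net with a pretrained VGG16 encoder, roughly in the spirit of the backbone described above (the lightweight global context block and the residual skip connections from the paper are omitted; layer sizes are the standard VGG16 stages, and input height/width must be divisible by 16):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

class VGG16UNet(nn.Module):
    """U-Net whose encoder is the convolutional part of a pretrained VGG16."""
    def __init__(self):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features
        # VGG16 stages (up to each max-pool): 64, 128, 256, 512, 512 channels.
        self.enc1, self.enc2 = feats[:4], feats[4:9]
        self.enc3, self.enc4, self.enc5 = feats[9:16], feats[16:23], feats[23:30]
        self.up4, self.dec4 = nn.ConvTranspose2d(512, 512, 2, 2), ConvBlock(1024, 512)
        self.up3, self.dec3 = nn.ConvTranspose2d(512, 256, 2, 2), ConvBlock(512, 256)
        self.up2, self.dec2 = nn.ConvTranspose2d(256, 128, 2, 2), ConvBlock(256, 128)
        self.up1, self.dec1 = nn.ConvTranspose2d(128, 64, 2, 2), ConvBlock(128, 64)
        self.out = nn.Conv2d(64, 1, 1)      # per-pixel text/background logit

    def forward(self, x):
        e1 = self.enc1(x)                   # full resolution, 64 channels
        e2 = self.enc2(e1)                  # 1/2,  128
        e3 = self.enc3(e2)                  # 1/4,  256
        e4 = self.enc4(e3)                  # 1/8,  512
        e5 = self.enc5(e4)                  # 1/16, 512
        d4 = self.dec4(torch.cat([self.up4(e5), e4], dim=1))
        d3 = self.dec3(torch.cat([self.up3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)
```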

16.
In this work, a multi-scale binarization framework is introduced, which can be used along with any adaptive threshold-based binarization method. The framework is able to improve binarization results and to restore weak connections and strokes, especially in the case of degraded historical documents. This is achieved thanks to the localized nature of the framework in the spatial domain. The framework requires several binarizations at different scales, which is addressed by the introduction of fast grid-based models. This enables us to explore high scales that are usually unreachable for traditional approaches. In order to expand our set of adaptive methods, an adaptive modification of Otsu's method, called AdOtsu, is introduced. In addition, in order to restore document images suffering from bleed-through degradation, we combine the framework with recursive adaptive methods. The framework shows promising performance in subjective and objective evaluations performed on the available datasets.
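For context, the global Otsu threshold that AdOtsu adapts can be sketched as follows (the grid-based adaptive modification itself is not shown):

```python
import numpy as np

def otsu_threshold(gray):
    """Classical Otsu: choose the gray level that maximizes the between-class
    variance of the histogram (assumes 8-bit input)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist.astype(np.float64) / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# binary = gray <= otsu_threshold(gray)
```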

17.
Almost all binarization methods have a few parameters that require setting. However, they do not usually achieve their upper-bound performance unless the parameters are individually set and optimized for each input document image. In this work, a learning framework for the optimization of binarization methods is introduced, which is designed to determine the optimal parameter values for a document image. The framework, which works with any binarization method, has a standard structure and performs three main steps: (i) extract features, (ii) estimate optimal parameters, and (iii) learn the relationship between features and optimal parameters. First, an approach is proposed to generate numerical feature vectors from 2D data. The statistics of various maps are extracted and then combined into a final feature vector in a nonlinear way. The optimal behavior is learned using support vector regression (SVR). Although the framework works with any binarization method, two methods are considered as typical examples in this work: the grid-based Sauvola method and Lu's method, which placed first in the DIBCO'09 contest. The experiments are performed on the DIBCO'09 and H-DIBCO'10 datasets, as well as combinations of these datasets, with promising results.
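A minimal sketch of the learning step with scikit-learn, assuming the feature vectors and the per-image optimal parameter values (e.g. found by grid search against ground truth) are already available; the SVR hyperparameters are illustrative:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_parameter_regressor(X_train, y_train):
    """Learn the mapping from document-image features to the parameter value
    (e.g. a window size or k) that maximized binarization quality."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    return model.fit(X_train, y_train)

# k_hat = fit_parameter_regressor(X_train, y_train).predict(x_new.reshape(1, -1))[0]
```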

18.
In this paper, we present an effective approach for grouping text lines in online handwritten Japanese documents by combining temporal and spatial information. With decision functions optimized by supervised learning, the approach has few artificial parameters and utilizes little prior knowledge. First, the strokes in the document are grouped into text line strings according to off-stroke distances. Each text line string, which may contain multiple lines, is segmented by optimizing a cost function trained by the minimum classification error (MCE) method. At the temporal merge stage, over-segmented text lines (caused by stroke classification errors) are merged with a support vector machine (SVM) classifier for making merge/non-merge decisions. Last, a spatial merge module corrects the segmentation errors caused by delayed strokes. Misclassified text/non-text strokes (stroke type classification precedes text line grouping) can be corrected at the temporal merge stage. To evaluate the performance of text line grouping, we provide a set of performance metrics for evaluation from multiple aspects. In experiments on a large number of free-form documents in the Tokyo University of Agriculture and Technology (TUAT) Kondate database, the proposed approach achieves an entity detection metric (EDM) rate of 0.8992 and an edit-distance rate (EDR) of 0.1114. For grouping of pure text strokes, the performance reaches an EDM of 0.9591 and an EDR of 0.0669.

19.
Most of the recently proposed text entry methods for touch screen devices are stroke-based: the traditional tapping interaction is being replaced with a more natural gesture, performed through a pointer (pen or finger) on a soft keyboard. These methods need an effective technique to interpret user strokes in order to correctly obtain the text the user intends to enter. KeyScretch is a recent text entry method based on menu-augmented soft keyboards. The method introduces a new way of interacting with radial menus through compound strokes. In this paper we present the technology used for recognizing these strokes. In particular, the design of different recognizers is presented and their performance is compared. The evaluation shows that geometric stroke recognition techniques, combined with suitable calibrations, can significantly improve the accuracy achievable with a simple target-based method.

20.
This article proposes an approach to predict the result of binarization algorithms on a given document image according to its state of degradation. Indeed, historical documents suffer from different types of degradation which result in binarization errors. We intend to characterize the degradation of a document image by using different features based on the intensity, quantity and location of the degradation. These features allow us to build prediction models of binarization algorithms that are very accurate according to $R^2$ values and p values. The prediction models are used to select the best binarization algorithm for a given document image. Obviously, this image-by-image strategy improves the binarization of the entire dataset.
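A minimal sketch of the selection strategy, assuming degradation features and per-algorithm scores on a training set are given (the names and the linear model are illustrative, not the article's exact prediction models):

```python
from sklearn.linear_model import LinearRegression

def fit_predictors(X_train, scores_per_algo):
    """Fit one regressor per binarization algorithm, mapping an image's
    degradation features to the score (e.g. F-measure) that algorithm achieves."""
    return {name: LinearRegression().fit(X_train, y)
            for name, y in scores_per_algo.items()}

def select_algorithm(models, x_new):
    """Pick the algorithm with the highest predicted score for this image."""
    preds = {name: m.predict(x_new.reshape(1, -1))[0] for name, m in models.items()}
    return max(preds, key=preds.get)
```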
