Similar Documents
20 similar documents found (search time: 15 ms)
1.
Scene text detection plays a significant role in various applications, such as object recognition, document management, and visual navigation. The instance segmentation based method has been mostly used in existing research due to its advantages in dealing with multi-oriented texts. However, a large number of non-text pixels exist in the labels during model training, leading to text mis-segmentation. In this paper, we propose a novel multi-oriented scene text detection framework, which includes two main modules: character instance segmentation (one instance corresponds to one character), and character flow construction (one character flow corresponds to one word). We use a feature pyramid network (FPN) to predict character and non-character instances with arbitrary directions. A joint network of FPN and bidirectional long short-term memory (BLSTM) is developed to explore the context information among isolated characters, which are finally grouped into character flows. Extensive experiments are conducted on the ICDAR2013, ICDAR2015, MSRA-TD500 and MLT datasets to demonstrate the effectiveness of our approach. The F-measures are 92.62%, 88.02%, 83.69% and 77.81%, respectively.
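As a rough illustration of the character-flow idea described in this abstract, the sketch below wires pooled per-character features into a BLSTM that predicts links between neighboring characters. The module, its dimensions, and the link head are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical sketch: grouping character instances into "character flows"
# with a bidirectional LSTM, assuming one pooled FPN feature vector per
# detected character. Dimensions and the link head are illustrative only.
import torch
import torch.nn as nn

class CharacterFlowGrouper(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        # The BLSTM reads the character sequence in both directions so each
        # character sees its neighbors' context.
        self.blstm = nn.LSTM(feat_dim, hidden_dim, bidirectional=True,
                             batch_first=True)
        # Binary head: does character i link to character i+1?
        self.link_head = nn.Linear(2 * hidden_dim, 2)

    def forward(self, char_feats):           # (B, N_chars, feat_dim)
        context, _ = self.blstm(char_feats)  # (B, N_chars, 2*hidden_dim)
        return self.link_head(context)       # (B, N_chars, 2) link logits

# Usage: 7 character instances pooled from FPN levels for one image.
feats = torch.randn(1, 7, 256)
link_logits = CharacterFlowGrouper()(feats)
print(link_logits.shape)  # torch.Size([1, 7, 2])
```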

2.
Convolutional neural networks (CNNs) have had great success with regard to the object classification problem. For character classification, we found that training and testing CNNs on accurately segmented character regions yields higher accuracy than using roughly segmented regions, so we aim to extract complete character regions from scene images. Text in natural scene images has an obvious contrast with its surroundings, and many methods attempt to extract characters through different segmentation techniques. However, for blurred, occluded, and complex-background cases, those methods may produce adjoined or over-segmented characters. In this paper, we propose a scene word recognition model that assembles entire words from small pieces after cluster-based segmentation. The segmented connected components are classified into four types: background, individual character proposals, adjoined characters, and stroke proposals. Individual character proposals are fed directly to a CNN trained on accurately segmented character images. The sliding-window strategy is applied to adjoined character regions. Stroke proposals are treated as fragments of entire characters whose locations are estimated by a stroke spatial distribution system. The characters estimated from adjoined characters and stroke proposals are then classified by a CNN trained on roughly segmented character images. Finally, a lexicon-driven integration method is performed to obtain the final word recognition results. Compared with other word recognition methods, ours achieves comparable performance on the Street View Text, ICDAR 2003, and ICDAR 2013 benchmark databases. Moreover, our method can handle occluded and improperly segmented text images.
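The sliding-window step over adjoined character regions can be pictured with the toy sketch below; the classifier is only a placeholder for the paper's CNN trained on roughly segmented characters, and the window size and stride are made up.

```python
# Illustrative sketch of the sliding-window step for adjoined character
# regions: a window is slid across the region and each crop is scored by
# a character classifier (a stand-in here, not the paper's trained CNN).
import numpy as np

def classify_crop(crop):
    # Stand-in for a CNN trained on roughly segmented characters:
    # returns (best_class, confidence).
    return "A", float(crop.mean())  # placeholder score

def sliding_window_chars(region, win_w=24, stride=8):
    """Score every window position over an adjoined-character region."""
    h, w = region.shape
    candidates = []
    for x in range(0, max(w - win_w, 0) + 1, stride):
        crop = region[:, x:x + win_w]
        label, conf = classify_crop(crop)
        candidates.append((x, label, conf))
    # Locally best-scoring windows would be kept as estimated characters.
    return candidates

region = np.random.rand(32, 96)  # fake grayscale region of glued characters
print(len(sliding_window_chars(region)))  # 10 window positions
```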

3.
To improve the accuracy of classical object detection algorithms for text localization in natural scenes, and to overcome the erroneous segmentation of Chinese characters caused in traditional character detection models by the disconnectivity between strokes, a direct and efficient approximation-based localization method for Chinese characters in natural scenes is proposed. The classical EAST algorithm is first used to detect text in scene images. The initially detected text boxes are then refined so that they enclose the text more tightly and completely; this refinement consists of three parts: extraction of connected stroke components, Chinese character segmentation, and text shape approximation. Finally, the text regions are rectified and the text content is recognized. Experimental results show that, while maintaining an average frame rate of 3.1 frames/s, the proposed algorithm achieves F-scores of 83.5%, 72.8% and 81.1% on the text localization tasks of the three multi-oriented datasets ICDAR2015, ICDAR2017-MLT and MSRA-TD500, respectively; ablation experiments verify the effectiveness of each module. Its performance on the combined detection and recognition evaluation on the ICDAR2015 dataset also verifies that the method outperforms some state-of-the-art methods.
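The stroke-component extraction step can be illustrated with OpenCV's connected-component labeling, as in the minimal sketch below; the binarized patch and the area threshold are illustrative, not taken from the paper.

```python
# A minimal sketch of the "extract connected stroke components" step,
# assuming a binarized text patch; the area threshold is illustrative.
import cv2
import numpy as np

patch = np.zeros((64, 64), np.uint8)
cv2.putText(patch, "T", (8, 48), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 3)

# Label 8-connected stroke components inside the detected text box.
num, labels, stats, centroids = cv2.connectedComponentsWithStats(patch, connectivity=8)
for i in range(1, num):  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 10:  # drop tiny noise components
        print(f"stroke component {i}: bbox=({x},{y},{w},{h}), area={area}")
```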

4.
To address the problem of detection interference from complex backgrounds in natural scenes, this paper proposes a scene text detection and localization method based on the human visual perception mechanism. Human visual perception is usually divided into a fast, parallel pre-attentive stage and a slow, serial attentive stage. Following this mechanism, the proposed method first performs the pre-attentive stage with two visual saliency methods, and then realizes the attentive stage using stroke features and the mutual relations between characters. The method achieves competitive results on both the ICDAR 2013 dataset and a scene Chinese-character dataset, and experiments show that it can detect English and Chinese text well in natural scenes with complex backgrounds.

5.
Wu Qin, Luo Wenli, Chai Zhilei, Guo Guodong. Applied Intelligence, 2022, 52(1): 514-529

Since convolutional neural networks (CNNs) were applied to scene text detection, the accuracy of text detection has improved considerably. However, limited by the receptive fields of regular CNNs and by the large scale variations of text in images, current text detection methods may fail on more challenging instances, such as arbitrarily shaped texts and extremely small texts. In this paper, we propose a new segmentation-based scene text detector equipped with deformable convolution and global channel attention. To detect texts of arbitrary shapes, our method replaces traditional convolutions with deformable convolutions, whose sampling locations are deformed with augmented offsets so that they can better adapt to any text shape, especially curved text. To obtain more representative text features, an Adaptive Feature Selection module is introduced to better exploit text content through global channel attention. Meanwhile, a scale-aware loss, which adjusts the weights of text instances of different sizes, is formulated to address the text scale variation problem. Experiments on several standard benchmarks, including ICDAR2015, SCUT-CTW1500, ICDAR2017-MLT and MSRA-TD500, verify the superiority of the proposed method.
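A minimal sketch of the deformable-convolution replacement described above, using torchvision's DeformConv2d with offsets predicted by a small convolution; the channel counts and the surrounding network are illustrative assumptions.

```python
# Sketch of replacing a regular 3x3 convolution with a deformable one;
# offsets are predicted by a small conv layer, so the sampling grid can
# bend toward the text shape. Shapes follow torchvision's DeformConv2d.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

in_ch, out_ch, k = 256, 256, 3
offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=3, padding=1)  # (dx, dy) per tap
deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=1)

x = torch.randn(1, in_ch, 32, 32)   # a feature map
offsets = offset_pred(x)            # (1, 18, 32, 32) sampling offsets
y = deform_conv(x, offsets)         # deformed sampling adapts to curved text
print(y.shape)                      # torch.Size([1, 256, 32, 32])
```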


6.
Color and strokes are the salient features of text regions in an image. In this work, we use both features as cues and introduce a novel energy function to formulate the text binarization problem; the minimum of this energy function corresponds to the optimal binarization. We minimize the energy function with an iterative graph-cut-based algorithm. Our model is robust to variations in foreground and background because we learn Gaussian mixture models for color and strokes in each iteration of the graph cut. We show results on word images from the challenging ICDAR 2003/2011, born-digital image, and street view text datasets, as well as on full scene images containing text from the ICDAR 2013 datasets, and compare our performance with state-of-the-art methods. Our approach shows significant improvements under a variety of performance measures commonly used to assess text binarization schemes. In addition, our method adapts to diverse document images, such as text in videos and handwritten text images.
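The color part of the unary term can be sketched as below: Gaussian mixtures are fit to the current foreground and background pixels, and the negative log-likelihood serves as the labeling cost, in the spirit of GrabCut-style models. The pixel samples are synthetic, and the stroke term and the graph-cut step itself are omitted.

```python
# Hedged sketch of the color unary term in the binarization energy:
# GMMs are fit to current foreground/background pixels and -log p(color)
# is used as the unary cost. Stroke term and graph cut are omitted.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
fg_pixels = rng.normal(200, 10, (500, 3))  # stand-in foreground RGB samples
bg_pixels = rng.normal(60, 25, (500, 3))   # stand-in background RGB samples

fg_gmm = GaussianMixture(n_components=3, random_state=0).fit(fg_pixels)
bg_gmm = GaussianMixture(n_components=3, random_state=0).fit(bg_pixels)

pixels = rng.normal(190, 30, (4, 3))       # pixels to be labeled
fg_cost = -fg_gmm.score_samples(pixels)    # unary cost of labeling "text"
bg_cost = -bg_gmm.score_samples(pixels)    # unary cost of labeling "background"
print(fg_cost < bg_cost)                   # True where "text" is cheaper
```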

7.
王寅同, 郑豪, 常合友, 李朔. 控制与决策, 2023, 38(7): 1825-1834
Chinese handwritten text recognition is one of the hot research topics in pattern recognition; it faces problems such as the large number of character classes, large variations in writing style, and the difficulty of labeling training datasets. To address these problems, a segmentation-free, recurrence-free residual attention network is proposed for end-to-end handwritten text recognition. First, with ResNet-26 as the backbone, depthwise separable convolutions are used to extract meaningful features, and a residual attention gating module raises the importance of key regions in the text image. Second, a batch bilinear interpolation model stretches and squeezes the input representation, upsampling the two-dimensional text representation into a one-dimensional text-line representation. Finally, connectionist temporal classification (CTC) is used as the loss function of the recognition model, establishing the correspondence between the high-level extracted representation and the character sequence labels. Experiments on the CASIA-HWDB2.x and ICDAR2013 datasets show that the proposed method effectively achieves end-to-end handwritten text recognition without any character- or line-level position information, and outperforms existing methods.
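The final CTC step can be illustrated with PyTorch's nn.CTCLoss, as below; the network producing the log-probabilities is omitted, and the sequence length, batch size, and class count are illustrative.

```python
# Minimal sketch of the CTC loss step: log-probabilities over the 1-D
# line representation are aligned with the character label sequence.
# All sizes here are illustrative, not the paper's configuration.
import torch
import torch.nn as nn

T, B, C = 50, 2, 4001  # time steps, batch size, classes incl. blank (illustrative)
logits = torch.randn(T, B, C, requires_grad=True)
log_probs = logits.log_softmax(2)                         # (T, B, C)
targets = torch.randint(1, C, (B, 20), dtype=torch.long)  # label sequences
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 20, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()  # gradients align the line features with the labels
print(loss.item())
```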

8.
Recognizing characters extracted from natural scene images is quite challenging due to the high degree of intraclass variation. In this paper, we propose a multi-scale graph-matching based kernel for scene character recognition. In order to capture the inherently distinctive structures of characters, each image is represented by several graphs associated with multi-scale image grids. The similarity between two images is thus defined as the optimum energy of matching two graphs (images), which finds the best match for each node in the graph while also preserving the spatial consistency across adjacent nodes. The computed similarity is suitable for constructing a kernel for a support vector machine (SVM). Multiple kernels acquired by matching graphs with multi-scale grids are combined so that the final kernel is more robust. Experimental results on the challenging Chars74k and ICDAR03-CH datasets show that the proposed method performs better than state-of-the-art methods.
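The multiple-kernel fusion can be sketched with scikit-learn's precomputed-kernel SVM, as below; the Gram matrices are random positive semi-definite stand-ins for the graph-matching energies, and the fusion weights are made up.

```python
# Sketch of combining graph-matching kernels from several grid scales into
# one precomputed SVM kernel. Random PSD Gram matrices stand in for the
# actual matching energies; the weights are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 40
def random_psd_kernel():
    A = rng.normal(size=(n, 12))
    return A @ A.T                      # symmetric PSD Gram matrix

K_scales = [random_psd_kernel() for _ in range(3)]   # one kernel per grid scale
weights = [0.5, 0.3, 0.2]
K = sum(w * Ks for w, Ks in zip(weights, K_scales))  # fused multi-scale kernel

y = rng.integers(0, 2, n)               # stand-in character labels
svm = SVC(kernel="precomputed").fit(K, y)
print(svm.predict(K[:5]))               # rows of the Gram matrix as test kernel
```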

9.
Text in natural scenes exhibits large intra-class variation, which makes scene character recognition very challenging. This paper proposes a scene character recognition method based on a multi-scale graph-matching kernel. To exploit the distinctive structure of characters, each image is represented as undirected graphs built on different grid partitions, and the similarity between two images is obtained by computing the optimal graph-matching energy between their graphs. Because the matching also considers the spatial constraints between adjacent nodes when finding the best match for each node, it can cope with moderately deformed characters. The image similarity obtained by graph matching is well suited to constructing the kernel matrix of a support vector machine. The kernel matrices obtained under grid partitions of different scales are fused with multiple-kernel learning, making the final kernel more robust. Experimental results on the public scene character recognition datasets Chars74k and ICDAR03-CH show that the method achieves higher single-character recognition rates than other published methods.

10.
To address the problem that the traditional maximally stable extremal region (MSER) method cannot extract text regions well from low-contrast images, a new edge-enhanced scene text detection method is proposed. First, the MSER method is effectively improved with the histogram of oriented gradients (HOG), enhancing its robustness to low-contrast images, and maximally stable extremal regions are extracted separately in each channel of the color space. Second, a Bayesian model is used for classification: three translation- and rotation-invariant features, namely stroke width, edge gradient direction, and corner points, are used to prune non-character regions. Finally, characters are merged into text lines using their geometric properties. The algorithm is evaluated on the public ICDAR 2003 and ICDAR 2013 datasets. Experimental results show that the color-space, edge-enhanced MSER method solves the problem of failing to extract text regions correctly from scene images with complex backgrounds or low contrast; the Bayesian classification filters characters well even with few training samples, achieving a high recall; and compared with traditional MSER-based text detection, the proposed method improves both the detection rate and the runtime performance of the system.
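The MSER extraction step can be illustrated with OpenCV, as in the sketch below; default MSER parameters and a crude aspect-ratio filter stand in for the HOG-based enhancement and the Bayesian classifier.

```python
# Illustrative sketch of MSER extraction on one grayscale channel; the
# HOG-based edge enhancement and the Bayesian filter are omitted.
import cv2
import numpy as np

img = np.full((80, 240), 120, np.uint8)
cv2.putText(img, "TEXT", (10, 55), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 30, 4)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)  # candidate character regions
for (x, y, w, h) in bboxes:
    if 0.1 < w / h < 10:                   # crude aspect-ratio pruning
        print("candidate:", x, y, w, h)
```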

11.
Scene text detection is an important step in scene text recognition, and a challenging problem in its own right. Unlike generic object detection, the main challenges of scene text detection are that text in natural scene images has arbitrary orientations, small sizes, and diverse aspect ratios. Building on TextBoxes [8], this paper proposes a detector for arbitrarily oriented text, named OSTD (Oriented Scene Text Detector), which can detect text of any orientation in natural scenes effectively and accurately. OSTD is evaluated on public datasets, and all experimental results show that it is a highly competitive method in terms of both accuracy and speed: on the ICDAR2015 Incidental Text dataset [16] at a 1024×1024 input resolution, OSTD achieves an F-measure of 0.794 at 10.7 FPS.

12.
Text recognition in natural scene images is a challenging task that has recently been garnering increased research attention. In this paper, we propose a method for recognizing text by utilizing the layout consistency of a text string. We estimate the layout (four lines of a text string) using the initial character extraction and recognition results. On the basis of the layout consistency across a word, we then perform character extraction and recognition again using the four lines, which is more accurate than the first pass. Our layout estimation method differs from previous methods in exploiting character recognition results and in its use of a class-conditional layout model. More accurate and robust estimation is achieved, and it can be used to refine character extraction and recognition. We call this two-way process, from extraction and recognition to layout and from layout back to extraction and recognition, "bidirectional" to distinguish it from previous feedback refinement approaches. Experimental results demonstrate that our bidirectional processes boost word recognition performance.
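A toy version of the layout-estimation step is sketched below: least-squares lines through the tops and bottoms of the initial character boxes approximate two of the four text lines, which the second extraction pass can then exploit. The box coordinates and the reduction to two lines are illustrative simplifications.

```python
# Hedged sketch of estimating a word's layout lines from initial character
# boxes; the full class-conditional layout model is omitted.
import numpy as np

# (x_center, top_y, bottom_y) of initially extracted character boxes
boxes = np.array([[10, 20, 52], [34, 18, 50], [58, 17, 49], [82, 15, 47]])
x = boxes[:, 0]

top_slope, top_icpt = np.polyfit(x, boxes[:, 1], 1)  # top line fit
bot_slope, bot_icpt = np.polyfit(x, boxes[:, 2], 1)  # bottom line fit

# Re-extraction can now search only between the fitted lines.
for xc in (106, 130):  # positions of possibly missed characters
    print(f"x={xc}: expect top~{top_slope*xc+top_icpt:.1f}, "
          f"bottom~{bot_slope*xc+bot_icpt:.1f}")
```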

13.
Scene text detection and recognition is an important research area in image processing. Scene text can appear in different languages, fonts, sizes, colors, orientations and structures; moreover, the aspect ratios and layouts of a scene text may differ significantly. All these variations pose significant challenges for the detection and recognition algorithms applied to text in natural scenes. In this paper, a new intelligent text detection and recognition method is proposed that detects text in natural scenes and recognizes it by applying the newly proposed Conditional Random Field-based fuzzy rules incorporated Convolutional Neural Network (CR-CNN). Moreover, we recommend a new text detection method for extracting the exact text from the input natural scene images. To enhance the edge detection process, image pre-processing steps such as edge detection and color modeling are applied in this work. In addition, we generate new fuzzy rules for making effective decisions in the text detection and recognition processes. Experiments were conducted on the standard benchmark datasets ICDAR 2003, ICDAR 2011, ICDAR 2005 and SVT, achieving better accuracy in text detection and recognition. Using these datasets, five different experiments were conducted to evaluate the proposed model, and the proposed system was compared with other classifiers such as the SVM, the MLP and the CNN. In these comparisons, the proposed model achieved better classification accuracy than the other existing works.

14.
Text detection is important in the retrieval of texts from digital pictures, video databases and webpages. However, it can be very challenging since the text is often embedded in a complex background. In this paper, we propose a classification-based algorithm for text detection using a sparse representation with discriminative dictionaries. First, the edges are detected by the wavelet transform and scanned into patches by a sliding window. Then, candidate text areas are obtained by applying a simple classification procedure using two learned discriminative dictionaries. Finally, the adaptive run-length smoothing algorithm and projection profile analysis are used to further refine the candidate text areas. The proposed method is evaluated on the Microsoft common test set, the ICDAR 2003 text locating set, and an image set collected from the web. Extensive experiments show that the proposed method can effectively detect texts of various sizes, fonts and colors from images and videos.
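The dictionary-based classification step can be sketched with orthogonal matching pursuit, a standard sparse coder, as below: a patch is coded over a "text" and a "non-text" dictionary and assigned to the one with the smaller reconstruction residual. Random unit-norm dictionaries stand in for the learned discriminative ones.

```python
# Sketch of classifying an edge patch by sparse-coding it over two
# dictionaries and comparing reconstruction residuals. The dictionaries
# here are random stand-ins, not the learned discriminative ones.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
dim, atoms = 64, 128                      # patch dimension, dictionary size
D_text = rng.normal(size=(dim, atoms))
D_bg = rng.normal(size=(dim, atoms))
D_text /= np.linalg.norm(D_text, axis=0)  # unit-norm atoms
D_bg /= np.linalg.norm(D_bg, axis=0)

patch = rng.normal(size=dim)              # edge patch from the sliding window

def residual(D, y, k=5):
    code = orthogonal_mp(D, y, n_nonzero_coefs=k)  # sparse code via OMP
    return np.linalg.norm(y - D @ code)

label = "text" if residual(D_text, patch) < residual(D_bg, patch) else "non-text"
print(label)
```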

15.
The segmentation of touching characters is still a challenging task, posing a bottleneck for offline Chinese handwriting recognition. In this paper, we propose an effective over-segmentation method with learning-based filtering using geometric features for single-touching Chinese handwriting. First, we detect candidate cuts by skeleton and contour analysis to guarantee a high recall rate of character separation. A filter is designed by supervised learning and used to prune implausible cuts to improve the precision. Since the segmentation rules and features are independent of the string length, the proposed method can deal with touching strings with more than two characters. The proposed method is evaluated on both the character segmentation task and the text line recognition task. The results on two large databases demonstrate the superiority of the proposed method in dealing with single-touching Chinese handwriting.
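A toy stand-in for the candidate-cut idea is sketched below using vertical projection minima; note the paper itself detects cuts by skeleton and contour analysis, which is more involved than this simplification.

```python
# Toy stand-in for candidate cut detection on a touching character pair:
# minima of the vertical projection profile are proposed as cuts.
import numpy as np

binary = np.zeros((40, 80), np.uint8)
binary[5:35, 5:35] = 1      # first character blob
binary[5:35, 38:75] = 1     # second character blob
binary[18:22, 34:39] = 1    # touching stroke between them

profile = binary.sum(axis=0)             # ink per column
interior = profile[5:-5]                 # ignore the margins
cut = 5 + int(interior.argmin())         # thinnest interior column
print(f"candidate cut at column {cut}")  # falls inside the touching stroke
```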

16.
Independent travel is a well-known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a laboratory, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First, we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
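A loose sketch of the edge-plus-corner cue for door-like shapes follows; Canny edges and Shi-Tomasi corners stand in for the paper's actual geometric door model, and the synthetic image and parameters are assumptions.

```python
# Loose sketch: extract edges and corners as cues for door-like frames.
# The geometry check that would connect four corners with long edges is
# omitted; this only illustrates the low-level cues.
import cv2
import numpy as np

img = np.full((200, 150), 180, np.uint8)
cv2.rectangle(img, (30, 20), (120, 180), 60, 3)  # synthetic door frame

edges = cv2.Canny(img, 50, 150)
corners = cv2.goodFeaturesToTrack(img, maxCorners=10, qualityLevel=0.1,
                                  minDistance=20)
corners = corners.reshape(-1, 2)
print(f"{int(edges.sum() / 255)} edge pixels, {len(corners)} corner candidates")
```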

17.
Lin Hanyang, Zhan Yongzhao, Liu Shiqin, Ke Xiao, Chen Yuzhong. Applied Intelligence, 2022, 52(13): 15259-15277

With the widespread use of the mobile Internet, mobile payment has become a part of daily life, and bank card recognition in natural scenes has become a hot topic. Although printed character recognition has achieved remarkable success in recent years, bank card recognition is not limited to traditional printed character recognition. There are two types of bank cards: unembossed bank cards, such as most debit cards, which usually use printed characters, and embossed bank cards, such as most credit cards, which mainly use raised characters. Recognizing raised characters is particularly challenging due to their characteristics, and fast, reliable methods for it are lacking. To better recognize raised characters, we propose an effective method based on deep learning to detect and recognize bank cards in complex natural scenes. The method can accurately recognize the card number characters on both embossed and unembossed bank cards. First, to go beyond the usual use of the YOLOv3 algorithm for object detection, we propose a novel approach that enables YOLOv3 to be used not only for bank card detection and classification, but also for character recognition. The CANNYLINES algorithm is used for rectification, and the Scharr operator is introduced to locate the card number region. The proposed method can handle bank card detection, classification and character recognition in complex natural scenes, such as complex backgrounds, distorted card surfaces, uneven illumination, and characters with the same or similar color as the background. To further improve recognition accuracy, a printed-character recognition model based on ResNet-32 is proposed for unembossed bank cards. According to the color and morphological characteristics of embossed bank cards, a raised-character recognition model combining traditional morphological methods with a LeNet-5 convolutional neural network is proposed for embossed bank cards. Experimental results on the collected bank card dataset and bank card number dataset show that the proposed method can effectively detect and identify different types of bank cards. The accuracy of bank card detection and classification reaches 100%. The accuracy of raised-character recognition on embossed bank cards is 99.31%, and the accuracy of printed-character recognition on unembossed bank cards reaches 100%.
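The Scharr-based localization of the card-number region can be pictured with the toy sketch below: embossed digits produce strong horizontal gradients, so the row band with the highest gradient energy is a cheap candidate region. The synthetic card image and band height are illustrative, not the paper's pipeline.

```python
# Hedged sketch of locating the card-number band with the Scharr operator;
# thresholds and band height are illustrative.
import cv2
import numpy as np

card = np.full((120, 300), 150, np.uint8)
cv2.putText(card, "6222 0212 3456", (10, 80), cv2.FONT_HERSHEY_SIMPLEX, 0.8, 40, 2)

grad_x = cv2.Scharr(card, cv2.CV_64F, 1, 0)  # horizontal gradient
energy = np.abs(grad_x).sum(axis=1)          # per-row gradient energy

band_h = 25
row_scores = np.convolve(energy, np.ones(band_h), mode="valid")
top = int(row_scores.argmax())               # top row of the best band
print(f"card number band: rows {top}..{top + band_h}")
```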


18.
19.
20.
In this paper, we present a system that automatically translates Arabic text embedded in images into English. The system consists of three components: text detection from images, character recognition, and machine translation. We formulate text detection as a binary classification problem and apply gradient boosting trees (GBT), support vector machines (SVM), and location-based prior knowledge to improve the F1 score of text detection from 78.95% to 87.05%. The detected text images are processed by off-the-shelf optical character recognition (OCR) software. We employ an error correction model to post-process the noisy OCR output, and apply a bigram language model to reduce word segmentation errors. The translation module is tailored with compact data structures for hand-held devices. The experimental results show substantial improvements in both word recognition accuracy and translation quality. For instance, in the experiment with the Arabic transparent font, the BLEU score increases from 18.70 to 33.47 with the error correction module.
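The bigram rescoring idea can be pictured with the toy sketch below, where candidate OCR word sequences are scored by bigram log-probabilities and the best one is kept; the probabilities are made up.

```python
# Toy sketch of bigram language-model rescoring of noisy OCR hypotheses:
# each candidate word sequence is scored by bigram log-probabilities and
# the best-scoring one is kept. Counts here are invented for illustration.
import math

bigram = {("the", "text"): 0.2, ("the", "taxt"): 0.001,
          ("in", "the"): 0.3, ("in", "tho"): 0.002}

def score(words, floor=1e-6):
    """Sum of bigram log-probabilities, with a small floor for unseen pairs."""
    return sum(math.log(bigram.get(pair, floor))
               for pair in zip(words, words[1:]))

hypotheses = [["in", "the", "text"], ["in", "tho", "taxt"]]
best = max(hypotheses, key=score)
print(best)  # ['in', 'the', 'text']
```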
