Text classification method based on self-training and LDA topic models
Abstract: Supervised text classification methods are effective when reasonably sized labeled training sets are available. When only a small set of labeled documents is available, however, semi-supervised methods become more appropriate. These methods rely on comparing distributions between labeled and unlabeled instances; it is therefore important to focus on the representation and its discrimination abilities. In this paper we present ST LDA, a semi-supervised text classification method with representations based on topic models. The proposed method comprises a semi-supervised text classification algorithm based on self-training, and a model that determines parameter settings for any new document collection. Self-training is used to enlarge the small initial labeled set with the help of information from unlabeled data. We investigate how the topic-based representation affects prediction accuracy by applying the NBMN and SVM classification algorithms to the enlarged labeled set, and compare the results with the same methods on a typical TF-IDF representation. We also compare ST LDA with supervised classification methods and other well-known semi-supervised methods. Experiments were conducted on 11 very small initial labeled sets sampled from six publicly available document collections. The results show that ST LDA, when used in combination with NBMN, performed significantly better in terms of classification accuracy than the other comparable methods and variations. ST LDA thus proved to be a competitive classification method across different text collections when only a small set of labeled instances is available. As such, the proposed method may help to improve text classification tasks, which are essential in many advanced expert and intelligent systems, especially when labeled texts are scarce.
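The core loop the abstract describes — represent documents as LDA topic proportions, train a classifier on the small labeled set, add high-confidence predictions on unlabeled documents to the training set, and retrain — can be sketched as follows. This is a minimal illustration using scikit-learn, not the paper's implementation; the corpus, topic count, and confidence threshold are all hypothetical choices, and the paper's parameter-setting model is not reproduced here.

```python
# Hedged sketch of self-training over an LDA topic representation.
# Assumes scikit-learn; thresholds and data below are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.naive_bayes import MultinomialNB


def self_train_lda(labeled_docs, labels, unlabeled_docs,
                   n_topics=10, threshold=0.8, max_iter=5):
    """Self-training with topic proportions as features.

    Returns the final classifier plus the fitted vectorizer and LDA
    model, so new documents can be mapped into the same topic space.
    """
    # Fit the vocabulary and topic model on all documents (labeled
    # and unlabeled), since LDA itself needs no labels.
    vec = CountVectorizer()
    counts = vec.fit_transform(list(labeled_docs) + list(unlabeled_docs))
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topics = lda.fit_transform(counts)  # per-document topic proportions

    X_lab = topics[:len(labeled_docs)]
    X_unl = topics[len(labeled_docs):]
    y = np.asarray(labels)

    clf = MultinomialNB().fit(X_lab, y)
    for _ in range(max_iter):
        if X_unl.shape[0] == 0:
            break
        proba = clf.predict_proba(X_unl)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Move confidently labeled documents into the training set.
        pseudo = clf.classes_[proba.argmax(axis=1)[confident]]
        X_lab = np.vstack([X_lab, X_unl[confident]])
        y = np.concatenate([y, pseudo])
        X_unl = X_unl[~confident]
        clf = MultinomialNB().fit(X_lab, y)
    return clf, vec, lda
```

A new document would be classified by passing it through the same pipeline, e.g. `clf.predict(lda.transform(vec.transform(["some new text"])))`. NBMN (multinomial naive Bayes) pairs naturally with topic proportions because they are non-negative and sum to one.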
This article is indexed in ScienceDirect and other databases.