Similar Documents
20 similar documents found.
1.
To address the feature sparsity and context dependency of short texts, this paper proposes a short-text classification method based on the latent Dirichlet allocation model. The topics generated by the model are used in two ways: to distinguish different contexts of the same word, lowering its weight, and to relate different words, reducing sparsity and raising their weights. A K-nearest-neighbour classifier is applied to automatically crawled NetEase page titles; experiments show the new method improves classification performance by about 5% over the traditional vector space model and about 2.5% over topic-based similarity measures.
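The pipeline in item 1 can be sketched roughly as follows. This is my own minimal illustration, not the authors' code: scikit-learn's LDA stands in for the paper's model, the toy titles and labels are invented (the paper used NetEase page titles), and the topic mixtures replace sparse word counts as KNN features.

```python
# Hypothetical sketch: LDA topic distributions as dense features for
# short-text classification with K-nearest neighbours.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-ins for crawled page titles and their labels.
titles = [
    "stock market rises on tech earnings",
    "central bank cuts interest rates",
    "new smartphone launch event announced",
    "chip maker unveils faster processor",
]
labels = ["finance", "finance", "tech", "tech"]

vec = CountVectorizer()
X = vec.fit_transform(titles)

# Topic mixtures are dense, so they sidestep the sparsity of raw counts.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)

knn = KNeighborsClassifier(n_neighbors=1).fit(theta, labels)
pred = knn.predict(lda.transform(vec.transform(["bank raises rates"])))
```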

2.
This paper proposes a graphical model termed Local-Space-Constraint LDA (LSC-LDA) for image classification. Existing LDA-based methods that use the Bag-of-Words (BoW) representation ignore the spatial information of the image. To address this problem, the image is partitioned into several regions and a latent variable is assigned to each region. We construct a supervised variant, Class-Supervised LSC-LDA (CS-LSC-LDA), to learn class-specific topics. During the parameter learning step, variational inference is employed to approximate the proposed model, and the maximum a posteriori (MAP) criterion is used to compute the parameters. The effectiveness of the proposed model is demonstrated through extensive evaluations on three well-known datasets; the results show that our model outperforms existing LDA-based models.

3.
Large, unlabeled datasets are abundant nowadays, but getting labels for those datasets can be expensive and time-consuming. Crowd labeling is a crowdsourcing approach for gathering such labels from workers whose suggestions are not always accurate. While a variety of algorithms exist for this purpose, we present crowd labeling latent Dirichlet allocation (CL-LDA), a generalization of latent Dirichlet allocation that can solve a more general set of crowd labeling problems. We show that it performs as well as other methods and at times better on a variety of simulated and actual datasets while treating each label as compositional rather than indicating a discrete class. In addition, prior knowledge of workers’ abilities can be incorporated into the model through a structured Bayesian framework. We then apply CL-LDA to the EEG independent component labeling dataset, using its generalizations to further explore the utility of the algorithm. We discuss prospects for creating classifiers from the generated labels.

4.
The success of the Semantic Web will rely heavily on the availability of formal ontologies to structure machine-understandable data. However, there is still a lack of general methodologies for automatic ontology learning and population, i.e. the generation of domain ontologies from various kinds of resources by applying natural language processing and machine learning techniques. In this paper, the authors present an ontology learning and population system that combines statistical and semantic methodologies. Several experiments have been carried out, demonstrating the effectiveness of the proposed approach.

5.
Previous work on the one-class collaborative filtering (OCCF) problem can be roughly categorized into pointwise methods, pairwise methods, and content-based methods. A fundamental assumption of these approaches is that all missing values in the user-item rating matrix are considered negative. However, this assumption may not hold because the missing values may contain negative and positive examples. For example, a user who fails to give positive feedback about an item may not necessarily dislike it; he may simply be unfamiliar with it. Meanwhile, content-based methods, e.g. collaborative topic regression (CTR), usually require textual content information of the items, and thus their applicability is largely limited when the text information is not available. In this paper, we propose to apply the latent Dirichlet allocation (LDA) model to OCCF to address the above-mentioned problems. The basic idea of this approach is that items are regarded as words, users are considered as documents, and the user-item feedback matrix constitutes the corpus. Our model drops the strong assumption that missing values are all negative and only utilizes the observed data to predict a user’s interest. Additionally, the proposed model does not need content information of the items. Experimental results indicate that the proposed method outperforms previous methods on various ranking-oriented evaluation metrics. We further combine this method with a matrix factorization-based method to tackle the multi-class collaborative filtering (MCCF) problem, which also achieves better performance on predicting user ratings.
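The users-as-documents, items-as-words idea in item 5 can be illustrated with a small sketch. This is an assumption-laden toy (invented item names, scikit-learn's LDA in place of the authors' model): only observed positive feedback enters the "corpus", so missing entries are never treated as negatives, and items are ranked for a user from the learned topic mixtures.

```python
# Illustrative sketch of LDA applied to one-class collaborative filtering:
# rows are users (documents), columns are items (words), and only the
# observed positive feedback is modelled.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

items = ["item_a", "item_b", "item_c", "item_d"]  # hypothetical items
feedback = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 0, 1, 1],
])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(feedback)  # user-topic mixtures

# Normalize topic-item weights into probability distributions.
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

# Score every item for user 0; seen items would be filtered in practice.
scores = theta[0] @ phi
ranking = [items[i] for i in np.argsort(-scores)]
```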

6.
Feature location is a program comprehension activity whose goal is to identify the source code entities that implement a functionality. Recent feature location techniques apply text retrieval models such as latent Dirichlet allocation (LDA) to corpora built from text embedded in source code. These techniques are highly configurable, and the literature offers little insight into how different configurations affect their performance. In this paper we present a study of an LDA-based feature location technique (FLT) in which we measure the performance effects of using different configurations to index corpora and to retrieve 618 features from 6 open source Java systems. In particular, we measure the effects of the query, the text extractor configuration, and the LDA parameter values on the accuracy of the LDA-based FLT. Our key findings are that excluding comments and literals from the corpus lowers accuracy, and that heuristics for selecting LDA parameter values in the natural language context are suboptimal in the source code context. Based on the results of our case study, we offer specific recommendations for configuring the LDA-based FLT.
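The retrieval core of an LDA-based FLT like the one studied in item 6 can be sketched as follows. The method names and token strings are invented, and scikit-learn's LDA with cosine similarity over topic mixtures stands in for the paper's configuration; it is a sketch of the general technique, not of the study's exact setup.

```python
# Hypothetical miniature of LDA-based feature location: each method's
# identifiers and comment words form a document, and methods are ranked
# against a query by similarity of topic distributions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

methods = {
    "savePrefs": "save write preferences config file user settings",
    "drawChart": "draw render chart axis pixel canvas paint",
    "loadPrefs": "load read preferences config file parse settings",
}
vec = CountVectorizer()
X = vec.fit_transform(methods.values())
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)

query = lda.transform(vec.transform(["save user settings to config file"]))

# Cosine similarity between the query and each method's topic mixture.
sims = (doc_topics @ query.T).ravel() / (
    np.linalg.norm(doc_topics, axis=1) * np.linalg.norm(query))
ranked = [name for _, name in sorted(zip(-sims, methods), key=lambda t: t[0])]
```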

7.
8.
9.
10.
To address the loss of accuracy caused by the small number of labelled samples typically available in real-world text classification, this paper proposes a classification algorithm based on the probabilistic topic model latent Dirichlet allocation. Each document is represented as a term-weight vector using the standard TF-IDF function; the probabilistic topic model is used as a preprocessing step to simplify documents and extract terms; latent Dirichlet allocation then performs relational learning, and a graph-based classifier completes the classification. Experiments on the public Reuters-21578 corpus evaluate the method's effectiveness: compared with support vector machines, which classify well, it achieves higher accuracy in most cases.

11.
Hierarchical Distributed Latent Dirichlet Allocation (HD-LDA) is a text classification algorithm based on a probabilistic generative model that improves on latent Dirichlet allocation (LDA). Unlike LDA, which runs only on a single machine, it can run in a distributed framework with parallel processing. Mahout implements HD-LDA on Hadoop, but because the per-node computation is heavy, classifying large datasets still takes too long: although a large text collection is distributed across multiple nodes for iterative inference, inference over the documents on each node remains sequential, so processing a large collection still takes a long time. This paper therefore proposes combining Hadoop with graphics processing units (GPUs): the inference over each node's text collection is moved onto the GPU so that multiple documents on a node are inferred in parallel, and multiple parallel GPUs accelerate the HD-LDA algorithm. Application results show that this approach achieves a 7x speedup for HD-LDA on large text collections in the distributed framework.

12.
A news topic mining method based on a weighted latent Dirichlet allocation model
李湘东, 巴志超, 黄莉 《计算机应用》2014, 34(5): 1354-1359
To address the low accuracy and poor topic interpretability of traditional news topic mining, this paper proposes a news topic mining method based on a weighted latent Dirichlet allocation (LDA) model that exploits the structural conventions of news reports. First, term weights are improved from several angles and combined into a composite weight, extending the word-generation process of the LDA model so as to obtain more expressive terms. Second, the category-distinguishing-words (CDW) method is applied to reorder the words in the modelling results, eliminating topic ambiguity and noise and improving topic interpretability. Finally, based on the mathematical properties of the model's topic probability distributions, topics are quantified by the documents' contributions to them and by the topic weight probabilities so as to identify hot topics. Simulations show that, compared with the traditional LDA model, the improved method lowers the miss rate and false-alarm rate by an average of 1.43% and 0.16% respectively, and the minimum normalized cost by an average of 2.68%, verifying the method's feasibility and effectiveness.

13.
Multi-document automatic summarization with the LDA topic model
Probabilistic topic models for multi-document summarization have recently attracted researchers' attention. LDA (latent Dirichlet allocation) is one of the representative probabilistic generative topic models. This paper proposes an LDA-based summarization method: perplexity determines the number of topics in the LDA model; Gibbs sampling yields the sentence-topic and topic-word probability distributions; the importance of each topic is determined by summing its weights over sentences; and two different sentence-weighting models are derived from the model's topic and sentence probability distributions. Using the ROUGE evaluation standard, the method is compared on the generic multi-document summarization test set DUC2002 with the state-of-the-art SumBasic method and two other LDA-based multi-document summarizers. The results show that the proposed method outperforms SumBasic on all ROUGE metrics and also compares favourably with the other LDA-based summarizers.

14.
Twitter provides search services to help people find users to follow by recommending popular users or the friends of their friends. However, these services neither offer the most relevant users to follow nor provide a way to find the most interesting tweet messages for each user. Recently, collaborative filtering techniques for recommendations based on friend relationships in social networks have been widely investigated. However, since such techniques do not work well when friend relationships are not sufficient, we need to take advantage of as much other information as possible to improve the performance of recommendations. In this paper, we propose TWILITE, a recommendation system for Twitter using probabilistic modeling based on latent Dirichlet allocation, which recommends the top-K users to follow and the top-K tweets to read for a user. Our model can capture the realistic process of posting tweet messages by generalizing an LDA model, as well as the process of connecting to friends by utilizing matrix factorization. We next develop an inference algorithm based on the variational EM algorithm for learning the model parameters. Based on the estimated parameters, we also present effective personalized recommendation algorithms to find the users to follow as well as the interesting tweet messages to read. The performance study with real-life data sets confirms the effectiveness of the proposed model and the accuracy of our personalized recommendations.

15.
徐红艳, 王丹, 王富海, 王嵘冰 《计算机应用》2019, 39(11): 3288-3292
User relevance measurement is fundamental and central to heterogeneous information network research. Existing user relevance measures leave room for improved accuracy because they do not fully exploit multi-dimensional analysis and link analysis. This paper therefore proposes a user relevance measurement method that combines latent Dirichlet allocation (LDA) with meta-path analysis. First, LDA performs topic modelling, and node relevance is computed by analysing the content of nodes in the network. Then, meta-paths are introduced to characterize the relation types between nodes, and user relevance in the heterogeneous information network is measured with the DPRel relevance measure. Next, node relevance is fused into the user relevance computation. Finally, experiments on the real IMDB movie dataset compare the proposed method with the collaborative filtering recommendation method embedding the LDA topic model (ULR-CF) and the meta-path-based relevance measure PathSim. The results show that the proposed method overcomes the drawback of data sparsity and improves the accuracy of user relevance measurement.

16.
The random selection of initial cluster centres in the traditional K-means algorithm can increase the number of iterations, trap the algorithm in local optima, and make clustering results unstable. To address these defects, an initial-centre selection algorithm based on the latent Dirichlet allocation (LDA) topic probability model is proposed. The algorithm selects the m most influential topics in the text collection, pre-clusters the texts in the dimensions of those m topics to find cluster centres, and then uses these centres as initial centres for clustering the texts over all dimensions, which in theory guarantees that the chosen initial centres are probabilistically determined. Experimental results show that the improved algorithm needs markedly fewer clustering iterations and produces more accurate clusters.
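The two-stage initialization in item 16 can be sketched under some assumptions of my own: a random document-topic matrix stands in for a fitted LDA model, topic influence is taken as total topic mass, and scikit-learn's KMeans performs both the reduced-space pre-clustering and the seeded full-space clustering.

```python
# Rough sketch of LDA-guided K-means initialization: pre-cluster in the
# m most influential topic dimensions, then seed full-space K-means with
# the resulting centres instead of random ones.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
theta = rng.dirichlet(np.ones(5), size=40)  # stand-in doc-topic matrix

m, k = 2, 3
top = np.argsort(-theta.sum(axis=0))[:m]    # m topics with the largest mass

# Pre-cluster in the reduced topic space to locate stable centres.
pre = KMeans(n_clusters=k, n_init=10, random_state=0).fit(theta[:, top])
init = np.array([theta[pre.labels_ == c].mean(axis=0) for c in range(k)])

# Final clustering over all dimensions, seeded with the derived centres.
km = KMeans(n_clusters=k, init=init, n_init=1).fit(theta)
```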

17.
Defining valid patents in a particular technological field is an indispensable step in patent analysis. To minimise the risk of missing valid patents, domain experts manually exclude irrelevant patents, known as noise patents, from an initial patent set derived using a loose retrieval query. However, this task has become time-consuming and labour-intensive due to the increasing number of patents and the rising complexity of technological knowledge. This study proposes a semi-automated approach to noise patent filtering based on information entropy theory and latent Dirichlet allocation. The proposed approach comprises four discrete steps: (1) structuring patents using a term-weighting method; (2) recommending noise patent seeds based on the information quantity of patents in terms of focal keyword groups; (3) measuring text similarities for patent clustering using latent Dirichlet allocation; and (4) identifying potential noise patent clusters with respect to the noise patent seeds. Our case study confirms that the proposed approach is valuable as a complementary noise patent filtering tool that will enable domain experts to focus more on their own knowledge-intensive tasks such as prior art analysis and research and development (R&D) strategy formulation.

18.
Traditional microblog advertisement filtering methods ignore the data sparsity of microblog ad text, its semantic information, and the ad's background-domain features. To address these problems, an ad filtering method based on feature expansion with latent Dirichlet allocation (LDA) is proposed. First, microblogs are divided into normal microblogs and advertisement microblogs, and separate LDA topic models are built to predict the topic distribution of each short text, with the topic words serving as the basis for feature expansion. Second, during feature expansion, text category information is used to extract background-domain features and reduce their influence on text classification. Finally, the expanded feature vectors are fed to a classifier, and ads are filtered according to the classification results of a support vector machine (SVM). Experimental results show that, compared with existing filtering methods based only on short-text classification, accuracy improves by 4 percentage points on average. The method thus effectively expands text features, reduces the influence of background-domain features, and is better suited to large-scale microblog ad filtering.

19.
To address mutual information (MI)'s neglect of class information and its bias toward low-frequency words in feature selection, the LDA-σ method is proposed. The method extracts latent topics with the latent Dirichlet allocation (LDA) model and uses the standard deviation of the word-topic mutual information as the feature evaluation function. In feature selection and classification experiments on the Reuters-21578 corpus, LDA-σ achieves a micro-averaged F1 of up to 0.9096, and its macro-averaged F1, up to 0.7823, exceeds that of the other algorithms. The experiments show that LDA-σ is effective for text feature selection.

20.
Programmers' copying, pasting, and modification of source code produces large amounts of cloned code in software, and during version evolution, inconsistent changes to clones are a major cause of program errors and increased maintenance cost. To address this problem, a new method is proposed: first, build mappings between clone groups across versions; then, extract topics from the lineages of clone groups with the latent Dirichlet allocation (LDA) model; finally, predict the likelihood of inconsistent changes to the cloned code. Experiments on eight versions of a software system show clearly distinguishable results: the method effectively predicts the likelihood of inconsistent change and supports the assessment of software quality and trustworthiness.
