Similar Documents
20 similar documents found (search time: 187 ms)
1.
Existing methods that map each word to a single vector ignore polysemy and thus suffer from ambiguity; methods that map a word to multiple vectors or to a Gaussian distribution do account for polysemy, but to varying degrees fail to exploit how word order, syntactic structure, and inter-word distance shape a word's meaning in a given context. Taking these problems together, this paper proposes a method that combines a gated convolution mechanism encapsulated in non-residual blocks with a hierarchical attention mechanism: at the sub-sense layer and the composed-sense layer of the words in the selected context window, it derives the composed semantic vector of the target word under an asymmetric context window and uses it to predict the target word, thereby learning multi-sense word vectors from a given corpus. On a small corpus, the multi-sense vectors obtained this way improve accuracy on the semantic portion of the word analogy task by up to 1.42% over the baselines; on word similarity datasets such as WordSim353, MC, RG, and RW they achieve an average gain of 2.11 over the baselines, with a maximum of 5.47. In language modeling experiments, the method's language model also significantly outperforms other target-word prediction approaches.
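The gating idea at the core of this method can be illustrated with a standard gated linear unit (GLU) convolution; the sketch below is a generic PyTorch rendering under assumed layer sizes, not the paper's exact non-residual block.

```python
# A minimal sketch of a gated convolution block (GLU-style),
# not the paper's exact non-residual block; sizes are illustrative.
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    def __init__(self, embed_dim: int, kernel_size: int = 3):
        super().__init__()
        # Produce 2*embed_dim channels: half are values, half are gates.
        self.conv = nn.Conv1d(embed_dim, 2 * embed_dim, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim) -> convolve over the sequence axis
        h = self.conv(x.transpose(1, 2))
        value, gate = h.chunk(2, dim=1)
        return (value * torch.sigmoid(gate)).transpose(1, 2)

block = GatedConvBlock(embed_dim=128)
out = block(torch.randn(4, 20, 128))   # (4, 20, 128)
```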

2.
Semantic representation of text is a difficult research problem in natural language processing and machine learning. To address the loss of semantics in current text representations, this paper proposes Sem2vec (semantic to vector), a text semantic enhancement method built on the LDA topic model and the Word2vec model. The model uses LDA to obtain each word's topic distribution and computes the topic similarity between a word and its context words; this topic-semantic information is fused into the word vector and replaces the one-hot vector as input to Sem2vec. The model's optimal parameters are trained under a maximized log-likelihood objective, and the model finally outputs enhanced semantic word vectors, from which an enhanced semantic representation of the text is further derived. Experiments on different datasets show that, compared with other classic models, semantic similarity between Sem2vec word vectors is computed more accurately. In addition, the text semantic vectors obtained from Sem2vec improve classification results by 0.58% to 3.5% over other classic models across several text classification algorithms, while also improving runtime performance.
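A minimal sketch of the topic-similarity ingredient, assuming gensim's LdaModel as the topic model; the toy corpus and the `topic_vector`/`topic_similarity` helpers are illustrative stand-ins, not Sem2vec's actual training code.

```python
# Derive per-word topic vectors from a gensim LDA model and score
# word/context topic similarity (illustrative only).
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [["topic", "models", "capture", "semantics"],
        ["word", "vectors", "capture", "context"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, id2word=dictionary, num_topics=2, passes=20)

def topic_vector(word: str) -> np.ndarray:
    # Dense distribution of the word over all topics.
    vec = np.zeros(lda.num_topics)
    for topic_id, prob in lda.get_term_topics(dictionary.token2id[word],
                                              minimum_probability=0.0):
        vec[topic_id] = prob
    return vec

def topic_similarity(word: str, context: list[str]) -> float:
    # Cosine similarity between the word's topic vector and the
    # summed topic vectors of its context words.
    w, c = topic_vector(word), sum(topic_vector(t) for t in context)
    denom = np.linalg.norm(w) * np.linalg.norm(c)
    return float(w @ c / denom) if denom else 0.0

print(topic_similarity("capture", ["semantics", "context"]))
```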

3.
As informatization spreads across more domains, text corpora for specific domains keep emerging. In such domains, e.g., healthcare and telecommunications, data are usually small in scale because of security and sensitivity constraints, and traditional word embedding models struggle to produce useful results. On the other hand, directly applying existing pre-trained language models yields many out-of-vocabulary (OOV) words that cannot be represented as vectors, hurting downstream task performance. Many researchers have therefore studied how to exploit fine-grained semantic information to obtain high-quality OOV word vectors. However, most current OOV embedding models target English corpora and handle the fine-grained semantics of Chinese words only through simple concatenation or mapping, making it hard to obtain effective representations for Chinese OOV words. To address this, a fine-grained knowledge graph is first built from Chinese word-formation rules, i.e., the characters a word contains and the components and pinyin of each character, so that it covers not only character-word relations but also the rich, many-to-many relations among fine-grained semantic units such as pinyin and characters or components and characters. A graph convolutional algorithm is then run over the knowledge graph to model the deeper relations among the fine-grained semantic units and between them and word semantics. In addition, a graph readout is constructed over subgraph structures to further mine the compositional relations between fine-grained semantic information and word semantics, improving the model's accuracy in OOV embedding inference. Experimental results show that…
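The graph convolution step can be illustrated with a single plain GCN layer over a toy character/component/word graph; the adjacency matrix, feature sizes, and normalization below are generic assumptions, not the paper's graph construction.

```python
# A minimal sketch of one GCN layer with symmetric normalization.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).rsqrt().diag()
        return torch.relu(self.linear(d @ a @ d @ x))

# 4 nodes (say, two characters, one component, one word), 8-dim features.
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float)
layer = GCNLayer(8, 16)
out = layer(torch.randn(4, 8), adj)   # (4, 16)
```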

4.
Because existing sentence-vector learning methods neither learn relational knowledge well nor represent complex semantic relations, a Relation Information Sentence Vector model (RISV) based on the PV-DM model and a relation information model is proposed. RISV takes PV-DM as the base sentence-vector training model and adds relational knowledge constraints so that the improved model can learn the relations between words in text; the Relation Constrained Model (RCM) serves as a pre-training model to further integrate semantic relation constraints. The effectiveness of RISV is validated on two tasks, document classification and short-text semantic similarity. Experimental results show that sentence vectors learned with RISV represent text better.
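A minimal sketch of the PV-DM base model that RISV builds on, using gensim's Doc2Vec (dm=1 selects PV-DM); the relation constraints from the paper are not reproduced here, and the sentences are toy data.

```python
# Train PV-DM sentence vectors and infer a vector for unseen text.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

sentences = [["sentence", "vectors", "encode", "meaning"],
             ["relations", "between", "words", "matter"]]
docs = [TaggedDocument(words, tags=[i]) for i, words in enumerate(sentences)]

model = Doc2Vec(docs, dm=1, vector_size=64, window=3, min_count=1, epochs=50)
vec = model.infer_vector(["words", "encode", "relations"])
print(vec.shape)   # (64,)
```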

5.
Existing Uyghur named entity recognition mainly uses statistical learning based on conditional random fields, which depends on hand-crafted feature engineering and domain knowledge. To address this, the paper proposes a deep-neural-network approach and introduces different feature vector representations. First, a word vector model trained on a large unlabeled corpus provides a semantically informed vector for each word; second, a Bi-LSTM extracts a character-level vector for each word; the word and character-level vectors are then combined, either by direct concatenation or with an attention mechanism, into a joint vector representation; finally, a Bi-LSTM-CRF deep neural network performs named entity tagging. Experimental results show that the Bi-LSTM-CRF method taking the attention-based joint representation as input reaches an F1 of 90.13% on Uyghur named entity recognition.
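A minimal sketch of the joint word-plus-character representation, using the simple concatenation variant; the CRF tagging layer and the attention fusion are omitted, and all dimensions are illustrative assumptions.

```python
# Character-level Bi-LSTM encoder concatenated with a word vector.
import torch
import torch.nn as nn

class CharWordEncoder(nn.Module):
    def __init__(self, n_chars, char_dim=25, char_hidden=25, word_dim=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 batch_first=True, bidirectional=True)
        self.out_dim = word_dim + 2 * char_hidden

    def forward(self, word_vec, char_ids):
        # word_vec: (batch, word_dim); char_ids: (batch, max_word_len)
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        char_vec = torch.cat([h[0], h[1]], dim=-1)  # fwd + bwd final states
        return torch.cat([word_vec, char_vec], dim=-1)

enc = CharWordEncoder(n_chars=60)
joint = enc(torch.randn(8, 100), torch.randint(0, 60, (8, 12)))
print(joint.shape)   # (8, 150)
```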

6.
Entity relation extraction based on a multi-channel convolutional neural network
In entity relation extraction, traditional statistical learning methods require laborious feature construction, while existing deep learning methods rely on the representational capacity of a single word embedding. To address this, a multi-channel convolutional neural network model is proposed. First, the input sentence is mapped with different word embeddings, which feed the model's different channels; then a convolutional neural network extracts features automatically; finally, a softmax classifier outputs the relation type, completing the relation extraction task. Compared with other models, this model captures richer semantic information from the input sentence and automatically learns more discriminative features. Experimental results on the SemEval-2010 Task 8 dataset show that the multi-channel model is better suited to relation extraction than models using a single word embedding.
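A minimal sketch of a multi-channel text CNN, where each channel is a separate embedding table (one per pretrained vector set); filter counts, kernel size, and the 19-class output (SemEval-2010 Task 8's label set) are illustrative choices, not the paper's exact configuration.

```python
# Multi-channel embeddings stacked as a 2D "image", one conv, max pooling.
import torch
import torch.nn as nn

class MultiChannelCNN(nn.Module):
    def __init__(self, vocab, dim=100, channels=2, n_classes=19, n_filters=100):
        super().__init__()
        self.embs = nn.ModuleList(nn.Embedding(vocab, dim)
                                  for _ in range(channels))
        self.conv = nn.Conv2d(channels, n_filters, (3, dim), padding=(1, 0))
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, ids):
        # ids: (batch, seq_len); stack per-channel embeddings
        x = torch.stack([e(ids) for e in self.embs], dim=1)
        h = torch.relu(self.conv(x)).squeeze(3)   # (batch, filters, seq_len)
        return self.fc(h.max(dim=2).values)       # max-over-time pooling

model = MultiChannelCNN(vocab=5000)
logits = model(torch.randint(0, 5000, (4, 30)))   # (4, 19)
```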

7.
To address polysemy and synonymy in single word vectors, a topic-word-vector construction method based on the HDP topic model is proposed, taking Khmer as an example. Topic information is fused into the single word vectors: first, the HDP topic model assigns each word a topic label; the label is then treated as a pseudo-word and fed into the Skip-Gram model together with the word, training topic vectors and word vectors simultaneously; finally, the topic vector carrying the text's topic information is concatenated with the trained word vector, yielding a topic word vector for each word in the text. Compared with a word vector model without topic information, the method achieves better results on both word similarity and text classification, and the resulting topic word vectors carry more semantic information.
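A minimal sketch of the pseudo-word trick: topic labels are injected into the Skip-Gram training data, then word and topic vectors are concatenated. The hard-coded `TOPIC_0`/`TOPIC_1` labels stand in for HDP output, which the sketch does not compute.

```python
# Skip-Gram with topic pseudo-words, then concatenation.
import numpy as np
from gensim.models import Word2Vec

# Each sentence gets its (assumed) HDP topic label appended as a pseudo-word.
sentences = [["bank", "money", "loan", "TOPIC_0"],
             ["bank", "river", "water", "TOPIC_1"]]

model = Word2Vec(sentences, sg=1, vector_size=50, window=4,
                 min_count=1, epochs=100)

def topic_word_vector(word: str, topic: str) -> np.ndarray:
    # Concatenate the word vector with its topic's pseudo-word vector.
    return np.concatenate([model.wv[word], model.wv[topic]])

print(topic_word_vector("bank", "TOPIC_0").shape)   # (100,)
```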

8.
Existing graph-based keyword extraction methods fail to effectively integrate the latent semantic relations between words in a text sequence. To address this, EPRank, a graph-based keyword extraction algorithm that fuses word vectors and position information, is proposed. A word-vector representation model learns a vector for each word in the target document; these vectors, which reflect the latent semantic relations between words, are combined with position features and fused into the PageRank scoring model; the top-ranked words or phrases are then selected as the document's keywords. Experimental results show that EPRank outperforms five existing keyword extraction methods on all evaluation metrics on the KDD and SIGIR datasets.
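A minimal sketch of embedding-weighted PageRank with a position bias, in the spirit of EPRank but not its exact formulation; the random vectors stand in for learned embeddings, and folding position into the personalization vector is an assumption of this sketch.

```python
# Word graph with cosine-similarity edge weights; PageRank with a
# position-based personalization vector.
import networkx as nx
import numpy as np

words = ["graph", "keyword", "extraction", "method"]
vecs = {w: np.random.default_rng(i).normal(size=50)
        for i, w in enumerate(words)}

g = nx.Graph()
for i, u in enumerate(words):
    for v in words[i + 1:]:
        sim = vecs[u] @ vecs[v] / (np.linalg.norm(vecs[u])
                                   * np.linalg.norm(vecs[v]))
        g.add_edge(u, v, weight=float(max(sim, 0.0)) + 1e-6)

# Bias the walk toward early positions (first occurrences matter more).
position_bias = {w: 1.0 / (i + 1) for i, w in enumerate(words)}
scores = nx.pagerank(g, weight="weight", personalization=position_bias)
print(sorted(scores, key=scores.get, reverse=True))
```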

9.
Current entity relation extraction methods for electronic medical records have two problems: they ignore noise in position vectors and lack rich semantic representations. This paper proposes an entity relation extraction model based on position denoising and semantic enrichment. The model first uses position information together with word vectors trained on in-domain corpora to compute an attention weight for each word, then combines these weights with word vectors trained on general-domain corpora, thereby denoising the position vectors and injecting richer semantics; finally it classifies the relation type from the weighted word vectors. Evaluated on the 2010 i2b2/VA corpus, the method reaches an F1 of 76.47%, the best result reported on that corpus.
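A minimal sketch of the weighting idea: attention scores computed from position and in-domain vectors, applied to general-domain vectors. The linear scoring function and all dimensions are assumptions of this sketch, not the paper's model.

```python
# Per-token attention from (domain vector, position vector), applied
# to general-domain word vectors.
import torch
import torch.nn as nn

class PositionDenoisingAttention(nn.Module):
    def __init__(self, dim=100, pos_dim=20):
        super().__init__()
        self.score = nn.Linear(dim + pos_dim, 1)

    def forward(self, domain_vecs, pos_vecs, general_vecs):
        # All inputs: (batch, seq_len, *); one weight per token.
        scores = self.score(torch.cat([domain_vecs, pos_vecs], dim=-1))
        weights = torch.softmax(scores, dim=1)       # (batch, seq_len, 1)
        return weights * general_vecs                # weighted word vectors

attn = PositionDenoisingAttention()
out = attn(torch.randn(2, 15, 100), torch.randn(2, 15, 20),
           torch.randn(2, 15, 100))
print(out.shape)   # (2, 15, 100)
```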

10.
Semantic relation recognition, the process of identifying the semantic relations contained in documents, is one of the key parts of ontology construction. When building an ontology for the petroleum domain, relation recognition is harder because petroleum documents contain many compound terms. Current recognition algorithms are mainly based on association rules and lack domain specificity. By analyzing the characteristics of petroleum documents, a semantic relation recognition algorithm based on improved word vectors is proposed. Built on the Continuous Bag-Of-Words (CBOW) model, it extends training with petroleum terminology and introduces negative sampling and subsampling to improve training accuracy and efficiency; the vector features then train a Support Vector Machine (SVM) classifier for semantic relation recognition. Experimental results show that the word vectors trained this way can accurately recognize semantic relations in the petroleum domain, where the method has a clear advantage.
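A minimal sketch of the pipeline: CBOW vectors trained with negative sampling (`negative=`) and subsampling (`sample=`), with concatenated term-pair vectors feeding an SVM. The corpus, term pairs, and labels are toy stand-ins for petroleum-domain data.

```python
# CBOW + negative sampling + subsampling, then SVM on pair features.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

corpus = [["drilling", "fluid", "reduces", "friction"],
          ["reservoir", "pressure", "drives", "production"]]
w2v = Word2Vec(corpus, sg=0, vector_size=50, window=3, min_count=1,
               negative=5, sample=1e-3, epochs=200)

# Represent a candidate term pair as the concatenation of its vectors.
pairs = [("drilling", "fluid"), ("reservoir", "pressure"),
         ("fluid", "production"), ("friction", "reservoir")]
X = np.array([np.concatenate([w2v.wv[a], w2v.wv[b]]) for a, b in pairs])
y = np.array([1, 1, 0, 0])        # 1 = related, 0 = unrelated (toy labels)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:1]))
```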

11.
Cross-domain word representation aims to learn high-quality semantic representations in an under-resourced domain by leveraging information from a resource-rich domain. However, most existing methods mainly transfer the semantics of common words across domains, ignoring the semantic relations among domain-specific words. In this paper, we propose a domain structure-based transfer learning method that learns cross-domain representations by leveraging the relations among domain-specific words. To accomplish this, we first construct a semantic graph that captures the latent domain structure using domain-specific co-occurrence information. Then, in the domain adaptation process, beyond domain alignment, we employ Laplacian Eigenmaps to ensure the domain structure is consistently distributed in the learned embedding space. As such, the learned cross-domain word representations not only capture shared semantics across domains but also maintain the latent domain structure. We performed extensive experiments on two tasks, sentiment analysis and query expansion. Experimental results show the effectiveness of our method for tasks in under-resourced domains.
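A minimal sketch of the Laplacian Eigenmaps step: embedding domain-specific words so that co-occurrence neighbours stay close. The affinity matrix below is a toy stand-in for domain co-occurrence statistics, and scikit-learn's SpectralEmbedding is used as a generic implementation, not the paper's.

```python
# Spectral embedding of a precomputed co-occurrence affinity matrix.
import numpy as np
from sklearn.manifold import SpectralEmbedding

words = ["valve", "pipeline", "compressor", "plot", "character", "novel"]
# Symmetric co-occurrence affinities (two loose clusters).
A = np.array([[0, 5, 4, 0, 0, 0],
              [5, 0, 3, 0, 0, 0],
              [4, 3, 0, 0, 0, 1],
              [0, 0, 0, 0, 4, 5],
              [0, 0, 0, 4, 0, 3],
              [0, 0, 1, 5, 3, 0]], dtype=float)

emb = SpectralEmbedding(n_components=2, affinity="precomputed")
coords = emb.fit_transform(A)
for w, c in zip(words, coords):
    print(w, c.round(3))
```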

12.
As the basic semantic units of a language model, words are strongly associated with their context words across the semantic space; likewise, in a language model the meaning of the current word can be inferred from its context words. Word representation learning uses a class of shallow neural network models to map the associations between words and their context words into a low-dimensional vector space. However, existing word representation learning methods usually consider only the structural association between a word and its context, ignoring the intrinsic semantic information the word itself carries. This paper therefore proposes the DEWE word representation learning algorithm, which considers the structural association between words and contexts while also folding a word's own semantic information into the representation model, so that the learned representations share both structural and semantic commonalities. Experimental results show that DEWE is a practical word representation learning method; compared with the baseline algorithms used in the paper, it achieves excellent performance on six word similarity evaluation datasets.

13.
Existing seq2seq models often suffer from semantic irrelevance when generating summaries and do not consider the role keywords play in summary generation. To address this, this paper proposes a Chinese news abstractive summarization method with keyword fusion. First, the source text words are fed into a Bi-LSTM in order; the resulting hidden states are input to a sliding convolutional neural network, extracting local features between each word and its neighbors. Second, keyword information and a gating unit filter the news text information to remove redundancy. Third, the global features of each word are obtained through a self-attention mechanism, and encoding yields a hierarchical combination of local and global word feature representations. Finally, the encoded word features are fed into an LSTM decoder with attention to generate the summary. The method models the n-gram features of news words with the sliding convolutional network and, on that basis, uses self-attention to obtain hierarchical local and global word representations; at the same time it accounts for the important role of keywords in abstractive summarization and uses the gating unit to remove redundant information, obtaining more accurate news text information. Experiments on the Sogou news corpus show that the method effectively improves summary quality, raising ROUGE-1, ROUGE-2, and ROUGE-L scores.
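A minimal sketch of a keyword-conditioned gating unit of the kind the method describes; the sigmoid-gate formulation and dimensions are assumptions of this sketch, not taken from the paper.

```python
# A gate computed from a keyword summary vector filters token states.
import torch
import torch.nn as nn

class KeywordGate(nn.Module):
    def __init__(self, hidden: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden, hidden)

    def forward(self, states: torch.Tensor,
                keyword: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden); keyword: (batch, hidden)
        k = keyword.unsqueeze(1).expand_as(states)
        g = torch.sigmoid(self.gate(torch.cat([states, k], dim=-1)))
        return g * states    # suppress tokens irrelevant to the keywords

gate = KeywordGate(hidden=256)
out = gate(torch.randn(2, 40, 256), torch.randn(2, 256))
print(out.shape)   # (2, 40, 256)
```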

14.
Sense representations have gone beyond word representations such as Word2Vec, GloVe, and FastText and achieved innovative performance on a wide range of natural language processing tasks. Although very useful in many applications, traditional approaches for generating word embeddings have a strict drawback: they produce a single vector representation for a given word, ignoring the fact that ambiguous words can assume different meanings. In this paper, we explore unsupervised sense representations which, unlike traditional word embeddings, are able to induce different senses of a word by analyzing its contextual semantics in text. The unsupervised sense representations investigated here are sense embeddings and deep neural language models. We present the first experiments carried out to generate sense embeddings for Portuguese. Our experiments show that the sense embedding model (Sense2vec) outperformed traditional word embeddings on syntactic and semantic analogy tasks, proving that the language resource generated here can improve the performance of NLP tasks in Portuguese. We also evaluated pre-trained deep neural language models (ELMo and BERT) with two transfer learning approaches, feature-based and fine-tuning, on the semantic textual similarity task. Our experiments indicate that the fine-tuned multilingual and Portuguese BERT language models achieved better accuracy than the ELMo model and the baselines.
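A minimal sketch of the Sense2vec idea: appending a coarse sense/POS tag to each token before ordinary Word2Vec training, so that e.g. "bank|NOUN" and "bank|VERB" receive separate vectors. The tags here are hand-assigned toy labels rather than output of a real tagger.

```python
# Tagged tokens give one vector per (word, sense) pair.
from gensim.models import Word2Vec

sentences = [["deposit|NOUN", "bank|NOUN", "money|NOUN"],
             ["planes|NOUN", "bank|VERB", "left|ADV"]]
model = Word2Vec(sentences, sg=1, vector_size=50, window=2,
                 min_count=1, epochs=200)
print(model.wv.similarity("bank|NOUN", "bank|VERB"))
```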

15.
Inferring query intent is significant in information retrieval tasks. Query subtopic mining aims to find possible subtopics for a given query to represent its potential intents. Subtopic mining is challenging because queries are short. Learning distributed representations of words or word sequences has developed rapidly in recent years, making a great impact on many fields, yet it is still unclear whether distributed representations are effective in alleviating the challenges of query subtopic mining. In this paper, we exploit and compare the main semantic compositions of distributed representations for query subtopic mining. Specifically, we focus on two types of distributed representations: the paragraph vector, which directly represents word sequences of arbitrary length, and word vector composition. We thoroughly investigate the impacts of the semantic composition strategies and of the types of data used to learn the distributed representations. Experiments were conducted on a public dataset offered by the National Institute of Informatics Testbeds and Community for Information Access Research (NTCIR). The empirical results show that distributed semantic representations achieve outstanding performance for query subtopic mining compared with traditional semantic representations. More insights are reported as well.
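A minimal sketch of the two compositions being compared: a paragraph vector (Doc2Vec) inferred for a subtopic string versus the average of its word vectors. The toy subtopics are stand-ins for real query data.

```python
# Paragraph vector vs. averaged word vector composition.
import numpy as np
from gensim.models import Word2Vec
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

subtopics = [["apple", "fruit", "nutrition"],
             ["apple", "iphone", "price"]]
w2v = Word2Vec(subtopics, vector_size=50, min_count=1, epochs=100)
d2v = Doc2Vec([TaggedDocument(s, [i]) for i, s in enumerate(subtopics)],
              vector_size=50, min_count=1, epochs=100)

query = ["apple", "price"]
avg_vec = np.mean([w2v.wv[w] for w in query], axis=0)   # word composition
par_vec = d2v.infer_vector(query)                       # paragraph vector
print(avg_vec.shape, par_vec.shape)   # (50,) (50,)
```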

16.
Heterogeneous networks, such as bibliographic networks and online business networks, are ubiquitous in everyday life. Nevertheless, analyzing them for high-level semantic understanding still poses a great challenge for modern information systems. In this paper, we propose HiWalk to learn distributed vector representations of the nodes in heterogeneous networks. HiWalk is inspired by state-of-the-art representation learning algorithms for both homogeneous and heterogeneous networks that are based on word embedding models. Unlike existing methods in the literature, HiWalk learns vector representations of a targeted set of nodes by leveraging the other nodes as "background knowledge", maximizing the structural correlations of contiguous nodes. HiWalk decomposes the adjacency probabilities of the nodes and adopts a hierarchical random walk strategy, which makes it more effective, efficient, and focused when applied to practical large-scale heterogeneous networks. HiWalk can be widely applied in heterogeneous network environments to analyze targeted types of nodes. We further validate the effectiveness of HiWalk through multiple tasks conducted on two real-world datasets.
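A minimal sketch of the walk-then-embed recipe that HiWalk builds on, using plain uniform random walks rather than HiWalk's hierarchical strategy; the toy graph (authors a*, papers p*, venue v1) is illustrative.

```python
# DeepWalk-style: random walks over the graph, then skip-gram on the walks.
import random
from gensim.models import Word2Vec

graph = {"a1": ["p1", "p2"], "a2": ["p2"],
         "p1": ["a1", "v1"], "p2": ["a1", "a2", "v1"],
         "v1": ["p1", "p2"]}

def random_walk(start: str, length: int = 10) -> list[str]:
    walk = [start]
    for _ in range(length - 1):
        walk.append(random.choice(graph[walk[-1]]))
    return walk

walks = [random_walk(node) for node in graph for _ in range(20)]
model = Word2Vec(walks, sg=1, vector_size=32, window=4,
                 min_count=1, epochs=10)
print(model.wv.most_similar("a1", topn=2))
```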

17.
A sememe is defined in linguistics as the minimum semantic unit of language. Sememe knowledge bases are built by manually annotating sememes for words and phrases. HowNet is the best-known sememe knowledge base. It was extensively utilized in many natural language processing tasks in the era of statistical natural language processing and proved effective and helpful for understanding and using language. In the era of deep learning, although data are thought to be of vital importance, some studies work on incorporating sememe knowledge bases such as HowNet into neural network models to enhance system performance. Successful attempts have been made in tasks including word representation learning, language modeling, and semantic composition. In addition, considering the high cost of manually annotating and updating sememe knowledge bases, some work has tried to use machine learning methods to automatically predict sememes for words and phrases so as to expand the knowledge bases, and some studies try to extend HowNet to other languages by automatically predicting sememes for words and phrases in a new language. In this paper, we summarize recent studies on the application and expansion of sememe knowledge bases and point out future directions for research on sememes.

18.
Static Mongolian word vector learning methods represented by Word2Vec collapse the multiple senses a word takes in different contexts into a single vector; such context-independent text representations offer only limited gains for downstream tasks. This paper proposes a new dynamic word vector learning method for Mongolian by further pre-training a multilingual BERT model combined with a CRF and adopting two subword fusion strategies. To verify the method's effectiveness, synonym comparison experiments were run with different models on education- and literature-domain datasets built from Mongolian master's and doctoral theses of Inner Mongolia Normal University, Mongolian words were clustered with the K-means algorithm, and the method was finally validated on an embedding-based topic word mining task. Experimental results show that the word vectors BERT learns are of higher quality than Word2Vec's: vectors of similar words lie very close in the vector space while those of dissimilar words lie farther apart, and the topic words obtained in the topic mining task are closely related.
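A minimal sketch of the dynamic-embedding-plus-clustering evaluation: contextual vectors from multilingual BERT, mean-pooled and clustered with K-means. The English sentences are stand-ins for the Mongolian corpus, and the further pre-training and CRF components are not reproduced.

```python
# Contextual BERT vectors clustered with K-means.
import torch
from sklearn.cluster import KMeans
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")

sentences = ["The teacher explained the lesson.",
             "Students study in the classroom.",
             "The novel tells a long story.",
             "Poetry uses rhythm and imagery."]
with torch.no_grad():
    enc = tok(sentences, padding=True, return_tensors="pt")
    hidden = bert(**enc).last_hidden_state
    # Mean-pool token states into one vector per sentence.
    mask = enc["attention_mask"].unsqueeze(-1)
    vecs = (hidden * mask).sum(1) / mask.sum(1)

print(KMeans(n_clusters=2, n_init=10).fit_predict(vecs.numpy()))
```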

19.
Common word embedding models give each word only a single vector. A word's topic value is an important signal of polysemy and can serve as additional information for obtaining multi-prototype word vectors. Building on the skip-gram (CBOW) model and the topical structure of text, this paper studies two improved multi-prototype word vector methods and a text generative structure based on embedded representations of words and topics. Through joint training, the model simultaneously obtains text topics and embedding vectors for words and topics, using a word's topic information to derive multi-prototype word vectors and using the word and topic embeddings to learn text topics. Experiments show that the proposed method not only obtains multi-prototype word vectors with contextual semantics but also yields more strongly related text topics.

20.
Sentiment words are the basic units of sentiment analysis, so sentiment lexicons play a decisive role in it. Current lexicon construction methods use only the semantic and word-formation information of words and ignore the contexts in which they occur. Consequently, traditional semantic methods cannot derive sentiment weights for words whose semantics are unknown, and for words that take on new usages as their contexts change, semantic methods can hardly compute their true weights. To address this, the paper first proposes, from character formation up to the document level, …
