Knowledge Collaborative Fine-tuning for Low-resource Knowledge Graph Completion
Citation: ZHANG Ning-Yu, XIE Xin, CHEN Xiang, DENG Shu-Min, YE Hong-Bin, CHEN Hua-Jun. Knowledge Collaborative Fine-tuning for Low-resource Knowledge Graph Completion[J]. Journal of Software, 2022, 33(10): 3531-3545
Authors: ZHANG Ning-Yu  XIE Xin  CHEN Xiang  DENG Shu-Min  YE Hong-Bin  CHEN Hua-Jun
Affiliation: AZFT Joint Lab for Knowledge Engine, Zhejiang University, Hangzhou 310028, China; Hangzhou Innovation Center, Zhejiang University, Hangzhou 310028, China
Funding: National Natural Science Foundation of China (91846204, U19B2027)
Abstract: Knowledge graph completion makes a knowledge graph more complete. Most existing work on knowledge graph completion assumes that the entities or relations in the graph have sufficient triple instances. In practice, however, general domains contain a large number of long-tail triples, and it is difficult to obtain large amounts of high-quality annotated data in vertical domains. To address this problem, this paper proposes a knowledge collaborative fine-tuning approach for low-resource knowledge graph completion. The approach constructs initial knowledge graph completion prompts from existing structured knowledge and proposes a collaborative fine-tuning algorithm that learns the optimal templates, labels, and model parameters. It exploits both the explicit structured knowledge in the knowledge graph and the implicit factual knowledge in the language model, and it applies to both link prediction and relation extraction (see the illustrative sketch after this record). Experiments show that the approach achieves state-of-the-art performance on three knowledge graph reasoning datasets and five relation extraction datasets.

Keywords: low-resource  knowledge graph completion  link prediction  relation extraction  pre-trained language model
Received: 2021-07-20
Revised: 2021-08-30
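
To make the prompt-based formulation in the abstract concrete, the following is a minimal sketch of cloze-style link prediction with a masked language model. It is an illustration under assumptions, not the authors' released implementation: the fixed hand-written template in verbalize stands in for the learned templates and labels described in the paper, and bert-base-uncased, score_tail, and the single-token candidate restriction are all simplifications chosen here for brevity.

# Minimal sketch: score candidate tail entities for a (head, relation, ?)
# query by the probability a masked LM assigns them at the [MASK] position.
# Illustrative only; the paper learns templates/labels jointly with the model.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def verbalize(head: str, relation: str) -> str:
    # Turn a (head, relation, ?) query into a cloze-style prompt.
    # A fixed template is used here; the paper optimizes this jointly.
    return f"{head} {relation.replace('_', ' ')} {tokenizer.mask_token}."

@torch.no_grad()
def score_tail(head: str, relation: str, candidate: str) -> float:
    # Score a single-token candidate tail by its masked-LM log-probability.
    prompt = verbalize(head, relation)
    inputs = tokenizer(prompt, return_tensors="pt")
    logits = model(**inputs).logits
    # Locate the [MASK] position in the input sequence.
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    cand_id = tokenizer.convert_tokens_to_ids(candidate)
    return logits[0, mask_pos].log_softmax(dim=-1)[cand_id].item()

# Rank candidate tails for the hypothetical query (Hangzhou, located_in, ?).
candidates = ["china", "france", "tokyo"]
ranked = sorted(candidates,
                key=lambda c: score_tail("Hangzhou", "located_in", c),
                reverse=True)
print(ranked)

In the paper's setting, such prompts would be initialized from the structured triples already in the graph and then refined by the collaborative fine-tuning algorithm, rather than fixed as above.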
