
Survey on explainable knowledge graph reasoning methods
Cite this article: Yi XIA, Mingjing LAN, Xiaohui CHEN, Junyong LUO, Gang ZHOU, Peng HE. Survey on explainable knowledge graph reasoning methods[J]. Chinese Journal of Network and Information Security, 2022, 8(5): 1-25. DOI: 10.11959/j.issn.2096-109x.2022063
Authors: Yi XIA, Mingjing LAN, Xiaohui CHEN, Junyong LUO, Gang ZHOU, Peng HE
Affiliation: Information Engineering University, Zhengzhou 450001, Henan, China
Funding: National Natural Science Foundation of China (41801313); Henan Province Science and Technology Research Program (222102210081); Henan Province Science and Technology Research Program (222300420590)
Abstract: In recent years, artificial intelligence research based on deep learning models has made continuous breakthroughs, but most of these models are black boxes, which hinders the human cognitive reasoning process and leaves high-performance complex algorithms, models, and systems generally lacking transparency and interpretability in decision making. In key fields with strict interpretability requirements, such as national defense, medical care, and cyber and information security, the lack of interpretability in reasoning methods strongly affects both the reasoning results and their traceability. Interpretability therefore needs to be built into these algorithms and systems, so that explicit explainable knowledge reasoning supports the related prediction tasks and forms a reliable behavior explanation mechanism. As one of the latest forms of knowledge representation, the knowledge graph models semantic networks and describes the entities and relations of the objective world in a structured form, and it is widely used for knowledge reasoning. Building on discrete symbolic representations, knowledge graph reasoning explains the inference process through auxiliary means such as reasoning paths and logical rules, providing an important route toward explainable artificial intelligence. This paper gives a comprehensive survey of explainable knowledge graph reasoning. It introduces the concepts of explainable artificial intelligence and knowledge reasoning, presents the latest research progress on explainable knowledge graph reasoning methods in recent years, summarizes the different methods from the perspective of the three research paradigms of artificial intelligence, and discusses the research prospects and future directions of explainable knowledge graph reasoning.
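The abstract describes knowledge graph reasoning as operating on discrete symbolic triples and explaining its conclusions through reasoning paths and logical rules. The following minimal Python sketch illustrates that general idea with a toy graph and a single hand-written rule; all entities, relations, and the rule are invented for illustration and are not taken from the surveyed methods.

    # Hypothetical sketch: facts are (head, relation, tail) triples, and a
    # hand-written logical rule derives a new triple together with the
    # reasoning path that justifies it.

    # A toy knowledge graph as a set of symbolic triples (invented data).
    KG = {
        ("Alice", "born_in", "Zhengzhou"),
        ("Zhengzhou", "located_in", "Henan"),
        ("Bob", "born_in", "Luoyang"),
        ("Luoyang", "located_in", "Henan"),
    }

    # Rule: born_in(x, y) AND located_in(y, z) => lives_in_province(x, z)
    def apply_rule(kg):
        """Derive new triples and keep the grounded path as the explanation."""
        inferred = []
        for (x, r1, y) in kg:
            if r1 != "born_in":
                continue
            for (y2, r2, z) in kg:
                if r2 == "located_in" and y2 == y:
                    explanation = [(x, r1, y), (y, r2, z)]
                    inferred.append(((x, "lives_in_province", z), explanation))
        return inferred

    for triple, path in apply_rule(KG):
        print("inferred:", triple)
        print("  because:", " -> ".join(f"{h} -{r}-> {t}" for h, r, t in path))

The explanation attached to each inferred triple is exactly the grounded reasoning path, which is what makes this style of symbolic inference transparent compared with a black-box prediction.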

Keywords: knowledge reasoning; knowledge graph; explainable artificial intelligence; information security

Survey on explainable knowledge graph reasoning methods
Yi XIA, Mingjing LAN, Xiaohui CHEN, Junyong LUO, Gang ZHOU, Peng HE. Survey on explainable knowledge graph reasoning methods[J]. Chinese Journal of Network and Information Security, 2022, 8(5): 1-25. DOI: 10.11959/j.issn.2096-109x.2022063
Authors: Yi XIA, Mingjing LAN, Xiaohui CHEN, Junyong LUO, Gang ZHOU, Peng HE
Affiliation:Information Engineering University, Zhengzhou 450001, China
Abstract: In recent years, deep learning models have achieved remarkable progress in the prediction and classification tasks of artificial intelligence systems. However, most current deep learning models are black boxes, which hinders the human cognitive reasoning process. Meanwhile, as artificial intelligence achieves continuous breakthroughs in research and applications, high-performance complex algorithms, models, and systems generally lack transparency and interpretability in decision making. This makes it difficult to apply these technologies in fields with strict interpretability requirements, such as national defense, medical care, and cyber security. Therefore, interpretability should be integrated into these algorithms and systems in the process of knowledge reasoning. By carrying out explicit explainable reasoning based on discrete symbolic representations and combining technologies from different fields, a behavior explanation mechanism can be formed, which is an important way for artificial intelligence to move from data perception to intelligent perception. A comprehensive review of explainable knowledge graph reasoning was given. The concepts of explainable artificial intelligence and knowledge reasoning were introduced briefly. The latest research progress on explainable knowledge graph reasoning methods was presented in terms of the three paradigms of artificial intelligence. Specifically, the ideas behind the algorithms and their evolution in different scenarios of explainable knowledge graph reasoning were explained in detail. Moreover, future research directions and the prospects of explainable knowledge graph reasoning were discussed.
Keywords: knowledge reasoning; knowledge graph; explainable artificial intelligence; information security