Quantitative evaluation method for interpretability of XAI based on surrogate model

Citation: LI Yao, WANG Chun-lu, ZUO Xing-quan, HUANG Hai, DING Yi-ning, ZHANG Xiu-jian. Quantitative evaluation method for interpretability of XAI based on surrogate model[J]. Control and Decision, 2024, 39(2): 680-688.
Authors: LI Yao, WANG Chun-lu, ZUO Xing-quan, HUANG Hai, DING Yi-ning, ZHANG Xiu-jian
Affiliation: School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service of Ministry of Education, Beijing 100876, China; School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China; Key Laboratory of Trustworthy Distributed Computing and Service of Ministry of Education, Beijing 100876, China; Beijing Aerospace Institute for Metrology and Measurement Technology, Beijing 100076, China; Key Laboratory of Artificial Intelligence Measurement and Standards for State Market Regulation, Beijing 100076, China

Abstract: Explainable artificial intelligence (XAI) has grown rapidly in recent years, and interpretation techniques for many AI models have emerged, but quantitative methods for evaluating the interpretability of XAI are still lacking. Most existing evaluation methods rely on user experiments, which are time-consuming and costly. For surrogate-model-based XAI, this paper proposes a quantitative interpretability evaluation method. First, evaluation indices tailored to this kind of XAI are designed along with their computation methods, and an index system of 10 quantitative indices is constructed to evaluate XAI interpretability along five dimensions: consistency, user comprehension, causality, effectiveness, and stability. Then, for each dimension containing multiple indices, a comprehensive evaluation model combining the entropy weight method with TOPSIS is established to evaluate interpretability in that dimension. Finally, the proposed method is applied to evaluate the interpretability of six XAIs based on rule surrogate models. Experimental results show that the method can reveal the interpretability level of each XAI in different dimensions, so that users can choose a suitable XAI according to their needs.

Keywords: explainable artificial intelligence; interpretability evaluation; evaluation model; surrogate model; rule model; quantitative evaluation
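The comprehensive evaluation model described in the abstract combines the entropy weight method (to derive objective index weights from the data) with TOPSIS (to rank alternatives by closeness to the ideal solution). Below is a minimal sketch of that standard combination, not the paper's exact implementation; the toy decision matrix and function names are illustrative, and all indices are assumed to be benefit-type (larger is better).

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indices whose values vary more across
    alternatives carry more information and receive larger weights.
    X: (m alternatives) x (n indices), positive, benefit-type values."""
    P = X / X.sum(axis=0)                      # column-wise proportions
    m = X.shape[0]
    # entropy of each index; small epsilon guards against log(0)
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)
    d = 1.0 - E                                # degree of diversification
    return d / d.sum()                         # weights sum to 1

def topsis(X, w):
    """TOPSIS: relative closeness of each alternative to the ideal solution."""
    V = X / np.linalg.norm(X, axis=0) * w      # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0) # ideal / anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)        # closeness scores in [0, 1]

# Toy example: 3 hypothetical XAI methods scored on 2 benefit-type indices.
X = np.array([[0.9, 0.7],
              [0.6, 0.8],
              [0.3, 0.4]])
w = entropy_weights(X)
scores = topsis(X, w)
print(scores.argsort()[::-1])  # ranking of the alternatives, best first
```

Within the paper's framework, each row would be one XAI method and each column one of the quantitative indices belonging to a single dimension; the closeness score then serves as that dimension's interpretability score.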