
Early Warning of Critical Illness Based on Explicable Hierarchical Attention Mechanism
Citation: WANG Tiangang, ZHANG Xiaobin, MA Hongye, CAI Hongwei. Early Warning of Critical Illness Based on Explicable Hierarchical Attention Mechanism[J]. Computer Engineering and Applications, 2021, 57(5): 131-138. DOI: 10.3778/j.issn.1002-8331.1911-0175
Authors: WANG Tiangang, ZHANG Xiaobin, MA Hongye, CAI Hongwei
Affiliation: 1. College of Computer Science, Xi’an Polytechnic University, Xi’an 710048, China; 2. Department of Network Information, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China; 3. Department of Critical Care Medicine, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, China
Fund: Hospital Foundation Soft Science Project of the First Affiliated Hospital of Xi’an Jiaotong University; Shaanxi Provincial Social Development Science and Technology Project
Abstract: Accuracy and interpretability are the two main factors that determine whether a prediction model can be applied successfully. Statistical models such as logistic regression are widely used because they are easy to interpret, even though their predictive accuracy is limited. In contrast, deep learning “black-box” models based on Recurrent Neural Networks (RNN) or Convolutional Neural Networks (CNN) achieve high accuracy but are usually difficult to understand. Balancing these two factors in the medical field is a major challenge for current research. Through an experimental analysis of clinical information system (CIS) data collected at a tertiary hospital, this paper establishes an Interpretable Hierarchical Attention Network (IHAN) based on output optimization to give early warning of severe and critical conditions that patients may develop during the rescue process. IHAN outperforms other neural network models in experimental accuracy and, imitating human behavior, focuses on abnormalities in patients’ physiological data along two dimensions, time and risk factors, thereby achieving better interpretability while maintaining high accuracy.
Keywords: hierarchical attention mechanism; Recurrent Neural Network (RNN); early warning of disease; interpretability
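The abstract describes a two-level attention design: attention over risk factors (physiological indicators) at each time step, and attention over the time steps of an RNN-encoded sequence. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, not the authors’ published IHAN implementation; the class name IHANSketch, the choice of a GRU encoder, the layer layout, and all sizes are assumptions made for illustration only.

# A minimal sketch of two-level (hierarchical) attention over physiological
# time series, assuming PyTorch. IHANSketch, feature_attn and time_attn are
# hypothetical names, not the authors' published implementation.
import torch
import torch.nn as nn

class IHANSketch(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Feature-level attention: scores each risk factor at every time step.
        self.feature_attn = nn.Linear(n_features, n_features)
        # GRU encodes the attention-weighted measurements over time.
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        # Time-level attention: scores each time step's hidden state.
        self.time_attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, x):                                    # x: (batch, time, n_features)
        alpha = torch.softmax(self.feature_attn(x), dim=-1)  # risk-factor weights
        h, _ = self.gru(alpha * x)                           # (batch, time, hidden)
        beta = torch.softmax(self.time_attn(h), dim=1)       # time-step weights
        context = (beta * h).sum(dim=1)                      # weighted patient summary
        logit = self.classifier(context)
        # alpha and beta expose which indicators and time points drove the score.
        return torch.sigmoid(logit), alpha, beta

# Usage: 8 patients, 24 hourly records, 12 physiological indicators.
model = IHANSketch(n_features=12)
risk, alpha, beta = model(torch.randn(8, 24, 12))
print(risk.shape, alpha.shape, beta.shape)                   # (8, 1) (8, 24, 12) (8, 24, 1)

Returning the attention weights alpha and beta alongside the risk score is what supports the kind of interpretability the abstract claims: one can inspect which indicators and which time points contributed most to a given warning.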
