Adversarial attacks and defenses in deep learning
Authors: Ximeng LIU, Lehui XIE, Yaopeng WANG, Xuru LI
Affiliation: 1. College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350108, China; 2. Guangdong Provincial Key Laboratory of Data Security and Privacy Protection, Guangzhou 510632, China; 3. School of Computer Science and Technology, East China Normal University, Shanghai 200241, China
Funding: National Natural Science Foundation of China (U1804263); National Natural Science Foundation of China (61702105); Open Project of the Guangdong Provincial Key Laboratory of Data Security and Privacy Protection (2017B030301004-12); Key Research and Development Program of Shaanxi Province (2019KW-053)

Abstract: An adversarial example is an original sample to which imperceptible perturbations have been added in order to mislead the output decision of a deep learning model. Such examples seriously threaten the availability of a system and introduce serious security risks. To this end, representative adversarial attack methods are analyzed in detail, covering both white-box and black-box attacks. Based on the current development of adversarial attacks and defenses, recent domestic and foreign defense strategies are then reviewed, including input pre-processing, improving model robustness, and malicious-example detection. Finally, future research directions in the field of adversarial attacks and defenses are given.

Keywords: adversarial examples; adversarial attacks; adversarial defenses; deep learning security
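
As an illustration of the white-box attacks surveyed in the paper, the following is a minimal sketch of the fast gradient sign method (FGSM), a canonical way to craft an adversarial example by perturbing the input along the sign of the loss gradient. The PyTorch framing, the cross-entropy loss, and the epsilon value are illustrative assumptions, not details taken from the paper.

# Minimal FGSM sketch (illustrative; the model, data, and epsilon are assumed,
# not taken from the paper). White-box: the attacker uses the model's gradients.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return x' = clip(x + epsilon * sign(grad_x L(f(x), y)), 0, 1)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()  # gradient of the loss with respect to the input pixels
    # Step each pixel by epsilon in the direction that increases the loss,
    # then clip back to the valid image range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()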
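
As an illustration of the "improving model robustness" defenses mentioned in the abstract, below is a minimal sketch of a single adversarial-training step, in which the model is updated on adversarial examples generated from the current batch. The optimizer, the attack strength, and the reuse of the FGSM sketch above are illustrative assumptions rather than the paper's prescribed procedure.

# Minimal adversarial-training step (illustrative assumptions throughout).
import torch

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    model.train()
    # Craft adversarial examples from the current batch with the FGSM
    # sketch above, using the model's own gradients (white-box).
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    # Train on the adversarial examples so the model learns to resist them.
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()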