
Adversarial Examples Generation Method Based on Random Translation Transformation

Citation: LI Zheming, ZHANG Hengwei, MA Junqiang, WANG Jindong, YANG Bo. Adversarial Examples Generation Method Based on Random Translation Transformation[J]. Computer Engineering, 2022, 48(11): 152-160+183.
Authors: LI Zheming, ZHANG Hengwei, MA Junqiang, WANG Jindong, YANG Bo
Affiliation: 1. School of Cryptographic Engineering, PLA Strategic Support Force Information Engineering University, Zhengzhou 450001, China; 2. Staff Department, PLA Army, Beijing 100000, China
Foundation item: National Key R&D Program of China, "Key Technologies for High-Security Mobile Terminals" (2017YFB0801900)

Abstract: Image classification models based on Deep Neural Networks (DNNs) can recognize images with accuracy comparable to, or even exceeding, that of the human eye, but the fragility of their internal structure leaves them vulnerable to adversarial examples. Existing generation methods achieve high white-box attack success rates, whereas their adversarial examples succeed far less often under black-box conditions. This study introduces data augmentation into the generation process and proposes TT-MI-FGSM, an adversarial example generation method based on random translation transformation. A probability model applies random translations to the original image, and the transformed images are used to generate adversarial examples, which effectively alleviates over-fitting during generation. On this basis, an ensemble-model attack is used to generate adversarial examples with stronger transferability, thereby raising the black-box attack success rate. Experiments with single-model and ensemble-model attacks on the ImageNet dataset show that the black-box attack success rate of the proposed method reaches 80.1%. Compared with the Iterative Fast Gradient Sign Method (I-FGSM) and the Momentum Iterative Fast Gradient Sign Method (MI-FGSM), the white-box attack success rate of the proposed method is slightly lower but remains above 97.8%.

Keywords: Deep Neural Network (DNN); adversarial examples; black-box attack; random translation transformation; transferability
Received: 2021-10-28; Revised: 2021-12-28
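The loop the abstract describes — translate the current image with some probability, take the gradient on the transformed copy, accumulate it with momentum, step along its sign, and project back into the L∞ ball — can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: a toy linear classifier stands in for a DNN, and `eps`, `steps`, `mu`, `max_shift`, and the translation probability `p` are illustrative values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def translate(x, dx, dy):
    """Shift a 2-D image by (dx, dy) pixels with zero padding."""
    h, w = x.shape
    out = np.zeros_like(x)
    rs, rd = ((slice(0, h - dx), slice(dx, h)) if dx >= 0
              else (slice(-dx, h), slice(0, h + dx)))
    cs, cd = ((slice(0, w - dy), slice(dy, w)) if dy >= 0
              else (slice(-dy, w), slice(0, w + dy)))
    out[rd, cd] = x[rs, cs]
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_xent(W, x, label):
    """Input gradient of cross-entropy for a linear model: logits = W @ x.ravel()."""
    p = softmax(W @ x.ravel())
    p[label] -= 1.0                      # d(loss)/d(logits)
    return (p @ W).reshape(x.shape)      # chain rule back to pixels

def tt_mi_fgsm(x, label, W, eps=0.06, steps=10, mu=1.0, max_shift=2, p=0.5):
    """Sketch of the translation-transformed momentum-iterative attack."""
    alpha = eps / steps                  # per-step size
    g = np.zeros_like(x)                 # momentum accumulator
    adv = x.copy()
    for _ in range(steps):
        if rng.random() < p:             # random translation with probability p
            dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
            inp = translate(adv, int(dx), int(dy))
        else:
            inp = adv
        grad = grad_xent(W, inp, label)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum update
        adv = adv + alpha * np.sign(g)                    # signed gradient step
        adv = np.clip(adv, x - eps, x + eps)              # project into L_inf ball
        adv = np.clip(adv, 0.0, 1.0)                      # keep valid pixel range
    return adv

# Toy demo: attack a random 3-class linear model on an 8x8 "image".
W = rng.normal(size=(3, 64))
x = rng.random((8, 8))
label = int(np.argmax(W @ x.ravel()))    # the model's current prediction
adv = tt_mi_fgsm(x, label, W)
```

Computing the gradient on a randomly translated copy rather than on `adv` itself is what decouples the perturbation from one model's exact spatial alignment; in the paper this is the mechanism credited with reducing over-fitting and improving black-box transferability.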