Black-box Adversarial Attack on License Plate Recognition System

Chen Jin-Yin, Shen Shi-Jing, Su Meng-Meng, Zheng Hai-Bin, Xiong Hui

Citation: Chen Jin-Yin, Shen Shi-Jing, Su Meng-Meng, Zheng Hai-Bin, Xiong Hui. Black-box adversarial attack on license plate recognition system. Acta Automatica Sinica, 2021, 47(1): 121−135. doi: 10.16383/j.aas.c190488

doi: 10.16383/j.aas.c190488
Funds: Supported by the National Natural Science Foundation of China (62072406), the Natural Science Foundation of Zhejiang Province (LY19F020025), and the Major Special Funding for "Science and Technology Innovation 2025" of Ningbo (2018B10063)
Details
    About the authors:

    Chen Jin-Yin: Associate professor at the College of Information Engineering, Zhejiang University of Technology. Received the bachelor and Ph.D. degrees from Zhejiang University of Technology in 2004 and 2009, respectively, and studied evolutionary computation at Ashikaga Institute of Technology, Japan, in 2005 and 2006. Main research interests include evolutionary computation, data mining, and deep learning. Corresponding author of this paper. E-mail: chenjinyin@zjut.edu.cn

    Shen Shi-Jing: Master student at the College of Information Engineering, Zhejiang University of Technology. Main research interests include deep learning and computer vision. E-mail: 201407760128@zjut.edu.cn

    Su Meng-Meng: Master student at the College of Information Engineering, Zhejiang University of Technology. Received the bachelor degree from Zhejiang University of Technology in 2017. Main research interests include intelligent computation, artificial immune systems, and industrial security. E-mail: sumengmeng1994@163.com

    Zheng Hai-Bin: Master student at the College of Information Engineering, Zhejiang University of Technology. Received the bachelor degree from Zhejiang University of Technology in 2017. Main research interests include data mining and its applications, and bioinformatics. E-mail: haibinzheng320@gmail.com

    Xiong Hui: Master student at the College of Information Engineering, Zhejiang University of Technology. Main research interests include image processing and artificial intelligence. E-mail: bearlight080329@gmail.com

  • Abstract:

    As one of the most widely used deep learning methods, deep neural networks (DNNs) have been applied in many fields. However, DNNs are vulnerable to adversarial attacks, so it is essential to use adversarial attacks to detect the vulnerabilities of DNNs deployed in application systems. Focusing on vulnerability detection for license plate recognition systems, this work mounts black-box attacks without any knowledge of the internal structure of the target model and shows that commercial license plate recognition systems contain security vulnerabilities. A black-box attack method for license plate recognition based on the elitist non-dominated sorting genetic algorithm (NSGA-II) is proposed: using only the output labels and their confidences, it generates adversarial examples that are robust to environmental changes. Moreover, the perturbation is constrained to pure black blocks, which could be replaced by mud patches and are therefore highly deceptive. To verify that the attack can be reproduced in real scenarios, the license plate recognition systems are attacked both in the laboratory and in real environments, and the adversarial examples are further tested on open-source and commercial software to verify the transferability of the attack.
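
To make the query-only setting concrete, the sketch below is a hypothetical illustration (not the authors' released code) of how a candidate perturbation restricted to pure-black blocks can be rendered onto a plate image and scored with the two objectives that NSGA-II would minimize: the recognizer's confidence in the original class and the perturbation size. The `query_recognizer` callable and all function and parameter names are assumptions standing in for the black-box license plate recognition API.

```python
# Hypothetical sketch (not from the paper): render pure-black patches and
# evaluate the two minimization objectives of a query-only black-box attack.
import numpy as np

def render_patches(plate, patches):
    """Apply pure-black rectangular patches (x, y, w, h) to a copy of the plate image."""
    adv = plate.copy()
    for x, y, w, h in patches:
        adv[y:y + h, x:x + w] = 0  # pure black block, visually similar to a mud stain
    return adv

def objectives(plate, patches, original_label, query_recognizer):
    """Return the two minimization objectives for one candidate perturbation."""
    adv = render_patches(plate, patches)
    label, confidence = query_recognizer(adv)              # black-box: label + confidence only
    f1 = confidence if label == original_label else 0.0    # confidence in the original class
    f2 = float(np.mean(adv != plate))                      # perturbation size (L0-style ratio)
    return f1, f2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    plate = rng.integers(0, 256, size=(140, 440), dtype=np.uint8)    # mock grayscale plate
    dummy_recognizer = lambda img: ("8", float(img.mean()) / 255.0)  # stand-in for HyperLPR etc.
    patches = [(60, 40, 12, 12), (200, 90, 10, 8)]
    print(objectives(plate, patches, "8", dummy_recognizer))
```

In the full method, these per-candidate scores would drive NSGA-II's non-dominated sorting and random crossover (cf. Figs. 4−6 and reference [12]).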

  • Fig. 1  Overall block diagram of the proposed black-box attack method against the license plate recognition system

    Fig. 2  Effect of simulated scene transformations

    Fig. 3  License plate adversarial examples

    Fig. 4  Non-dominated sorting result of the license plate examples at the 10th iteration

    Fig. 5  Schematic diagram of the random crossover process

    Fig. 6  Example of random crossover of license plate examples

    Fig. 7  Pareto curve of the attack results for the digit "8"

    Fig. 8  Adversarial examples generated by different attack algorithms at different positions of the same license plate sample

    Fig. 9  Average original-class confidence versus the number of iterations

    Fig. 10  Average perturbation size versus the number of iterations

    Table 1  Recognition accuracy of the three models

    | Model                                       | Training accuracy | Test accuracy |
    |---------------------------------------------|-------------------|---------------|
    | HyperLPR                                    | 96.7%             | 96.3%         |
    | EasyPR                                      | 85.4%             | 84.1%         |
    | PaddlePaddle-OCR license plate recognition  | 87.9%             | 86.5%         |

    Table 2  Comparison results of attack algorithms on license plate images

    | Type      | Attack algorithm   | Attack success rate | $\tilde L_2$ | $\tilde L_0$ | Number of queries |
    |-----------|--------------------|---------------------|--------------|--------------|-------------------|
    | White-box | FGSM               | 89.3%               | 0.067        | 0.937        | 32                |
    | White-box | 2-norm             | 92.8%               | 0.051        | 0.923        | 3                 |
    | Black-box | ZOO                | 85.7%               | 0.087        | 0.953        | 74356             |
    | Black-box | AutoZOO            | 87.1%               | 0.069        | 0.938        | 4256              |
    | Black-box | Proposed algorithm | 98.6%               | 0.035        | 0.004        | 1743              |
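
For reference, $\tilde L_2$ and $\tilde L_0$ in Table 2 are normalized perturbation measures. The helper below is a hypothetical sketch of such metrics; the exact normalization used in the paper is not stated on this page, so dividing by the pixel count is an assumption.

```python
# Hypothetical helper (not from the paper): normalized L2 and L0 perturbation measures.
import numpy as np

def perturbation_metrics(original, adversarial):
    """Return assumed normalized L2 and L0 measures between two uint8 images."""
    diff = adversarial.astype(np.float64) - original.astype(np.float64)
    n = diff.size
    l2_tilde = np.linalg.norm(diff.ravel() / 255.0) / np.sqrt(n)  # assumed normalization
    l0_tilde = np.count_nonzero(diff) / n                         # fraction of changed pixels
    return l2_tilde, l0_tilde
```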

    Table 3  Attack success rate of different simulation strategies in different simulated environments

    Columns 0−9 give the attack success rate (%) for each digit; the last column is the average success rate (%). Strategies: Fixed 1, Fixed 2, and random transformation.

    | Environment factors                          | Strategy               | 0    | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9    | Average (%) |
    |----------------------------------------------|------------------------|------|------|------|------|------|------|------|------|------|------|-------------|
    | Original adversarial examples                | Fixed 1                | 100  | 96   | 100  | 100  | 100  | 100  | 100  | 100  | 100  | 100  | 99.6        |
    |                                              | Fixed 2                | 100  | 94   | 96   | 98   | 94   | 100  | 100  | 96   | 100  | 100  | 98.0        |
    |                                              | Random transformation  | 100  | 94   | 98   | 100  | 94   | 100  | 100  | 98   | 100  | 100  | 98.4        |
    | Size (×0.5), light (+30), angle (30° right)  | Fixed 1                | 100  | 80   | 90   | 92   | 90   | 94   | 98   | 90   | 96   | 100  | 93.8        |
    |                                              | Fixed 2                | 98   | 76   | 90   | 84   | 88   | 92   | 94   | 86   | 92   | 98   | 90.4        |
    |                                              | Random transformation  | 100  | 76   | 92   | 92   | 90   | 94   | 96   | 88   | 92   | 98   | 92.8        |
    | Size (×2), light (−30), angle (30° left)     | Fixed 1                | 100  | 80   | 90   | 92   | 92   | 94   | 98   | 90   | 96   | 100  | 93.6        |
    |                                              | Fixed 2                | 100  | 78   | 90   | 86   | 86   | 88   | 92   | 82   | 90   | 96   | 89.2        |
    |                                              | Random transformation  | 100  | 78   | 92   | 90   | 88   | 90   | 96   | 84   | 92   | 96   | 91.4        |
    | Size (×0.3), light (+50), angle (50° right)  | Fixed 1                | 92   | 76   | 80   | 86   | 82   | 84   | 88   | 84   | 90   | 88   | 85.0        |
    |                                              | Fixed 2                | 98   | 82   | 92   | 92   | 90   | 94   | 96   | 90   | 96   | 98   | 93.4        |
    |                                              | Random transformation  | 96   | 80   | 90   | 88   | 86   | 90   | 94   | 92   | 92   | 94   | 90.8        |
    | Size (×3), light (−50), angle (50° left)     | Fixed 1                | 90   | 74   | 80   | 86   | 82   | 82   | 90   | 82   | 88   | 88   | 84.2        |
    |                                              | Fixed 2                | 98   | 80   | 90   | 92   | 90   | 92   | 96   | 92   | 94   | 98   | 92.8        |
    |                                              | Random transformation  | 96   | 78   | 88   | 88   | 84   | 92   | 92   | 92   | 94   | 94   | 90.6        |
    | Size (×0.7), light (+20), angle (42° right)  | Fixed 1                | 92   | 76   | 80   | 88   | 84   | 82   | 90   | 82   | 92   | 90   | 85.6        |
    |                                              | Fixed 2                | 94   | 76   | 86   | 90   | 86   | 84   | 92   | 86   | 90   | 92   | 88.4        |
    |                                              | Random transformation  | 96   | 78   | 90   | 92   | 90   | 92   | 94   | 90   | 94   | 96   | 92.2        |
    | Size (×1.3), light (−75), angle (15° left)   | Fixed 1                | 92   | 76   | 78   | 86   | 82   | 84   | 90   | 82   | 88   | 84   | 84.2        |
    |                                              | Fixed 2                | 92   | 74   | 82   | 86   | 82   | 86   | 92   | 88   | 90   | 90   | 88.0        |
    |                                              | Random transformation  | 94   | 76   | 88   | 90   | 86   | 92   | 92   | 90   | 92   | 94   | 91.0        |
    | Average over all environments                | Fixed 1                | 95.1 | 79.7 | 85.4 | 90.0 | 87.4 | 88.6 | 93.4 | 87.1 | 92.9 | 92.9 | 89.3        |
    |                                              | Fixed 2                | 97.1 | 79.4 | 89.4 | 89.4 | 88.0 | 90.9 | 94.6 | 88.3 | 93.1 | 96.0 | 90.8        |
    |                                              | Random transformation  | 97.4 | 79.7 | 90.6 | 90.6 | 88.3 | 92.6 | 94.9 | 90.0 | 93.7 | 96.0 | 91.6        |
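
The environment factors in Table 3 (a size scaling, a brightness offset, and a viewing angle) can be approximated in software when evaluating robustness. The sketch below is a hypothetical illustration of such scene simulation using Pillow; it approximates the viewing angle with an in-plane rotation, and none of the function or parameter names come from the paper.

```python
# Hypothetical scene-simulation sketch (not the authors' code): scaling,
# brightness offset, and rotation applied to a grayscale plate image.
import numpy as np
from PIL import Image

def simulate_scene(plate, scale=1.0, brightness=0, angle=0.0):
    """Return a transformed copy of an HxW uint8 plate image."""
    img = np.clip(plate.astype(np.int16) + brightness, 0, 255).astype(np.uint8)  # light offset
    pil = Image.fromarray(img)
    w, h = pil.size
    pil = pil.resize((max(1, int(w * scale)), max(1, int(h * scale))))  # size change
    pil = pil.rotate(angle, expand=True, fillcolor=255)                 # angle, approximated by rotation
    return np.asarray(pil)

# e.g. the "size (x0.5), light (+30), angle (30 deg right)" condition of Table 3:
# transformed = simulate_scene(plate, scale=0.5, brightness=30, angle=-30)
```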

    Table 4  Recognition results, confidence, and perturbation of the license plate adversarial examples

    Recognition results are listed per character position as Fixed 1 / Fixed 2 / Random transformation; the corresponding adversarial plate images are omitted.

    | Environment factors                          | Recognition results (Fixed 1 / Fixed 2 / Random transformation)        | Average confidence |
    |----------------------------------------------|------------------------------------------------------------------------|--------------------|
    | Original adversarial examples                | C/C/Q, H/5/5, Z/Z/Z, 5/5/2, J/6/Z, 3/3/3, 3/5/5, T/T/Z, G/S/S, 2/2/2   | 0.92 / 0.87 / 0.86 |
    | Size (×0.5), light (+30), angle (30° right)  | C/C/Q, H/5/5, Z/Z/3, 5/5/2, J/X/Z, 3/3/3, 3/3/3, T/1/Z, G/S/S, 2/2/2   | 0.90 / 0.86 / 0.83 |
    | Size (×2), light (−30), angle (30° left)     | C/C/Q, H/7/5, Z/Z/Z, 5/5/2, J/6/Z, 3/3/3, 3/5/5, T/T/Z, G/S/G, 2/2/2   | 0.89 / 0.83 / 0.86 |
    | Size (×0.3), light (+50), angle (50° right)  | C/C/Q, 1/5/1, 2/Z/X, 5/5/5, 4/6/X, 3/3/3, 3/5/5, T/T/Z, G/S/S, 2/2/2   | 0.84 / 0.90 / 0.85 |
    | Size (×3), light (−50), angle (50° left)     | C/C/Q, 1/5/7, Z/Z/Z, 5/5/2, J/6/4, 3/3/3, 3/5/5, 1/T/1, G/S/S, 2/2/2   | 0.84 / 0.88 / 0.86 |
    | Size (×0.7), light (+20), angle (42° right)  | C/C/Q, H/1/5, Z/Z/Z, 5/5/2, J/6/Z, 3/3/3, 5/5/5, T/T/Z, G/S/S, 2/2/2   | 0.81 / 0.87 / 0.85 |
    | Size (×1.3), light (−75), angle (15° left)   | C/C/0, 1/1/7, Z/2/Z, 5/5/5, 4/6/Z, 3/3/3, 3/5/5, 7/T/Z, S/G/S, 2/2/2   | 0.87 / 0.82 / 0.83 |

    Table 5  License plate adversarial attack in the laboratory environment

    Physical adversarial example images omitted.

    | Condition (angle, distance, time)              | 0°, 1 m, day | 0°, 1 m, night | 0°, 5 m, day | 0°, 5 m, night | 20°, 1 m, day | 20°, 1 m, night |
    |------------------------------------------------|--------------|----------------|--------------|----------------|---------------|-----------------|
    | Recognition result of the normal plate         | 苏 AN4D79    | 苏 AN4D79      | 苏 AN4D79    | 苏 AN4D79      | 苏 AN4D79     | 苏 AN4D79       |
    | Confidence for the normal plate                | 0.9751       | 0.9741         | 0.9242       | 0.9214         | 0.9578        | 0.9501          |
    | Recognition result of the adversarial example  | 苏 AH4072    | 苏 AH4072      | 苏 AH4072    | 苏 AH4072      | 苏 AH4072     | 苏 AH4072       |
    | Confidence for the adversarial example         | 0.9041       | 0.8862         | 0.8248       | 0.8310         | 0.8045        | 0.8424          |

    Table 6  Influence of the initial perturbation information

    | Area ratio | Number | Shape | Attack success rate | Final perturbation | Iterations |
    |------------|--------|-------|---------------------|--------------------|------------|
    | 1:50       | 10     | R     | 100%                | 0.0062             | 33         |
    | 1:50       | 10     | C     | 100%                | 0.0059             | 32         |
    | 1:50       | 10     | R+C   | 100%                | 0.0063             | 35         |
    | 1:50       | 30     | R     | 100%                | 0.0054             | 36         |
    | 1:50       | 30     | C     | 100%                | 0.0052             | 35         |
    | 1:50       | 30     | R+C   | 100%                | 0.0054             | 34         |
    | 1:50       | 50     | R     | 100%                | 0.0042             | 42         |
    | 1:50       | 50     | C     | 100%                | 0.0043             | 40         |
    | 1:50       | 50     | R+C   | 100%                | 0.0043             | 44         |
    | 1:80       | 10     | R     | 100%                | 0.0058             | 34         |
    | 1:80       | 10     | C     | 100%                | 0.0054             | 33         |
    | 1:80       | 10     | R+C   | 100%                | 0.0055             | 34         |
    | 1:80       | 30     | R     | 100%                | 0.0043             | 34         |
    | 1:80       | 30     | C     | 100%                | 0.0041             | 32         |
    | 1:80       | 30     | R+C   | 100%                | 0.0042             | 35         |
    | 1:80       | 50     | R     | 96%                 | 0.0037             | 48         |
    | 1:80       | 50     | C     | 94%                 | 0.0036             | 48         |
    | 1:80       | 50     | R+C   | 96%                 | 0.0032             | 46         |
    | 1:120      | 10     | R     | 100%                | 0.0042             | 32         |
    | 1:120      | 10     | C     | 100%                | 0.0045             | 31         |
    | 1:120      | 10     | R+C   | 98%                 | 0.0042             | 31         |
    | 1:120      | 30     | R     | 98%                 | 0.0035             | 36         |
    | 1:120      | 30     | C     | 96%                 | 0.0033             | 36         |
    | 1:120      | 30     | R+C   | 96%                 | 0.0033             | 35         |
    | 1:120      | 50     | R     | 87%                 | 0.0027             | 56         |
    | 1:120      | 50     | C     | 86%                 | 0.0025             | 58         |
    | 1:120      | 50     | R+C   | 87%                 | 0.0024             | 58         |
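
Table 6 varies the initial perturbation by the ratio of total patch area to plate area, the number of patches, and the patch shape (R, C, or R+C). The sketch below is a hypothetical initializer written under the assumption that R and C denote rectangular and circular black blocks; the page does not expand these abbreviations, so that reading, the function name, and the parameters are illustrative only.

```python
# Hypothetical initializer (not from the paper): place pure-black blocks whose total
# area approximates area_ratio * plate area, split across `count` patches.
import numpy as np

def init_black_blocks(plate, area_ratio=1/80, count=30, shape="R", rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h, w = plate.shape[:2]
    per_patch_area = (h * w * area_ratio) / count
    adv = plate.copy()
    for i in range(count):
        kind = shape if shape != "R+C" else ("R" if i % 2 == 0 else "C")
        if kind == "R":                                   # square black patch
            side = max(1, int(np.sqrt(per_patch_area)))
            y = rng.integers(0, max(1, h - side))
            x = rng.integers(0, max(1, w - side))
            adv[y:y + side, x:x + side] = 0
        else:                                             # filled black circle of similar area
            r = max(1, int(np.sqrt(per_patch_area / np.pi)))
            cy, cx = rng.integers(r, h - r), rng.integers(r, w - r)
            yy, xx = np.ogrid[:h, :w]
            adv[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 0
    return adv
```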

    Table 7  Sensitivity analysis of the crossover probability

    | Crossover probability | Iterations | Original-class confidence | Perturbation size ($\tilde L_0$) |
    |-----------------------|------------|---------------------------|----------------------------------|
    | 0.2                   | 75         | 0.153                     | 0.0048                           |
    | 0.4                   | 53         | 0.138                     | 0.0046                           |
    | 0.6                   | 42         | 0.113                     | 0.0048                           |
    | 0.8                   | 34         | 0.126                     | 0.0043                           |
    | 1                     | 32         | 0.140                     | 0.0045                           |

    Table 8  Evading capture by road surveillance cameras

    Recognition results of three recognizers on three physical adversarial plates (plate images omitted).

    | Recognizer | Plate 1   | Plate 2   | Plate 3   |
    |------------|-----------|-----------|-----------|
    | HyperLPR   | 云 AG7C35 | 新 AG7C65 | 浙 AG7C65 |
    | Baidu AI   | 浙 AC7C35 | 浙 AG7C35 | 浙 AG7C35 |
    | OpenALPR   | 浙 A67C65 | 浙 A67C55 | 浙 A07C35 |

    Table 10  Impersonating a registered entry/exit vehicle

    Recognition results of three recognizers on two physical adversarial plates (plate images omitted).

    | Recognizer | Plate 1   | Plate 2   |
    |------------|-----------|-----------|
    | 立方       | 浙 AP0F20 | 浙 AP0F20 |
    | Baidu AI   | 浙 AF0F20 | 浙 AT0F20 |
    | OpenALPR   | 浙 A10F20 | 浙 A10F20 |

    Table 9  Evading tail-number traffic restrictions

    Recognition results of three recognizers on three physical adversarial plates (plate images omitted).

    | Recognizer | Plate 1   | Plate 2   | Plate 3   |
    |------------|-----------|-----------|-----------|
    | HyperLPR   | 苏 A14D72 | 苏 AH4D72 | 苏 AH4D72 |
    | Baidu AI   | 苏 AH4D72 | 苏 AH4D79 | 苏 AH4D72 |
    | OpenALPR   | 苏 AM4D78 | 苏 AM4D79 | 苏 AM4D72 |
  • [1] Goodfellow I J, Bengio Y, Courville A. Deep Learning. Cambridge: MIT Press, 2016. 24−45
    [2] Chen J Y, Zheng H B, Lin X, Wu Y Y, Su M M. A novel image segmentation method based on fast density clustering algorithm. Engineering Applications of Artificial Intelligence, 2018, 73: 92−110 doi: 10.1016/j.engappai.2018.04.023
    [3] Sutskever I, Vinyals O, Le Q V. Sequence to sequence learning with neural networks. In: Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Quebec, Canada: MIT Press, 2014. 3104−3112
    [4] Dai Wei, Chai Tian-You. Data-driven optimal operational control of complex grinding processes. Acta Automatica Sinica, 2014, 40(9): 2005−2014
    [5] Chen J Y, Zheng H B, Xiong H, Wu Y Y, Lin X, Ying S Y, et al. DGEPN-GCEN2V: A new framework for mining GGI and its application in biomarker detection. Science China Information Sciences, 2019, 62(9): Article No. 199104 doi: 10.1007/s11432-018-9704-7
    [6] Yao Nai-Ming, Guo Qing-Pei, Qiao Feng-Chun, Chen Hui, Wang Hong-An. Robust facial expression recognition with generative adversarial networks. Acta Automatica Sinica, 2018, 44(5): 865−877
    [7] Yuan Wen-Hao, Sun Wen-Zhu, Xia Bin, Ou Shi-Feng. Improving speech enhancement in unseen noise using deep convolutional neural network. Acta Automatica Sinica, 2018, 44(4): 751−759
    [8] Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I J, et al. Intriguing properties of neural networks. In: Proceedings of the 2nd International Conference on Learning Representations (ICLR 2014). Banff, AB, Canada: ICLR, 2014.
    [9] Moosavi-Dezfooli S M, Fawzi A, Fawzi O, Frossard P. Universal adversarial perturbations. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 86−94
    [10] Akhtar N, Mian A. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 2018, 6: 14410−14430 doi: 10.1109/ACCESS.2018.2807385
    [11] Zeng X H, Liu C X, Wang Y S, Qiu W C, Xie L X, Tai Y W, et al. Adversarial attacks beyond the image space. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). California, USA: IEEE, 2019. 4302−4311
    [12] Deb K, Agarwal S, Pratap A, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 2002, 6(2): 182−197 doi: 10.1109/4235.996017
    [13] Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. In: Proceedings of the 3rd International Conference on Learning Representations (ICLR 2015). San Diego, CA, USA: ICLR, 2015.
    [14] Kurakin A, Goodfellow I J, Bengio S. Adversarial examples in the physical world. In: Proceedings of the 5th International Conference on Learning Representations (ICLR 2017). Toulon, France: ICLR, 2017.
    [15] Moosavi-Dezfooli S M, Fawzi A, Frossard P. Deepfool: A simple and accurate method to fool deep neural networks. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 2574−2582
    [16] Carlini N, Wagner D. Towards evaluating the robustness of neural networks. In: Proceedings of the 2017 IEEE Symposium on Security and Privacy. San Jose, USA: IEEE, 2017. 39−57
    [17] Papernot N, McDaniel P, Jha S, Fredrikson M, Celik Z B, Swami A. The limitations of deep learning in adversarial settings. In: Proceedings of the 2016 IEEE European Symposium on Security and Privacy. Saarbrucken, Germany: IEEE, 2016. 372−387
    [18] Lyu C, Huang K Z, Liang H N. A unified gradient regularization family for adversarial examples. In: Proceedings of the 2015 IEEE International Conference on Data Mining. Atlantic City, USA: IEEE, 2015. 301−309
    [19] Su J W, Vargas D V, Sakurai K. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 2019, 23(5): 828−841 doi: 10.1109/TEVC.2019.2890858
    [20] Brendel W, Rauber J, Bethge M. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In: Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). Vancouver, BC, Canada: ICLR, 2018.
    [21] Chen P Y, Zhang H, Sharma Y, Yi J F, Hsieh C J. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. Dallas, USA: ACM, 2017. 15−26
    [22] Tu C C, Ting P S, Chen P Y, Liu S J, Zhang H, Yi J F, et al. Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Hawaii, USA: AAAI, 2019. 742−749
    [23] Chen J B, Jordan M I. Boundary attack++: Query-efficient decision-based adversarial attack. arXiv: 1904.02144, 2019.
    [24] Chen J Y, Su M M, Shen S J, Xiong H, Zheng H B. POBA-GA: Perturbation optimized black-box adversarial attacks via genetic algorithm. Computers & Security, 2019, 85: 89−106
    [25] Bhagoji A N, He W, Li B. Exploring the space of black-box attacks on deep neural networks. arXiv: 1712.09491, 2017.
    [26] Chen S T, Cornelius C, Martin J, Chau D H. ShapeShifter: Robust physical adversarial attack on faster R-CNN object detector. In: Proceedings of Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Dublin, Ireland: Springer, 2019. 52−68
    [27] Ren S Q, He K M, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137−1149
    [28] Eykholt K, Evtimov I, Fernandes E, Li B, Rahmati A, Tramer F, et al. Physical adversarial examples for object detectors. In: Proceedings of the 12th USENIX Workshop on Offensive Technologies. Baltimore, MD, USA: USENIX Association, 2018.
    [29] Redmon J, Farhadi A. YOLO9000: Better, faster, stronger. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, USA: IEEE, 2017. 6517−6525
    [30] Thys S, Ranst W V, Goedemé T. Fooling automated surveillance cameras: Adversarial patches to attack person detection. In: Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Long Beach, USA: IEEE, 2019. 49−55
    [31] Sharif M, Bhagavatula S, Bauer L, Reiter M K. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. Vienna, Austria: ACM, 2016. 1528−1540
    [32] Eykholt K, Evtimov I, Fernandes E, Li B, Rahmati A, Xiao C W, et al. Robust physical-world attacks on deep learning visual classification. In: Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1625−1634
    [33] Sitawarin C, Bhagoji A N, Mosenia A, Mittal P, Chiang M. Rogue signs: Deceiving traffic sign recognition with malicious ads and logos. arXiv: 1801.02780, 2018.
    [34] Athalye A, Engstrom L, Ilyas A, Kwok K. Synthesizing robust adversarial examples. In: Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden: PMLR, 2018. 284−293
    [35] Li J C, Schmidt F, Kolter Z. Adversarial camera stickers: A physical camera-based attack on deep learning systems. In: Proceedings of the 36th International Conference on Machine Learning. Long Beach, USA: PMLR, 2019. 3896−3904
    [36] Xu Z B, Yang W, Meng A J, Lu N X, Huang H, Ying C C, et al. Towards end-to-end license plate detection and recognition: A large dataset and baseline. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 261−277
Publication history
  • Manuscript received:  2019-07-01
  • Accepted:  2019-12-23
  • Published online:  2021-01-29
  • Issue date:  2021-01-29
