
Research Progress and Challenges on Application-Driven Adversarial Examples: A Survey

Published: 22 September 2021

Abstract

Great progress has been made in deep learning over the past few years, driving the deployment of deep learning–based applications in cyber-physical systems. However, the lack of interpretability of deep learning models has left potential security holes. Recent research has found that deep neural networks are vulnerable to carefully designed input samples, called adversarial examples. The perturbations in such examples are often too small to be detected, yet they can completely fool deep learning models. In practice, adversarial attacks pose a serious threat to the success of deep learning. As deep learning applications continue to develop, adversarial examples in different fields have also attracted attention. In this article, we summarize methods for generating adversarial examples in computer vision, speech recognition, and natural language processing, and we study the applications of adversarial examples. We also explore emerging research directions and open problems.
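To make the core idea concrete, the sketch below implements the fast gradient sign method (FGSM) of Goodfellow et al., one canonical way of generating the small perturbations the abstract describes. This is a minimal illustration, not the article's own method; the model, labels, pixel range, and epsilon value are placeholder assumptions.

```python
# Minimal FGSM sketch (Goodfellow et al., "Explaining and Harnessing Adversarial Examples").
# Assumes `model` is any differentiable image classifier, `x` are inputs in [0, 1],
# and `y` are the true class labels; epsilon controls the perturbation size.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return a copy of x perturbed in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)     # loss w.r.t. the true labels
    loss.backward()                              # gradient of the loss w.r.t. the input pixels
    perturbation = epsilon * x_adv.grad.sign()   # small, per-pixel step of magnitude epsilon
    return (x_adv + perturbation).clamp(0.0, 1.0).detach()
```

Even for epsilon values small enough that the perturbation is imperceptible to a human, calling a function like this on a batch of correctly classified images typically causes the classifier's accuracy to drop sharply, which is the phenomenon this survey examines across vision, speech, and language applications.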

Published In

ACM Transactions on Cyber-Physical Systems, Volume 5, Issue 4
October 2021, 312 pages
ISSN: 2378-962X
EISSN: 2378-9638
DOI: 10.1145/3481689
Editor: Chenyang Lu

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 22 September 2021
Accepted: 01 February 2021
Revised: 01 January 2021
Received: 01 August 2020
Published in TCPS Volume 5, Issue 4

Author Tags

  1. Adversarial examples
  2. adversarial attacks
  3. application
  4. computer vision
  5. speech recognition
  6. natural language processing

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • National Natural Science Foundation of China
  • Research Fund of National Key Laboratory of Computer Architecture


