Research article | Public Access | DOI: 10.1145/3453688.3461755

On the Adversarial Robustness of Quantized Neural Networks

Published: 22 June 2021

Abstract

Reducing the size of neural network models is a critical step in moving AI from a cloud-centric to an edge-centric (i.e., on-device) compute paradigm. This shift from cloud to edge is motivated by several factors, including reduced latency, improved security, and greater flexibility of AI algorithms across application domains (e.g., transportation, healthcare, and defense). However, it is currently unclear how model compression techniques affect the robustness of AI algorithms against adversarial attacks. This paper explores the effect of quantization, one of the most common compression techniques, on the adversarial robustness of neural networks. Specifically, we investigate and model the accuracy of quantized neural networks on adversarially perturbed images. Results indicate that for simple gradient-based attacks, quantization can either improve or degrade adversarial robustness depending on the attack strength.
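To make the setup the abstract describes concrete, the sketch below applies uniform post-training weight quantization to a toy linear classifier and crafts a simple gradient-based (FGSM-style) perturbation against it. The toy model, function names, and parameter choices are illustrative assumptions for exposition only, not the paper's actual experimental code.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric post-training quantization of weights to `bits` bits.
    Illustrative sketch: scale chosen so the largest weight maps to the top level."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def fgsm(x, w, y, eps):
    """FGSM-style perturbation for a logistic model p = sigmoid(w . x), label y in {0, 1}.
    d(loss)/dx = (p - y) * w, so the attack steps x by eps times the sign of that gradient."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)               # full-precision weights of a toy classifier
x = rng.normal(size=8)               # a clean input
w4 = quantize(w, bits=4)             # 4-bit weights, as in post-training quantization
x_adv = fgsm(x, w, y=1.0, eps=0.1)   # adversarial input crafted at attack strength eps
```

Comparing the model's accuracy on `x_adv` before and after replacing `w` with `w4`, while sweeping `eps`, mirrors the kind of attack-strength-dependent comparison the abstract refers to.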




Published In

GLSVLSI '21: Proceedings of the 2021 Great Lakes Symposium on VLSI
June 2021
504 pages
ISBN:9781450383936
DOI:10.1145/3453688
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. adversarial machine learning
  2. neural networks
  3. quantization

Qualifiers

  • Research-article


Conference

GLSVLSI '21: Great Lakes Symposium on VLSI 2021
June 22-25, 2021
Virtual Event, USA

Acceptance Rates

Overall Acceptance Rate 312 of 1,156 submissions, 27%


Cited By

  • Trustworthy and Robust Machine Learning for Multimedia: Challenges and Perspectives. 2024 IEEE 7th International Conference on Multimedia Information Processing and Retrieval (MIPR), pp. 522-528. DOI: 10.1109/MIPR62202.2024.00090 (7-Aug-2024)
  • David and Goliath: An Empirical Evaluation of Attacks and Defenses for QNNs at the Deep Edge. 2024 IEEE 9th European Symposium on Security and Privacy (EuroS&P), pp. 524-541. DOI: 10.1109/EuroSP60621.2024.00035 (8-Jul-2024)
  • Adversarial Training on Limited-Resource Devices Through Asymmetric Disturbances. 2024 20th International Conference on Distributed Computing in Smart Systems and the Internet of Things (DCOSS-IoT), pp. 27-34. DOI: 10.1109/DCOSS-IoT61029.2024.00015 (29-Apr-2024)
  • Distributed computing in multi-agent systems: a survey of decentralized machine learning approaches. Computing, Vol. 107, 1. DOI: 10.1007/s00607-024-01356-0 (19-Nov-2024)
  • Adversarial Training Method for Machine Learning Model in a Resource-Constrained Environment. Proceedings of the 19th ACM International Symposium on QoS and Security for Wireless and Mobile Networks, pp. 87-95. DOI: 10.1145/3616391.3622768 (30-Oct-2023)
  • QANS: Toward Quantized Neural Network Adversarial Noise Suppression. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 42, 12, pp. 4858-4870. DOI: 10.1109/TCAD.2023.3283935 (8-Jun-2023)
  • Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence. 2023 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 671-676. DOI: 10.1109/SSCI52147.2023.10371946 (5-Dec-2023)
  • Resource-Limited Localized Adaptive Adversarial Training for Machine Learning Model. 2023 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), pp. 1113-1120. DOI: 10.1109/ISPA-BDCloud-SocialCom-SustainCom59178.2023.00178 (21-Dec-2023)
  • Improving Robustness Against Adversarial Attacks with Deeply Quantized Neural Networks. 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. DOI: 10.1109/IJCNN54540.2023.10191429 (18-Jun-2023)
  • A Survey of State-of-the-art on Edge Computing: Theoretical Models, Technologies, Directions, and Development Paths. IEEE Access, Vol. 10, pp. 54038-54063. DOI: 10.1109/ACCESS.2022.3176106 (2022)
