
Adversarial Attack Mitigation Approaches Using RRAM-Neuromorphic Architectures

Published: 22 June 2021
DOI: 10.1145/3453688.3461757

Abstract

Advances in machine learning have led to numerous applications, ranging from computer vision and pattern recognition to securing hardware devices. Despite these proven achievements, the underlying techniques can be exploited by feeding them adversarial inputs. Adversarial samples are generated by carefully crafting perturbations and adding them to normal input samples. Most existing adversarial attacks and defenses are software-based. In this paper, we demonstrate the effects of adversarial attacks on a reconfigurable RRAM-neuromorphic architecture with different learning algorithms and device characteristics. We also propose an integrated solution for mitigating the effects of adversarial attacks using the reconfigurable RRAM architecture.
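As background for readers new to the area, the sketch below illustrates the well-known fast gradient sign method (FGSM) for crafting such perturbations. It is a generic illustration only, not the specific attack evaluated in this paper; the `fgsm_perturb` helper, the epsilon value, and the random stand-in gradient are our own assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.05):
    """FGSM-style perturbation: step each input feature in the
    direction that increases the loss, then clip to the valid range."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Illustration on a normalized 28x28 "image"; a real attack would use
# the gradient of the model's loss with respect to the input.
x = np.random.rand(28, 28)
grad = np.random.randn(28, 28)      # stand-in for dLoss/dx
x_adv = fgsm_perturb(x, grad)
print(np.abs(x_adv - x).max())      # perturbation bounded by epsilon
```

The key property is that the perturbation is bounded element-wise by epsilon, so the adversarial sample stays visually close to the original while still shifting the classifier's decision.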

Supplemental Material

MP4 File
This video is a summary of our work in "Adversarial Attack Mitigation Approaches Using RRAM-Neuromorphic Architectures". It begins by listing information about the author and co-authors, then provides a brief background on adversarial attacks, attack mitigation, gated RRAM, and the neuromorphic crossbar. Finally, we present the results of our investigation and discuss those results and future work.
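Since the video touches on the gated RRAM device and the neuromorphic crossbar, the following minimal sketch shows how a crossbar realizes a matrix-vector product in the analog domain, with weights mapped to conductances and a simple multiplicative term standing in for device variation. The conductance range, noise model, and `crossbar_mvm` helper are illustrative assumptions, not the device model used by the authors.

```python
import numpy as np

def crossbar_mvm(weights, v_in, g_min=1e-6, g_max=1e-4, sigma=0.05, rng=None):
    """Idealized RRAM crossbar: map weights in [0, 1] to conductances
    in [g_min, g_max] siemens, apply multiplicative device variation,
    and sum the column currents per Ohm's and Kirchhoff's laws."""
    rng = rng or np.random.default_rng(0)
    g = g_min + weights * (g_max - g_min)           # weight -> conductance
    g_noisy = g * rng.normal(1.0, sigma, g.shape)   # device-to-device spread
    return g_noisy @ v_in                           # output currents (amps)

w = np.random.rand(4, 8)   # 4 neurons, 8 inputs
v = np.random.rand(8)      # input voltages
print(crossbar_mvm(w, v))
```

Such device non-idealities are one reason attack and defense behavior on RRAM hardware can differ from a purely software model.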


Cited By

  • AD2VNCS: Adversarial Defense and Device Variation-tolerance in Memristive Crossbar-based Neuromorphic Computing Systems. ACM Transactions on Design Automation of Electronic Systems 29, 1 (2023), 1-19. DOI: 10.1145/3600231
  • Enhancing Adversarial Attacks on Single-Layer NVM Crossbar-Based Neural Networks with Power Consumption Information. 2022 IEEE 35th International System-on-Chip Conference (SOCC), 1-6. DOI: 10.1109/SOCC56010.2022.9908114
  • A Survey on Machine Learning Accelerators and Evolutionary Hardware Platforms. IEEE Design & Test 39, 3 (2022), 91-116. DOI: 10.1109/MDAT.2022.3161126
  • Power Swapper: Approximate Functional Block Assisted Cryptosystem Security. 2021 IEEE 34th International System-on-Chip Conference (SOCC), 101-105. DOI: 10.1109/SOCC52499.2021.9739304


Published In
      GLSVLSI '21: Proceedings of the 2021 Great Lakes Symposium on VLSI
      June 2021
      504 pages
      ISBN:9781450383936
      DOI:10.1145/3453688
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. adversarial attack mitigation
      2. neuromorphic
      3. resistive random-access memory

      Qualifiers

      • Research-article

      Data Availability

The supplemental video summarizing this work is available at https://dl.acm.org/doi/10.1145/3453688.3461757#GLSVLSI21-vlsi54s.mp4

      Funding Sources

      • National Science Foundation

      Conference

      GLSVLSI '21
      Sponsor:
      GLSVLSI '21: Great Lakes Symposium on VLSI 2021
      June 22 - 25, 2021
      Virtual Event, USA

      Acceptance Rates

      Overall Acceptance Rate 312 of 1,156 submissions, 27%


      Article Metrics

• Downloads (last 12 months): 27
• Downloads (last 6 weeks): 4
      Reflects downloads up to 25 Nov 2024
