Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-linear Activations

Published: 23 August 2020
DOI: 10.1007/978-3-030-58526-6_24

Abstract

In the recent quest for trustworthy neural networks, we present the Spiking Neural Network (SNN) as a potential candidate for inherent robustness against adversarial attacks. In this work, we demonstrate that the adversarial accuracy of SNNs under gradient-based attacks is higher than that of their non-spiking counterparts on CIFAR datasets with deep VGG and ResNet architectures, particularly in the blackbox attack scenario. We attribute this robustness to two fundamental characteristics of SNNs and analyze their effects. First, we show that the input discretization introduced by the Poisson encoder improves adversarial robustness as the number of timesteps is reduced. Second, we quantify how adversarial accuracy changes with an increased leak rate in Leaky-Integrate-Fire (LIF) neurons. Our results suggest that SNNs trained with LIF neurons and a smaller number of timesteps are more robust than those trained with IF (Integrate-Fire) neurons and a larger number of timesteps. We also overcome the bottleneck of creating gradient-based adversarial inputs in the temporal domain by proposing a technique for crafting attacks directly from the SNN (https://github.com/ssharmin/spikingNN-adversarial-attack).
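To make the two mechanisms the abstract credits for this robustness more concrete, the sketch below illustrates Poisson rate encoding of the input and Leaky-Integrate-and-Fire (LIF) membrane dynamics in plain NumPy. This is a minimal illustration, not the authors' implementation (which is available at the linked GitHub repository); the function names, the leak factor of 0.95, the firing threshold of 1.0, and the layer sizes are assumptions made only for the example.

import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(image, timesteps):
    # At each timestep a pixel emits a spike with probability equal to its
    # normalized intensity, so the network only ever sees a discrete
    # {0, 1}-valued input; fewer timesteps means coarser discretization.
    return (rng.random((timesteps,) + image.shape) < image).astype(np.float32)

def lif_layer(spikes_in, weights, leak=0.95, threshold=1.0):
    # Leaky-Integrate-and-Fire dynamics for one fully connected layer.
    # leak < 1 gives the LIF neuron; leak = 1.0 recovers the plain IF neuron.
    timesteps, n_out = spikes_in.shape[0], weights.shape[1]
    v = np.zeros(n_out)                        # membrane potentials
    spikes_out = np.zeros((timesteps, n_out), dtype=np.float32)
    for t in range(timesteps):
        v = leak * v + spikes_in[t] @ weights  # leaky integration of input current
        fired = v >= threshold
        spikes_out[t] = fired
        v[fired] = 0.0                         # hard reset after a spike
    return spikes_out

# Tiny usage example: encode one flattened 28x28 "image" over 20 timesteps.
image = rng.random(784)
spike_train = poisson_encode(image, timesteps=20)
weights = rng.normal(scale=0.05, size=(784, 100))
output_spikes = lif_layer(spike_train, weights, leak=0.95)
print("mean output firing rate:", output_spikes.mean())

Setting leak=1.0 in this sketch recovers the plain IF neuron, and increasing the number of timesteps makes the time-averaged spike train approach the original analog intensities; these are exactly the two knobs (leak rate and timestep count) whose effect on adversarial accuracy the paper studies. On the attack side, a standard gradient-based attack such as FGSM perturbs the analog image as x_adv = x + epsilon * sign(grad_x L(x, y)) before it is Poisson-encoded; the paper's specific technique for obtaining such gradients from the SNN itself is not reproduced here and should be taken from the linked repository.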

              Published In

              Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIX
              Aug 2020
              843 pages
ISBN: 978-3-030-58525-9
DOI: 10.1007/978-3-030-58526-6

              Publisher

              Springer-Verlag

              Berlin, Heidelberg

              Publication History

              Published: 23 August 2020

              Author Tags

              1. Spiking neural networks
              2. Adversarial attack
              3. Leaky-integrate-fire neuron
              4. Input discretization

              Qualifiers

              • Article

              Cited By

• (2025) A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities. ACM Computing Surveys 57(6), 1-38. DOI: 10.1145/3665926. Online publication date: 10-Feb-2025
• (2024) Enhancing adversarial robustness in SNNs with sparse gradients. Proceedings of the 41st International Conference on Machine Learning, 30738-30754. DOI: 10.5555/3692070.3693309. Online publication date: 21-Jul-2024
• (2024) Robust stable spiking neural networks. Proceedings of the 41st International Conference on Machine Learning, 11016-11029. DOI: 10.5555/3692070.3692508. Online publication date: 21-Jul-2024
• (2024) RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding. Proceedings of the 32nd ACM International Conference on Multimedia, 2748-2756. DOI: 10.1145/3664647.3680639. Online publication date: 28-Oct-2024
• (2024) XCrowd: Combining Explainability and Crowdsourcing to Diagnose Models in Relation Extraction. Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, 2097-2107. DOI: 10.1145/3627673.3679777. Online publication date: 21-Oct-2024
• (2023) Impact of Noisy Input on Evolved Spiking Neural Networks for Neuromorphic Systems. Proceedings of the 2023 Annual Neuro-Inspired Computational Elements Conference, 52-56. DOI: 10.1145/3584954.3584969. Online publication date: 11-Apr-2023
• (2023) Exploring Neuromorphic Computing Based on Spiking Neural Networks: Algorithms to Hardware. ACM Computing Surveys 55(12), 1-49. DOI: 10.1145/3571155. Online publication date: 2-Mar-2023
• (2022) SNN-RAT. Proceedings of the 36th International Conference on Neural Information Processing Systems, 24780-24793. DOI: 10.5555/3600270.3602067. Online publication date: 28-Nov-2022
• (2022) Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars. Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design, 1-6. DOI: 10.1145/3531437.3539729. Online publication date: 1-Aug-2022
• (2022) A cross-layer approach to cognitive computing. Proceedings of the 59th ACM/IEEE Design Automation Conference, 1327-1330. DOI: 10.1145/3489517.3530642. Online publication date: 10-Jul-2022
