DOI: 10.1109/MILCOM.2016.7795300

Crafting adversarial input sequences for recurrent neural networks

Published: 01 November 2016

Abstract

Machine learning models are frequently used to solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles or predicting financial market behaviors. Previous efforts have shown that numerous machine learning models are vulnerable to adversarial manipulations of their inputs, taking the form of adversarial samples. Such inputs are crafted by adding carefully selected perturbations to legitimate inputs so as to force the machine learning model to misbehave, for instance by outputting a wrong class if the machine learning task of interest is classification. In fact, to the best of our knowledge, all previous work on adversarial sample crafting for neural networks considered models used to solve classification tasks, most frequently in computer vision applications. In this paper, we investigate adversarial input sequences for recurrent neural networks processing sequential data. We show that the classes of algorithms introduced previously to craft adversarial samples misclassified by feed-forward neural networks can be adapted to recurrent neural networks. In an experiment, we show that adversaries can craft adversarial sequences that mislead both categorical and sequential recurrent neural networks.

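Concretely, the adaptation rests on unfolding the recurrent computation in time: once unfolded, the network is differentiable end to end, so the gradient of the loss with respect to the input sequence can be computed just as for a feed-forward model, and an existing crafting algorithm such as the fast gradient sign method of Goodfellow et al. carries over. The sketch below is an illustration under stated assumptions, not the authors' implementation (their experiments used Theano; the model architecture, dimensions, and epsilon here are invented for the example).

```python
# Hedged sketch only: a fast-gradient-sign perturbation applied through an
# unfolded recurrent classifier, restated in PyTorch. Every name, dimension,
# and hyperparameter below is an illustrative assumption.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Toy recurrent sentiment classifier (architecture is an assumption)."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 2)  # positive / negative

    def forward_from_embeddings(self, emb):
        # The LSTM unfolds over the sequence; backpropagation flows through
        # every time step back to the input embeddings.
        _, (h, _) = self.lstm(emb)
        return self.fc(h[-1])

def fgsm_sequence(model, token_ids, label, epsilon=0.1):
    """Perturb the embedded input sequence in the direction that increases
    the classification loss (fast gradient sign method)."""
    emb = model.embed(token_ids).detach().requires_grad_(True)
    logits = model.forward_from_embeddings(emb)
    loss = nn.functional.cross_entropy(logits, label)
    loss.backward()
    # Adversarial perturbation: epsilon times the sign of the input gradient.
    return (emb + epsilon * emb.grad.sign()).detach()

model = LSTMClassifier()
tokens = torch.randint(0, 10000, (1, 20))  # one 20-token "review" (random stand-in)
adv_emb = fgsm_sequence(model, tokens, torch.tensor([1]))
```

Because words are discrete, the perturbed embeddings above are not yet a valid text input; the paper closes that gap by substituting, at each perturbed position, a real vocabulary word whose embedding lies in the direction of the computed perturbation. For sequential (sequence-to-sequence) models, the same unfolding argument lets an adversary evaluate the model's Jacobian and perturb inputs to alter individual output steps.
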
Information

Published In

MILCOM 2016 - 2016 IEEE Military Communications Conference
1304 pages

Publisher

IEEE Press

Qualifiers

  • Research-article


Cited By

  • (2024) Token-modification adversarial attacks for natural language processing. AI Communications 37(4), 655–676. DOI: 10.3233/AIC-230279. Online publication date: 1-Jan-2024
  • (2024) Adversarial Attack and Robustness Improvement on Code Summarization. Proceedings of the 28th International Conference on Evaluation and Assessment in Software Engineering, 17–27. DOI: 10.1145/3661167.3661173. Online publication date: 18-Jun-2024
  • (2024) Effective and Imperceptible Adversarial Textual Attack Via Multi-objectivization. ACM Transactions on Evolutionary Learning and Optimization 4(3), 1–23. DOI: 10.1145/3651166. Online publication date: 2-Mar-2024
  • (2024) Generating Adversarial Texts by the Universal Tail Word Addition Attack. Web and Big Data, 310–326. DOI: 10.1007/978-981-97-7232-2_21. Online publication date: 31-Aug-2024
  • (2023) Annealing genetic-based preposition substitution for text rubbish example generation. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, 5122–5130. DOI: 10.24963/ijcai.2023/569. Online publication date: 19-Aug-2023
  • (2023) Augmenting Feature Representation with Gradient Penalty for Robust Text Categorization. International Journal of Intelligent Systems 2023. DOI: 10.1155/2023/7386888. Online publication date: 1-Jan-2023
  • (2023) Adversary for Social Good: Leveraging Adversarial Attacks to Protect Personal Attribute Privacy. ACM Transactions on Knowledge Discovery from Data 18(2), 1–24. DOI: 10.1145/3614098. Online publication date: 7-Aug-2023
  • (2023) Deep learning models for cloud, edge, fog, and IoT computing paradigms. Computer Science Review 49(C). DOI: 10.1016/j.cosrev.2023.100568. Online publication date: 1-Aug-2023
  • (2023) Reading is not believing. Computers and Security 125(C). DOI: 10.1016/j.cose.2022.103052. Online publication date: 1-Feb-2023
  • (2022) Text Adversarial Attacks and Defenses. Security and Communication Networks 2022. DOI: 10.1155/2022/6458488. Online publication date: 1-Jan-2022
