DOI: 10.1007/978-3-642-40994-3_25

Evasion attacks against machine learning at test time

Published: 23 September 2013

Abstract

In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier's performance under evasion attacks and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
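The gradient-based evasion described in the abstract can be pictured as iterative descent of the classifier's discriminant function, subject to a bound on how much the attack sample may be modified. The Python sketch below illustrates this idea only; it is not the authors' implementation. The discriminant g, its gradient, the L2 manipulation budget d_max, the step size, and the toy linear classifier at the end are all illustrative assumptions.

    # Minimal sketch of a gradient-based evasion attack (illustrative, not the paper's code).
    import numpy as np

    def evade(x0, g, grad_g, d_max, step=0.1, n_iter=100):
        """Descend the discriminant g(x) starting from a malicious sample x0,
        keeping the manipulated sample within an L2 ball of radius d_max
        around x0 (a stand-in for the attacker's manipulation constraints)."""
        x = x0.copy()
        for _ in range(n_iter):
            x = x - step * grad_g(x)          # move towards the benign region
            delta = x - x0
            norm = np.linalg.norm(delta)
            if norm > d_max:                  # project back onto the feasible set
                x = x0 + delta * (d_max / norm)
            if g(x) < 0:                      # classified as benign: evasion succeeded
                break
        return x

    # Toy usage with a linear discriminant g(x) = w.x + b (purely hypothetical values).
    w, b = np.array([1.0, 2.0]), -0.5
    g = lambda x: float(w @ x + b)
    grad_g = lambda x: w
    x_adv = evade(np.array([2.0, 1.0]), g, grad_g, d_max=3.0)
    print(g(x_adv) < 0)

In the paper's setting the discriminant would come from the targeted (or surrogate) classifier, e.g. an SVM or neural network, and the feasible set would reflect the attacker's actual manipulation constraints rather than a simple L2 ball.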




      Published In

      ECMLPKDD'13: Proceedings of the 2013 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part III
      September 2013
      685 pages
      ISBN:9783642409936
      • Editors: Hendrik Blockeel, Kristian Kersting, Siegfried Nijssen, Filip Železný

      Sponsors

      • XRCE: Xerox Research Centre Europe
      • Winton Capital Management
      • Cisco Systems
      • Yahoo! Labs
      • CSKI: Czech Society for Cybernetics and Informatics

      Publisher

      Springer-Verlag

      Berlin, Heidelberg

      Publication History

      Published: 23 September 2013

      Author Tags

      1. adversarial machine learning
      2. evasion attacks
      3. neural networks
      4. support vector machines

      Qualifiers

      • Article


      Article Metrics

      • Downloads (Last 12 months): 0
      • Downloads (Last 6 weeks): 0
      Reflects downloads up to 30 Sep 2024


      Cited By

      • (2024) Facial Soft-biometrics Obfuscation through Adversarial Attacks. ACM Transactions on Multimedia Computing, Communications, and Applications 20(11), 1-21. DOI: 10.1145/3656474. Online publication date: 12-Sep-2024
      • (2024) A Survey on Trustworthy Recommender Systems. ACM Transactions on Recommender Systems. DOI: 10.1145/3652891. Online publication date: 13-Apr-2024
      • (2024) Adversarial Transferability in Embedded Sensor Systems: An Activity Recognition Perspective. ACM Transactions on Embedded Computing Systems. DOI: 10.1145/3641861. Online publication date: 22-Jan-2024
      • (2024) The Path to Defence: A Roadmap to Characterising Data Poisoning Attacks on Victim Models. ACM Computing Surveys 56(7), 1-39. DOI: 10.1145/3627536. Online publication date: 9-Apr-2024
      • (2024) Byzantine Machine Learning: A Primer. ACM Computing Surveys 56(7), 1-39. DOI: 10.1145/3616537. Online publication date: 9-Apr-2024
      • (2024) Secure and Trustworthy Artificial Intelligence-extended Reality (AI-XR) for Metaverses. ACM Computing Surveys 56(7), 1-38. DOI: 10.1145/3614426. Online publication date: 9-Apr-2024
      • (2024) Efficient verification of neural networks based on neuron branching and LP abstraction. Neurocomputing 596(C). DOI: 10.1016/j.neucom.2024.127936. Online publication date: 1-Sep-2024
      • (2024) LP-BFGS attack. Computers and Security 140(C). DOI: 10.1016/j.cose.2024.103746. Online publication date: 1-May-2024
      • (2024) Exploring adversarial examples and adversarial robustness of convolutional neural networks by mutual information. Neural Computing and Applications 36(23), 14379-14394. DOI: 10.1007/s00521-024-09774-z. Online publication date: 1-Aug-2024
      • (2024) Adversarial robustness improvement for deep neural networks. Machine Vision and Applications 35(3). DOI: 10.1007/s00138-024-01519-1. Online publication date: 14-Mar-2024
