
DOI: 10.1145/3534678.3542608

Towards Adversarial Learning: From Evasion Attacks to Poisoning Attacks

Published: 14 August 2022

Abstract

Although deep neural networks (DNNs) have been successfully deployed in various real-world application scenarios, recent studies have demonstrated that DNNs are extremely vulnerable to adversarial attacks. By introducing visually imperceptible perturbations into benign inputs, an attacker can manipulate a DNN model into producing wrong predictions. For practitioners applying DNNs to real-world problems, understanding the characteristics of different kinds of attacks will not only help them improve the robustness of their models but also give them deeper insight into the working mechanisms of DNNs. In this tutorial, we provide a comprehensive overview of recent advances in adversarial learning, covering both attack methods and defense methods. Specifically, we first give a detailed introduction to various types of evasion attacks, followed by a series of representative defense methods against them. We then discuss different poisoning attack methods, followed by several defense methods against poisoning attacks. Besides attack methods that work in the digital setting, we also introduce attack methods designed to threaten physical-world systems. Finally, we present DeepRobust, a PyTorch adversarial learning library that aims to provide a comprehensive and easy-to-use platform to foster research in this field. Through this tutorial, the audience can grasp the main ideas of adversarial attacks and defenses and gain a deep insight into the robustness of DNNs. The tutorial's official website is available at https://sites.google.com/view/kdd22-tutorial-adv-learn/.
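To make the evasion-attack setting concrete, the sketch below implements the Fast Gradient Sign Method (FGSM) of Goodfellow et al. [3] in plain PyTorch: a single gradient step perturbs a benign input within an L-infinity budget so that the model's prediction changes while the image remains visually indistinguishable. This is a minimal illustrative sketch, not the tutorial's own material or the DeepRobust implementation; the function name, the epsilon value, and the assumption that inputs are images scaled to [0, 1] are ours.

import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                x: torch.Tensor,
                y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft adversarial examples with the Fast Gradient Sign Method (FGSM).

    One gradient step moves x in the direction that increases the
    classification loss, bounded by an L-infinity budget of epsilon.
    Assumes x holds images scaled to [0, 1] and y holds class labels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign direction of the input gradient, then clamp back
    # to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

Evaluating the classifier on the returned examples in place of the benign inputs typically shows a sharp drop in accuracy, even though the perturbation is imperceptible to a human observer.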

References

[1] Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 39--57.
[2] Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. 2017. ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 15--26.
[3] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).
[4] Junfeng Guo and Cong Liu. 2020. Practical poisoning attacks on neural networks. In European Conference on Computer Vision. Springer, 142--158.
[5] Yaxin Li, Wei Jin, Han Xu, and Jiliang Tang. 2020. DeepRobust: A PyTorch library for adversarial attacks and defenses. arXiv preprint arXiv:2005.06149 (2020).
[6] Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. 2018. Poison frogs! Targeted clean-label poisoning attacks on neural networks. Advances in Neural Information Processing Systems 31 (2018).

Cited By

  • (2023) Adversarial Attacks on Large Language Model-Based System and Mitigating Strategies. Security and Communication Networks. DOI: 10.1155/2023/8691095. Online publication date: 1-Jan-2023.



    Published In

    KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
    August 2022
    5033 pages
    ISBN:9781450393850
    DOI:10.1145/3534678
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. adversarial learning
    2. deep neural networks
    3. robustness

    Qualifiers

    • Abstract

    Funding Sources

    • CNS
    • ARO
    • IIS

    Conference

    KDD '22

    Acceptance Rates

    Overall Acceptance Rate: 1,133 of 8,635 submissions, 13%



