
DOI: 10.1145/3461353.3461375

Research article

Adversarial Examples Generation And Attack On SAR Image Classification

Published: 04 September 2021

Abstract

It has been demonstrated that deep learning models are vulnerable to adversarial examples, and most existing algorithms generate adversarial examples to attack image classification or recognition models trained on visible-light datasets such as ImageNet, PASCAL VOC, and COCO. To broaden the range of imagery covered by adversarial examples and to determine whether the same vulnerability exists for SAR images, this paper applies the MI-FGSM and AdvGAN algorithms to generate adversarial examples and uses them to attack SAR image classification models. The experimental results show that adversarial examples are also clearly effective against SAR images, and that the attack success rate depends on the architecture of the deep network used to train the target model and on the image category.
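
As a rough sketch of the first of the two attacks named above (not the authors' implementation), the following PyTorch code illustrates MI-FGSM. The classifier `model`, the input batch `x` scaled to [0, 1], the labels `y`, and the hyper-parameter defaults are all illustrative assumptions rather than details taken from the paper:

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Momentum Iterative FGSM (Dong et al., 2018): search within an
    L-infinity ball of radius eps for a perturbation that maximizes the
    classification loss of `model` on (x, y). Values here are illustrative."""
    alpha = eps / steps                  # per-iteration step size
    g = torch.zeros_like(x)              # accumulated (momentum) gradient
    x_adv = x.clone().detach()

    model.eval()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)

        # Normalize the gradient by its mean absolute value (L1-style
        # normalization) and accumulate it into the momentum term.
        g = mu * g + grad / (grad.abs().mean(dim=(1, 2, 3), keepdim=True) + 1e-12)

        # Take a sign step, project back into the eps-ball around x,
        # and clip to the assumed [0, 1] pixel range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

A call such as `x_adv = mi_fgsm(classifier, images, labels)` would then be evaluated on the target classifier to measure the attack success rate. AdvGAN differs in that it trains a generator network to produce the perturbation in a single forward pass rather than iterating on gradients.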

References

[1]
Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.
[2]
Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[3]
Moosavi-Dezfooli S M, Fawzi A, Frossard P. DeepFool: a simple and accurate method to fool deep neural networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2574-2582.
[4]
Papernot N, McDaniel P, Jha S, et al. The limitations of deep learning in adversarial settings[C]//2016 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2016: 372-387.
[5]
Carlini N, Wagner D. Towards evaluating the robustness of neural networks[C]//2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017: 39-57.
[6]
Song D, Eykholt K, Evtimov I, et al. Physical adversarial examples for object detectors[C]//12th USENIX Workshop on Offensive Technologies (WOOT 18). 2018.
[7]
Lee M, Kolter Z. On physical adversarial patches for object detection[J]. arXiv preprint arXiv:1906.11897, 2019.
[8]
Chen S T, Cornelius C, Martin J, et al. ShapeShifter: robust physical adversarial attack on Faster R-CNN object detector[C]//Joint European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, Cham, 2018: 52-68.
[9]
Chen P Y, Zhang H, Sharma Y, et al. ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models[C]//Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 2017: 15-26.
[10]
Papernot N, McDaniel P, Goodfellow I, et al. Practical black-box attacks against machine learning[C]//Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 2017: 506-519.
[11]
Brendel W, Rauber J, Bethge M. Decision-based adversarial attacks: reliable attacks against black-box machine learning models[J]. arXiv preprint arXiv:1712.04248, 2017.
[12]
Dong Y, Liao F, Pang T, et al. Boosting adversarial attacks with momentum[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 9185-9193.
[13]
Xiao C W, Li B, Zhu J Y, et al. Generating adversarial examples with adversarial networks[J]. arXiv preprint arXiv:1801.02610, 2018.
[14]
Wiyatno R R, Xu A, Dia O, et al. Adversarial examples in modern machine learning: a review[J]. arXiv preprint arXiv:1911.05268, 2019.
[15]
Krizhevsky A, Sutskever I, Hinton G. ImageNet classification with deep convolutional neural networks[J]. Advances in Neural Information Processing Systems, 2012, 25(2): 1097-1105.
[16]
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[17]
Kurakin A, Goodfellow I, Bengio S. Adversarial machine learning at scale[J]. arXiv preprint arXiv:1611.01236, 2016.
[18]
Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 1-9.
[19]
Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the Inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2818-2826.

Cited By

  • (2023) SAR Image Ship Target Detection Adversarial Attack and Defence Generalization Research. Sensors, 23(4): 2266. https://doi.org/10.3390/s23042266. Online publication date: 17-Feb-2023.


    Published In

    ICIAI '21: Proceedings of the 2021 5th International Conference on Innovation in Artificial Intelligence
    March 2021
    246 pages
    ISBN: 9781450388634
    DOI: 10.1145/3461353
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Adversarial attack
    2. Adversarial example
    3. Deep learning network
    4. SAR image

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    ICIAI 2021


    Bibliometrics & Citations

    Bibliometrics

    Article Metrics

    • Downloads (last 12 months): 25
    • Downloads (last 6 weeks): 3
    Reflects downloads up to 12 Nov 2024
