
Jul 3, 2024 · Abstract. Adversarial examples have shown a powerful ability to make a well-trained model misclassify its inputs. Current mainstream adversarial ...
Jul 3, 2024 · In this paper, we propose a novel L_p-norm distortion-efficient adversarial attack, which not only owns the least L_2-norm loss but also significantly reduces ...
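For reference, the distortion measures these snippets compare (L0, L2, L∞ norms of the perturbation) are standard and can be computed as below. This is a minimal illustrative sketch, not code from the paper; the function name and dict layout are my own.

```python
import numpy as np

def distortion_norms(x, x_adv):
    """Standard Lp distortion measures between a clean input and its
    adversarial counterpart (illustration only, not the paper's code)."""
    delta = np.asarray(x_adv, dtype=float) - np.asarray(x, dtype=float)
    return {
        "L0": int(np.count_nonzero(delta)),    # number of changed entries
        "L2": float(np.linalg.norm(delta)),    # Euclidean size of the change
        "Linf": float(np.max(np.abs(delta))),  # largest single change
    }
```

For example, perturbing two of four pixels by 0.3 and -0.4 gives L0 = 2, L2 = 0.5, and L∞ = 0.4; an attack that is "distortion-efficient" in the paper's sense keeps such measures small while still flipping the model's prediction.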
Jul 3, 2024 · This paper proposes a new adversarial attack method called "L_p-norm Distortion-Efficient Adversarial Attack" that can generate adversarial ...
Aug 7, 2024 · Lp-norm Distortion-Efficient Adversarial Attack. CoRR abs/2407.03115 (2024).
In this paper, we introduce a novel adversarial attack algorithm, NA-FGTM. Our method employs the Tanh activation function instead of the sign function, which can ...
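The idea in that snippet — replacing the sign of the gradient with a tanh of the gradient — can be sketched against a classic FGSM step. This is a hedged illustration under my own assumptions (the `scale` parameter and function names are mine, not from NA-FGTM): tanh(scale·g) approximates sign(g) for large gradients but stays smooth and magnitude-aware near zero.

```python
import numpy as np

def fgsm_step(x, grad, eps):
    # Classic FGSM: move a fixed step eps along the sign of the loss gradient.
    return x + eps * np.sign(grad)

def tanh_step(x, grad, eps, scale=10.0):
    # Tanh-smoothed variant: tanh(scale * g) saturates to ±1 for large |g|
    # (mimicking sign) but is differentiable and shrinks toward 0 for tiny
    # gradients, so near-zero components perturb less.
    return x + eps * np.tanh(scale * grad)
```

With `grad = [0.5, -2.0, 0.0]` and `eps = 0.1`, the sign step perturbs all nonzero components by exactly ±0.1, while the tanh step keeps the zero-gradient component untouched and slightly attenuates the small one.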
In this paper, we aim at a non-AT defense: how can we design a defense method that gets rid of adversarial training (AT) but remains robust against strong adversarial attacks?
$L_p$-norm Distortion-Efficient Adversarial Attack. On J-GLOBAL, this item will become available more than half a year after the record was posted.
Abstract. Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks.
The robustness of neural-network-based classifiers against adversarial manipulation is mainly evaluated with empirical attack methods.