Jan 11, 2022 · A robustness metric with a rigorous statistical guarantee is introduced to quantify adversarial examples, which indicates the model's ...
To evaluate models' robustness to these transformations, we measure accuracy on adversarially chosen word substitutions applied to test examples. Our IBP- ...
Sep 3, 2019 · This paper considers one exponentially large family of label-preserving transformations, in which every word in the input can be replaced with a similar word.
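To see why this family is exponentially large: if each word position independently keeps its original word or takes one of its allowed substitutes, the number of candidate sentences is the product of the per-position choices. A minimal sketch (the substitute counts below are made up for illustration):

```python
from math import prod

# Hypothetical substitute counts for a 5-word input: position i
# has subs_per_word[i] allowed replacement words (0 = no substitutes).
subs_per_word = [3, 0, 5, 2, 4]

# Each position either keeps its original word or picks one of its
# k_i substitutes, so the perturbation space has prod(k_i + 1) members.
n_candidates = prod(k + 1 for k in subs_per_word)
print(n_candidates)  # 4 * 1 * 6 * 3 * 5 = 360
```

Even with only a handful of substitutes per word, this product grows exponentially in sentence length, which is why exhaustive enumeration is infeasible and certified methods such as IBP are used instead.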
In this paper, we propose WordDP to achieve certified robustness against word substitution attacks in text classification via differential privacy (DP).
This paper trains the first models that are provably robust to all word substitutions in this exponentially large family of label-preserving transformations ...
State-of-the-art NLP models can often be fooled by adversaries that apply seemingly innocuous label-preserving transformations.
Robustness against word substitutions has a well-defined and widely accepted form, i.e., using semantically similar words as substitutions, and thus it is ...
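The attack these snippets describe can be illustrated with a minimal, self-contained sketch: a greedy word-substitution attack against a toy linear bag-of-words classifier. The synonym table and weights below are invented for illustration (real attacks draw substitutes from embedding neighborhoods or a thesaurus, and attack real models), so this is a sketch of the technique, not any paper's actual method.

```python
# Toy synonym table (assumption: label-preserving substitutes;
# real attacks use embedding neighbors or WordNet synonyms).
SYNONYMS = {
    "great": ["nice", "solid"],
    "terrible": ["awful", "poor"],
}

# Toy linear classifier: sum of per-word sentiment weights.
WEIGHTS = {"great": 2.0, "nice": 0.3, "solid": 0.2,
           "terrible": -2.0, "awful": -1.5, "poor": -1.0}

def score(tokens):
    """Positive score = positive sentiment prediction."""
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def greedy_attack(tokens):
    """Greedily replace each word with the allowed substitute
    that most lowers the classifier's score."""
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        best = tok
        for cand in SYNONYMS.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            if score(trial) < score(tokens[:i] + [best] + tokens[i + 1:]):
                best = cand
        tokens[i] = best
    return tokens

original = ["a", "great", "movie"]
adversarial = greedy_attack(original)
print(score(original), score(adversarial), adversarial)
# → 2.0 0.2 ['a', 'solid', 'movie']
```

The certified defenses in the snippets above (IBP training, WordDP) aim to guarantee that no sequence of such substitutions, greedy or otherwise, can flip the prediction, rather than merely resisting one heuristic attacker.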