
Semantically Equivalent Adversarial Rules for Debugging NLP models

Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin


Abstract
Complex machine learning models for NLP are often brittle, making different predictions for input instances that are extremely similar semantically. To automatically detect this behavior for individual instances, we present semantically equivalent adversaries (SEAs) – semantic-preserving perturbations that induce changes in the model’s predictions. We generalize these adversaries into semantically equivalent adversarial rules (SEARs) – simple, universal replacement rules that induce adversaries on many instances. We demonstrate the usefulness and flexibility of SEAs and SEARs by detecting bugs in black-box state-of-the-art models for three domains: machine comprehension, visual question-answering, and sentiment analysis. Via user studies, we demonstrate that we generate high-quality local adversaries for more instances than humans, and that SEARs induce four times as many mistakes as the bugs discovered by human experts. SEARs are also actionable: retraining models using data augmentation significantly reduces bugs, while maintaining accuracy.
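To make the idea of a "semantically equivalent adversarial rule" concrete, here is a minimal sketch of how such a rule could be applied and checked. The rule, the regex form, and the toy model are hypothetical illustrations; the paper itself derives rules automatically from paraphrasing models rather than writing them by hand.

```python
# Illustrative sketch of applying a SEAR-style replacement rule.
# The rule and model below are hypothetical, for demonstration only.
import re

def apply_rule(text, pattern, replacement):
    """Apply a simple textual replacement rule; return None if it doesn't fire."""
    perturbed = re.sub(pattern, replacement, text)
    return perturbed if perturbed != text else None

def is_adversary(model, text, pattern, replacement):
    """A rule induces an adversary on `text` when the (assumed
    semantics-preserving) rewrite changes the model's prediction."""
    perturbed = apply_rule(text, pattern, replacement)
    return perturbed is not None and model(perturbed) != model(text)

# Toy "model" that is brittle to the question word, mimicking the kind of
# bug SEARs surface (e.g. rewriting "What" as "Which").
toy_model = lambda q: "positive" if q.startswith("What") else "negative"

print(is_adversary(toy_model, "What is the color?", r"^What", "Which"))  # True
```

In the paper's setting, a rule counts as a SEAR only if the rewrite is judged semantically equivalent and it flips predictions across many instances, not just one.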
Anthology ID:
P18-1079
Volume:
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2018
Address:
Melbourne, Australia
Editors:
Iryna Gurevych, Yusuke Miyao
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
856–865
URL:
https://aclanthology.org/P18-1079
DOI:
10.18653/v1/P18-1079
Cite (ACL):
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically Equivalent Adversarial Rules for Debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856–865, Melbourne, Australia. Association for Computational Linguistics.
Cite (Informal):
Semantically Equivalent Adversarial Rules for Debugging NLP models (Ribeiro et al., ACL 2018)
PDF:
https://aclanthology.org/P18-1079.pdf
Note:
 P18-1079.Notes.pdf
Presentation:
 P18-1079.Presentation.pdf
Video:
 https://aclanthology.org/P18-1079.mp4
Code
 marcotcr/sears