Improving the Security of Audio CAPTCHAs With Adversarial Examples
Abstract
Index Terms
Published In
- Publisher: IEEE Computer Society Press, Washington, DC, United States
- Article type: Research-article