Cited By
- Fan W, Zhao S, Tang J (2025). Introduction for the Special Issue on Trustworthy Artificial Intelligence. ACM Transactions on Knowledge Discovery from Data, 19(2), 1-6. https://doi.org/10.1145/3712184. Online publication date: 15-Jan-2025.