DOI: 10.1145/3548606.3559335
Research Article
Public Access

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation

Published: 07 November 2022

Abstract

Targeted training-set attacks inject malicious instances into the training set to cause a trained model to mislabel one or more specific test instances. This work proposes the task of target identification, which determines whether a specific test instance is the target of a training-set attack. Target identification can be combined with adversarial-instance identification to find (and remove) the attack instances, mitigating the attack with minimal impact on other predictions. Rather than focusing on a single attack method or data modality, we build on influence estimation, which quantifies each training instance's contribution to a model's prediction. We show that existing influence estimators' poor practical performance often derives from their over-reliance on training instances and iterations with large losses. Our renormalized influence estimators fix this weakness; they far outperform the original estimators at identifying influential groups of training examples in both adversarial and non-adversarial settings, even finding up to 100% of adversarial training instances with no clean-data false positives. Target identification then simplifies to detecting test instances with anomalous influence values. We demonstrate our method's effectiveness on backdoor and poisoning attacks across various data domains, including text, vision, and speech, as well as against a gray-box, adaptive attacker that specifically optimizes the adversarial instances to evade our method. Our source code is available at https://github.com/ZaydH/target_identification.
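The abstract's final step, detecting test instances with anomalous influence values, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration only: the top-k aggregation of influence values, the MAD-based anomaly score, and the threshold are assumptions for exposition, not the paper's exact procedure.

```python
import numpy as np

def robust_anomaly_scores(values, eps=1e-12):
    """Distance of each value from the median, in units of the median
    absolute deviation (MAD). Large scores mark likely anomalies.
    (Illustrative choice of robust statistic; the paper's exact scoring
    rule may differ.)"""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    return np.abs(values - med) / (1.4826 * mad + eps)  # 1.4826: Gaussian consistency factor

def flag_suspected_targets(influence_matrix, top_k=50, threshold=3.0):
    """influence_matrix[i, j]: estimated influence of training instance j on
    test instance i. A test instance whose largest top_k training influences
    are anomalously large relative to the other test instances is flagged
    as a suspected attack target."""
    top_influence = np.sort(influence_matrix, axis=1)[:, -top_k:].sum(axis=1)
    scores = robust_anomaly_scores(top_influence)
    return np.where(scores > threshold)[0], scores
```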

Supplementary Material

MP4 File (CCS22-fp0009.mp4)
We introduce a Framework for Identifying Targets (FIT) of training-set attacks, which uses influence estimation to find both targets and the attacks themselves. FIT can achieve near-perfect target identification and is highly effective against even an adaptive attacker that attempts to specifically evade our method. The key to our success is a new influence estimation method, Gradient Aggregated Similarity (GAS), which builds on TracInCP but introduces an essential renormalization fix. This renormalization fix is also useful for other influence estimation tasks, and with other influence estimation methods, such as influence functions and representer points. Extended Version of the Paper: https://arxiv.org/abs/2201.10055
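To make the renormalization idea concrete, here is a minimal sketch contrasting a TracInCP-style estimate (a learning-rate-weighted sum of raw gradient dot products over training checkpoints) with a cosine-renormalized variant in the spirit of GAS, assuming the per-checkpoint gradients have already been computed. The exact normalization and learning-rate weighting used in the paper may differ; treat this as an illustration of why large-gradient (high-loss) instances stop dominating once the similarity is renormalized.

```python
import numpy as np

def tracincp_influence(train_grads, test_grads, lrs):
    """TracInCP-style estimate: learning-rate-weighted sum over checkpoints
    of the raw dot product between the training and test gradients."""
    return sum(lr * float(g_tr @ g_te)
               for lr, g_tr, g_te in zip(lrs, train_grads, test_grads))

def gas_style_influence(train_grads, test_grads, lrs, eps=1e-12):
    """Renormalized variant in the spirit of GAS: replace the raw dot product
    with cosine similarity so checkpoints and instances with unusually large
    gradient norms (typically high-loss ones) no longer dominate the sum.
    The paper's exact formulation may differ; this is an illustrative sketch."""
    total = 0.0
    for lr, g_tr, g_te in zip(lrs, train_grads, test_grads):
        denom = np.linalg.norm(g_tr) * np.linalg.norm(g_te) + eps
        total += lr * float(g_tr @ g_te) / denom
    return total
```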

Published In

CCS '22: Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
November 2022
3598 pages
ISBN:9781450394505
DOI:10.1145/3548606

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. backdoor attack
  2. data poisoning
  3. GAS
  4. influence estimation
  5. influence functions
  6. representer point
  7. target identification
  8. TracIn

Qualifiers

  • Research-article

Conference

CCS '22

Acceptance Rates

Overall Acceptance Rate 1,261 of 6,999 submissions, 18%

Article Metrics

  • Downloads (Last 12 months)292
  • Downloads (Last 6 weeks)27
Reflects downloads up to 22 Nov 2024

Cited By

  • (2024) Need for Speed: Taming Backdoor Attacks with Speed and Precision. 2024 IEEE Symposium on Security and Privacy (SP), 1217-1235. https://doi.org/10.1109/SP54263.2024.00216. Online publication date: 19-May-2024.
  • (2024) Proxima: A Proxy Model-Based Approach to Influence Analysis. 2024 IEEE International Conference on Artificial Intelligence Testing (AITest), 64-72. https://doi.org/10.1109/AITest62860.2024.00016. Online publication date: 15-Jul-2024.
  • (2024) Training data influence analysis and estimation: a survey. Machine Learning 113:5, 2351-2403. https://doi.org/10.1007/s10994-023-06495-7. Online publication date: 29-Mar-2024.
  • (2023) Reducing Certified Regression to Certified Classification for General Poisoning Attacks. 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), 484-523. https://doi.org/10.1109/SaTML54575.2023.00040. Online publication date: Feb-2023.
