DOI: 10.1145/3319535.3363201

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

Published: 06 November 2019

Abstract

In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier's training dataset or not. Specifically, given black-box access to the target classifier, the attacker trains a binary classifier, which takes a data sample's confidence score vector predicted by the target classifier as input and predicts the data sample to be a member or non-member of the target classifier's training dataset. Membership inference attacks pose severe privacy and security threats to the training dataset. Most existing defenses leverage differential privacy when training the target classifier or regularize the training process of the target classifier. These defenses suffer from two key limitations: 1) they do not have formal utility-loss guarantees on the confidence score vectors, and 2) they achieve suboptimal privacy-utility tradeoffs. In this work, we propose MemGuard, the first defense with formal utility-loss guarantees against black-box membership inference attacks. Instead of tampering with the training process of the target classifier, MemGuard adds noise to each confidence score vector predicted by the target classifier. Our key observation is that the attacker uses a classifier to predict member or non-member, and such a classifier is vulnerable to adversarial examples. Based on this observation, we propose to add a carefully crafted noise vector to a confidence score vector to turn it into an adversarial example that misleads the attacker's classifier. Specifically, MemGuard works in two phases. In Phase I, MemGuard finds a carefully crafted noise vector that can turn a confidence score vector into an adversarial example, which is likely to mislead the attacker's classifier into making a random guess at member or non-member. We find such a noise vector via a new method that we design to incorporate the unique utility-loss constraints on the noise vector. In Phase II, MemGuard adds the noise vector to the confidence score vector with a certain probability, which is selected to satisfy a given utility-loss budget on the confidence score vector. Our experimental results on three datasets show that MemGuard can effectively defend against membership inference attacks and achieve better privacy-utility tradeoffs than existing defenses. Our work is the first to show that adversarial examples can be used as a defensive mechanism against membership inference attacks.
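
To make the two-phase mechanism concrete, the following is a minimal sketch in Python, not the authors' implementation: it assumes a hypothetical differentiable surrogate attack_model for the attacker's membership classifier (mapping a confidence score vector to a membership probability), reduces the utility-loss constraints to "the output stays a probability distribution and the predicted label must not change", and uses a plain gradient search rather than the specialized method designed in the paper.

import numpy as np
import torch

def phase_one(scores, attack_model, steps=300, lr=0.1):
    # Phase I (sketch): search for a noise vector that keeps the predicted label
    # and keeps the output a valid probability distribution, while pushing the
    # surrogate membership classifier's output toward 0.5, i.e., a random guess.
    scores = torch.as_tensor(scores, dtype=torch.float32)
    label = int(scores.argmax())
    logits = torch.log(scores + 1e-12).clone().requires_grad_(True)  # optimize in logit space
    optimizer = torch.optim.Adam([logits], lr=lr)
    best_noise = torch.zeros_like(scores)
    for _ in range(steps):
        optimizer.zero_grad()
        noisy = torch.softmax(logits, dim=0)          # always sums to 1, entries in (0, 1)
        if int(noisy.argmax()) == label:              # utility constraint: label unchanged
            best_noise = (noisy - scores).detach()    # remember the latest feasible noise
        member_prob = attack_model(noisy.unsqueeze(0)).squeeze()
        loss = (member_prob - 0.5) ** 2               # mislead the attacker toward random guessing
        loss.backward()
        optimizer.step()
    return best_noise.numpy()

def phase_two(scores, noise, add_prob):
    # Phase II (sketch): release the crafted noise only with a probability chosen
    # so that the expected distortion stays within the utility-loss budget.
    if np.random.rand() < add_prob:
        return np.asarray(scores) + noise
    return np.asarray(scores)

In such a sketch, phase_one would be run once per query to craft the noise, and phase_two would decide whether the noise is actually added, with add_prob derived from the given utility-loss budget.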

Supplementary Material

WEBM File (p259-jia.webm)

      Published In

      CCS '19: Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security
      November 2019
      2755 pages
      ISBN:9781450367479
      DOI:10.1145/3319535
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 06 November 2019


      Author Tags

      1. adversarial examples
      2. membership inference attacks
      3. privacy-preserving machine learning

      Qualifiers

      • Research-article

      Conference

      CCS '19

      Acceptance Rates

      CCS '19 Paper Acceptance Rate 149 of 934 submissions, 16%;
      Overall Acceptance Rate 1,261 of 6,999 submissions, 18%

      Article Metrics

      • Downloads (Last 12 months): 1,093
      • Downloads (Last 6 weeks): 78
      Reflects downloads up to 18 Sep 2024

      Cited By

      • (2024) Adversarial Attacks in Machine Learning: Key Insights and Defense Approaches. Applied Data Science and Analysis, 2024, 121-147. DOI: 10.58496/ADSA/2024/011. Online publication date: 7-Aug-2024
      • (2024) Lightweight Privacy Protection via Adversarial Sample. Electronics, 13:7, 1230. DOI: 10.3390/electronics13071230. Online publication date: 26-Mar-2024
      • (2024) Targeted Training Data Extraction—Neighborhood Comparison-Based Membership Inference Attacks in Large Language Models. Applied Sciences, 14:16, 7118. DOI: 10.3390/app14167118. Online publication date: 14-Aug-2024
      • (2024) Machine Learning with Confidential Computing: A Systematization of Knowledge. ACM Computing Surveys, 56:11, 1-40. DOI: 10.1145/3670007. Online publication date: 3-Jun-2024
      • (2024) A Survey on Privacy of Personal and Non-Personal Data in B5G/6G Networks. ACM Computing Surveys, 56:10, 1-37. DOI: 10.1145/3662179. Online publication date: 24-Jun-2024
      • (2024) PASTEL. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7:4, 1-29. DOI: 10.1145/3633808. Online publication date: 12-Jan-2024
      • (2024) Membership Inference Attack Using Self Influence Functions. 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 4880-4889. DOI: 10.1109/WACV57701.2024.00482. Online publication date: 3-Jan-2024
      • (2024) Safety and Performance, Why Not Both? Bi-Objective Optimized Model Compression Against Heterogeneous Attacks Toward AI Software Deployment. IEEE Transactions on Software Engineering, 50:3, 376-390. DOI: 10.1109/TSE.2023.3348515. Online publication date: Mar-2024
      • (2024) Protecting Inference Privacy With Accuracy Improvement in Mobile-Cloud Deep Learning. IEEE Transactions on Mobile Computing, 23:6, 6522-6537. DOI: 10.1109/TMC.2023.3323450. Online publication date: Jun-2024
      • (2024) Rethinking Membership Inference Attacks Against Transfer Learning. IEEE Transactions on Information Forensics and Security, 19, 6441-6454. DOI: 10.1109/TIFS.2024.3413592. Online publication date: 2024