Matthew Jagielski
2020 – today
2024
- [c32] Matthew Jagielski, Om Thakkar, Lun Wang: Noise Masking Attacks and Defenses for Pretrained Speech Models. ICASSP 2024: 4810-4814
- [c31] Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr: Stealing part of a production language model. ICML 2024
- [c30] Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr: Auditing Private Prediction. ICML 2024
- [c29] Aldo G. Carranza, Rezsa Farahani, Natalia Ponomareva, Alexey Kurakin, Matthew Jagielski, Milad Nasr: Synthetic Query Generation for Privacy-Preserving Deep Retrieval Systems using Differentially Private Language Models. NAACL-HLT 2024: 3920-3930
- [c28] Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr: Poisoning Web-Scale Training Datasets is Practical. SP 2024: 407-425
- [c27] Edoardo Debenedetti, Giorgio Severi, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Eric Wallace, Nicholas Carlini, Florian Tramèr: Privacy Side Channels in Machine Learning Systems. USENIX Security Symposium 2024
- [i42] Karan Chadha, Matthew Jagielski, Nicolas Papernot, Christopher A. Choquette-Choo, Milad Nasr: Auditing Private Prediction. CoRR abs/2402.09403 (2024)
- [i41] Nicholas Carlini, Daniel Paleka, Krishnamurthy (Dj) Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr: Stealing Part of a Production Language Model. CoRR abs/2403.06634 (2024)
- [i40] Matthew Jagielski, Om Thakkar, Lun Wang: Noise Masking Attacks and Defenses for Pretrained Speech Models. CoRR abs/2404.02052 (2024)
- [i39] Harsh Chaudhari, Giorgio Severi, John Abascal, Matthew Jagielski, Christopher A. Choquette-Choo, Milad Nasr, Cristina Nita-Rotaru, Alina Oprea: Phantom: General Trigger Attacks on Retrieval Augmented Language Generation. CoRR abs/2405.20485 (2024)
- [i38] Dariush Wahdany, Matthew Jagielski, Adam Dziedzic, Franziska Boenisch: Beyond the Mean: Differentially Private Prototypes for Private Transfer Learning. CoRR abs/2406.08039 (2024)
- [i37] Ilia Shumailov, Jamie Hayes, Eleni Triantafillou, Guillermo Ortiz-Jiménez, Nicolas Papernot, Matthew Jagielski, Itay Yona, Heidi Howard, Eugene Bagdasaryan: UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI. CoRR abs/2407.00106 (2024)
- [i36] Thomas Steinke, Milad Nasr, Arun Ganesh, Borja Balle, Christopher A. Choquette-Choo, Matthew Jagielski, Jamie Hayes, Abhradeep Guha Thakurta, Adam D. Smith, Andreas Terzis: The Last Iterate Advantage: Empirical Auditing and Principled Heuristic Analysis of Differentially Private SGD. CoRR abs/2410.06186 (2024)
2023
- [j2] Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu: How to Combine Membership-Inference Attacks on Multiple Updated Machine Learning Models. Proc. Priv. Enhancing Technol. 2023(3): 211-232 (2023)
- [c26] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang: Quantifying Memorization Across Neural Language Models. ICLR 2023
- [c25] Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Guha Thakurta, Nicolas Papernot, Chiyuan Zhang: Measuring Forgetting of Memorized Training Examples. ICLR 2023
- [c24] Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini: Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy. INLG 2023: 28-53
- [c23] Thomas Steinke, Milad Nasr, Matthew Jagielski: Privacy Auditing with One (1) Training Run. NeurIPS 2023
- [c22] Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Pang Wei Koh, Daphne Ippolito, Florian Tramèr, Ludwig Schmidt: Are aligned neural networks adversarially aligned? NeurIPS 2023
- [c21] Matthew Jagielski, Milad Nasr, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini, Florian Tramèr: Students Parrot Their Teachers: Membership Inference on Model Distillation. NeurIPS 2023
- [c20] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini: Counterfactual Memorization in Neural Language Models. NeurIPS 2023
- [c19] Harsh Chaudhari, Matthew Jagielski, Alina Oprea: SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning. SaTML 2023: 176-196
- [c18] Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman: SNAP: Efficient Extraction of Private Properties with Poisoning. SP 2023: 400-417
- [c17] Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis: Tight Auditing of Differentially Private Machine Learning. USENIX Security Symposium 2023: 1631-1648
- [c16] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace: Extracting Training Data from Diffusion Models. USENIX Security Symposium 2023: 5253-5270
- [i35] Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace: Extracting Training Data from Diffusion Models. CoRR abs/2301.13188 (2023)
- [i34] Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis: Tight Auditing of Differentially Private Machine Learning. CoRR abs/2302.07956 (2023)
- [i33] Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum S. Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr: Poisoning Web-Scale Training Datasets is Practical. CoRR abs/2302.10149 (2023)
- [i32] Keane Lucas, Matthew Jagielski, Florian Tramèr, Lujo Bauer, Nicholas Carlini: Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators. CoRR abs/2302.13464 (2023)
- [i31] Matthew Jagielski, Milad Nasr, Christopher A. Choquette-Choo, Katherine Lee, Nicholas Carlini: Students Parrot Their Teachers: Membership Inference on Model Distillation. CoRR abs/2303.03446 (2023)
- [i30] Rachel Cummings, Damien Desfontaines, David Evans, Roxana Geambasu, Matthew Jagielski, Yangsibo Huang, Peter Kairouz, Gautam Kamath, Sewoong Oh, Olga Ohrimenko, Nicolas Papernot, Ryan Rogers, Milan Shen, Shuang Song, Weijie J. Su, Andreas Terzis, Abhradeep Thakurta, Sergei Vassilvitskii, Yu-Xiang Wang, Li Xiong, Sergey Yekhanin, Da Yu, Huanyu Zhang, Wanrong Zhang: Challenges towards the Next Frontier in Privacy. CoRR abs/2304.06929 (2023)
- [i29] Aldo Gael Carranza, Rezsa Farahani, Natalia Ponomareva, Alex Kurakin, Matthew Jagielski, Milad Nasr: Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models. CoRR abs/2305.05973 (2023)
- [i28] Thomas Steinke, Milad Nasr, Matthew Jagielski: Privacy Auditing with One (1) Training Run. CoRR abs/2305.08846 (2023)
- [i27] Matthew Jagielski: A Note On Interpreting Canary Exposure. CoRR abs/2306.00133 (2023)
- [i26] Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr, Ludwig Schmidt: Are aligned neural networks adversarially aligned? CoRR abs/2306.15447 (2023)
- [i25] Nikhil Kandpal, Matthew Jagielski, Florian Tramèr, Nicholas Carlini: Backdoor Attacks for In-Context Learning with Language Models. CoRR abs/2307.14692 (2023)
- [i24] Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A. Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, Florian Tramèr: Privacy Side Channels in Machine Learning Systems. CoRR abs/2309.05610 (2023)
- [i23] Milad Nasr, Nicholas Carlini, Jonathan Hayase, Matthew Jagielski, A. Feder Cooper, Daphne Ippolito, Christopher A. Choquette-Choo, Eric Wallace, Florian Tramèr, Katherine Lee: Scalable Extraction of Training Data from (Production) Language Models. CoRR abs/2311.17035 (2023)
2022
- [c15] Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini: Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CCS 2022: 2779-2792
- [c14] Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru: Network-Level Adversaries in Federated Learning. CNS 2022: 19-27
- [c13] Avijit Ghosh, Matthew Jagielski, Christo Wilson: Subverting Fair Image Search with Generative Adversarial Perturbations. FAccT 2022: 637-650
- [c12] Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr: The Privacy Onion Effect: Memorization is Relative. NeurIPS 2022
- [i22] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang: Quantifying Memorization Across Neural Language Models. CoRR abs/2202.07646 (2022)
- [i21] Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini: Debugging Differential Privacy: A Case Study for Privacy Auditing. CoRR abs/2202.12219 (2022)
- [i20] Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini: Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. CoRR abs/2204.00032 (2022)
- [i19] Avijit Ghosh, Matthew Jagielski, Christo Wilson: Subverting Fair Image Search with Generative Adversarial Perturbations. CoRR abs/2205.02414 (2022)
- [i18] Matthew Jagielski, Stanley Wu, Alina Oprea, Jonathan R. Ullman, Roxana Geambasu: How to Combine Membership-Inference Attacks on Multiple Updated Models. CoRR abs/2205.06369 (2022)
- [i17] Harsh Chaudhari, Matthew Jagielski, Alina Oprea: SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. CoRR abs/2205.09986 (2022)
- [i16] Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr: The Privacy Onion Effect: Memorization is Relative. CoRR abs/2206.10469 (2022)
- [i15] Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang: Measuring Forgetting of Memorized Training Examples. CoRR abs/2207.00099 (2022)
- [i14] Harsh Chaudhari, John Abascal, Alina Oprea, Matthew Jagielski, Florian Tramèr, Jonathan R. Ullman: SNAP: Efficient Extraction of Private Properties with Poisoning. CoRR abs/2208.12348 (2022)
- [i13] Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru: Network-Level Adversaries in Federated Learning. CoRR abs/2208.12911 (2022)
- [i12] Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini: Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy. CoRR abs/2210.17546 (2022)
- [i11] Harsh Chaudhari, Matthew Jagielski, Alina Oprea: SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning. IACR Cryptol. ePrint Arch. 2022: 663 (2022)
2021
- [j1] Shan Chen, Samuel Jero, Matthew Jagielski, Alexandra Boldyreva, Cristina Nita-Rotaru: Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) versus QUIC. J. Cryptol. 34(3): 26 (2021)
- [c11] Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea: Subpopulation Data Poisoning Attacks. CCS 2021: 3104-3122
- [c10] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. USENIX Security Symposium 2021: 2633-2650
- [i10] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini: Counterfactual Memorization in Neural Language Models. CoRR abs/2112.12938 (2021)
2020
- [c9] Nicholas Carlini, Matthew Jagielski, Ilya Mironov: Cryptanalytic Extraction of Neural Network Models. CRYPTO (3) 2020: 189-218
- [c8] Matthew Jagielski, Jonathan R. Ullman, Alina Oprea: Auditing Differentially Private Machine Learning: How Private is Private SGD? NeurIPS 2020
- [c7] Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot: High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security Symposium 2020: 1345-1362
- [i9] Nicholas Carlini, Matthew Jagielski, Ilya Mironov: Cryptanalytic Extraction of Neural Network Models. CoRR abs/2003.04884 (2020)
- [i8] Matthew Jagielski, Jonathan R. Ullman, Alina Oprea: Auditing Differentially Private Machine Learning: How Private is Private SGD? CoRR abs/2006.07709 (2020)
- [i7] Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea: Subpopulation Data Poisoning Attacks. CoRR abs/2006.14026 (2020)
- [i6] Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, Colin Raffel: Extracting Training Data from Large Language Models. CoRR abs/2012.07805 (2020)
2010 – 2019
2019
- [c6] Shan Chen, Samuel Jero, Matthew Jagielski, Alexandra Boldyreva, Cristina Nita-Rotaru: Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) vs. QUIC. ESORICS (1) 2019: 404-426
- [c5] Matthew Jagielski, Michael J. Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan R. Ullman: Differentially Private Fair Learning. ICML 2019: 3000-3008
- [c4] Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli: Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks. USENIX Security Symposium 2019: 321-338
- [i5] Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot: High-Fidelity Extraction of Neural Network Models. CoRR abs/1909.01838 (2019)
- [i4] Shan Chen, Samuel Jero, Matthew Jagielski, Alexandra Boldyreva, Cristina Nita-Rotaru: Secure Communication Channel Establishment: TLS 1.3 (over TCP Fast Open) vs. QUIC. IACR Cryptol. ePrint Arch. 2019: 433 (2019)
2018
- [c3] Hengyi Liang, Matthew Jagielski, Bowen Zheng, Chung-Wei Lin, Eunsuk Kang, Shinichi Shiraishi, Cristina Nita-Rotaru, Qi Zhu: Network and system level security in connected vehicle applications. ICCAD 2018: 94
- [c2] Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li: Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. IEEE Symposium on Security and Privacy 2018: 19-35
- [c1] Matthew Jagielski, Nicholas Jones, Chung-Wei Lin, Cristina Nita-Rotaru, Shinichi Shiraishi: Threat Detection for Collaborative Adaptive Cruise Control in Connected Cars. WISEC 2018: 184-189
- [i3] Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li: Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. CoRR abs/1804.00308 (2018)
- [i2] Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli: On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks. CoRR abs/1809.02861 (2018)
- [i1] Matthew Jagielski, Michael J. Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan R. Ullman: Differentially Private Fair Learning. CoRR abs/1812.02696 (2018)