
DOI: 10.1145/3407023.3407045
research-article

MP2ML: a mixed-protocol machine learning framework for private inference

Published: 25 August 2020

Abstract

Privacy-preserving machine learning (PPML) has many applications, from medical image classification and anomaly detection to financial analysis. nGraph-HE enables data scientists to perform private inference of deep learning (DL) models trained using popular frameworks such as TensorFlow. nGraph-HE computes linear layers using the CKKS homomorphic encryption (HE) scheme, while non-polynomial activation functions, such as MaxPool and ReLU, are evaluated in the clear by the data owner, who thereby obtains the intermediate feature maps. This leaks the feature maps to the data owner, who may be able to deduce the DL model weights from them. As a result, such protocols may not be suitable for deployment, especially when the DL model is intellectual property.
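To make this hybrid design concrete, the following minimal Python sketch simulates the data flow in plaintext. The helpers ckks_encrypt and ckks_decrypt are hypothetical identity placeholders, not calls to a real HE library; the sketch only illustrates which party sees which values and why the client-side activation leaks the intermediate feature map.

    # Plaintext simulation of HE linear layers with client-side activations.
    # ckks_encrypt / ckks_decrypt are hypothetical stand-ins for a real CKKS
    # implementation; no actual cryptography is performed here.
    import numpy as np

    def ckks_encrypt(x):   # placeholder: identity stands in for encryption
        return x

    def ckks_decrypt(ct):  # placeholder: identity stands in for decryption
        return ct

    # Data owner (client) encrypts its input and sends it to the model owner.
    x = np.random.randn(1, 784)                 # client's private input
    ct = ckks_encrypt(x)

    # Model owner (server) evaluates a linear layer; in reality this happens
    # homomorphically on the ciphertext, here it is simulated in plaintext.
    W, b = np.random.randn(784, 128), np.random.randn(128)   # private weights
    ct_linear = ckks_encrypt(ckks_decrypt(ct) @ W + b)

    # Non-polynomial layer (ReLU): the ciphertext is returned to the client,
    # who decrypts, applies ReLU in the clear, and re-encrypts the result.
    feature_map = ckks_decrypt(ct_linear)       # client now sees W @ x + b
    ct_next = ckks_encrypt(np.maximum(feature_map, 0.0))
    # Observing many such feature maps may allow the client to deduce W and b.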
In this work, we present MP2ML, a machine learning framework that integrates nGraph-HE and the secure two-party computation framework ABY to overcome the limitation of leaking intermediate feature maps to the data owner. We introduce a novel scheme for converting between CKKS and secure multi-party computation, which allows DL inference to be executed while maintaining the privacy of both the input data and the model weights. MP2ML is compatible with popular DL frameworks such as TensorFlow and can infer pre-trained neural networks with native ReLU activations. We benchmark MP2ML on the CryptoNets network with ReLU activations, on which it achieves a throughput of 33.3 images/s and an accuracy of 98.6%. This throughput matches the previous state-of-the-art work, while our protocol is more accurate and scalable.
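One common way to realize such a conversion, sketched below as an assumption rather than as MP2ML's exact protocol, is for the server to homomorphically add a random mask to the encrypted value so that, after decryption, the two parties hold additive secret shares on which the non-linear layer can be evaluated with two-party computation. The Python sketch simulates this in plaintext: the helper names are hypothetical, the modulus is a toy choice, and the share-based ReLU, which MP2ML evaluates inside ABY, is computed in the clear purely for illustration.

    # Plaintext simulation (no real cryptography) of converting a CKKS
    # ciphertext into additive secret shares for a secure ReLU.
    import secrets

    PRIME = 2**61 - 1             # toy modulus for additive sharing

    def share(value):
        """Split value into two additive shares mod PRIME."""
        r = secrets.randbelow(PRIME)
        return (value - r) % PRIME, r

    # Server holds Enc(x); it homomorphically adds a random mask r and sends
    # Enc(x + r) to the client, keeping -r as its own share.
    x = 12345                      # secret intermediate value (scaled to an integer)
    r = secrets.randbelow(PRIME)
    client_share = (x + r) % PRIME    # client obtains x + r by decrypting
    server_share = (-r) % PRIME       # server keeps -r

    # Inside the 2PC protocol (simulated here in the clear), the shares are
    # recombined, ReLU is applied, and the result is re-shared.
    recombined = (client_share + server_share) % PRIME
    signed = recombined if recombined < PRIME // 2 else recombined - PRIME
    relu_out = max(signed, 0)
    out_client, out_server = share(relu_out)

    # The client re-encrypts its output share; the server adds its own share
    # under CKKS, so neither party ever sees the plaintext feature map.
    assert (out_client + out_server) % PRIME == relu_out % PRIME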

    Published In

    ARES '20: Proceedings of the 15th International Conference on Availability, Reliability and Security
    August 2020
    1073 pages
    ISBN:9781450388337
    DOI:10.1145/3407023
    • Program Chairs: Melanie Volkamer, Christian Wressnegger

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 25 August 2020


    Author Tags

    1. homomorphic encryption
    2. private machine learning
    3. secure multi-party computation

    Qualifiers

    • Research-article

    Funding Sources

    • Deutsche Forschungsgemeinschaft (DFG)
    • GRK 2050 Privacy & Trust/251805230
    • Hessen State Ministry for Higher Education, Research and the Arts
    • European Research Council (ERC)
    • German Federal Ministry of Education and Research

    Conference

    ARES 2020

    Acceptance Rates

    Overall Acceptance Rate 228 of 451 submissions, 51%


    Cited By

    • (2023) Privacy-Enhanced Knowledge Transfer with Collaborative Split Learning over Teacher Ensembles. Proceedings of the 2023 Secure and Trustworthy Deep Learning Systems Workshop, 1-13. DOI: 10.1145/3591197.3591303. Online publication date: 10-Jul-2023.
    • (2023) Characterizing and Optimizing End-to-End Systems for Private Inference. Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 89-104. DOI: 10.1145/3582016.3582065. Online publication date: 25-Mar-2023.
    • (2023) Trustworthy AI: From Principles to Practices. ACM Computing Surveys 55(9), 1-46. DOI: 10.1145/3555803. Online publication date: 16-Jan-2023.
    • (2022) On Matrix Multiplication with Homomorphic Encryption. Proceedings of the 2022 on Cloud Computing Security Workshop, 53-61. DOI: 10.1145/3560810.3564267. Online publication date: 7-Nov-2022.
    • (2022) Privacy-Preserving Fair Learning of Support Vector Machine with Homomorphic Encryption. Proceedings of the ACM Web Conference 2022, 3572-3583. DOI: 10.1145/3485447.3512252. Online publication date: 25-Apr-2022.
    • (2022) Toward Industrial Private AI: A Two-Tier Framework for Data and Model Security. IEEE Wireless Communications 29(2), 76-83. DOI: 10.1109/MWC.001.2100479. Online publication date: 1-Apr-2022.
    • (2022) Fast homomorphic SVM inference on encrypted data. Neural Computing and Applications 34(18), 15555-15573. DOI: 10.1007/s00521-022-07202-8. Online publication date: 1-Sep-2022.
    • (2021) Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy. Proceedings of the Web Conference 2021, 669-680. DOI: 10.1145/3442381.3449965. Online publication date: 19-Apr-2021.
