
Paper 2023/206

Orca: FSS-based Secure Training and Inference with GPUs

Neha Jawalkar, Indian Institute of Science
Kanav Gupta, Microsoft Research
Arkaprava Basu, Indian Institute of Science
Nishanth Chandran, Microsoft Research
Divya Gupta, Microsoft Research
Rahul Sharma, Microsoft Research
Abstract

Secure Two-Party Computation (2PC) allows two parties to compute any function on their private inputs without revealing their inputs to each other. In the offline/online model for 2PC, correlated randomness that is independent of all inputs to the computation is generated in a preprocessing (offline) phase, and this randomness is then utilized in the online phase once the parties' inputs become available. Most 2PC works focus on optimizing the online time, as this overhead lies on the critical path. A recent paradigm for obtaining efficient 2PC protocols with low online cost is based on the cryptographic technique of function secret sharing (FSS). We build an end-to-end system ORCA to accelerate the computation of FSS-based 2PC protocols with GPUs. Next, we observe that the main performance bottleneck in such accelerated protocols is in storage (due to the large amount of correlated randomness), and we design new FSS-based 2PC protocols for several key functionalities in ML which reduce storage by up to 5×. Compared to the prior state of the art on secure training accelerated with GPUs in the same computation model (PIRANHA, Usenix Security 2022), we show that ORCA has 4% higher accuracy, 98× less communication, and is 26× faster on CIFAR-10. Moreover, maintaining training accuracy while using fixed-point arithmetic requires stochastic truncations, and all prior works on secure fixed-point training (including PIRANHA) use insecure protocols for them. We provide the first secure protocol for stochastic truncations and build on it to provide the first evaluation of training with end-to-end security. For secure ImageNet inference, ORCA achieves sub-second latency for VGG-16 and ResNet-50, and outperforms the state of the art by 8–103×.

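The abstract notes that fixed-point training needs stochastic truncations and that ORCA gives the first secure 2PC protocol for them. As background, the following is a minimal plaintext sketch of the stochastic truncation operation itself; it is not ORCA's secure protocol, and the 12-bit scale and helper names are illustrative assumptions. After a fixed-point multiplication the number of fractional bits doubles, so the product must be scaled back down; rounding up probabilistically, instead of always rounding down, keeps the result unbiased in expectation, which is what preserves training accuracy.

# Plaintext illustration of stochastic truncation of fixed-point values.
# This is NOT the secure protocol from the paper; the scale and helper
# names below are assumptions made for illustration only.
import random

SCALE = 12  # number of fractional bits (illustrative choice)

def to_fixed(x: float) -> int:
    """Encode a real number as a fixed-point integer with SCALE fractional bits."""
    return round(x * (1 << SCALE))

def stochastic_truncate(x: int, shift: int) -> int:
    """Drop `shift` low bits, rounding up with probability equal to the
    dropped fraction, so the result equals x / 2^shift in expectation."""
    dropped = x & ((1 << shift) - 1)     # low bits that will be discarded
    floor_q = x >> shift                 # floor(x / 2^shift)
    round_up = random.random() < dropped / (1 << shift)
    return floor_q + (1 if round_up else 0)

# A fixed-point product carries 2*SCALE fractional bits and must be
# truncated back by SCALE bits before it can be used again.
a, b = to_fixed(0.3), to_fixed(-1.7)
product = a * b
result = stochastic_truncate(product, SCALE)
print(result / (1 << SCALE))  # close to 0.3 * (-1.7) = -0.51

In a secure implementation these values exist only as secret shares over a ring, so even the randomness used for rounding must be generated inside the protocol; performing this step insecurely is the gap in prior work that the paper closes.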
Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. Minor revision. IEEE S&P 2024
DOI
10.1109/SP54263.2024.00063
Keywords
function secret sharing, GPU, secure machine learning, secure multi-party computation
Contact author(s)
jawalkarp @ iisc ac in
kanav0610 @ gmail com
arkapravab @ iisc ac in
nichandr @ microsoft com
divya gupta @ microsoft com
rahsha @ microsoft com
History
2024-05-10: last of 2 revisions
2023-02-16: received
Short URL
https://ia.cr/2023/206
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2023/206,
      author = {Neha Jawalkar and Kanav Gupta and Arkaprava Basu and Nishanth Chandran and Divya Gupta and Rahul Sharma},
      title = {Orca: {FSS}-based Secure Training and Inference with {GPUs}},
      howpublished = {Cryptology {ePrint} Archive, Paper 2023/206},
      year = {2023},
      doi = {10.1109/SP54263.2024.00063},
      url = {https://eprint.iacr.org/2023/206}
}