DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy
DOI: https://doi.org/10.1609/aaai.v36i6.20586
Keywords: Machine Learning (ML)
Abstract
Training deep neural networks (DNNs) under meaningful differential privacy (DP) guarantees severely degrades model utility. In this paper, we demonstrate that the architecture of a DNN has a significant impact on model utility in private deep learning, an effect largely unexplored in previous studies. In light of this gap, we propose the first framework that employs neural architecture search to automate model design for private deep learning, dubbed DPNAS. To integrate private learning with architecture search, we introduce a DP-aware approach for training candidate models drawn from a carefully designed novel search space. We empirically certify the effectiveness of the proposed framework. The searched model, DPNASNet, achieves state-of-the-art privacy/utility trade-offs: for a privacy budget of (epsilon, delta) = (3, 1e-5), our model obtains test accuracy of 98.57% on MNIST, 88.09% on FashionMNIST, and 68.33% on CIFAR-10. Furthermore, by studying the generated architectures, we provide several intriguing findings on designing private-learning-friendly DNNs, which can shed new light on model design for deep learning with differential privacy.
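The "DP-aware approach for training candidate models" refers to training in the DP-SGD style: per-example gradients are clipped in L2 norm and Gaussian noise is added before averaging, which is what yields the (epsilon, delta) guarantee. The sketch below illustrates only that aggregation step; the function name, parameter defaults, and pure-Python setup are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def dp_sgd_aggregate(per_example_grads, clip_norm=1.0,
                     noise_multiplier=1.1, seed=0):
    """Differentially private gradient aggregation (DP-SGD style, a sketch):
    clip each per-example gradient to clip_norm in L2, sum the clipped
    gradients, add Gaussian noise with std noise_multiplier * clip_norm,
    then average over the batch."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        # Clip: scale the gradient down if its L2 norm exceeds clip_norm.
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / max(norm, 1e-12))
        for i, x in enumerate(g):
            total[i] += x * scale
    # Add isotropic Gaussian noise calibrated to the clipping norm.
    sigma = noise_multiplier * clip_norm
    batch = len(per_example_grads)
    return [(t + rng.gauss(0.0, sigma)) / batch for t in total]
```

With `noise_multiplier=0` the function reduces to plain clipped-gradient averaging, which makes the clipping behavior easy to check in isolation; the privacy accounting that maps (clip_norm, noise_multiplier, batch size, steps) to a concrete (epsilon, delta) is a separate component not shown here.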
Published
2022-06-28
How to Cite
Cheng, A., Wang, J., Zhang, X. S., Chen, Q., Wang, P., & Cheng, J. (2022). DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6358-6366. https://doi.org/10.1609/aaai.v36i6.20586
Issue
Section
AAAI Technical Track on Machine Learning I