Adaptive differentially private empirical risk minimization
arXiv preprint arXiv:2110.07435, 2021.
We propose an adaptive (stochastic) gradient perturbation method for differentially private empirical risk minimization. At each iteration, the random noise added to the gradient is optimally adapted to the stepsize; we name this process adaptive differentially private (ADP) learning. Given the same privacy budget, we prove that the ADP method considerably improves the utility guarantee compared to the standard differentially private method in which vanilla random noise is added. Our method is particularly useful for gradient-based algorithms with time-varying learning rates, including variants of AdaGrad (Duchi et al., 2011). We provide extensive numerical experiments to demonstrate the effectiveness of the proposed adaptive differentially private algorithm.
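The abstract does not spell out the adaptation rule, so the following Python sketch is only a hypothetical illustration of the general idea: a DP-SGD-style loop with gradient clipping in which the per-step Gaussian noise scale shrinks with a time-varying stepsize. The function and parameter names (adp_sgd, base_sigma, clip) and the schedule sigma_t proportional to sqrt(eta_t) are assumptions made here for illustration, not the paper's derived optimal choice, and no formal privacy accounting is performed.

```python
import numpy as np

def clip(g, C):
    """Clip a gradient to L2 norm at most C (standard in DP gradient perturbation)."""
    norm = np.linalg.norm(g)
    return g * min(1.0, C / norm) if norm > 0 else g

def adp_sgd(grad_fn, w0, etas, C=1.0, base_sigma=1.0, seed=0):
    """Noisy gradient descent with stepsize-adapted noise (illustrative sketch).

    grad_fn : callable returning the gradient at w
    etas    : sequence of stepsizes eta_t (time-varying, e.g. eta_t = c / sqrt(t))

    The added Gaussian noise has std base_sigma * C * sqrt(eta_t / eta_0),
    i.e. the noise scale shrinks with the stepsize. This is one plausible
    reading of "noise adapted to the stepsize"; the paper derives the
    actual optimal schedule under a fixed privacy budget.
    """
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for eta in etas:
        g = clip(grad_fn(w), C)
        sigma_t = base_sigma * C * np.sqrt(eta / etas[0])  # stepsize-adapted scale
        w = w - eta * (g + rng.normal(0.0, sigma_t, size=w.shape))
    return w

# Toy usage: noisily minimize the quadratic f(w) = 0.5 * ||w||^2.
if __name__ == "__main__":
    etas = [0.5 / np.sqrt(t + 1) for t in range(100)]  # time-varying stepsizes
    w_hat = adp_sgd(lambda w: w, w0=np.ones(5), etas=etas)
    print("final iterate norm:", np.linalg.norm(w_hat))
```

With a decaying schedule such as eta_t = c / sqrt(t), this sketch injects progressively less noise at later iterations, which is consistent with the abstract's emphasis on gradient-based algorithms with time-varying learning rates such as AdaGrad variants.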