Long-Term Privacy-Preserving Aggregation With User-Dynamics for Federated Learning

Published: 01 January 2023

Abstract

A privacy-preserving aggregation protocol is an essential building block of privacy-enhanced federated learning (FL): it enables the server to obtain the sum of users’ locally trained models while keeping the local training data private. However, most work on privacy-preserving aggregation provides privacy guarantees for only one communication round of FL. In fact, since FL usually involves long-term training, i.e., multiple rounds, dynamic user participation across rounds may lead to additional information leakage. To address this, we propose a long-term privacy-preserving aggregation (LTPA) protocol that provides both single-round and multi-round privacy guarantees. Specifically, we first introduce our batch-partitioning-dropping-updating (BPDU) strategy, which enables any user-dynamic FL system to provide multi-round privacy guarantees. We then present our LTPA construction, which integrates the proposed BPDU strategy with a state-of-the-art privacy-preserving aggregation protocol. Furthermore, we investigate the impact of LTPA parameter settings on the trade-off among privacy guarantee, protocol efficiency, and FL convergence performance from both theoretical and experimental perspectives. Experimental results show that LTPA has complexity similar to that of the state-of-the-art, i.e., an additional cost of only around 1.04× for a 100,000-user FL system, with an additional long-term privacy guarantee.
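For intuition, the core idea behind masking-based secure aggregation (the family of protocols the abstract builds on) can be sketched in a few lines. This is a toy illustration only, not the paper's LTPA or BPDU construction: each pair of users agrees on a random mask, one adds it and the other subtracts it, so the masks cancel when the server sums the masked updates. The function name `mask_updates`, the scalar updates, and the shared RNG seed are illustrative assumptions.

```python
import random

def mask_updates(updates, modulus=2**16):
    """Toy pairwise-masking step: masked values look random individually,
    but the pairwise masks cancel, so the sum is preserved mod `modulus`."""
    n = len(updates)
    masked = [u % modulus for u in updates]
    rng = random.Random(0)  # stands in for pairwise-agreed PRG seeds
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(modulus)
            masked[i] = (masked[i] + m) % modulus  # user i adds the mask
            masked[j] = (masked[j] - m) % modulus  # user j subtracts it
    return masked

updates = [3, 5, 7]                 # users' local model updates (scalars)
masked = mask_updates(updates)
# The server learns only the sum, which equals the true sum mod 2**16.
assert sum(masked) % 2**16 == sum(updates) % 2**16
```

In a real protocol the masks are derived from pairwise key agreement rather than a shared seed, updates are high-dimensional vectors, and extra machinery handles users who drop out mid-round, which is exactly where the multi-round leakage studied in this paper arises.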


Cited By

  • (2024) “A Survey on Federated Unlearning: Challenges, Methods, and Future Directions,” ACM Computing Surveys, vol. 57, no. 1, pp. 1–38, Jul. 2024. 10.1145/3679014
  • (2024) “FedComm: A Privacy-Enhanced and Efficient Authentication Protocol for Federated Learning in Vehicular Ad-Hoc Networks,” IEEE Transactions on Information Forensics and Security, vol. 19, pp. 777–792, Jan. 2024. 10.1109/TIFS.2023.3324747


Published In

IEEE Transactions on Information Forensics and Security, Volume 18, 2023
4507 pages

Publisher

IEEE Press


Qualifiers

  • Research-article
