Abstract
We propose a framework for fully decentralised machine learning and apply it to latent factor models for top-N recommendation. In a decentralised learning setting, the training data is distributed across multiple agents, who jointly optimise a common global objective (the loss function). In contrast to the client-server architecture of federated learning, the agents communicate directly, maintaining and updating their own model parameters, without central aggregation and without sharing their own data. The framework involves two key contributions. Firstly, we propose a method to extend a global loss function to a distributed loss function over the distributed parameters of the decentralised system; secondly, we show how this distributed loss function can be optimised using an algorithm that operates in two phases. In the learning phase, each agent carries out a large number of local learning steps without communication. In the following sharing phase, neighbouring agents exchange messages that enable a batch update of local parameters. Thus, unlike other decentralised algorithms that require inter-agent communication after every model update (or every few updates), our algorithm significantly reduces the number of messages exchanged during learning. We prove the convergence of our framework and demonstrate its effectiveness using both the Weighted Matrix Factorisation and Bayesian Personalised Ranking latent factor recommender models, evaluating its empirical performance on a number of recommender system datasets.
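To make the two-phase scheme concrete, the following minimal sketch alternates a learning phase of purely local SGD steps with a sharing phase in which neighbouring agents average their local item-factor copies. The class and function names (Agent, learn, share), the ring topology, the squared loss, and the simple averaging rule are all illustrative assumptions for this sketch, not the paper's exact algorithm or notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_items, k = 4, 6, 3

class Agent:
    def __init__(self, ratings):
        self.ratings = ratings                              # item -> rating; never shared
        self.u = rng.normal(scale=0.1, size=k)              # private user factors
        self.V = rng.normal(scale=0.1, size=(n_items, k))   # local copy of item factors

    def learn(self, steps=50, lr=0.05, reg=0.01):
        """Learning phase: many local SGD steps, no communication."""
        items = list(self.ratings)
        for _ in range(steps):
            i = items[rng.integers(len(items))]
            err = self.ratings[i] - self.u @ self.V[i]
            self.u -= lr * (-err * self.V[i] + reg * self.u)
            self.V[i] -= lr * (-err * self.u + reg * self.V[i])

def share(agents, topology):
    """Sharing phase: each agent batch-averages its item-factor
    copy with its neighbours' copies (one gossip round)."""
    snapshots = [a.V.copy() for a in agents]
    for j, a in enumerate(agents):
        nbrs = topology[j] + [j]
        a.V = sum(snapshots[m] for m in nbrs) / len(nbrs)

# Ring topology; each agent privately rates a few items.
topology = {j: [(j - 1) % n_agents, (j + 1) % n_agents] for j in range(n_agents)}
agents = [Agent({i: 1.0 for i in rng.choice(n_items, 3, replace=False)})
          for _ in range(n_agents)]

for _ in range(20):        # alternate the two phases
    for a in agents:
        a.learn()
    share(agents, topology)
```

Note that only the item-factor copies are exchanged in the sharing phase; each agent's ratings and user factors never leave the agent, matching the requirement that agents do not share their own data.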
Notes
1. We ignore bias terms for simplicity, though these can easily be incorporated into the framework.
2. SGD schemes for WMF typically sample negative items (\(i \notin R_u\)) at a rate different from that of positive samples. In this case, the stochastic gradients are unbiased gradient estimators of a loss function whose confidence term is \(c_{ui}=\pi _i(1+\alpha r_{ui})\), where \(\pi _i\) is the probability of sampling item \(i\); a short derivation is sketched after these notes.
3. Other accuracy measures follow the trends we see for prec@10, achieving the well-known scores of the central algorithm once the decentralised algorithm converges; see e.g. www.librec.net/release/v1.3/example.html. A minimal prec@10 computation is sketched after these notes.
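Note 2's claim can be verified by taking the expectation of the sampled stochastic gradient over the item-sampling distribution \(\pi\). Writing \(p_{ui}\) for the binary preference and \(\hat{x}_{ui}\) for the model's predicted score (notation assumed from the standard WMF formulation, since the paper's exact symbols are not reproduced on this page):
\[
\mathbb{E}_{i\sim\pi}\!\left[\nabla\, (1+\alpha r_{ui})\,(p_{ui}-\hat{x}_{ui})^2\right]
= \sum_i \pi_i (1+\alpha r_{ui})\,\nabla (p_{ui}-\hat{x}_{ui})^2
= \nabla \sum_i c_{ui}\,(p_{ui}-\hat{x}_{ui})^2 ,
\]
with \(c_{ui}=\pi_i(1+\alpha r_{ui})\), so the sampled gradient is unbiased for the reweighted loss.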
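For reference, here is a minimal, illustrative implementation of the prec@10 measure used in Note 3; the exact evaluation protocol (e.g. how test items are held out) follows the paper and is not reproduced here, and the scores below are synthetic placeholders.

```python
import numpy as np

# Illustrative prec@10: the fraction of a user's top-10 ranked items
# that appear in that user's held-out test set.
def prec_at_10(scores, test_items):
    top10 = np.argsort(scores)[::-1][:10]   # indices of the 10 highest scores
    return len(set(top10.tolist()) & set(test_items)) / 10.0

rng = np.random.default_rng(0)
scores = rng.random(100)                    # model scores for 100 items
print(prec_at_10(scores, test_items=[3, 17, 42]))
```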
Acknowledgements
This work is supported by Science Foundation Ireland under grant number SFI/12/RC/2289_P2 and by Samsung Research, Samsung Electronics Co., Seoul, Republic of Korea. We thank the reviewers for their helpful feedback.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Duriakova, E. et al. (2021). An Algorithmic Framework for Decentralised Matrix Factorisation. In: Hutter, F., Kersting, K., Lijffijt, J., Valera, I. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2020. Lecture Notes in Computer Science, vol 12458. Springer, Cham. https://doi.org/10.1007/978-3-030-67661-2_19
DOI: https://doi.org/10.1007/978-3-030-67661-2_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-67660-5
Online ISBN: 978-3-030-67661-2
eBook Packages: Computer Science, Computer Science (R0)