
Collaborating between Local and Global Learning for Distributed Online Multiple Tasks

Published: 17 October 2015
DOI: 10.1145/2806416.2806553

Abstract

This paper studies the novel learning scenario of Distributed Online Multi-tasks (DOM), in which learners with continuously arriving data reside on separate devices yet must build their individual models collaboratively. The scenario combines three characteristics: distributed learning, online learning, and multi-task learning. It is motivated by emerging wearable-device applications that aim to provide intelligent monitoring services such as health-emergency alarming and movement recognition.
To the best of our knowledge, no previous work addresses this kind of problem, so this paper proposes a collaborative learning scheme for it. Specifically, the scheme performs local learning and global learning alternately. First, each client learns online from the data arriving locally. Then, when a triggering condition is met by a client, DOM switches to global learning on the server side, for which an asynchronous online multi-task learning method is proposed. In this step, only the model of the client that triggered global learning is updated, supported by that client's difficult local instances and the other clients' models. Experiments on four applications show that the proposed global learning improves local learning significantly. The DOM framework is effective: it shares knowledge among distributed tasks and obtains better models than learning the tasks separately. It is also communication-efficient, since clients need to send only a small portion of their raw data to the server.
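The scheme described above alternates client-side online updates with an asynchronous server-side step that touches only the triggering client's model. Below is a minimal sketch of that control flow, assuming a perceptron-style local update, a margin test to decide which instances are "difficult" enough to buffer for the server, and a global step that pulls the triggering client's weights toward the average of the other clients' models; these concrete update rules, and all names in the code, are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Illustrative DOM-style loop: clients learn online on local streams, and the
# server occasionally refines one client's model using that client's buffered
# "difficult" instances plus the other clients' models. The local update rule,
# the margin test, and the averaged-regularization global step are assumptions
# made for this sketch only.


class Client:
    def __init__(self, dim, margin=1.0, buffer_limit=20):
        self.w = np.zeros(dim)            # local linear model
        self.margin = margin              # below this margin an instance counts as "difficult"
        self.buffer = []                  # difficult instances retained for the server
        self.buffer_limit = buffer_limit  # triggering condition for global learning

    def local_update(self, x, y):
        """Local online learning on one arriving instance (y in {-1, +1})."""
        if y * self.w.dot(x) <= 0:        # mistake-driven update
            self.w += y * x
        if y * self.w.dot(x) < self.margin:
            self.buffer.append((x, y))    # keep low-confidence instances for the server
        return len(self.buffer) >= self.buffer_limit  # True => trigger global learning


class Server:
    def global_update(self, client, all_clients, lam=0.1, lr=0.01, epochs=5):
        """Asynchronous global step: only the triggering client's model is updated,
        using its buffered difficult instances and the other clients' models."""
        others = [c.w for c in all_clients if c is not client]
        w_ref = np.mean(others, axis=0) if others else np.zeros_like(client.w)
        for _ in range(epochs):
            for x, y in client.buffer:
                if y * client.w.dot(x) < 1.0:              # hinge-loss subgradient step
                    client.w += lr * y * x
                client.w -= lr * lam * (client.w - w_ref)  # pull toward the other tasks' knowledge
        client.buffer.clear()                              # buffered data is discarded after use


# Usage: three clients learning related synthetic binary tasks from streams.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clients = [Client(dim=10) for _ in range(3)]
    server = Server()
    for _ in range(1000):
        for c in clients:
            x = rng.normal(size=10)
            y = 1 if x[:5].sum() > 0 else -1
            if c.local_update(x, y):
                server.global_update(c, clients)
```

In this sketch only the buffered difficult instances ever leave a client, which mirrors the communication-efficiency claim in the abstract.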

    Published In

    CIKM '15: Proceedings of the 24th ACM International Conference on Information and Knowledge Management
    October 2015
    1998 pages
    ISBN: 9781450337946
    DOI: 10.1145/2806416

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. distributed tasks
    2. multi-task learning
    3. online learning

    Qualifiers

    • Research-article

    Conference

    CIKM '15

    Acceptance Rates

    CIKM '15 paper acceptance rate: 165 of 646 submissions, 26%
    Overall acceptance rate: 1,861 of 8,427 submissions, 22%

    Cited By

    • (2024) Byzantine-Robust Distributed Online Learning: Taming Adversarial Participants in An Adversarial Environment. IEEE Transactions on Signal Processing, 72, 235-248. DOI: 10.1109/TSP.2023.3340028
    • (2024) AoU-Based Local Update and User Scheduling for Semi-Asynchronous Online Federated Learning in Wireless Networks. IEEE Internet of Things Journal, 11(18), 29673-29688. DOI: 10.1109/JIOT.2024.3399404
    • (2024) Asynchronous Federated and Reinforcement Learning for Mobility-Aware Edge Caching in IoV. IEEE Internet of Things Journal, 11(9), 15334-15347. DOI: 10.1109/JIOT.2023.3349255
    • (2023) Distributed Online Learning With Adversarial Participants In An Adversarial Environment. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1-5. DOI: 10.1109/ICASSP49357.2023.10095178
    • (2023) Distributed online multi-task sparse identification for multiple systems with asynchronous updates. International Journal of Robust and Nonlinear Control, 33(18), 11242-11256. DOI: 10.1002/rnc.6942
    • (2022) Federated Multitask Learning for HyperFace. IEEE Transactions on Artificial Intelligence, 3(5), 788-797. DOI: 10.1109/TAI.2021.3133816
    • (2020) Asynchronous Online Federated Learning for Edge Devices with Non-IID Data. 2020 IEEE International Conference on Big Data (Big Data), 15-24. DOI: 10.1109/BigData50022.2020.9378161
    • (2019) Collaborative Learning Through Shared Collective Knowledge and Local Expertise. 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP), 1-6. DOI: 10.1109/MLSP.2019.8918888
    • (2018) Multi-Agent Distributed Lifelong Learning for Collective Knowledge Acquisition. Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, 712-720. DOI: 10.5555/3237383.3237489
    • (2018) Preserving Model Privacy for Machine Learning in Distributed Systems. IEEE Transactions on Parallel and Distributed Systems, 29(8), 1808-1822. DOI: 10.1109/TPDS.2018.2809624
