
AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning

Han Guo, Ramakanth Pasunuru, Mohit Bansal


Abstract
Multi-task learning (MTL) has achieved success over a wide range of problems, where the goal is to improve the performance of a primary task using a set of relevant auxiliary tasks. However, when the usefulness of the auxiliary tasks w.r.t. the primary task is not known a priori, the success of MTL models depends on the correct choice of these auxiliary tasks and also a balanced mixing ratio of these tasks during alternate training. These two problems could be resolved via manual intuition or hyper-parameter tuning over all combinatorial task choices, but this introduces inductive bias or is not scalable when the number of candidate auxiliary tasks is very large. To address these issues, we present AutoSeM, a two-stage MTL pipeline, where the first stage automatically selects the most useful auxiliary tasks via a Beta-Bernoulli multi-armed bandit with Thompson Sampling, and the second stage learns the training mixing ratio of these selected auxiliary tasks via a Gaussian Process based Bayesian optimization framework. We conduct several MTL experiments on the GLUE language understanding tasks, and show that our AutoSeM framework can successfully find relevant auxiliary tasks and automatically learn their mixing ratio, achieving significant performance boosts on several primary tasks. Finally, we present ablations for each stage of AutoSeM and analyze the learned auxiliary task choices.
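The abstract describes two algorithmic stages; the sketches below illustrate each one under stated assumptions and are not the authors' implementation. Stage one frames auxiliary-task selection as a Beta-Bernoulli multi-armed bandit solved with Thompson Sampling: each candidate task is an arm, and the reward definition here (whether briefly training on that task improves the primary task's validation metric) is an illustrative assumption.

```python
# Minimal sketch of Stage 1: Thompson Sampling over a Beta-Bernoulli bandit
# whose arms are candidate auxiliary tasks. `improves_primary_metric` is an
# assumed reward oracle, not part of the paper.
import numpy as np

def thompson_task_selection(candidate_tasks, num_rounds, improves_primary_metric):
    alpha = np.ones(len(candidate_tasks))   # Beta posterior: successes + 1
    beta = np.ones(len(candidate_tasks))    # Beta posterior: failures + 1

    for _ in range(num_rounds):
        # Sample a plausible utility for every task from its Beta posterior
        sampled_utility = np.random.beta(alpha, beta)
        chosen = int(np.argmax(sampled_utility))

        # Bernoulli reward: did mixing in this task help the primary task?
        reward = 1 if improves_primary_metric(candidate_tasks[chosen]) else 0
        alpha[chosen] += reward
        beta[chosen] += 1 - reward

    # Posterior mean utility per task; the highest-scoring tasks are kept
    return alpha / (alpha + beta)
```

Stage two learns the training mixing ratio of the selected tasks with Gaussian-Process-based Bayesian optimization. In the sketch below, `evaluate_mixing_ratio` is an assumed black-box objective (train the MTL model with the given ratios and return the primary task's dev score), and expected improvement is used as the acquisition function, which is one common choice rather than a detail taken from the paper.

```python
# Minimal sketch of Stage 2: GP-based Bayesian optimization over mixing ratios.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def optimize_mixing_ratios(evaluate_mixing_ratio, num_tasks,
                           num_iters=20, num_candidates=1000, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []

    # Seed the surrogate with a few random points on the simplex
    for _ in range(3):
        ratio = rng.dirichlet(np.ones(num_tasks))
        X.append(ratio)
        y.append(evaluate_mixing_ratio(ratio))

    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(num_iters):
        gp.fit(np.array(X), np.array(y))

        # Expected Improvement over randomly sampled candidate ratios
        candidates = rng.dirichlet(np.ones(num_tasks), size=num_candidates)
        mu, sigma = gp.predict(candidates, return_std=True)
        best = max(y)
        z = (mu - best) / np.maximum(sigma, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

        next_ratio = candidates[int(np.argmax(ei))]
        X.append(next_ratio)
        y.append(evaluate_mixing_ratio(next_ratio))

    return X[int(np.argmax(y))]  # best mixing ratio found so far
```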
Anthology ID:
N19-1355
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
3520–3531
URL:
https://aclanthology.org/N19-1355
DOI:
10.18653/v1/N19-1355
Cite (ACL):
Han Guo, Ramakanth Pasunuru, and Mohit Bansal. 2019. AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3520–3531, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning (Guo et al., NAACL 2019)
PDF:
https://aclanthology.org/N19-1355.pdf
Software:
N19-1355.Software.pdf
Data:
GLUE, QNLI