ColD Fusion: Collaborative Descent for Distributed Multitask Finetuning
arXiv preprint arXiv:2212.01378, 2022
We propose a new paradigm to continually evolve pretrained models, denoted ColD Fusion. It provides the benefits of multitask learning but leverages distributed computation with limited communication and eliminates the need for shared data. Consequently, ColD Fusion can give rise to a synergistic loop, where finetuned models can be recycled to continually improve the pretrained model they are based upon. We show that ColD Fusion yields comparable benefits to multitask training by producing a model that (a) attains strong performance on all of the datasets it was trained on; and (b) is a better starting point for finetuning on unseen datasets. We show that ColD Fusion outperforms RoBERTa and even previous multitask models. Specifically, when training and testing on 35 diverse datasets, the ColD Fusion-based model outperforms RoBERTa by 2.33 points on average without any changes to the architecture.
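The abstract describes a loop in which contributors finetune the current shared model on their own private data and the resulting models are then fused into a new shared model, which becomes the next starting point. Below is a minimal, hypothetical sketch of that loop, assuming the fusion step is simple parameter averaging; the helper names (`finetune_on`, `fuse`), the toy model, and the synthetic data are illustrative placeholders, not the paper's actual implementation.

```python
# Sketch of an iterative "finetune locally, then fuse" loop, assuming the
# fusion step is plain parameter averaging. All names and data are toy
# placeholders; only model weights are exchanged, never raw data.
import copy
import torch
import torch.nn as nn

def finetune_on(model: nn.Module, dataset, steps: int = 10, lr: float = 1e-2) -> nn.Module:
    """Each contributor finetunes a private copy of the shared model on its own data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        for x, y in dataset:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local

def fuse(models: list) -> nn.Module:
    """Fuse contributor models by averaging their parameters element-wise."""
    fused = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in fused.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return fused

# Toy "contributors": each holds one private synthetic regression batch.
torch.manual_seed(0)
shared = nn.Linear(4, 1)  # stand-in for the shared pretrained model
contributors = [[(torch.randn(8, 4), torch.randn(8, 1))] for _ in range(3)]

# Two rounds of the synergistic loop: finetune locally, fuse, redistribute.
for _ in range(2):
    local_models = [finetune_on(shared, data) for data in contributors]
    shared = fuse(local_models)  # the fused model is the new starting point
```

The key property this sketch illustrates is the limited communication pattern: each round exchanges only model parameters, and the fused model replaces the shared pretrained model for the next round.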