Knowledge inheritance for pre-trained language models

Y Qin, Y Lin, J Yi, J Zhang, X Han, Z Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
arXiv preprint arXiv:2105.13880, 2021
Recent explorations of large-scale pre-trained language models (PLMs) have revealed the power of PLMs with huge numbers of parameters, setting off a wave of training ever-larger PLMs. However, training a large-scale PLM requires tremendous computational resources, which may be practically unaffordable. In addition, existing large-scale PLMs are mainly trained from scratch individually, ignoring the fact that many well-trained PLMs are already available. To this end, we explore the question of how existing PLMs can benefit the training of future large-scale PLMs. Specifically, we introduce a pre-training framework named "knowledge inheritance" (KI) and explore how knowledge distillation can serve as auxiliary supervision during pre-training to efficiently learn larger PLMs. Experimental results demonstrate the superiority of KI in training efficiency. We also conduct empirical analyses to explore the effects of the teacher PLM's pre-training settings, including model architecture and pre-training data. Finally, we show that KI can be applied to domain adaptation and knowledge transfer.
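To make the core idea concrete, the sketch below shows knowledge distillation used as auxiliary supervision while pre-training a larger "student" model, alongside the ordinary masked-language-modelling objective. This is not the authors' released implementation: the toy models (`ToyLM`), the step function `ki_step`, and the mixing weight `alpha` and `temperature` values are illustrative assumptions that only indicate how a distillation term can be added to the self-supervised loss.

```python
# Minimal sketch (assumptions, not the paper's code): a frozen, smaller
# "inherited" teacher PLM supervises a larger student during pre-training.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN_T, HIDDEN_S = 1000, 64, 128  # toy sizes for illustration


class ToyLM(nn.Module):
    """Stand-in for a masked language model returning per-token logits."""
    def __init__(self, hidden):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, hidden)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, tokens):
        return self.head(self.emb(tokens))  # (batch, seq, vocab)


teacher = ToyLM(HIDDEN_T).eval()   # frozen, already well-trained smaller PLM
student = ToyLM(HIDDEN_S)          # larger PLM being pre-trained
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)


def ki_step(tokens, labels, alpha=0.5, temperature=2.0):
    """One step: self-supervised MLM loss plus auxiliary distillation loss."""
    student_logits = student(tokens)
    with torch.no_grad():
        teacher_logits = teacher(tokens)

    # Ordinary masked-language-modelling loss on the masked positions.
    mlm_loss = F.cross_entropy(
        student_logits.view(-1, VOCAB), labels.view(-1), ignore_index=-100
    )

    # Auxiliary supervision: match the teacher's softened token distribution.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    loss = (1 - alpha) * mlm_loss + alpha * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Dummy batch: random token ids; -100 marks positions excluded from the MLM loss.
tokens = torch.randint(0, VOCAB, (4, 16))
labels = tokens.clone()
labels[torch.rand_like(tokens, dtype=torch.float) > 0.15] = -100
print(ki_step(tokens, labels))
```

In practice, the distillation weight would typically be annealed as the student surpasses the teacher, but that schedule (and any choice of real teacher/student architectures or pre-training corpora) is left out of this sketch.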