
COLD FUSION: TRAINING SEQ2SEQ MODELS TOGETHER WITH LANGUAGE MODELS

12 Feb 2018 (modified: 15 Feb 2018) · ICLR 2018 Workshop Submission · Readers: Everyone
Keywords: Sequence-to-Sequence Models, Speech Recognition, Language Models
TL;DR: We introduce a novel method to train Seq2Seq models together with language models; the resulting models converge faster, generalize better, and can almost completely transfer to a new domain using less than 10% of the labeled data.
Abstract: Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks that involve generating natural language sentences, such as machine translation, image captioning, and speech recognition. Performance has been further improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and we show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information, enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.
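
For readers unfamiliar with the general idea, the sketch below illustrates one way a frozen, pre-trained language model's per-step logits could be gated into a Seq2Seq decoder during training. It is a minimal sketch under assumed choices: the class name `ColdFusionLayer`, the layer sizes, and the exact gating layout are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: fusing a frozen, pre-trained language model into a Seq2Seq
# decoder at training time. The gating layout below is an illustrative
# assumption, not the paper's exact architecture; dimensions are made up.
import torch
import torch.nn as nn


class ColdFusionLayer(nn.Module):
    """Combines the decoder state with the (frozen) LM's logits via a gate."""

    def __init__(self, dec_dim: int, vocab_size: int, fused_dim: int = 256):
        super().__init__()
        self.lm_proj = nn.Linear(vocab_size, fused_dim)        # project LM logits
        self.gate = nn.Linear(dec_dim + fused_dim, fused_dim)  # elementwise gate
        self.out = nn.Linear(dec_dim + fused_dim, vocab_size)  # fused output layer

    def forward(self, dec_state: torch.Tensor, lm_logits: torch.Tensor) -> torch.Tensor:
        # lm_logits come from a pre-trained LM whose weights stay frozen;
        # only the fusion parameters and the Seq2Seq model are trained.
        h_lm = self.lm_proj(lm_logits)
        g = torch.sigmoid(self.gate(torch.cat([dec_state, h_lm], dim=-1)))
        fused = torch.cat([dec_state, g * h_lm], dim=-1)
        return self.out(fused)  # token logits for the next output symbol


if __name__ == "__main__":
    # One decoding step with hypothetical sizes (batch of 4).
    layer = ColdFusionLayer(dec_dim=512, vocab_size=1000)
    dec_state = torch.randn(4, 512)    # decoder hidden state at this step
    lm_logits = torch.randn(4, 1000)   # frozen LM's logits for the same step
    print(layer(dec_state, lm_logits).shape)  # torch.Size([4, 1000])
```

Because the language model is already trained when fusion begins, the gate can learn when to rely on the LM's next-token preferences and when to trust the acoustic evidence, which is consistent with the faster convergence and domain transfer claims in the abstract.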