
DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization

Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, Dan Roth


Abstract
Large-scale pre-trained sequence-to-sequence models like BART and T5 achieve state-of-the-art performance on many generative NLP tasks. However, such models pose a great challenge in resource-constrained scenarios owing to their large memory requirements and high latency. To alleviate this issue, we propose to jointly distill and quantize the model, where knowledge is transferred from the full-precision teacher model to the quantized and distilled low-precision student model. Empirical analyses show that, despite the challenging nature of generative tasks, we were able to achieve a 16.5x model footprint compression ratio with little performance drop relative to the full-precision counterparts on multiple summarization and QA datasets. We further pushed the limit of compression ratio to 27.7x and presented the performance-efficiency trade-off for generative tasks using pre-trained models. To the best of our knowledge, this is the first work aiming to effectively distill and quantize sequence-to-sequence pre-trained models for language generation tasks.
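For readers who want a concrete picture of the approach described in the abstract, the sketch below illustrates the general recipe of training a quantized student against a full-precision teacher: weights are fake-quantized with a straight-through estimator, and the training loss mixes hard-label cross-entropy with a distillation term on the teacher's logits. This is a minimal, hypothetical illustration, not the authors' released implementation; the names fake_quantize, QuantLinear, and joint_distill_quant_loss and all hyperparameters (bits, alpha, T) are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w, bits=8):
    # Symmetric uniform fake quantization: weights are snapped to a low-precision
    # grid in the forward pass, while gradients flow through unchanged (STE).
    qmax = 2 ** (bits - 1) - 1
    scale = w.detach().abs().max().clamp(min=1e-8) / qmax
    w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
    return w + (w_q - w).detach()  # straight-through estimator

class QuantLinear(nn.Linear):
    # Linear layer whose weights are fake-quantized on the fly (illustrative only).
    def __init__(self, in_features, out_features, bits=8):
        super().__init__(in_features, out_features)
        self.bits = bits

    def forward(self, x):
        return F.linear(x, fake_quantize(self.weight, self.bits), self.bias)

def joint_distill_quant_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    # Combine hard-label cross-entropy with temperature-scaled KL against the teacher.
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1 - alpha) * kd

# Toy usage: a quantized "student" distilled from a full-precision "teacher".
teacher = nn.Linear(16, 4)
student = QuantLinear(16, 4, bits=8)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 16)
labels = torch.randint(0, 4, (32,))
with torch.no_grad():
    t_logits = teacher(x)
loss = joint_distill_quant_loss(student(x), t_logits, labels)
loss.backward()
opt.step()

In the paper, this joint distillation-and-quantization idea is applied to BART's encoder-decoder layers; see the linked repository (amazon-research/dq-bart) for the actual implementation.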
Anthology ID:
2022.acl-short.22
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
203–211
URL:
https://aclanthology.org/2022.acl-short.22
DOI:
10.18653/v1/2022.acl-short.22
Cite (ACL):
Zheng Li, Zijian Wang, Ming Tan, Ramesh Nallapati, Parminder Bhatia, Andrew Arnold, Bing Xiang, and Dan Roth. 2022. DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 203–211, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (Li et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-short.22.pdf
Software:
2022.acl-short.22.software.zip
Code:
amazon-research/dq-bart (additional community code available)
Data:
CNN/Daily Mail, ELI5, WMT 2016, XSum