
Prototypical Verbalizer for Prompt-based Few-shot Tuning

Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, Zhiyuan Liu


Abstract
Prompt-based tuning for pre-trained language models (PLMs) has shown its effectiveness in few-shot learning. Typically, prompt-based tuning wraps the input text into a cloze question. To make predictions, the model maps the output words to labels via a verbalizer, which is either manually designed or automatically built. However, manual verbalizers heavily depend on domain-specific prior knowledge and human effort, while finding appropriate label words automatically remains challenging. In this work, we propose the prototypical verbalizer (ProtoVerb), which is built directly from training data. Specifically, ProtoVerb learns prototype vectors as verbalizers by contrastive learning. In this way, the prototypes summarize training instances and are able to enclose rich class-level semantics. We conduct experiments on both topic classification and entity typing tasks, and the results demonstrate that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce. More surprisingly, ProtoVerb consistently boosts prompt-based tuning even on untuned PLMs, indicating an elegant non-tuning way to utilize PLMs. Our code is available at https://github.com/thunlp/OpenPrompt.
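
As a rough illustration of the idea (not the authors' implementation), the sketch below builds one prototype vector per class from the PLM's [MASK]-position embeddings and classifies new instances by cosine similarity to those prototypes. For simplicity the prototypes are mean-pooled here, whereas ProtoVerb learns them with a contrastive objective; all function names in the snippet are ours.

import torch
import torch.nn.functional as F

def compute_prototypes(embeddings, labels, num_classes):
    # embeddings: [N, d] hidden states taken at the [MASK] position;
    # one prototype per class = mean of that class's instance embeddings
    # (a simplification: the paper learns prototypes contrastively).
    protos = torch.stack(
        [embeddings[labels == c].mean(dim=0) for c in range(num_classes)]
    )
    # Unit-normalize so a dot product below equals cosine similarity.
    return F.normalize(protos, dim=-1)

def predict(query, prototypes):
    # Score each class by cosine similarity between the query's [MASK]
    # embedding and the class prototype, replacing the usual
    # verbalizer's label-word lookup.
    query = F.normalize(query, dim=-1)
    return (query @ prototypes.T).argmax(dim=-1)

# Toy usage: 8 training embeddings of dimension 16, 2 classes.
emb = torch.randn(8, 16)
lab = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
protos = compute_prototypes(emb, lab, num_classes=2)
print(predict(torch.randn(3, 16), protos))  # predicted class ids for 3 queries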
Anthology ID:
2022.acl-long.483
Volume:
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
May
Year:
2022
Address:
Dublin, Ireland
Editors:
Smaranda Muresan, Preslav Nakov, Aline Villavicencio
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
7014–7024
URL:
https://aclanthology.org/2022.acl-long.483
DOI:
10.18653/v1/2022.acl-long.483
Bibkey:
Cite (ACL):
Ganqu Cui, Shengding Hu, Ning Ding, Longtao Huang, and Zhiyuan Liu. 2022. Prototypical Verbalizer for Prompt-based Few-shot Tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7014–7024, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Prototypical Verbalizer for Prompt-based Few-shot Tuning (Cui et al., ACL 2022)
PDF:
https://aclanthology.org/2022.acl-long.483.pdf
Code
thunlp/OpenPrompt
Data
Few-NERD