
Self-Training with Differentiable Teacher

Simiao Zuo, Yue Yu, Chen Liang, Haoming Jiang, Siawpeng Er, Chao Zhang, Tuo Zhao, Hongyuan Zha


Abstract
Self-training achieves enormous success in various semi-supervised and weakly-supervised learning tasks. The method can be interpreted as a teacher-student framework, where the teacher generates pseudo-labels and the student makes predictions. The two models are updated alternately. However, this straightforward alternating update rule leads to training instability, because a small change in the teacher may result in a significant change in the student. To address this issue, we propose DRIFT, short for differentiable self-training, which treats teacher-student training as a Stackelberg game. In this game, a leader is always in a more advantageous position than a follower. In self-training, the student contributes to the prediction performance, and the teacher controls the training process by generating pseudo-labels. Therefore, we treat the student as the leader and the teacher as the follower. The leader procures its advantage by acknowledging the follower's strategy, which involves differentiable pseudo-labels and differentiable sample weights. Consequently, the leader-follower interaction can be effectively captured via the Stackelberg gradient, obtained by differentiating the follower's strategy. Experimental results on semi- and weakly-supervised classification and named entity recognition tasks show that our model outperforms existing approaches by large margins.
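The key contrast in the abstract is between the vanilla alternating update, which treats the teacher's pseudo-labels as constants, and the Stackelberg gradient, which differentiates the student's loss through the teacher's strategy. The following toy sketch illustrates that distinction on a one-parameter binary classifier. It is not the paper's model: the "teacher" here is assumed, for illustration only, to be a temperature-sharpened copy of the student's own prediction, so that the pseudo-label `q` is a differentiable function of the student parameter `w`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def full_loss(w, x, tau=2.0):
    """Student cross-entropy against a w-dependent pseudo-label.

    p: student prediction; q: teacher pseudo-label (here, an assumed
    sharpening teacher that depends on the student parameter w).
    """
    p = sigmoid(w * x)
    q = sigmoid(tau * w * x)
    return -(q * np.log(p) + (1 - q) * np.log(1 - p))

def vanilla_grad(w, x, tau=2.0):
    """Alternating-update gradient: pseudo-label q treated as a constant."""
    p = sigmoid(w * x)
    q = sigmoid(tau * w * x)
    return (p - q) * x

def stackelberg_grad(w, x, tau=2.0):
    """Leader's gradient: adds the term from differentiating the teacher."""
    p = sigmoid(w * x)
    q = sigmoid(tau * w * x)
    dL_dq = -(np.log(p) - np.log(1 - p))   # = -(w * x) for a sigmoid student
    dq_dw = tau * x * q * (1 - q)          # teacher's strategy is differentiable
    return (p - q) * x + dL_dq * dq_dw

w, x = 0.5, 1.3
g_stack = stackelberg_grad(w, x)
g_vanilla = vanilla_grad(w, x)

# Finite-difference check: the Stackelberg gradient is the true d(full_loss)/dw,
# while the vanilla gradient ignores the teacher's response and differs from it.
eps = 1e-6
g_fd = (full_loss(w + eps, x) - full_loss(w - eps, x)) / (2 * eps)
```

In this sketch the extra term `dL_dq * dq_dw` is what the abstract calls the Stackelberg gradient contribution: the leader (student) accounts for how its own update shifts the follower's (teacher's) pseudo-labels, rather than treating them as fixed targets between alternating updates.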
Anthology ID:
2022.findings-naacl.70
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
933–949
URL:
https://aclanthology.org/2022.findings-naacl.70
DOI:
10.18653/v1/2022.findings-naacl.70
Cite (ACL):
Simiao Zuo, Yue Yu, Chen Liang, Haoming Jiang, Siawpeng Er, Chao Zhang, Tuo Zhao, and Hongyuan Zha. 2022. Self-Training with Differentiable Teacher. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 933–949, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
Self-Training with Differentiable Teacher (Zuo et al., Findings 2022)
PDF:
https://aclanthology.org/2022.findings-naacl.70.pdf
Video:
https://aclanthology.org/2022.findings-naacl.70.mp4
Data
AG News, BC5CDR, IMDb Movie Reviews