
Respond in my Language: Mitigating Language Inconsistency in Response Generation based on Large Language Models

Liang Zhang, Qin Jin, Haoyang Huang, Dongdong Zhang, Furu Wei


Abstract
Large Language Models (LLMs) show strong instruction understanding ability across multiple languages. However, they are easily biased towards English during instruction tuning, and generate English responses even when given non-English instructions. In this paper, we investigate the language inconsistent generation problem in monolingual instruction tuning. We find that instruction tuning in English increases the model's preference for English responses: it assigns higher probabilities to English responses than to responses in the same language as the instruction. Based on these findings, we alleviate the language inconsistent generation problem by counteracting the model's preference for English responses in both the training and inference stages. Specifically, we propose Pseudo-Inconsistent Penalization (PIP), which prevents the model from generating English responses when given non-English prompts during training, and Prior Enhanced Decoding (PED), which improves the language-consistent prior by leveraging the untuned base language model. Experimental results show that both methods significantly improve the language consistency of the model without requiring any multilingual data. Our code, data, and models will be released.
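The abstract's Prior Enhanced Decoding (PED) idea, combining the tuned model's next-token distribution with the untuned base model's at inference time, can be illustrated with a minimal sketch. The interpolation formula, the weight `lam`, and the toy two-token vocabulary below are all illustrative assumptions, not the paper's actual method; the sketch only shows how mixing in a base model that retains a language-consistent prior can flip the preferred token away from English.

```python
import math

def ped_combine(tuned_logprobs, base_logprobs, lam=0.8):
    """Illustrative sketch of prior-enhanced decoding (assumed form):
    interpolate the tuned model's log-probabilities with the untuned
    base model's, then renormalize into a proper distribution."""
    mixed = {tok: (1.0 - lam) * tuned_logprobs[tok] + lam * base_logprobs[tok]
             for tok in tuned_logprobs}
    # log-sum-exp normalizer so the mixed scores form a distribution
    z = math.log(sum(math.exp(v) for v in mixed.values()))
    return {tok: v - z for tok, v in mixed.items()}

# Toy vocabulary: an English continuation vs. one in the instruction's
# language (here labeled "zh"). Probabilities are invented for illustration.
tuned = {"en": math.log(0.8), "zh": math.log(0.2)}  # tuned model prefers English
base = {"en": math.log(0.3), "zh": math.log(0.7)}   # base model keeps the instruction language
mixed = ped_combine(tuned, base, lam=0.8)
```

With enough weight on the base model's prior, the mixed distribution prefers the instruction-language token even though the tuned model alone preferred English.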
Anthology ID:
2024.acl-long.229
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4177–4192
URL:
https://aclanthology.org/2024.acl-long.229
DOI:
10.18653/v1/2024.acl-long.229
Cite (ACL):
Liang Zhang, Qin Jin, Haoyang Huang, Dongdong Zhang, and Furu Wei. 2024. Respond in my Language: Mitigating Language Inconsistency in Response Generation based on Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4177–4192, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Respond in my Language: Mitigating Language Inconsistency in Response Generation based on Large Language Models (Zhang et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.229.pdf