Leveraging implicit feedback from deployment data in dialogue

RY Pang, S Roller, K Cho, H He, J Weston - arXiv preprint arXiv:2307.14117, 2023 - arxiv.org
We study improving social conversational agents by learning from natural dialogue between users and a deployed model, without extra annotations. To implicitly measure the quality of a machine-generated utterance, we leverage signals such as the response length, sentiment, and reaction of future human utterances in the collected dialogue episodes. Our experiments use the publicly released deployment data from BlenderBot (Xu et al., 2023). Human evaluation indicates improvements in our new models over baseline responses; however, we find that some proxy signals can also lead to more generations with undesirable properties. For example, optimizing for conversation length can produce more controversial or unfriendly generations than the baseline, whereas optimizing for positive sentiment or reaction can decrease these behaviors.
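To make the idea of implicit feedback concrete, the following is a minimal sketch of how proxy signals might be computed from a user's next reply. The signal definitions, word lists, thresholds, and weighting below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical proxy-signal sketch: score a bot utterance by properties of
# the user's next reply (length, crude lexicon-based sentiment). All
# constants here are assumptions for illustration only.

POSITIVE = {"great", "thanks", "love", "nice", "cool", "haha"}
NEGATIVE = {"no", "stop", "boring", "wrong", "bad", "hate"}


def length_signal(user_reply: str, threshold: int = 5) -> float:
    """Return 1.0 if the user's reply has at least `threshold` words, else 0.0."""
    return 1.0 if len(user_reply.split()) >= threshold else 0.0


def sentiment_signal(user_reply: str) -> float:
    """Crude lexicon sentiment of the user's reply, in [-1, 1]."""
    words = [w.strip(".,!?").lower() for w in user_reply.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


def proxy_reward(user_reply: str) -> float:
    """Combine signals into one scalar; the equal weighting is arbitrary."""
    return 0.5 * length_signal(user_reply) + 0.5 * sentiment_signal(user_reply)
```

In practice the paper's signals come from real deployment dialogues and a learned sentiment/reaction classifier would replace the word lists; the abstract's caveat also applies here, since optimizing the length signal alone could favor provocative replies that keep users talking.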