
Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer

Yongxin Zhu, Dan Su, Liqiang He, Linli Xu, Dong Yu


Abstract
While recent speech language models have achieved significant progress, they still face considerable challenges in modeling the long acoustic sequences produced by neural audio codecs. In this paper, we introduce the Generative Pre-trained Speech Transformer (GPST), a hierarchical transformer designed for efficient speech language modeling. GPST quantizes audio waveforms into two distinct types of discrete speech representations and integrates them within a hierarchical transformer architecture, allowing for a unified one-stage generation process and enhancing Hi-Res audio generation capabilities. Trained on large speech corpora in an end-to-end unsupervised manner, GPST can generate syntactically consistent speech with diverse speaker identities. Given a brief 3-second prompt, GPST produces natural and coherent personalized speech, demonstrating in-context learning abilities. Moreover, our approach can be easily extended to cross-lingual speech generation by incorporating multi-lingual semantic tokens and universal acoustic tokens. Experimental results indicate that GPST significantly outperforms existing speech language models in terms of word error rate, speech quality, and speaker similarity. See https://youngsheen.github.io/GPST/demo for demo samples.
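
The abstract describes a two-level design: a global transformer runs over the coarse (semantic) token positions, and a small local transformer fills in the stack of acoustic codebook tokens at each position, so the whole model is trained in a single stage. The PyTorch sketch below is only a minimal illustration of that hierarchical factorization under assumed shapes and hyperparameters; it is not the authors' released implementation, and every name (HierarchicalSpeechLM, n_codebooks, sem_vocab, etc.) is a placeholder.

    # Illustrative sketch of a hierarchical speech LM (assumed sizes, not the paper's code).
    import torch
    import torch.nn as nn

    class HierarchicalSpeechLM(nn.Module):
        def __init__(self, sem_vocab=512, ac_vocab=1024, n_codebooks=8, d=256):
            super().__init__()
            self.n_codebooks = n_codebooks
            self.sem_emb = nn.Embedding(sem_vocab, d)   # semantic (coarse) tokens
            self.ac_emb = nn.Embedding(ac_vocab, d)     # acoustic codebook tokens
            g_layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
            l_layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
            self.global_tf = nn.TransformerEncoder(g_layer, num_layers=4)  # over time steps
            self.local_tf = nn.TransformerEncoder(l_layer, num_layers=2)   # over codebooks
            self.head = nn.Linear(d, ac_vocab)

        def forward(self, sem_tokens, ac_tokens):
            # sem_tokens: (B, T) semantic ids; ac_tokens: (B, T, C) acoustic ids per step.
            B, T = sem_tokens.shape
            # Global stage: causal self-attention along the time axis.
            causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
            ctx = self.global_tf(self.sem_emb(sem_tokens), mask=causal)    # (B, T, d)
            # Local stage: per time step, predict codebook c from ctx and codebooks < c.
            ac = self.ac_emb(ac_tokens).flatten(0, 1)                      # (B*T, C, d)
            local_in = torch.cat([ctx.reshape(B * T, 1, -1), ac[:, :-1]], dim=1)
            local_mask = torch.triu(torch.ones(self.n_codebooks, self.n_codebooks,
                                               dtype=torch.bool), diagonal=1)
            out = self.local_tf(local_in, mask=local_mask)                 # (B*T, C, d)
            return self.head(out).reshape(B, T, self.n_codebooks, -1)      # logits

    # Toy usage: a cross-entropy loss over all codebook predictions.
    model = HierarchicalSpeechLM()
    sem = torch.randint(0, 512, (2, 10))
    ac = torch.randint(0, 1024, (2, 10, 8))
    logits = model(sem, ac)                                                # (2, 10, 8, 1024)
    loss = nn.functional.cross_entropy(logits.flatten(0, 2), ac.flatten())

The point of the hierarchy is efficiency: the expensive global transformer attends only over T coarse positions, while the cheap local transformer handles the C codebook tokens within each position, rather than flattening the full T x C acoustic sequence into one long context.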
Anthology ID: 2024.acl-long.97
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 1764–1775
URL: https://aclanthology.org/2024.acl-long.97
DOI: 10.18653/v1/2024.acl-long.97
Cite (ACL): Yongxin Zhu, Dan Su, Liqiang He, Linli Xu, and Dong Yu. 2024. Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1764–1775, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer (Zhu et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.97.pdf