
GPST

Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer

|Paper| |GitHub|

Abstract. While recent advancements in speech language models have achieved significant progress, they still face considerable challenges in modeling the long acoustic sequences produced by neural audio codecs. In this paper, we introduce the Generative Pre-trained Speech Transformer (GPST), a hierarchical transformer designed for efficient speech language modeling. GPST quantizes audio waveforms into two distinct types of discrete speech representations and integrates them within a hierarchical transformer architecture, allowing for a unified one-stage generation process and enhancing Hi-Res audio generation capabilities. By training on large corpora of speech in an end-to-end unsupervised manner, GPST can generate syntactically consistent speech with diverse speaker identities. Given a brief 3-second prompt, GPST can produce natural and coherent personalized speech, demonstrating in-context learning abilities. Moreover, our approach can easily be extended to cross-lingual speech generation by incorporating multi-lingual semantic tokens and universal acoustic tokens. Experimental results indicate that GPST significantly outperforms existing speech language models in terms of word error rate, speech quality, and speaker similarity.
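For readers who prefer code to prose, the block below is a minimal, illustrative sketch of the hierarchical design described above: a global decoder runs over the semantic tokens plus one summary embedding per acoustic frame, while a small local module models the residual-codebook levels within each frame. All module names, sizes, and the exact interface between the two stages are our assumptions here, not the released implementation.

```python
# Hypothetical sketch of a hierarchical speech transformer over two token
# types (semantic tokens and RVQ acoustic codes). Sizes and structure are
# assumptions made for illustration only.
import torch
import torch.nn as nn

class HierarchicalSpeechLM(nn.Module):
    def __init__(self, n_semantic=512, n_acoustic=1024, n_codebooks=8, d=512):
        super().__init__()
        self.sem_emb = nn.Embedding(n_semantic, d)
        self.ac_emb = nn.ModuleList(nn.Embedding(n_acoustic, d) for _ in range(n_codebooks))
        self.global_tf = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=8, batch_first=True), num_layers=12)
        self.local_tf = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=8, batch_first=True), num_layers=4)
        self.head = nn.Linear(d, n_acoustic)

    def forward(self, semantic, acoustic):
        # semantic: (B, T_s) token ids; acoustic: (B, T_a, Q) RVQ code ids
        B, T_a, Q = acoustic.shape
        # one summary embedding per acoustic frame = sum of its codebook embeddings
        frame = sum(emb(acoustic[..., q]) for q, emb in enumerate(self.ac_emb))
        seq = torch.cat([self.sem_emb(semantic), frame], dim=1)
        T = seq.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf"), device=seq.device), 1)
        h = self.global_tf(seq, mask=causal)            # causal over the frame axis
        ctx = h[:, semantic.size(1):, :]                 # one context vector per acoustic frame
        # local module refines the codebook levels inside each frame, conditioned
        # on the global context (a per-level causal mask is omitted for brevity)
        local_in = torch.stack([emb(acoustic[..., q]) for q, emb in enumerate(self.ac_emb)], dim=2)
        local_in = (local_in + ctx.unsqueeze(2)).reshape(B * T_a, Q, -1)
        logits = self.head(self.local_tf(local_in))      # (B*T_a, Q, n_acoustic)
        return logits.reshape(B, T_a, Q, -1)
```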

Semantic to Acoustic


In this setting, we use the ground-truth semantic tokens as the condition for acoustic generation, which is similar to the TTS task. The generated speech preserves the content of the spoken sentence while varying in speaker identity. We also train a toy decoder-only transformer named GPST-TTS on the LibriSpeech 960h dataset to generate semantic tokens with text as the condition, supporting the TTS task.
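A hedged sketch of this pipeline is shown below; the method names (`generate_semantic`, `generate_acoustic`) and sampling parameters are illustrative placeholders, not the released API.

```python
# Hypothetical sketch of the semantic-to-acoustic setting: ground-truth (or
# text-predicted) semantic tokens serve as the condition, and the model
# samples the acoustic tokens that follow. Names are assumptions.
import torch

@torch.no_grad()
def synthesize(text_ids, gpst_tts, gpst, codec_decoder, ground_truth_semantic=None):
    # 1) Semantic tokens: either taken from the reference utterance
    #    ("semantic to acoustic" setting) or predicted from text by GPST-TTS.
    if ground_truth_semantic is not None:
        semantic = ground_truth_semantic
    else:
        semantic = gpst_tts.generate_semantic(text_ids, temperature=0.9)
    # 2) Acoustic tokens conditioned on the semantic sequence; the speaker
    #    identity is unconstrained, so repeated sampling yields different voices.
    acoustic = gpst.generate_acoustic(semantic, temperature=0.9, top_k=50)
    # 3) The neural codec decoder turns the RVQ codes back into a waveform.
    return codec_decoder(acoustic)
```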

Original GPST-TTS GPST

Speaker Identity Transfer


In this setting, we are interested in voice conversion, which transfers the speaker identity of the prompt speech to the target speech. GPST is encouraged to generate subsequent acoustic tokens that share the speaker identity with the acoustic prompt while remaining consistent with the content of the semantic tokens. We find that directly concatenating the prompt and target sequences causes unstable generation around the junction. To address this issue, we artificially insert a very short silence excerpt (0.1 second) to explicitly break the linguistic continuation. In this way, the model does not struggle to bridge the discontinuity and is able to generate stable speech.
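The silence-insertion trick can be sketched as follows. The semantic frame rate and the silence-token id are assumptions for illustration; in practice the silence tokens would be obtained by encoding a short silent clip.

```python
# Illustrative construction of the voice-conversion condition: ~0.1 s of
# "silence" tokens is spliced between the prompt and the target so the model
# is not forced to bridge an abrupt linguistic discontinuity.
import torch

SEMANTIC_HZ = 50          # assumed semantic-token frame rate
SILENCE_SEC = 0.1         # length of the inserted silence excerpt

def build_transfer_condition(prompt_semantic, target_semantic, silence_id=0):
    n_sil = int(round(SEMANTIC_HZ * SILENCE_SEC))
    silence = torch.full((n_sil,), silence_id, dtype=prompt_semantic.dtype)
    return torch.cat([prompt_semantic, silence, target_semantic])

# usage (method/argument names are hypothetical):
# condition = build_transfer_condition(prompt_sem, target_sem)
# acoustic  = gpst.generate_acoustic(condition, acoustic_prefix=prompt_acoustic)
```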

Original Prompt GPST

Unconditional Generation


In this setting, we unconditionally generate the semantic tokens, which are subsequently used as the condition for acoustic generation. The randomly sampled semantic sequences yield diverse, syntactically and semantically consistent linguistic content. The acoustic tokens vary in speaker identity and prosody, with the semantic content serving as a guideline.
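A minimal sketch of this two-stage sampling is given below, again with illustrative method names rather than the released API.

```python
# Hypothetical sketch of unconditional generation: sample semantic tokens
# with no prefix, then condition acoustic sampling on them.
import torch

@torch.no_grad()
def unconditional_sample(gpst, codec_decoder, max_semantic_len=500):
    # free-running semantic sampling determines the linguistic content
    semantic = gpst.generate_semantic(prefix=None, max_len=max_semantic_len,
                                      temperature=1.0, top_k=100)
    # acoustic sampling fills in speaker identity and prosody
    acoustic = gpst.generate_acoustic(semantic, temperature=0.9, top_k=50)
    return codec_decoder(acoustic)
```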

GPST