CN117894294B - Personification auxiliary language voice synthesis method and system - Google Patents
Info
- Publication number
- CN117894294B (Application No. CN202410288143.0A)
- Authority
- CN
- China
- Prior art keywords
- language
- voice
- audio
- tone
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/033—Voice editing, e.g. manipulating the voice of the synthesiser
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Machine Translation (AREA)
Abstract
The invention provides a personified secondary language (paralinguistic) speech synthesis method and system. Secondary language tags are labeled on original-timbre voice data containing secondary language, and secondary language pronunciation units with a target timbre are obtained from the labeled original-timbre voice data combined with reference audio of the target timbre. Language input text is received, comprising TTS text and secondary language tags marked at corresponding positions in the TTS text; the TTS text is synthesized into target-timbre TTS speech, the corresponding secondary language pronunciation units with the target timbre are selected according to the secondary language tags, and they are spliced with the target-timbre TTS speech to generate audio with the target timbre. The invention enables speakers in a voice library to acquire secondary language pronunciation capability at low cost, improves the naturalness and realism of TTS speakers during dialogue, and lets AI communicate with zero distance in human-machine interaction.
Description
Technical Field
The invention belongs to the technical field of voice processing, and relates to a personification auxiliary language voice synthesis method and system.
Background
Current speech synthesis technology can produce highly natural, high-quality audio and already meets many needs in daily applications such as video dubbing and broadcasting. However, there is still a gap compared with real human speech, especially in conversational scenarios. In real conversation, a speaker uses pauses and hesitations such as "en", "emmm" or sobs to think about the content of the next sentence, or produces laughter, exhalations and other non-verbal sounds to express his or her current state. Such pronunciations without actual semantic content are called paralinguistic, or secondary language, phenomena in phonetics. An existing TTS (Text To Speech) model always speaks fluently at a constant speed, so the whole conversation is relatively mechanical and stiff, and the lack of secondary language makes it difficult for TTS to achieve a personified effect during dialogue.
TTS with secondary language has not been studied much so far, and there are few related products on the market. In general, to achieve secondary language pronunciation, additional secondary language annotation is needed: first define the secondary language tags, design scenes and the corresponding texts, have a voice actor perform the recordings according to the texts and the tags in them, and finally train the TTS model on the customized data so that it acquires secondary language pronunciation capability. This approach is theoretically feasible but has several problems. First, the customization cost is higher than for general TTS data, and the recording period is longer. Second, whether the secondary language can be transferred remains to be verified; that is, secondary language recorded for one speaker cannot simply be reused for another, so every target speaker would be required to have secondary language pronunciation capability.
Therefore, how to provide a highly personified and generalizable secondary language speech synthesis method and system is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
In view of this, the present invention provides a personified method and system for synthesizing a sub-language speech, which can realize that a speaker in a speech library has a sub-language pronunciation capability at low cost, and improve naturalness and realism of TTS speakers in a conversation process, so that AI communicates with zero distance in man-machine interaction.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
the invention discloses a personification auxiliary language voice synthesis method, which comprises the following steps:
S1: labeling the auxiliary language label on the original tone voice data containing the auxiliary language, and acquiring an auxiliary language pronunciation unit with a target tone according to the labeled original tone voice data and the reference audio of the target tone;
S2: receiving language input text, wherein the language input text comprises TTS text and a secondary language label marked at a corresponding position in the TTS text; and synthesizing the TTS text into target tone TTS voice, selecting a corresponding secondary language pronunciation unit with the target tone according to the secondary language label, and splicing the secondary language pronunciation unit with the target tone TTS voice to generate audio with the target tone.
It should be noted that, in this embodiment, the original tone color voice data includes one or more tone colors, and at least one tone color includes a sub-language.
Preferably, the S1 includes:
s11: a voice recognition step: labeling the auxiliary language label on the original tone color voice data containing the auxiliary language, and performing voice recognition on the labeled original tone color voice data to extract PPG features and fundamental frequency features;
S12: a voice conversion step: and performing content coding on the PPG characteristics, performing intonation coding on the fundamental frequency characteristics, performing tone coding on the reference audio of the target tone, and uniformly decoding the coding result to obtain the secondary language pronunciation unit with the target tone.
Preferably, the method further comprises a training step of a speech recognition model, wherein the speech recognition model is used for executing the speech recognition step:
collecting a voice dialogue data set containing a secondary language, and labeling the secondary language in the voice dialogue data set with a secondary language label to obtain a labeled voice dialogue data set;
constructing a wenet model, the wenet model comprising a Conformer Encoder for receiving input audio and outputting PPG features of the input audio;
pre-training the wenet model using a chinese speech recognition dataset;
And fine-tuning the weight of the wenet model after pre-training by using the labeled voice dialogue data set to obtain a trained voice recognition model.
Preferably, the method further comprises a training step of a speech conversion model, wherein the speech conversion model is used for executing the speech conversion step:
pre-training the speech conversion model using the audio of the chinese speech recognition dataset and PPG features and fundamental frequency features of the corresponding audio;
And fine tuning the weight of the pre-trained voice conversion model by using the audio of the target tone, the audio of the marked voice dialogue data set, and the PPG characteristic and the fundamental frequency characteristic of the corresponding audio to obtain the trained voice conversion model.
Preferably, the step S12 further includes: after uniformly decoding the encoding result, obtaining target-timbre audio data corresponding to the content of the original-timbre voice data, and intercepting the target-timbre secondary language segment audio data, namely the secondary language pronunciation unit with the target timbre, according to the position of the secondary language tag.
Preferably, storing the secondary language pronunciation unit with the target tone corresponding to the secondary language label acquired in the step S1 to obtain a secondary language database; and S2, searching the corresponding secondary language pronunciation unit with the target tone from the secondary language database according to the secondary language label.
Preferably, the step of splicing the target timbre TTS voice with the secondary language pronunciation unit with the target timbre in S2 includes:
Constructing a voice smoothing model by training an autoregressive model using the audio of the target tone as a training set, wherein the autoregressive model comprises a decoder and predicts the next voice frame from the previous voice frames;
And inputting the secondary language pronunciation unit with the target tone and the target tone TTS voice into the trained voice smoothing model for voice smoothing, and outputting the audio with the target tone.
The invention also discloses a personified auxiliary language voice synthesis system according to the personified auxiliary language voice synthesis method, which comprises the following steps:
The auxiliary language unit extraction subsystem is used for labeling auxiliary language labels on the original tone voice data containing the auxiliary language, and acquiring an auxiliary language pronunciation unit with a target tone according to the labeled original tone voice data and the reference audio of the target tone;
The secondary language synthesis subsystem is used for receiving language input text, wherein the language input text comprises TTS text and secondary language labels marked at corresponding positions in the TTS text; and synthesizing the TTS text into target tone TTS voice, selecting a corresponding secondary language pronunciation unit with the target tone according to the secondary language label, and splicing it with the target tone TTS voice to generate audio with the target tone.
Preferably, the secondary language unit extraction subsystem includes:
The voice recognition module is used for labeling the auxiliary language tag on the original tone voice data containing the auxiliary language, and performing voice recognition on the labeled original tone voice data to extract PPG features and fundamental frequency features;
and the voice conversion module is used for carrying out content coding on the PPG characteristics, carrying out intonation coding on the fundamental frequency characteristics, carrying out tone coding on the reference audio of the target tone, and uniformly decoding the coding result to obtain the auxiliary language pronunciation unit with the target tone.
Preferably, the speech recognition module comprises a speech recognition model, and the speech recognition model is trained according to the following steps:
collecting a voice dialogue data set containing a secondary language, and labeling the secondary language in the voice dialogue data set with a secondary language label to obtain a labeled voice dialogue data set;
constructing a wenet model, the wenet model comprising a Conformer Encoder for receiving input audio and outputting PPG features of the input audio;
pre-training the wenet model using a chinese speech recognition dataset;
And fine-tuning the weight of the wenet model after pre-training by using the labeled voice dialogue data set to obtain a trained voice recognition model.
Preferably, the voice conversion module comprises a voice conversion model, and the voice conversion model is trained according to the following steps:
pre-training the speech conversion model using the audio of the chinese speech recognition dataset and PPG features and fundamental frequency features of the corresponding audio;
And fine tuning the weight of the pre-trained voice conversion model by using the audio of the target tone, the audio of the marked voice dialogue data set, and the PPG characteristic and the fundamental frequency characteristic of the corresponding audio to obtain the trained voice conversion model.
Preferably, the system further comprises a secondary language pronunciation unit interception module, configured to perform an interception operation on the target-timbre audio data, corresponding to the content of the original-timbre voice data, obtained by uniformly decoding the encoding result, namely: intercepting the target-timbre secondary language segment audio data according to the position of the secondary language tag, i.e. the secondary language pronunciation unit with the target timbre.
Preferably, the system further comprises a secondary language database for storing the secondary language pronunciation units with the target timbre corresponding to the different secondary language labels; and the secondary language synthesis subsystem retrieves the corresponding secondary language pronunciation unit with the target timbre from the secondary language database according to the secondary language label.
Preferably, the system further comprises a voice smoothing module, wherein a voice smoothing model is built in the voice smoothing module, the voice smoothing model comprises an autoregressive model trained by using the audio of the target tone as a training set, the autoregressive model comprises a decoder, and the next voice frame is predicted by using the previous voice frame;
The voice smoothing model is used for receiving the input secondary language pronunciation unit with the target timbre and the target-timbre TTS voice, performing voice smoothing, and outputting audio with the target timbre.
Compared with the prior art, the invention has the following beneficial effects:
The secondary language speech synthesis method and system can give the target speaker secondary language pronunciation capability at extremely low cost. On the one hand, the method is highly extensible and applicable to any newly added speaker in the library; on the other hand, because a conversion model and a smoothing model are adopted, the timbre and naturalness of the secondary language pronunciation units and the TTS-synthesized audio can be joined seamlessly, realizing highly personified pronunciation of the TTS system and a more immersive dialogue in human-machine interaction scenarios.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method of personified secondary language speech synthesis in an embodiment of the present invention;
FIG. 2 is a schematic diagram of the operation of the sub-linguistic unit extraction subsystem in an embodiment of the present invention;
FIG. 3 is a flow chart of a sub-language optimization conversion in an embodiment of the invention;
FIG. 4 is a schematic diagram of a speech recognition model in an embodiment of the invention;
FIG. 5 is a schematic diagram of a speech conversion model in an embodiment of the invention;
FIG. 6 is a schematic diagram of the operation of the sub-language synthesis subsystem in an embodiment of the invention;
FIG. 7 is a schematic diagram of a speech smoothing model based on an autoregressive model in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The first aspect of the embodiment of the invention provides a personified auxiliary language voice synthesis method. As shown in fig. 1, the method comprises the following steps:
S1: sub-language unit extraction sub-flow: and labeling the auxiliary language label on the original tone voice data containing the auxiliary language, and acquiring the auxiliary language pronunciation unit with the target tone according to the labeled original tone voice data and the reference audio of the target tone.
S2: sub-language synthesis sub-flow: receiving language input text, wherein the language input text comprises TTS text and a secondary language label marked at a corresponding position in the TTS text; and synthesizing the TTS text into target tone TTS voice, selecting a corresponding secondary language pronunciation unit with the target tone according to the secondary language label, and splicing the target tone TTS voice to generate audio with the target tone.
In one embodiment, obtaining secondary language pronunciation units with the target timbre is one of the core flows of the whole secondary language speech synthesis system, since it ensures timbre consistency between the secondary language speech segments and the audio synthesized by the TTS model. Therefore, the speech containing secondary language in the data set needs to be converted to the target speaker so as to carry the target speaker's timbre. However, direct conversion does not work well: one of the cores of voice conversion is content extraction by speech recognition, current speech recognition models do not support recognition of secondary language, and secondary language conversion therefore often produces wrong pronunciation content. Supporting voice conversion of the secondary language is thus one of the key technologies for realizing secondary language speech synthesis. As shown in FIG. 2, the invention uses the secondary language tags as recognition units of speech recognition, fine-tunes the speech recognition model on the labeled voice data carrying secondary language tags, and optimizes the voice conversion model based on the features produced by the optimized speech recognition. The core modules of the secondary language unit conversion technology and their optimization are described in detail below.
As shown in fig. 3, the sub-language conversion optimization process in S1 includes:
s11: a voice recognition step: labeling the auxiliary language label on the original tone color voice data containing the auxiliary language, and performing voice recognition on the labeled original tone color voice data to extract PPG features and fundamental frequency features;
S12: a voice conversion step: and performing content coding on the PPG characteristic, performing intonation coding on the fundamental frequency characteristic, performing tone coding on the reference audio of the target tone, and uniformly decoding the coding result to obtain the auxiliary language pronunciation unit with the target tone.
The training process of the conversion optimization model of this embodiment aims to obtain a secondary language pronunciation unit with a target tone from the voice dialogue data set for the subsequent secondary language synthesis subsystem to splice and synthesize TTS text.
The specific implementation process is as follows:
in this embodiment, the speech recognition model is used for executing a speech recognition step, and the training step of the speech recognition model includes:
S110: In general, a large number of secondary language pronunciations occur during conversation, so a daily-conversation speech data set is collected first, and the secondary language in the data set is labeled according to the predefined secondary language tags to obtain a labeled voice dialogue data set. The designed secondary language tags are shown in Table 1; that is, the text content corresponding to speech segments with the following semantics is marked with the corresponding secondary language tag.
Table 1 side language tag table:
S120: As shown in fig. 4, a wenet model is constructed. The input audio passes through the Conformer Encoder to output PPG features (phonetic posteriorgrams, i.e., phoneme posterior probabilities), which are the key features used by the subsequent voice conversion model; the CTC Decoder converts these features into the corresponding text. The text converted by the CTC Decoder is not used in this embodiment.
S130: the wenet model is pre-trained using a large-scale chinese speech recognition dataset.
S140: the weight of the pre-trained wenet model is finely adjusted by using the labeled voice dialogue dataset, so that the voice recognition model has better effect on the auxiliary language recognition, and the trained voice recognition model is obtained. Finally, the model is used to extract PPG features on the training data set for optimization of the subsequent speech conversion model.
In this embodiment, the speech conversion model is used to perform the speech conversion step. The speech conversion model aims to convert the input audio so that it carries the timbre of the target speaker while preserving the speaking content of the input audio. As shown in fig. 5, the speech conversion model mainly includes four sub-modules: a content encoder, an intonation encoder, a timbre encoder and a decoder. Each sub-module and the training and optimization process of the whole model are described in detail below:
Content encoder:
The PPG features of the input audio mainly retain its content information, which is essential for keeping the converted content consistent. The content encoder further processes the PPG features extracted by the speech recognition model so that they can be fed into the subsequent decoder to generate the converted audio. The entire content encoder consists of multiple layers of one-dimensional convolutions for fast inference.
Alternatively, higher-order feature processing structures such as Transformers may be used.
Intonation encoder:
The PPG features of the input audio contain essentially no style information such as intonation, and secondary language pronunciation, compared with ordinary text, is much more a matter of style; therefore the fundamental frequency feature extracted from the input audio is indispensable as one of the intonation features of the conversion process. The intonation encoder is composed of multiple convolutional layers; it processes the fundamental frequency feature information extracted from the input audio, and its output serves as one of the inputs to the decoder.
Timbre encoder:
In order to give the converted audio the target timbre, the timbre encoder obtains overall prosody and timbre information from the reference audio of the target speaker: a fixed one-dimensional vector is derived from the reference audio through several convolution and pooling layers, and this vector is added to or concatenated with the content information for the subsequent audio generation.
A decoder:
the decoder adopts a classical Transformer structure, integrates the content, intonation and timbre information output by each encoder, and finally outputs the converted audio with the target timbre.
Alternatively, the decoder may employ other networks, such as conformer, etc.
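A minimal PyTorch sketch of the four sub-modules described above is given below. The layer sizes, the use of a non-causal Transformer stack in place of the decoder, and the mel-spectrogram output (which would be rendered to a waveform by a separate vocoder) are assumptions made for illustration; the embodiment does not fix these details.

```python
# Sketch of the voice conversion network: content / intonation / timbre encoders + decoder.
import torch
import torch.nn as nn

def conv_stack(in_dim, out_dim, layers=3):
    mods = []
    for i in range(layers):
        mods += [nn.Conv1d(in_dim if i == 0 else out_dim, out_dim, 5, padding=2), nn.ReLU()]
    return nn.Sequential(*mods)

class VoiceConversionModel(nn.Module):
    def __init__(self, ppg_dim=215, hidden=256, mel_dim=80):
        super().__init__()
        self.content_encoder = conv_stack(ppg_dim, hidden)       # PPG -> content
        self.intonation_encoder = conv_stack(1, hidden)          # F0 contour -> intonation
        self.timbre_encoder = nn.Sequential(                     # reference mel -> fixed vector
            conv_stack(mel_dim, hidden), nn.AdaptiveAvgPool1d(1))
        dec_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                               dim_feedforward=1024, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=4)
        self.out = nn.Linear(hidden, mel_dim)

    def forward(self, ppg, f0, ref_mel):
        # ppg: (B, T, ppg_dim), f0: (B, T), ref_mel: (B, T_ref, mel_dim)
        c = self.content_encoder(ppg.transpose(1, 2)).transpose(1, 2)    # (B, T, H)
        s = self.intonation_encoder(f0.unsqueeze(1)).transpose(1, 2)     # (B, T, H)
        spk = self.timbre_encoder(ref_mel.transpose(1, 2)).squeeze(-1)   # (B, H)
        x = c + s + spk.unsqueeze(1)          # sum content, intonation and timbre information
        return self.out(self.decoder(x))      # predicted target-timbre mel frames
```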
The training step of the speech conversion model also adopts a mode of pre-training and fine tuning, and comprises the following steps:
Pre-training a voice conversion model by using the audio of the Chinese voice recognition data set and PPG characteristics and fundamental frequency characteristics of the corresponding audio;
and fine-tuning the weights of the pre-trained voice conversion model using the audio of the target timbre, the audio of the labeled voice dialogue data set, and the PPG features and fundamental frequency features of the corresponding audio, so as to improve the secondary language conversion effect and obtain the trained voice conversion model.
In one embodiment, after the optimized voice conversion model is obtained, the speech containing secondary language in the dialogue data set is converted to obtain speech with the target timbre, and the audio data of the target-timbre secondary language segments in this speech is extracted, using the positions of the secondary language tags, as the basic secondary language pronunciation units of the target speaker.
In one embodiment, the secondary language pronunciation units with the target timbre obtained in step S1 are stored, indexed by their corresponding secondary language labels, to obtain a secondary language database; and in S2 the corresponding secondary language pronunciation unit with the target timbre is retrieved from the secondary language database according to the secondary language label.
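The following sketch shows how such a secondary language database might be assembled from the converted audio, assuming the annotation provides start and end times (in seconds) for each tagged segment; the data layout and helper names are hypothetical.

```python
# Sketch: cut the converted target-timbre audio at the annotated tag positions
# and store the segments indexed by their secondary-language tag.
from collections import defaultdict
from pydub import AudioSegment

def build_paralinguistic_db(converted_utterances):
    """
    converted_utterances: iterable of (wav_path, annotations), where annotations
    is a list of (tag, start_sec, end_sec) for each secondary-language segment.
    Returns {tag: [AudioSegment, ...]}.
    """
    db = defaultdict(list)
    for wav_path, annotations in converted_utterances:
        audio = AudioSegment.from_wav(wav_path)
        for tag, start, end in annotations:
            unit = audio[int(start * 1000):int(end * 1000)]   # pydub slices in milliseconds
            db[tag].append(unit)
    return db
```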
In one embodiment, as shown in fig. 6, to synthesize speech with secondary language, S2 first separates, in order, the TTS text and the secondary language tags from the input text carrying secondary language tags, where the TTS text is the language text that does not include the secondary language.
For the TTS text, the corresponding speech is synthesized by the TTS model; for each secondary language tag, the corresponding secondary language pronunciation unit is retrieved from the secondary language database. When all the speech segments are spliced in order, the volume of each piece of audio needs to be unified, and the common pydub audio tool is used for volume balancing; this basically achieves the goal of giving the target speaker secondary language pronunciation capability.
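A possible implementation of this splicing step with pydub is sketched below; `tts_synthesize` and `pick_unit` are hypothetical placeholders for the TTS model call and the database lookup, and the -20 dBFS target level is an assumed choice, not a value specified by the embodiment.

```python
# Sketch of the S2 splicing step: volume-balance each segment and concatenate in text order.
from pydub import AudioSegment

def normalize(seg: AudioSegment, target_dbfs: float = -20.0) -> AudioSegment:
    return seg.apply_gain(target_dbfs - seg.dBFS)     # unify the loudness of each piece

def synthesize_with_paralanguage(items, tts_synthesize, pick_unit):
    """items: ordered list like [("text", "..."), ("tag", "<laugh>"), ...]."""
    out = AudioSegment.empty()
    for kind, value in items:
        seg = tts_synthesize(value) if kind == "text" else pick_unit(value)
        out += normalize(seg)
    return out   # concatenated target-timbre audio, later passed to the smoothing model
```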
In this embodiment, the transition at the junction between the target-timbre secondary language pronunciation unit and the TTS speech in S2 is inevitably unnatural and needs to be smoothed. It is, however, difficult to smooth with rule-based methods, because the pronunciation conditions before and after the junction are varied and complex.
Therefore, this embodiment designs a voice smoothing process in the secondary language synthesis subsystem, which can automatically handle the complex inconsistencies of the speech before and after the splice point. An autoregressive, decoder-only model is trained using the audio of the target speaker, i.e., the target timbre, as the training set. As shown in fig. 7, the model adopts a Transformer Decoder structure containing a total of 6 transformer blocks: given the previous speech frames x1, ..., xi, the model predicts the next speech frame x(i+1); the loss is calculated with the mean squared error and the model parameters are optimized. The model is trained on the audio of the target speaker; at inference time, the spliced TTS speech containing secondary language is input and smoothed audio is output, which effectively solves the problem of unnatural transitions at the splice points.
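The following sketch shows one way to realize such a decoder-only smoothing model in PyTorch: six Transformer blocks with a causal mask predict the next frame from the previous frames and are trained with a mean-squared-error loss on target-speaker audio. The frame dimension and block sizes are assumptions.

```python
# Sketch of the autoregressive speech-smoothing model (next-frame prediction, MSE loss).
import torch
import torch.nn as nn

class SmoothingModel(nn.Module):
    def __init__(self, frame_dim=80, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        self.in_proj = nn.Linear(frame_dim, d_model)
        block = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=2048,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=n_layers)
        self.out_proj = nn.Linear(d_model, frame_dim)

    def forward(self, frames):                     # frames: (B, T, frame_dim)
        T = frames.size(1)
        # Causal mask so each position only attends to previous frames (decoder-only behavior).
        causal = torch.triu(torch.full((T, T), float('-inf'), device=frames.device), diagonal=1)
        h = self.blocks(self.in_proj(frames), mask=causal)
        return self.out_proj(h)                    # prediction of the next frame at each step

def training_step(model, frames, optimizer):
    pred = model(frames[:, :-1])                   # predict x_2..x_T from x_1..x_{T-1}
    loss = nn.functional.mse_loss(pred, frames[:, 1:])
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```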
A second aspect of the embodiment of the present invention proposes a personified secondary language speech synthesis system according to the first aspect of the embodiment, comprising:
The auxiliary language unit extraction subsystem is used for labeling auxiliary language labels on the original tone voice data containing the auxiliary language, and acquiring an auxiliary language pronunciation unit with a target tone according to the labeled original tone voice data and the reference audio of the target tone;
the sub-language synthesis subsystem is used for receiving language input texts, wherein the language input texts comprise TTS texts and sub-language labels marked at corresponding positions in the TTS texts; and synthesizing the TTS text into target tone TTS voice, selecting a corresponding secondary language pronunciation unit with the target tone according to the secondary language label, and splicing it with the target tone TTS voice to generate audio with the target tone.
In one embodiment, the secondary language unit extraction subsystem includes:
The voice recognition module is used for labeling the auxiliary language tag on the original tone voice data containing the auxiliary language, and performing voice recognition on the labeled original tone voice data to extract PPG features and fundamental frequency features;
and the voice conversion module is used for carrying out content coding on the PPG characteristics, carrying out intonation coding on the fundamental frequency characteristics, carrying out tone coding on the reference audio of the target tone, and uniformly decoding the coding result to obtain the auxiliary language pronunciation unit with the target tone.
In this embodiment, the speech recognition module includes a speech recognition model, and the speech recognition model is trained according to the following steps:
Collecting a voice dialogue data set containing a secondary language, and labeling the secondary language in the voice dialogue data set with a secondary language label to obtain a labeled voice dialogue data set;
Constructing a wenet model, the wenet model comprising a Conformer Encoder for receiving input audio and outputting PPG features of the input audio;
pre-training wenet models using a chinese speech recognition dataset;
And fine-tuning the weight of the pre-trained wenet model by using the labeled voice dialogue dataset to obtain a trained voice recognition model.
In this embodiment, the voice conversion module includes a voice conversion model, and the voice conversion model is trained according to the following steps:
pre-training a voice conversion model by using the reference audio of the target tone, the PPG characteristic and the fundamental frequency characteristic of the Chinese voice recognition dataset;
And fine tuning the weight of the pre-trained voice conversion model by using the reference audio of the target tone, the PPG characteristic and the fundamental frequency characteristic of the marked voice dialogue data set to obtain the trained voice conversion model.
In one embodiment, the system further includes a secondary language pronunciation unit interception module, configured to perform an interception operation on the target-timbre audio data, corresponding to the content of the original-timbre voice data, obtained by uniformly decoding the encoding result, namely: intercepting the target-timbre secondary language segment audio data according to the position of the secondary language tag, i.e. the secondary language pronunciation unit with the target timbre.
In one embodiment, the system further comprises a secondary language database for storing the secondary language pronunciation units with the target timbre corresponding to the different secondary language labels; the secondary language synthesis subsystem retrieves the corresponding secondary language pronunciation unit with the target timbre from the secondary language database according to the secondary language label.
In one embodiment, the system further comprises a voice smoothing module, wherein a voice smoothing model is built in the voice smoothing module, the voice smoothing model comprises an autoregressive model trained by using the audio of the target tone as a training set, the autoregressive model comprises a decoder, and the next voice frame is predicted by using the previous voice frame;
The voice smoothing model is used for receiving input auxiliary language pronunciation units with target tone colors and performing voice smoothing on target tone color TTS voices and outputting audio with the target tone colors.
The second aspect of the embodiment of the present invention is applicable to all execution procedures of the anthropomorphic sub-language speech synthesis method set forth in the first aspect of the embodiment.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other.
The above describes the personified auxiliary language speech synthesis method and system provided by the invention in detail, and specific examples are applied in this embodiment to illustrate the principle and implementation of the invention, and the above description of the embodiments is only used to help understand the method and core idea of the invention; meanwhile, as those skilled in the art will vary in the specific embodiments and application scope according to the idea of the present invention, the present disclosure should not be construed as limiting the present invention in summary.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined in this embodiment may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (12)
1. A personification auxiliary language voice synthesis method is characterized in that: the method comprises the following steps:
s1: labeling the auxiliary language label on the original tone voice data containing the auxiliary language, and acquiring an auxiliary language pronunciation unit with a target tone according to the labeled original tone voice data and the reference audio of the target tone; comprising the following steps:
s11: a voice recognition step: labeling the auxiliary language label on the original tone color voice data containing the auxiliary language, and performing voice recognition on the labeled original tone color voice data to extract PPG features and fundamental frequency features;
S12: a voice conversion step: performing content coding on the PPG characteristics, performing intonation coding on the fundamental frequency characteristics, performing tone coding on the reference audio of the target tone, and uniformly decoding the coding result to obtain a secondary language pronunciation unit with the target tone;
S2: receiving language input text, wherein the language input text comprises TTS text and a secondary language label marked at a corresponding position in the TTS text; and synthesizing the TTS text into target tone TTS voice, selecting a corresponding secondary language pronunciation unit with the target tone according to the secondary language label, and splicing the secondary language pronunciation unit with the target tone TTS voice to generate audio with the target tone.
2. The method of personified secondary language speech synthesis according to claim 1, further comprising a training step of a speech recognition model, wherein the speech recognition model is used to perform the speech recognition step:
collecting a voice dialogue data set containing a secondary language, and labeling the secondary language in the voice dialogue data set with a secondary language label to obtain a labeled voice dialogue data set;
constructing a wenet model, the wenet model comprising a Conformer Encoder for receiving input audio and outputting PPG features of the input audio;
pre-training the wenet model using a chinese speech recognition dataset;
And fine-tuning the weight of the wenet model after pre-training by using the labeled voice dialogue data set to obtain a trained voice recognition model.
3. The personified secondary language speech synthesis method according to claim 1, further comprising a training step of a speech conversion model, wherein the speech conversion model is configured to perform the speech conversion step:
pre-training the speech conversion model using the audio of the chinese speech recognition dataset and PPG features and fundamental frequency features of the corresponding audio;
And fine tuning the weight of the pre-trained voice conversion model by using the audio of the target tone, the audio of the marked voice dialogue data set, and the PPG characteristic and the fundamental frequency characteristic of the corresponding audio to obtain the trained voice conversion model.
4. The method for synthesizing the sub-language speech according to claim 1, wherein S12 further comprises, after uniformly decoding the encoding result, obtaining target tone audio data corresponding to the original tone speech data content, and intercepting target tone sub-language field audio data, namely, a sub-language pronunciation unit with target tone, by combining the position of the sub-language tag.
5. The personified secondary language speech synthesis method according to claim 1, wherein the secondary language pronunciation unit with the target tone corresponding to the secondary language label acquired in the step S1 is stored to obtain a secondary language database; and S2, searching the corresponding secondary language pronunciation unit with the target tone from the secondary language database according to the secondary language label.
6. The personified secondary language speech synthesis method according to claim 1, wherein the step of splicing the secondary language pronunciation unit having the target timbre in S2 with the target-timbre TTS speech comprises:
Constructing a voice smoothing model by training an autoregressive model using the audio of the target tone as a training set, wherein the autoregressive model comprises a decoder and predicts the next voice frame from the previous voice frames;
And inputting the secondary language pronunciation unit with the target tone and the target tone TTS voice into the trained voice smoothing model for voice smoothing, and outputting the audio with the target tone.
7. A personified secondary language speech synthesis system according to any one of claims 1 to 6, comprising:
The auxiliary language unit extraction subsystem is used for labeling auxiliary language labels on the original tone voice data containing the auxiliary language, and acquiring an auxiliary language pronunciation unit with a target tone according to the labeled original tone voice data and the reference audio of the target tone; comprising the following steps: the voice recognition module is used for labeling the auxiliary language tag on the original tone voice data containing the auxiliary language, and performing voice recognition on the labeled original tone voice data to extract PPG features and fundamental frequency features;
The voice conversion module is used for carrying out content coding on the PPG characteristics, carrying out intonation coding on the fundamental frequency characteristics, carrying out tone coding on the reference audio of the target tone, and uniformly decoding the coding result to obtain a secondary language pronunciation unit with the target tone;
The secondary language synthesis subsystem is used for receiving language input text, wherein the language input text comprises TTS text and secondary language labels marked at corresponding positions in the TTS text; and synthesizing the TTS text into target tone TTS voice, selecting a corresponding secondary language pronunciation unit with the target tone according to the secondary language label, and splicing it with the target tone TTS voice to generate audio with the target tone.
8. The personified secondary language speech synthesis system of claim 7, wherein the speech recognition module comprises a speech recognition model that is trained by:
collecting a voice dialogue data set containing a secondary language, and labeling the secondary language in the voice dialogue data set with a secondary language label to obtain a labeled voice dialogue data set;
constructing a wenet model, the wenet model comprising a Conformer Encoder for receiving input audio and outputting PPG features of the input audio;
pre-training the wenet model using a chinese speech recognition dataset;
And fine-tuning the weight of the wenet model after pre-training by using the labeled voice dialogue data set to obtain a trained voice recognition model.
9. The personified secondary language speech synthesis system of claim 7, wherein the speech conversion module comprises a speech conversion model that is trained by:
Pre-training the speech conversion model using the reference audio of the chinese speech recognition dataset and PPG features and fundamental frequency features of the corresponding audio;
And fine tuning the weight of the pre-trained voice conversion model by using the audio of the target tone, the audio of the marked voice dialogue data set, and the PPG characteristic and the fundamental frequency characteristic of the corresponding audio to obtain the trained voice conversion model.
10. The personified secondary language speech synthesis system of claim 7, further comprising a secondary language pronunciation unit interception module for performing an interception operation on target tone audio data corresponding to the original tone speech data content obtained by uniformly decoding the encoding result, comprising: and intercepting the audio data of the target tone sub-language field by combining the position of the sub-language tag, namely a sub-language pronunciation unit with the target tone.
11. The personified secondary language speech synthesis system of claim 7, further comprising a secondary language database for storing secondary language pronunciation units with the target timbre corresponding to the different secondary language labels; and the secondary language synthesis subsystem retrieves the corresponding secondary language pronunciation unit with the target timbre from the secondary language database according to the secondary language label.
12. The personified secondary language speech synthesis system of claim 7, further comprising a speech smoothing module within which a speech smoothing model is built, the speech smoothing model comprising an autoregressive model trained using audio of a target timbre as a training set, the autoregressive model comprising a decoder to predict a next speech frame using a previous speech frame;
The voice smoothing model is used for receiving the input secondary language pronunciation unit with the target timbre and the target-timbre TTS voice, performing voice smoothing, and outputting audio with the target timbre.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410288143.0A CN117894294B (en) | 2024-03-14 | 2024-03-14 | Personification auxiliary language voice synthesis method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117894294A CN117894294A (en) | 2024-04-16 |
CN117894294B true CN117894294B (en) | 2024-07-05 |
Family
ID=90642541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410288143.0A Active CN117894294B (en) | 2024-03-14 | 2024-03-14 | Personification auxiliary language voice synthesis method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117894294B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113345431A (en) * | 2021-05-31 | 2021-09-03 | 平安科技(深圳)有限公司 | Cross-language voice conversion method, device, equipment and medium |
CN113808576A (en) * | 2020-06-16 | 2021-12-17 | 阿里巴巴集团控股有限公司 | Voice conversion method, device and computer system |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003271171A (en) * | 2002-03-14 | 2003-09-25 | Matsushita Electric Ind Co Ltd | Method, device and program for voice synthesis |
US7472065B2 (en) * | 2004-06-04 | 2008-12-30 | International Business Machines Corporation | Generating paralinguistic phenomena via markup in text-to-speech synthesis |
JP4478939B2 (en) * | 2004-09-30 | 2010-06-09 | 株式会社国際電気通信基礎技術研究所 | Audio processing apparatus and computer program therefor |
US8438032B2 (en) * | 2007-01-09 | 2013-05-07 | Nuance Communications, Inc. | System for tuning synthesized speech |
US8886537B2 (en) * | 2007-03-20 | 2014-11-11 | Nuance Communications, Inc. | Method and system for text-to-speech synthesis with personalized voice |
JP4213755B2 (en) * | 2007-03-28 | 2009-01-21 | 株式会社東芝 | Speech translation apparatus, method and program |
JP2009048003A (en) * | 2007-08-21 | 2009-03-05 | Toshiba Corp | Voice translation device and method |
US10319365B1 (en) * | 2016-06-27 | 2019-06-11 | Amazon Technologies, Inc. | Text-to-speech processing with emphasized output audio |
US20180133900A1 (en) * | 2016-11-15 | 2018-05-17 | JIBO, Inc. | Embodied dialog and embodied speech authoring tools for use with an expressive social robot |
CN110288973B (en) * | 2019-05-20 | 2024-03-29 | 平安科技(深圳)有限公司 | Speech synthesis method, device, equipment and computer readable storage medium |
US11373633B2 (en) * | 2019-09-27 | 2022-06-28 | Amazon Technologies, Inc. | Text-to-speech processing using input voice characteristic data |
JP7332024B2 (en) * | 2020-02-21 | 2023-08-23 | 日本電信電話株式会社 | Recognition device, learning device, method thereof, and program |
WO2021262238A1 (en) * | 2020-06-22 | 2021-12-30 | Sri International | Controllable, natural paralinguistics for text to speech synthesis |
CN114255738A (en) * | 2021-12-30 | 2022-03-29 | 北京有竹居网络技术有限公司 | Speech synthesis method, apparatus, medium, and electronic device |
CN115294963A (en) * | 2022-04-12 | 2022-11-04 | 阿里巴巴达摩院(杭州)科技有限公司 | Speech Synthesis Model Products |
CN114945110B (en) * | 2022-05-31 | 2023-10-24 | 深圳市优必选科技股份有限公司 | Method and device for synthesizing voice head video, terminal equipment and readable storage medium |
CN115547288A (en) * | 2022-09-19 | 2022-12-30 | 北京羽扇智信息科技有限公司 | Speech synthesis method, speech synthesis device, electronic equipment and storage medium |
CN116092478A (en) * | 2023-02-16 | 2023-05-09 | 平安科技(深圳)有限公司 | Voice emotion conversion method, device, equipment and storage medium |
CN116189652A (en) * | 2023-02-20 | 2023-05-30 | 北京有竹居网络技术有限公司 | Speech synthesis method and device, readable medium and electronic equipment |
CN116343747A (en) * | 2023-03-15 | 2023-06-27 | 平安科技(深圳)有限公司 | Speech synthesis method, speech synthesis device, electronic device, and storage medium |
- 2024-03-14: Application CN202410288143.0A filed in CN; granted as CN117894294B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN117894294A (en) | 2024-04-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||