
CN111462727A - Method, apparatus, electronic device and computer readable medium for generating speech - Google Patents


Info

Publication number
CN111462727A
CN111462727A (application number CN202010242977.XA)
Authority
CN
China
Prior art keywords
trained
text feature
feature vector
acoustic features
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010242977.XA
Other languages
Chinese (zh)
Inventor
汤本来
顾宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010242977.XA
Publication of CN111462727A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 - Speech synthesis; Text to speech systems
    • G10L13/02 - Methods for producing synthetic speech; Speech synthesisers
    • G10L13/04 - Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G10L13/047 - Architecture of speech synthesisers
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/02 - Feature extraction for speech recognition; Selection of recognition unit
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063 - Training
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/12 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters, the extracted parameters being prediction coefficients
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G10L25/24 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters, the extracted parameters being the cepstrum

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Machine Translation (AREA)

Abstract

Embodiments of the present disclosure disclose methods, apparatuses, electronic devices and computer-readable media for generating speech. One embodiment of the method comprises: extracting a text feature vector from user speech; obtaining acoustic features of a target speaker according to the text feature vector; and generating speech in a target language based on the acoustic features. This embodiment achieves customized speech generation and improves user experience.

Description

Method, apparatus, electronic device and computer readable medium for generating speech
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a method, an apparatus, an electronic device, and a computer-readable medium for generating speech.
Background
Research on speech generation technology has long been an important part of speech and language research, and early results were achieved both in China and abroad. However, for reasons such as computational complexity, memory capacity, and real-time requirements, most of that early work remained at the laboratory stage. Nevertheless, speech generation technology has a wide range of applications in many fields.
In the related art, the generated speech often sounds like one and the same voice.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a method, apparatus, electronic device and computer readable medium for generating speech to solve the technical problems mentioned in the background section above.
In a first aspect, some embodiments of the present disclosure provide a method for generating speech, the method comprising: extracting a text feature vector from user speech; obtaining acoustic features of a target speaker according to the text feature vector; and generating speech in a target language based on the acoustic features.
In a second aspect, some embodiments of the present disclosure provide an apparatus for generating speech, the apparatus comprising: an extraction unit configured to extract a text feature vector from user speech; a first generating unit configured to obtain acoustic features of a target speaker according to the text feature vector; and a second generating unit configured to generate speech in a target language based on the acoustic features.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method as described in any implementation of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which, when executed by a processor, implements the method as described in any implementation of the first aspect.
One of the above embodiments of the present disclosure has the following beneficial effect: the content of the user speech is determined by extracting a text feature vector from the user speech, acoustic features of the target speaker are then obtained according to the text feature vector, and finally speech in the target language is generated based on the acoustic features. The acoustic features of the target speaker are thus used effectively, enabling customized speech generation and improving user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
FIG. 1 is a schematic illustration of one application scenario of a method for generating speech according to some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a method for generating speech according to the present disclosure;
FIG. 3 is a flow diagram of further embodiments of a method for generating speech according to the present disclosure;
FIG. 4 is a schematic block diagram of some embodiments of a speech generating apparatus according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions relevant to the invention are shown in the drawings. The embodiments in the present disclosure and the features of those embodiments may be combined with each other when no conflict arises.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It should be noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand that they should be read as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 is a schematic diagram of one application scenario of a method for generating speech according to some embodiments of the present disclosure.
As shown in fig. 1, first, the server 101 extracts a text feature vector 103 from the user speech 102. The server 101 may then generate acoustic features 104 using the text feature vectors 103. The server 101 may then generate speech 105 in the target language using the acoustic features 104.
It should be understood that the method for generating speech may be executed by the server 101 or by a terminal device; the execution body of the method may also be a device formed by integrating the server 101 and a terminal device through a network, or the method may be executed by various software programs. The terminal device may be any of various electronic devices with information processing capability, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the execution body is software, it may be installed in the electronic devices listed above and may be implemented, for example, as multiple pieces of software or software modules for providing distributed services, or as a single piece of software or a software module. No specific limitation is made here.
It should be understood that the number of servers in fig. 1 is merely illustrative. There may be any number of servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of some embodiments of a method for generating speech according to the present disclosure is shown. The method for generating speech comprises the following steps:
Step 201, extracting a text feature vector from the user speech.
In some embodiments, the execution body of the method for generating speech (e.g., the server shown in FIG. 1) may extract the text feature vector in a variety of ways. For example, the execution body may store in advance correspondences between a number of user speech samples and their text feature vectors; when extracting a text feature vector, it determines which pre-stored user speech is the same as or similar to the received user speech and retrieves the corresponding text feature vector. Here, the text feature vector generally refers to a pinyin sequence or a phoneme sequence corresponding to the content of the user speech. As an example, when the text converted from the user speech is "你好", the text feature vector may be the pinyin sequence "nihao"; when the text converted from the user speech is "hello", the text feature vector may be the corresponding phoneme sequence. Here, the user speech may be obtained from an external source through a wireless or wired connection, or may be stored locally in advance. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (Ultra-Wideband) connection, and other wireless connections now known or developed in the future.
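For illustration only, the mapping from recognized Chinese text to a pinyin sequence could be performed with an off-the-shelf library; the sketch below assumes the third-party pypinyin package and is not part of the patent's disclosure.

```python
# Hypothetical sketch: convert recognized Chinese text into a pinyin
# sequence usable as a text feature vector. Assumes the third-party
# `pypinyin` package is installed; not part of the patent itself.
from pypinyin import lazy_pinyin

def text_to_pinyin_sequence(text):
    """Return a pinyin token sequence, e.g. '你好' -> ['ni', 'hao']."""
    return lazy_pinyin(text)

print(text_to_pinyin_sequence("你好"))  # expected output: ['ni', 'hao']
```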
In some optional implementations of some embodiments, the execution body may first extract acoustic features from the user speech. Acoustic features generally refer to features extracted from speech; as examples, the acoustic features may be Mel-Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding (LPC) coefficients, or the like.
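The disclosure does not prescribe how such features are computed; as one hedged illustration, MFCCs could be extracted with the librosa library as follows (the file name and parameter values are placeholders).

```python
# Illustrative only: extract MFCC acoustic features from a speech recording
# using librosa. File path and parameter values are placeholders.
import librosa

def extract_mfcc(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)                       # load audio at its native rate
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape: (n_mfcc, frames)

mfcc = extract_mfcc("user_speech.wav")
print(mfcc.shape)
```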
The acoustic features are then input into a pre-trained extraction model to obtain the text feature vector. Here, the extraction model is generally used to characterize the correspondence between acoustic features and text feature vectors. As an example, the extraction model may be a correspondence table of acoustic features and text feature vectors. Such a table may be prepared in advance by a technician, based on statistics over a large number of acoustic features and text feature vectors, and store the correspondence between many acoustic features and their text feature vectors.
The extracted acoustic features are then compared in turn with the acoustic features in the correspondence table; if an acoustic feature in the table is the same as or similar to the extracted acoustic features, the text feature vector corresponding to that acoustic feature in the table is taken as the text feature vector for the extracted acoustic features.
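A minimal sketch of such a correspondence-table lookup is shown below; the cosine-similarity measure and the data layout are assumptions made for illustration, not details specified by the disclosure.

```python
# Minimal sketch of a correspondence-table lookup: return the text feature
# vector whose stored acoustic feature is closest to the extracted one.
# The similarity measure and table layout are illustrative assumptions.
import numpy as np

def lookup_text_features(extracted, table):
    """table: list of (acoustic_feature_vector, text_feature_vector) pairs."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    best_entry = max(table, key=lambda entry: cosine(extracted, entry[0]))
    return best_entry[1]
```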
In some optional implementations of some embodiments, the extraction model is trained according to the following training steps. First, a first training sample is obtained, the first training sample comprising sample acoustic features and a sample text feature vector corresponding to the sample acoustic features. The sample acoustic features are then input into the extraction model to be trained to obtain a text feature vector. As an example, the extraction model to be trained may be a correspondence table of acoustic features and text feature vectors; the sample acoustic features are compared in turn with the acoustic features in the table, and if an acoustic feature in the table is the same as or similar to the sample acoustic features, the text feature vector corresponding to that acoustic feature in the table is taken as the text feature vector for the sample acoustic features.
The sample text feature vector and the obtained text feature vector are then analyzed to obtain an analysis result. In response to determining that the analysis result satisfies a preset condition, the extraction model to be trained is determined to have finished training and is taken as the extraction model. Specifically, the execution body may calculate a loss value between the sample text feature vector and the obtained text feature vector according to a preset loss function; the preset condition may be that the loss value reaches a predetermined threshold.
In some optional implementations of some embodiments, in response to determining that the analysis result does not satisfy the preset condition, the execution body may determine that the extraction model to be trained has not finished training and adjust the relevant parameters of the extraction model to be trained.
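If the extraction model to be trained were a neural network rather than a correspondence table, the loop described above (compute a loss between the predicted and sample text feature vectors, stop once the preset condition is met, otherwise adjust parameters) might look like the following sketch; the PyTorch framework, the MSE loss, and the stopping threshold are assumptions made for illustration only.

```python
# Hedged sketch of the training procedure described above, assuming the
# extraction model is a neural network. Framework, loss function, and
# stopping threshold are illustrative assumptions.
import torch
import torch.nn as nn

def train_extraction_model(model, samples, threshold=0.01, max_epochs=100):
    """samples: iterable of (sample_acoustic_features, sample_text_feature_vector) tensor pairs."""
    criterion = nn.MSELoss()                        # the "preset loss function"
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(max_epochs):
        for acoustic, target in samples:
            predicted = model(acoustic)             # predicted text feature vector
            loss = criterion(predicted, target)
            if loss.item() <= threshold:            # preset condition met: training done
                return model
            optimizer.zero_grad()
            loss.backward()                         # condition not met: adjust parameters
            optimizer.step()
    return model
```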
Step 202, obtaining the acoustic features of the target speaker according to the text feature vector.
In some embodiments, the execution body may, for example, store in advance a number of speech recordings of the target speaker; after obtaining the text feature vector, it may splice and cut these recordings into speech corresponding to the text feature vector and then perform feature extraction on that speech to obtain the acoustic features of the target speaker.
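As a rough illustration of this splice-and-cut idea, pre-stored recordings of the target speaker could be concatenated according to the units of the text feature vector; the clip store and unit granularity below are assumptions, not details from the disclosure.

```python
# Hedged sketch: splice pre-stored recordings of the target speaker, one clip
# per pinyin unit of the text feature vector. Clip store and unit granularity
# are illustrative assumptions.
import numpy as np

def splice_target_speaker_speech(text_feature_vector, clip_store):
    """clip_store: dict mapping a pinyin unit (e.g. 'ni') to a waveform ndarray."""
    clips = [clip_store[unit] for unit in text_feature_vector if unit in clip_store]
    return np.concatenate(clips) if clips else np.zeros(0)
```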
In some alternative implementations of some embodiments, the execution body may input the text feature vector into a conversion model to obtain the acoustic features of the target speaker. Here, the conversion model is generally used to characterize the correspondence between text feature vectors and the acoustic features of the target speaker.
As an example, the conversion model may be a correspondence table of text feature vectors and acoustic features of the target speaker. Such a table may be prepared in advance by a technician, based on statistics over a large number of text feature vectors and corresponding acoustic features of the target speaker, and store the correspondence between many text feature vectors and the target speaker's acoustic features.
The text feature vector is then compared in turn with the text feature vectors in the correspondence table; if a text feature vector in the table is the same as or similar to the input text feature vector, the target speaker's acoustic features corresponding to that text feature vector in the table are taken as the acoustic features of the target speaker for the input text feature vector.
In some optional implementations of some embodiments, the conversion model is trained according to the following training steps. A second training sample is obtained, the second training sample comprising a sample text feature vector and acoustic features corresponding to the sample text feature vector. The sample text feature vector is then input into the conversion model to be trained to obtain acoustic features.
As an example, the conversion model to be trained may be a correspondence table of text feature vectors and acoustic features of the target speaker. The sample text feature vector is compared in turn with the text feature vectors in the correspondence table; if a text feature vector in the table is the same as or similar to the sample text feature vector, the target speaker's acoustic features corresponding to that text feature vector in the table are taken as the acoustic features for the sample text feature vector.
The obtained acoustic features and the sample acoustic features are then analyzed to obtain an analysis result. In response to determining that the analysis result satisfies a preset condition, the conversion model to be trained is determined to have finished training and is taken as the conversion model. Specifically, the execution body may calculate a loss value between the sample acoustic features and the obtained acoustic features according to a preset loss function; the preset condition is generally that the loss value reaches a predetermined threshold.
In some optional implementations of some embodiments, in response to determining that the analysis result does not satisfy the preset condition, it is determined that the conversion model to be trained has not finished training, and the relevant parameters of the conversion model to be trained are adjusted.
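By analogy with the extraction-model sketch above, a neural conversion model mapping a text feature vector to the target speaker's acoustic features could be as simple as the following; the layer sizes and the use of mel-band features are assumptions made purely for illustration. Such a model could be trained with the same kind of loop shown earlier.

```python
# Hedged sketch of a conversion model mapping a text feature vector to
# target-speaker acoustic features; layer sizes and the 80-band mel output
# are illustrative assumptions, not the patent's specification.
import torch.nn as nn

class ConversionModel(nn.Module):
    def __init__(self, text_dim=128, hidden_dim=256, acoustic_dim=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, acoustic_dim),    # e.g. 80 mel bands per frame
        )

    def forward(self, text_features):
        return self.net(text_features)
```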
Step 203, generating speech in the target language based on the acoustic features.
In some embodiments, the execution body may generate the speech in the target language based on the acoustic features. Specifically, the execution body may generate the speech using methods such as waveform concatenation or parametric synthesis. Here, the target language generally refers to the language of the speech that the user wants to generate; as an example, the target language is typically the same as the language of the user speech.
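As one hedged illustration of this step, mel-spectrogram acoustic features could be converted back into a waveform with librosa's Griffin-Lim based inverse and written to disk with soundfile; the parameter values and file names below are placeholders, and a neural vocoder would be an equally valid choice.

```python
# Illustrative only: turn mel-spectrogram acoustic features into a waveform
# with a Griffin-Lim based inverse and save it. Values are placeholders.
import librosa
import soundfile as sf

def acoustic_features_to_speech(mel, sr=22050, out_path="generated_speech.wav"):
    waveform = librosa.feature.inverse.mel_to_audio(mel, sr=sr)
    sf.write(out_path, waveform, sr)
    return out_path
```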
One of the above embodiments of the present disclosure has the following beneficial effect: the content of the user speech is determined by extracting a text feature vector from the user speech, acoustic features of the target speaker are then obtained according to the text feature vector, and finally speech in the target language is generated based on the acoustic features. The acoustic features of the target speaker are thus used effectively, enabling customized speech generation and improving user experience.
With further reference to FIG. 3, a flow 300 of further embodiments of a method for generating speech is illustrated. The flow 300 of the method for generating speech includes the steps of:
Step 301, extracting a text feature vector from the user speech.
Step 302, obtaining the acoustic features of the target speaker according to the text feature vector.
Step 303, converting the acoustic features into speech in the target language.
In some embodiments, the execution body may convert the acoustic features into speech in the target language. As an example, the execution body may perform the conversion using a vocoder; here, a vocoder generally refers to a speech analysis and synthesis system for speech signals.
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the method for generating speech in some embodiments corresponding to fig. 3 makes explicit the step of converting the acoustic features into speech in the target language. The scheme described in these embodiments can therefore make the speech in the target language clearer and improve its sound quality.
With further reference to fig. 4, as an implementation of the methods illustrated in the above figures, the present disclosure provides some embodiments of a device for generating speech, which correspond to those of the method embodiments illustrated in fig. 2, and which may be applied in particular in various electronic devices.
As shown in fig. 4, the speech generating apparatus 400 of some embodiments includes: an extraction unit 401, a first generating unit 402, and a second generating unit 403. The extraction unit 401 is configured to extract a text feature vector from the user speech; the first generating unit 402 is configured to obtain acoustic features of the target speaker according to the text feature vector; and the second generating unit 403 is configured to generate speech in the target language based on the acoustic features.
In an optional implementation of some embodiments, the extraction unit 401 is further configured to: extracting acoustic features in the user voice; and inputting the acoustic features into a pre-trained extraction model to obtain a text feature vector.
In an optional implementation of some embodiments, the first generating unit 402 is further configured to: and inputting the text feature vector into a conversion model to obtain the acoustic features of the target speaker.
In an optional implementation of some embodiments, the second generating unit 403 is further configured to: and converting the acoustic features into the voice of the target language.
In an alternative implementation of some embodiments, the above extraction model is trained according to the following training steps: acquiring a first training sample, wherein the first training sample comprises sample acoustic features and a sample text feature vector corresponding to the sample acoustic features; inputting the acoustic characteristics of the sample into an extraction model to be trained to obtain a text characteristic vector; analyzing the sample text characteristic vector and the text characteristic vector to obtain an analysis result; and determining the extraction model to be trained as the extraction model in response to the fact that the analysis result meets the preset condition.
In an optional implementation manner of some embodiments, the apparatus 400 for generating speech further includes a first adjusting unit configured to: and in response to the fact that the analysis result does not meet the preset condition, determining that the extraction model to be trained is not trained, and adjusting relevant parameters of the extraction model to be trained.
In an alternative implementation of some embodiments, the conversion model is trained according to the following training steps: acquiring a second training sample, wherein the second training sample comprises a sample text feature vector and acoustic features corresponding to the sample text feature vector; inputting the sample text feature vector into the conversion model to be trained to obtain acoustic features; analyzing the obtained acoustic features and the acoustic features corresponding to the sample text feature vector to obtain an analysis result; and in response to determining that the analysis result satisfies a preset condition, determining that the conversion model to be trained has been trained and taking the conversion model to be trained as the conversion model.
In an optional implementation manner of some embodiments, the apparatus 400 for generating speech further includes a second adjusting unit configured to: and in response to the fact that the analysis result does not meet the preset condition, determining that the conversion model to be trained is not trained, and adjusting relevant parameters of the conversion model to be trained.
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
One of the above embodiments of the present disclosure has the following beneficial effect: the content of the user speech is determined by extracting a text feature vector from the user speech, acoustic features of the target speaker are then obtained according to the text feature vector, and finally speech in the target language is generated based on the acoustic features. The acoustic features of the target speaker are thus used effectively, enabling customized speech generation and improving user experience.
Referring now to fig. 5, a schematic diagram of an electronic device (e.g., the server of fig. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. Various programs and data necessary for the operation of the electronic device 500 are also stored in the RAM 503. The processing means 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; and communication devices 509. The communication devices 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although fig. 5 shows the electronic device 500 with various means, it should be understood that not all of the illustrated means need be implemented or provided.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: extracting a text feature vector in user voice; obtaining the acoustic characteristics of the target speaker according to the text characteristic vector; based on the acoustic features, speech in the target language is generated.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor includes an extraction unit, a first generation unit, and a second generation unit. Where the names of these units do not in some cases constitute a limitation of the unit itself, for example, an extraction unit may also be described as a "unit that extracts text feature vectors in the user's speech".
For example, and without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), and so forth.
In accordance with one or more embodiments of the present disclosure, there is provided a method for generating speech, including: extracting a text feature vector in user voice; obtaining the acoustic characteristics of the target speaker according to the text characteristic vector; based on the acoustic features, speech in the target language is generated.
According to one or more embodiments of the present disclosure, the extracting text feature vectors in the user speech includes: extracting acoustic features in the user voice; and inputting the acoustic features into a pre-trained extraction model to obtain a text feature vector.
According to one or more embodiments of the present disclosure, the obtaining the acoustic feature of the target speaker according to the text feature vector includes: and inputting the text feature vector into a conversion model to obtain the acoustic features of the target speaker.
According to one or more embodiments of the present disclosure, the generating of the speech of the target language based on the acoustic features includes: and converting the acoustic features into the voice of the target language.
According to one or more embodiments of the present disclosure, the extraction model is obtained by training according to the following training steps: acquiring a first training sample, wherein the first training sample comprises sample acoustic features and a sample text feature vector corresponding to the sample acoustic features; inputting the acoustic characteristics of the sample into an extraction model to be trained to obtain a text characteristic vector; analyzing the sample text characteristic vector and the text characteristic vector to obtain an analysis result; and determining the extraction model to be trained as the extraction model in response to the fact that the analysis result meets the preset condition.
According to one or more embodiments of the present disclosure, the method further includes: and in response to the fact that the analysis result does not meet the preset condition, determining that the extraction model to be trained is not trained, and adjusting relevant parameters of the extraction model to be trained.
According to one or more embodiments of the present disclosure, the conversion model is trained according to the following training steps: acquiring a second training sample, wherein the second training sample comprises a sample text feature vector and acoustic features corresponding to the sample text feature vector; inputting the sample text feature vector into the conversion model to be trained to obtain acoustic features; analyzing the obtained acoustic features and the acoustic features corresponding to the sample text feature vector to obtain an analysis result; and in response to determining that the analysis result satisfies a preset condition, determining that the conversion model to be trained has been trained and taking the conversion model to be trained as the conversion model.
According to one or more embodiments of the present disclosure, the method further includes: and in response to the fact that the analysis result does not meet the preset condition, determining that the conversion model to be trained is not trained, and adjusting relevant parameters of the conversion model to be trained.
According to one or more embodiments of the present disclosure, there is provided an apparatus for generating a speech, including: an extraction unit configured to extract a text feature vector in a user voice; a first generating unit configured to obtain acoustic features of the target speaker according to the text feature vector; and a second generating unit configured to generate a speech of the target language based on the acoustic feature.
According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the method as in any one of the above.
According to one or more embodiments of the present disclosure, there is provided a computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements a method as any one of the above.
The foregoing description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combinations of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (11)

1. A method for generating speech, comprising:
extracting a text feature vector in user voice;
obtaining acoustic features of the target speaker according to the text feature vector;
based on the acoustic features, speech in a target language is generated.
2. The method of claim 1, wherein the extracting text feature vectors in user speech comprises:
extracting acoustic features in the user voice;
and inputting the acoustic features into a pre-trained extraction model to obtain a text feature vector.
3. The method of claim 1, wherein said deriving acoustic features of the target speaker from the text feature vector comprises:
and inputting the text feature vector into a conversion model to obtain the acoustic features of the target speaker.
4. The method of claim 1, wherein the generating speech in a target language based on the acoustic features comprises:
converting the acoustic features into speech in a target language.
5. The method of claim 2, wherein the extraction model is trained according to the following training steps:
acquiring a first training sample, wherein the first training sample comprises sample acoustic features and a sample text feature vector corresponding to the sample acoustic features;
inputting the acoustic characteristics of the sample into an extraction model to be trained to obtain a text characteristic vector;
analyzing the sample text characteristic vector and the text characteristic vector to obtain an analysis result;
and determining the extraction model to be trained as the extraction model in response to the determination that the analysis result meets the preset condition.
6. The method of claim 5, wherein the method further comprises:
and in response to the fact that the analysis result does not meet the preset condition, determining that the extraction model to be trained is not trained, and adjusting relevant parameters of the extraction model to be trained.
7. The method of claim 3, wherein the conversion model is trained according to the following training steps:
acquiring a second training sample, wherein the second training sample comprises a sample text feature vector and acoustic features corresponding to the sample text feature vector;
inputting the sample text feature vector to a conversion model to be trained to obtain acoustic features;
analyzing the obtained acoustic features and the acoustic features corresponding to the sample text feature vector to obtain an analysis result;
and in response to the fact that the analysis result meets the preset condition, determining that the conversion model to be trained is trained, and determining the conversion model to be trained as the conversion model.
8. The method of claim 7, wherein the method further comprises:
and in response to the fact that the analysis result does not meet the preset condition, determining that the conversion model to be trained is not trained, and adjusting relevant parameters of the conversion model to be trained.
9. An apparatus for generating speech, comprising:
an extraction unit configured to extract a text feature vector in a user voice;
a first generating unit configured to obtain acoustic features of the target speaker according to the text feature vector;
a second generating unit configured to generate speech of a target language based on the acoustic feature.
10. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, which,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-8.
11. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-8.
CN202010242977.XA (priority date: 2020-03-31; filing date: 2020-03-31) Method, apparatus, electronic device and computer readable medium for generating speech. Status: Pending. Published as CN111462727A (en).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010242977.XA CN111462727A (en) 2020-03-31 2020-03-31 Method, apparatus, electronic device and computer readable medium for generating speech

Publications (1)

Publication Number Publication Date
CN111462727A true CN111462727A (en) 2020-07-28

Family

ID=71682396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010242977.XA Pending CN111462727A (en) 2020-03-31 2020-03-31 Method, apparatus, electronic device and computer readable medium for generating speech

Country Status (1)

Country Link
CN (1) CN111462727A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007148039A (en) * 2005-11-28 2007-06-14 Matsushita Electric Ind Co Ltd Speech translation device and speech translation method
CN105118498A (en) * 2015-09-06 2015-12-02 百度在线网络技术(北京)有限公司 Training method and apparatus of speech synthesis model
CN107767879A (en) * 2017-10-25 2018-03-06 北京奇虎科技有限公司 Audio conversion method and device based on tone color
CN107705783A (en) * 2017-11-27 2018-02-16 北京搜狗科技发展有限公司 A kind of phoneme synthesizing method and device
CN107945786A (en) * 2017-11-27 2018-04-20 北京百度网讯科技有限公司 Phoneme synthesizing method and device
CN109887511A (en) * 2019-04-24 2019-06-14 武汉水象电子科技有限公司 A kind of voice wake-up optimization method based on cascade DNN
CN110223705A (en) * 2019-06-12 2019-09-10 腾讯科技(深圳)有限公司 Phonetics transfer method, device, equipment and readable storage medium storing program for executing
CN110767210A (en) * 2019-10-30 2020-02-07 四川长虹电器股份有限公司 Method and device for generating personalized voice
CN110808034A (en) * 2019-10-31 2020-02-18 北京大米科技有限公司 Voice conversion method, device, storage medium and electronic equipment
CN111462728A (en) * 2020-03-31 2020-07-28 北京字节跳动网络技术有限公司 Method, apparatus, electronic device and computer readable medium for generating speech
CN112116904A (en) * 2020-11-20 2020-12-22 北京声智科技有限公司 Voice conversion method, device, equipment and storage medium

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022037388A1 (en) * 2020-08-17 2022-02-24 北京字节跳动网络技术有限公司 Voice generation method and apparatus, device, and computer readable medium
CN112164387A (en) * 2020-09-22 2021-01-01 腾讯音乐娱乐科技(深圳)有限公司 Audio synthesis method and device, electronic equipment and computer-readable storage medium
CN112151005A (en) * 2020-09-28 2020-12-29 四川长虹电器股份有限公司 Chinese and English mixed speech synthesis method and device
CN112151005B (en) * 2020-09-28 2022-08-19 四川长虹电器股份有限公司 Chinese and English mixed speech synthesis method and device
CN112349273A (en) * 2020-11-05 2021-02-09 携程计算机技术(上海)有限公司 Speech synthesis method based on speaker, model training method and related equipment
CN112349273B (en) * 2020-11-05 2024-05-31 携程计算机技术(上海)有限公司 Speech synthesis method based on speaker, model training method and related equipment
CN112382267A (en) * 2020-11-13 2021-02-19 北京有竹居网络技术有限公司 Method, apparatus, device and storage medium for converting accents

Similar Documents

Publication Publication Date Title
CN108630190B (en) Method and apparatus for generating speech synthesis model
CN111462728A (en) Method, apparatus, electronic device and computer readable medium for generating speech
CN107945786B (en) Speech synthesis method and device
CN111462727A (en) Method, apparatus, electronic device and computer readable medium for generating speech
CN112786006B (en) Speech synthesis method, synthesis model training method, device, medium and equipment
JP7208952B2 (en) Method and apparatus for generating interaction models
CN111369971B (en) Speech synthesis method, device, storage medium and electronic equipment
CN109545192B (en) Method and apparatus for generating a model
CN112489621B (en) Speech synthesis method, device, readable medium and electronic equipment
CN111599343B (en) Method, apparatus, device and medium for generating audio
WO2022156464A1 (en) Speech synthesis method and apparatus, readable medium, and electronic device
CN109545193B (en) Method and apparatus for generating a model
CN112786007A (en) Speech synthesis method, device, readable medium and electronic equipment
CN105489221A (en) Voice recognition method and device
CN107705782B (en) Method and device for determining phoneme pronunciation duration
CN110534085B (en) Method and apparatus for generating information
CN111354345B (en) Method, apparatus, device and medium for generating speech model and speech recognition
CN111681661B (en) Speech recognition method, apparatus, electronic device and computer readable medium
CN111597825B (en) Voice translation method and device, readable medium and electronic equipment
CN111798821A (en) Sound conversion method, device, readable storage medium and electronic equipment
CN111785247A (en) Voice generation method, device, equipment and computer readable medium
JP2023541879A (en) Speech recognition using data analysis and dilation of speech content from isolated audio inputs
CN111968657B (en) Voice processing method and device, electronic equipment and computer readable medium
CN112017685A (en) Voice generation method, device, equipment and computer readable medium
CN111862933A (en) Method, apparatus, device and medium for generating synthesized speech

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200728)