
CN107358959B - Coding method and coder for multi-channel signal - Google Patents

Coding method and coder for multi-channel signal

Info

Publication number
CN107358959B
CN107358959B CN201610303992.4A
Authority
CN
China
Prior art keywords
current frame
frame
itd
itd parameter
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610303992.4A
Other languages
Chinese (zh)
Other versions
CN107358959A (en)
Inventor
张兴涛
刘泽新
苗磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201610303992.4A priority Critical patent/CN107358959B/en
Priority to PCT/CN2016/103596 priority patent/WO2017193551A1/en
Publication of CN107358959A publication Critical patent/CN107358959A/en
Application granted granted Critical
Publication of CN107358959B publication Critical patent/CN107358959B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The embodiment of the invention provides a coding method and an encoder for a multi-channel signal, where the method includes: acquiring a current frame containing a multi-channel signal; determining feature information according to the multi-channel signal, where the feature information includes at least one of a frame type and a signal type of the current frame, the frame type includes a speech frame and/or a non-speech frame, and the signal type includes unvoiced sound and/or voiced sound; determining an ITD parameter of the current frame according to the feature information; and encoding the ITD parameter. The embodiment of the invention can improve the accuracy of ITD parameter extraction.

Description

Coding method and coder for multi-channel signal
Technical Field
Embodiments of the present invention relate to the field of audio coding and decoding, and more particularly, to a method and an encoder for encoding a multi-channel signal.
Background
As quality of life improves, the demand for high-quality audio keeps increasing. Compared with single-channel audio, stereo audio conveys a sense of the direction and distribution of each sound source, and can improve the clarity, intelligibility, and presence of sound, which makes it popular.
Stereo processing techniques mainly include sum/difference (MS) coding, Intensity Stereo (IS) coding, and Parametric Stereo (PS) coding.
MS coding applies a sum-and-difference transform to the two channel signals based on the correlation between channels; the energy of the channels is concentrated mainly in the sum channel, which removes inter-channel redundancy. In MS coding, the bit-rate saving depends on the correlation of the input signal; when the correlation between the left and right channel signals is poor, the left channel signal and the right channel signal need to be transmitted separately. IS coding is based on the fact that the human auditory system is insensitive to the fine structure of the phase difference of the high-frequency components of a channel (for example, components above 2 kHz), and simplifies the high-frequency components of the left and right signals. However, IS coding is effective only for high-frequency components; if the IS coding process is extended to low frequencies, serious artifacts will result. PS coding is based on a binaural auditory model: at the encoding end, the stereo signal is converted into a mono signal plus a small set of spatial parameters (or spatial perceptual parameters) describing the spatial sound field, as shown in Fig. 1 (in Fig. 1, xL is the left channel time domain signal and xR is the right channel time domain signal). The decoding end obtains the mono signal and then restores the stereo signal by further combining the spatial parameters, as shown in Fig. 2. Compared with MS coding, PS coding has a higher compression ratio and can obtain higher coding gain while maintaining good sound quality; it can work over the full audio bandwidth and can restore the stereo spatial perception effect well.
In PS coding, the spatial parameters include the Inter-channel Correlation (IC), Inter-channel Level Difference (ILD), Inter-channel Time Difference (ITD), and Inter-channel Phase Difference (IPD). IC describes the inter-channel cross-correlation or coherence, which determines the perceived width of the sound field and can improve the spatial perception and stability of the audio signal. ILD is used to resolve the horizontal direction angle of stereo sources; it describes the inter-channel intensity difference, and this parameter affects the entire frequency spectrum. ITD and IPD are spatial parameters representing the horizontal orientation of the sound source; they describe the time and phase differences between the channels and mainly affect frequency components below 2 kHz. ILD, ITD and IPD determine the human ear's perception of the sound source position, can effectively establish the sound field position, and play an important role in the recovery of the stereo signal.
In a specific audio encoding process, a stereo signal may be encoded in units of frames. When encoding the current frame, the ITD parameter corresponding to the current frame may be extracted based on the multi-channel signal in the current frame. The ITD parameter of the current frame may be extracted based on the time domain signal, or may be extracted based on the frequency domain signal. However, no matter which way is used, the ITD parameter extraction manner is the same for all frames throughout the encoding process, and such an extraction manner is not flexible enough.
Disclosure of Invention
The application provides an encoding method and an encoder of a multi-channel signal, which aim to improve the flexibility of an ITD parameter extraction mode.
In a first aspect, a method for encoding a multi-channel signal is provided, including: acquiring a current frame containing a multi-channel signal; determining feature information according to the multi-channel signal, where the feature information includes at least one of a frame type and a signal type of the current frame, the frame type includes a speech frame and/or a non-speech frame, and the signal type includes unvoiced sound and/or voiced sound; determining the ITD parameter of the current frame according to the feature information; and encoding the ITD parameter. The ITD parameter of the current frame may represent the ITD parameter of the multi-channel signal in the current frame.
According to this scheme, the ITD parameter of the current frame is determined according to the feature information, rather than being extracted in a fixed manner that, as in the prior art, ignores the type or characteristics of the multi-channel signal of the current frame; therefore, the flexibility of ITD parameter extraction can be improved.
With reference to the first aspect, in a first implementation manner of the first aspect, the determining feature information according to the multi-channel signal includes: determining the frame type of the current frame according to the multi-channel signal; and the determining the ITD parameter of the current frame according to the feature information includes: in the case that the current frame is a non-speech frame, determining the ITD parameter of the current frame using a first ITD parameter extraction manner; and in the case that the current frame is a speech frame, determining the ITD parameter of the current frame using a second ITD parameter extraction manner.
According to the scheme, different ITD parameter extraction modes are adopted according to different types of the current frame, so that the flexibility of the ITD parameter extraction modes is improved.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the determining the ITD parameter of the current frame by using the first ITD parameter extraction manner includes: determining the ITD parameter of the previous frame or the previous subframe of the current frame as the ITD parameter of the current frame.
Specifically, the multi-channel signal may be processed in units of frames, typically 20 ms per frame. Furthermore, a frame may be further divided into subframes for processing; for example, when a 20 ms frame is divided into 2 subframes, each subframe is 10 ms, and when a 20 ms frame is divided into 4 subframes, each subframe is 5 ms. The previous frame of the current frame refers to the frame immediately preceding the current frame, i.e., the audio samples covered when the starting point of the current frame is shifted forward by 20 ms. The previous subframe of the current frame may refer to the last subframe of the frame immediately preceding the current frame.
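As a hedged illustration of the frame/subframe arithmetic above (the 16 kHz sampling rate and the function name are assumptions for the sketch, not part of the method):

```python
def split_into_subframes(frame, num_subframes):
    """Split one frame (a list of samples) into equal-length subframes."""
    sub_len = len(frame) // num_subframes
    return [frame[i * sub_len:(i + 1) * sub_len] for i in range(num_subframes)]

fs = 16000                            # sampling rate in Hz (assumed)
frame = [0.0] * (fs * 20 // 1000)     # one 20 ms frame -> 320 samples
subframes = split_into_subframes(frame, 4)
print(len(subframes), len(subframes[0]))  # 4 subframes of 5 ms, 80 samples each
```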
In this scheme, if the current frame is a non-speech frame, it generally carries a background noise signal, and the ITD parameter of a background noise signal generally fluctuates little; therefore, the ITD parameter of the previous frame of the current frame can be directly determined as the ITD parameter of the current frame, which can improve the coding efficiency.
With reference to the first or second implementation manner of the first aspect, in a third implementation manner of the first aspect, the determining the ITD parameter of the current frame by using a second ITD parameter extraction manner includes: and determining the ITD parameter of the current frame according to the multi-channel signal.
With reference to the third implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the determining the ITD parameter of the current frame according to the multi-channel signal includes: determining an initial ITD parameter of the current frame according to the multi-channel signal; and smoothing the initial ITD parameter of the current frame according to the ITD parameter of the previous frame or the previous subframe of the current frame to obtain the ITD parameter of the current frame.
Through smoothing processing, the influence of noise can be avoided, and the accuracy of ITD parameter extraction is improved.
With reference to the fourth implementation manner of the first aspect, in a fifth implementation manner of the first aspect, the smoothing, according to the ITD parameter of the previous frame or the previous subframe of the current frame, the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame includes: determining the ITD parameter of the current frame according to Tsm = w1*Tsm^[-1] + w2*T1, where T1 represents the initial ITD parameter of the current frame, Tsm represents the ITD parameter of the current frame, Tsm^[-1] represents the ITD parameter of the previous frame or the previous subframe of the current frame, and w1 and w2 represent smoothing factors, where the values of w1 and w2 are both in [0,1] and w1 + w2 = 1.
With reference to the third implementation manner of the first aspect, in a sixth implementation manner of the first aspect, the determining the ITD parameter of the current frame according to the multi-channel signal includes: determining initial ITD parameters of K sub-frames of the current frame according to the multi-channel signal, wherein K is an integer larger than 1; according to the ITD parameter of the previous subframe of each subframe in the K subframes, carrying out smoothing treatment on the initial ITD parameter of each subframe to obtain the ITD parameter of each subframe; determining the ITD parameters of the K sub-frames as the ITD parameters of the current frame.
It should be understood that the previous subframe of each subframe described above may refer to the subframe immediately preceding that subframe. Specifically, for the 1st subframe of the K subframes, the previous subframe is the last subframe of the frame immediately preceding the current frame; for the i-th (i >= 2) subframe of the K subframes, the previous subframe is the (i-1)-th subframe of the K subframes.
With reference to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect, the smoothing, according to the ITD parameter of the subframe previous to each subframe of the K subframes, the initial ITD parameter of each subframe to obtain the ITD parameter of each subframe includes: determining the ITD parameter of each subframe according to Tsm(j) = w1*Tsm(j-1) + w2*T(j), where T(j) represents the initial ITD parameter of the j-th subframe of the K subframes, Tsm(j) represents the ITD parameter of the j-th subframe, Tsm(j-1) represents the ITD parameter of the (j-1)-th subframe of the K subframes, w1 and w2 represent smoothing factors, and j is an integer with 1 <= j <= K, where the values of w1 and w2 are both in [0,1] and w1 + w2 = 1.
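The per-subframe smoothing recursion can be sketched as follows; the weights w1 = 0.7 and w2 = 0.3 are illustrative assumptions (the method only requires w1, w2 in [0, 1] with w1 + w2 = 1):

```python
def smooth_itd(initial_itds, prev_itd, w1=0.7, w2=0.3):
    """Smooth the initial ITD of each subframe using the previous subframe's ITD:
    Tsm(j) = w1*Tsm(j-1) + w2*T(j)."""
    smoothed = []
    t_prev = prev_itd          # ITD of the last subframe of the preceding frame
    for t_j in initial_itds:   # initial ITD parameters T(1)..T(K)
        t_sm = w1 * t_prev + w2 * t_j
        smoothed.append(t_sm)
        t_prev = t_sm          # the j-th result feeds the (j+1)-th update
    return smoothed

print(smooth_itd([10.0, 10.0], prev_itd=0.0))
```

Because the recursion is anchored to the previous frame's last subframe, a sudden jump in the raw estimate (e.g. a noise-induced outlier) is pulled back toward the recent history.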
With reference to the fifth or seventh implementation manner of the first aspect, in an eighth implementation manner of the first aspect, a value of the smoothing factor is determined based on a signal type of the current frame.
And determining a smoothing factor according to the signal type, so that the flexibility of extracting the ITD parameters can be further improved.
With reference to the third implementation manner of the first aspect, in a ninth implementation manner of the first aspect, the determining the ITD parameter of the current frame according to the multi-channel signal includes: generating a target frequency domain signal according to the multi-channel signal; performing frequency-time transformation on the target frequency domain signal to obtain a target time domain signal; and determining the ITD parameter of the current frame according to the target time domain signal.
In some implementations, the phase of the target frequency domain signal is linearly related to the IPD of the multi-channel signal. In some implementations, the phase of the target frequency domain signal is the IPD of the multi-channel signal. It should be understood that a frequency domain signal may be represented by complex numbers, a complex number may be represented by an amplitude and a phase, and the phase of the target frequency domain signal refers to the phase of the complex numbers constituting the target frequency domain signal.
In some implementations, the target frequency-domain signal may be a cross-correlation signal of the multi-channel frequency-domain signal.
In some implementations, the determining the ITD parameter of the current frame according to the target time domain signal includes: selecting a target sampling point from N sampling points of the target time domain signal, where the target sampling point is the sampling point with the largest sampling value among the N sampling points, and N represents the number of sampling points of the target time domain signal; and determining the ITD parameter of the current frame according to an index value corresponding to the target sampling point, where the index value is used to indicate the position of the target sampling point among the N sampling points. In other words, the index value indicates which of the N sampling points the target sampling point is. For example, the range of the index values of the N sampling points may be (-N/2, N/2], and if the target sampling point is the last of the N sampling points, the index value corresponding to the target sampling point is N/2.
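A minimal sketch of the peak-picking step, assuming a linear mapping of array positions to signed index values chosen to be consistent with the example in the text (the last of the N sampling points maps to N/2); the mapping convention is an assumption, not taken from the patent:

```python
def peak_signed_index(signal):
    """Signed index of the largest sample; the last point maps to N/2."""
    n = len(signal)
    pos = max(range(n), key=lambda i: signal[i])  # array position 0..N-1
    return pos - n // 2 + 1                       # map 0..N-1 to -N/2+1..N/2

samples = [0.0] * 8
samples[-1] = 1.0          # peak at the last sampling point
print(peak_signed_index(samples))
```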
With reference to the ninth implementation manner of the first aspect, in a tenth implementation manner of the first aspect, the generating a target frequency domain signal according to the multi-channel signal includes: determining the amplitude of the target frequency domain signal according to the multi-channel signal; determining IPD parameters of the current frame multi-channel signal according to the multi-channel signal; and generating the target frequency domain signal according to the amplitude of the target frequency domain signal and the IPD parameter of the current frame multi-channel signal.
With reference to the tenth implementation manner of the first aspect, in an eleventh implementation manner of the first aspect, the determining, according to the multi-channel signal, the amplitude of the target frequency-domain signal includes: according to
Figure BDA0000985628170000061
Determining the amplitude of the target frequency domain signal, where AM(k) represents the amplitude of the target frequency domain signal, A1(k) and A2(k) respectively represent the amplitudes of the frequency domain signals of any two channels of the multi-channel signal, k represents a frequency bin, 0 <= k <= L/2, and L represents the time-frequency transform length used when the multi-channel signal is transformed from the time domain to the frequency domain.
With reference to the tenth or eleventh implementation manner of the first aspect, in a twelfth implementation manner of the first aspect, the generating the target frequency-domain signal according to the amplitude of the target frequency-domain signal and the IPD parameter of the current frame (specifically, the IPD parameter of a multi-channel signal in the current frame) includes: according to
XM_real(k) = AM(k)*cos(IPD(k)), XM_image(k) = AM(k)*sin(IPD(k))
Generating the target frequency domain signal, where AM(k) represents the amplitude of the target frequency domain signal, XM_real(k) represents the real part of the target frequency domain signal, XM_image(k) represents the imaginary part of the target frequency domain signal, IPD(k) represents the IPD parameter, k represents a frequency bin, 0 <= k <= L/2, and L represents the time-frequency transform length used when the multi-channel signal is transformed from the time domain to the frequency domain.
With reference to the ninth implementation manner of the first aspect, in a thirteenth implementation manner of the first aspect, the generating a target frequency domain signal according to the multi-channel signal includes: generating the target frequency domain signal according to XM(k) = X1(k)*X2*(k), where XM(k) represents the target frequency domain signal, X1(k) represents the frequency domain signal of the first channel of the multi-channel signal, X2*(k) represents the conjugate of the frequency domain signal of the second channel of the multi-channel signal, k represents a frequency bin, 0 <= k <= L/2, and L represents the time-frequency transform length used when the multi-channel signal is transformed from the time domain to the frequency domain.
With reference to the ninth implementation manner of the first aspect, in a fourteenth implementation manner of the first aspect, the generating a target frequency domain signal according to the multi-channel signal includes: determining the frequency domain signal XM(k) according to XM(k) = X1(k)*X2*(k), where X1(k) represents the frequency domain signal of the first channel of the multi-channel signal, X2*(k) represents the conjugate of the frequency domain signal of the second channel of the multi-channel signal, k represents a frequency bin, 0 <= k <= L/2, and L represents the time-frequency transform length used when the multi-channel signal is transformed from the time domain to the frequency domain; and normalizing the amplitude of the frequency domain signal XM(k) to obtain the target frequency domain signal.
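The frequency-domain path — cross-spectrum X1(k)*X2*(k), amplitude normalization, inverse transform to a target time domain signal, and peak picking — resembles a GCC-PHAT-style procedure and can be sketched as follows. The pure-Python DFT (a real encoder would use an FFT) and the sign convention of the returned lag are illustrative choices, not taken from the patent:

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT, used here only to keep the sketch self-contained."""
    n, sign = len(x), (1 if inverse else -1)
    out = [sum(x[t] * cmath.exp(sign * 2j * cmath.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def estimate_itd(x1, x2):
    """Estimate the inter-channel time difference in samples."""
    X1, X2 = dft(x1), dft(x2)
    XM = []
    for a, b in zip(X1, X2):
        c = a * b.conjugate()                     # cross-spectrum X1(k)*X2*(k)
        XM.append(c / abs(c) if abs(c) > 1e-12 else 0j)  # normalized amplitude
    xm = dft(XM, inverse=True)                    # target time domain signal
    n = len(xm)
    pos = max(range(n), key=lambda i: xm[i].real) # peak of the real part
    return pos if pos <= n // 2 else pos - n      # wrap to a signed lag

# A short signal and a circularly delayed copy (3-sample delay):
sig = [0.0, 1.0, 0.5, -0.3, 0.2, -0.1, 0.05, 0.0]
delayed = sig[-3:] + sig[:-3]
print(estimate_itd(sig, delayed))  # the sign follows this sketch's convention
```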
With reference to any one of the first to fourteenth implementation manners of the first aspect, in a fifteenth implementation manner of the first aspect, the determining a frame type of the current frame according to the multi-channel signal includes: determining an energy of the multi-channel signal; determining the current frame as a non-speech frame under the condition that the energy of the multi-channel signal is less than or equal to a preset energy threshold value; and determining the current frame as a speech frame if the energy of the multi-channel signal is greater than the energy threshold.
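A minimal sketch of the energy-based frame classification above, assuming a mean-square energy measure and an illustrative threshold value (the method only requires comparison against a preset energy threshold):

```python
def classify_frame(samples, energy_threshold=0.01):
    """Return 'speech' if the frame energy exceeds the threshold, else 'non-speech'."""
    energy = sum(s * s for s in samples) / len(samples)  # mean-square energy
    return "speech" if energy > energy_threshold else "non-speech"

print(classify_frame([0.001] * 320))      # near-silent frame
print(classify_frame([0.5, -0.5] * 160))  # high-energy frame
```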
With reference to the first implementation manner of the first aspect, in a sixteenth implementation manner of the first aspect, the method further includes: determining an initial ITD parameter of the current frame according to the multi-channel signal; the determining the ITD parameter of the current frame by adopting the first ITD parameter extraction mode comprises the following steps: determining the initial ITD parameter of the current frame as the ITD parameter of the current frame; the determining the ITD parameter of the current frame by using the second ITD parameter extraction method includes: and adjusting the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame.
With reference to the sixteenth implementation manner of the first aspect, in a seventeenth implementation manner of the first aspect, the adjusting the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame includes: determining the ITD parameter of the current frame according to the frame type of the previous frame or the previous N frames of the current frame and the initial ITD parameter of the current frame, where N is an integer greater than 1.
Determining the ITD parameter of the current frame according to the frame type of the previous frame or the previous N frames of the current frame and the initial ITD parameter of the current frame can improve the flexibility of ITD parameter extraction.
With reference to the seventeenth implementation manner of the first aspect, in an eighteenth implementation manner of the first aspect, the determining the ITD parameter of the current frame according to the frame type of the previous frame or the previous N frames of the current frame and the initial ITD parameter of the current frame includes: in the case that the frame type of the previous frame or of each of the previous N frames of the current frame is a speech frame, determining the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame.
In this scheme, if the previous frame or the previous N frames of the current frame are speech frames, the current frame is one of a series of consecutive speech frames, and the ITD parameters of consecutive speech frames are correlated; determining the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame can therefore improve the flexibility of ITD parameter extraction.
With reference to the eighteenth implementation manner of the first aspect, in a nineteenth implementation manner of the first aspect, the determining the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame includes: determining the ITD parameter of the previous frame of the current frame as the ITD parameter of the current frame under the condition that the ITD parameter of the previous frame of the current frame is not a preset value and the initial ITD parameter of the current frame is a preset value; otherwise, determining the initial ITD parameter of the current frame as the ITD parameter of the current frame.
In this scheme, the current frame is one of a series of consecutive speech frames, and the ITD parameters of consecutive speech frames generally fluctuate little; determining the ITD parameter of the previous frame of the current frame as the ITD parameter of the current frame can avoid calculation errors of the ITD parameter and improve the accuracy of ITD parameter extraction.
With reference to the eighteenth implementation manner of the first aspect, in a twentieth implementation manner of the first aspect, the determining the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame includes: under the condition that the ITD parameter of the previous frame of the current frame is not a preset value and the initial ITD parameter of the current frame is a preset value, if the number of the continuously calculated ITD parameters which are preset values is smaller than a preset threshold value, determining the ITD parameter of the previous frame of the current frame as the ITD parameter of the current frame; otherwise, determining the initial ITD parameter of the current frame as the ITD parameter of the current frame.
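The fallback rule above can be sketched as follows: when the previous frame's ITD is not the preset value but the freshly computed initial ITD is, the previous ITD is reused, unless the preset value has already been produced for too many consecutive frames. The preset value 0 and the threshold of 3 consecutive preset results are illustrative assumptions:

```python
PRESET = 0           # preset value (the patent's example value is 0)
MAX_CONSECUTIVE = 3  # assumed threshold on consecutively computed preset values

def select_itd(prev_itd, initial_itd, consecutive_preset_count):
    """Return (selected ITD, updated consecutive-preset count)."""
    if initial_itd == PRESET:
        consecutive_preset_count += 1
        if prev_itd != PRESET and consecutive_preset_count < MAX_CONSECUTIVE:
            return prev_itd, consecutive_preset_count   # reuse the previous ITD
        return initial_itd, consecutive_preset_count    # accept the preset value
    return initial_itd, 0  # a non-preset estimate resets the counter

print(select_itd(prev_itd=5, initial_itd=0, consecutive_preset_count=0))
```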
With reference to the sixteenth or seventeenth implementation manner of the first aspect, in a twenty-first implementation manner of the first aspect, the preset value is 0.
In a second aspect, there is provided an encoder comprising means capable of performing the steps of the method of encoding a multi-channel signal of the first aspect.
In a third aspect, there is provided an encoder comprising a memory for storing a program and a processor for executing the program, the processor performing the method of the first aspect when the program is executed.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a flowchart of PS encoding in the prior art.
Fig. 2 is a flowchart of PS decoding in the prior art.
Fig. 3 is an exemplary flowchart of a time domain-based ITD parameter extraction method in the related art.
Fig. 4 is an exemplary flowchart of a frequency domain-based ITD parameter extraction method in the related art.
Fig. 5 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention.
Fig. 6 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention.
Fig. 7 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention.
Fig. 8 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention.
Fig. 9 is an exemplary flowchart of a manner of extracting ITD parameters of a current frame.
Fig. 10 is an exemplary flowchart of a manner of extracting ITD parameters of a current frame.
Fig. 11 is a schematic configuration diagram of an encoder of the embodiment of the present invention.
Fig. 12 is a schematic configuration diagram of an encoder of the embodiment of the present invention.
Detailed Description
For ease of understanding, the meaning of multi-channel ILD, ITD, IPD is briefly introduced. Taking the signal picked up by the first microphone as the first channel signal and the signal picked up by the second microphone as the second channel signal as an example:
the ILD describes the difference in intensity between the first channel signal and the second channel signal; if the ILD is larger than 0, the energy of the first channel signal is higher than that of the second channel signal; if ILD equals 0, it means that the energy of the first channel signal equals the energy of the second channel signal; if the ILD is less than 0, it indicates that the energy of the first channel signal is less than the energy of the second channel signal;
the ITD describes the time difference between the first channel signal and the second channel signal, namely the time difference of the sound source reaching the first microphone and the second microphone, if the ITD is more than 0, the time of the sound source reaching the first microphone is earlier than the time of the sound source reaching the second microphone; if ITD equals 0, it indicates that the sound source arrives at the first microphone and the second microphone simultaneously; if the ITD is less than 0, it indicates that the sound source arrives at the first microphone later than the sound source arrives at the second microphone;
IPD describes the phase difference between the first channel signal and the second channel signal, which is usually combined with the ITD parameters to recover the phase information of the multi-channel signal at the decoding end.
In the prior art, ITD parameter extraction methods are mainly divided into time domain-based ITD parameter extraction methods and frequency domain-based ITD parameter extraction methods. For ease of understanding, the two kinds of methods are introduced below with reference to Fig. 3 and Fig. 4, respectively.
Fig. 3 is an exemplary flowchart of a time domain-based ITD parameter extraction method. The method of fig. 3 includes:
310. Extract the ITD parameters based on the left and right channel time domain signals.
Specifically, the ITD parameters may be extracted using a time-domain cross-correlation function of the left and right channel time domain signals. For example, in the range 0 ≤ i ≤ T_max, calculate:
C_p(i) = Σ_{n=i}^{Length−1} x_R(n)·x_L(n−i)  (1)
C_n(i) = Σ_{n=i}^{Length−1} x_L(n)·x_R(n−i)  (2)
If max(C_p(i)) < max(C_n(i)), then T_1 takes the opposite of the index value corresponding to max(C_n(i)); otherwise, T_1 takes the index value corresponding to max(C_p(i)). Here i is the index value for calculating the cross-correlation function, x_L(n) and x_R(n) are the left and right channel time domain signals, Length is the frame length, and T_max corresponds to the maximum ITD value at different sampling rates.
320. Quantize the ITD parameters.
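As a concrete sketch of the time-domain search in step 310, the following Python estimates the ITD by an exhaustive cross-correlation over candidate lags. The exact summation limits and lag-sign convention of the correlation formulas are reproduced in the original only as images, so the forms used here are assumptions:

```python
def itd_time_domain(xl, xr, t_max):
    """Estimate the ITD by a time-domain cross-correlation search.

    A positive result means the first (left) channel leads; the exact
    summation limits of equations (1)/(2) are assumptions here.
    """
    n = len(xl)

    def corr(a, b, i):  # sum of a[k] * b[k - i] over the overlapping samples
        return sum(a[k] * b[k - i] for k in range(i, n))

    cp = [corr(xr, xl, i) for i in range(t_max + 1)]  # left leads right by i
    cn = [corr(xl, xr, i) for i in range(t_max + 1)]  # right leads left by i
    if max(cp) >= max(cn):
        return cp.index(max(cp))
    return -cn.index(max(cn))

# Right channel is the left channel delayed by 3 samples -> ITD = +3.
left = [0.0] * 16;  left[5] = 1.0
right = [0.0] * 16; right[8] = 1.0
print(itd_time_domain(left, right, t_max=6))
```

A positive output means the sound source reached the first microphone earlier, matching the sign convention given for the ITD above.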
Fig. 4 is an exemplary flowchart of a frequency domain-based ITD parameter extraction method. The method of fig. 4 includes:
410. Perform time-frequency transformation on the left and right channel time domain signals to obtain the left and right channel frequency domain signals.
Specifically, the time-frequency transform may convert a time domain signal into a frequency domain signal using techniques such as the Discrete Fourier Transform (DFT) or the Modified Discrete Cosine Transform (MDCT).
For example, for the input time domain signals of the left and right channels, the time-frequency transform may adopt the DFT, and specifically the DFT may use the following formula:
X(k) = Σ_{n=0}^{L−1} x(n)·e^{−j2πnk/L}, 0 ≤ k ≤ L−1  (3)
where n is the index value of a sampling point of the time domain signal, k is the index value of a frequency point of the frequency domain signal, and L is the time-frequency transform length; x(n) is the left channel or right channel time domain signal.
420. Extract the ITD parameters based on the left and right channel frequency domain signals.
Specifically, the L frequency bins of the frequency domain signal may be divided into N subbands; for the b-th subband, the frequency bins included in it are A_{b−1} ≤ k ≤ A_b − 1. Within the search range −T_max ≤ j ≤ T_max, the amplitude can be calculated using the following formula:
A_b(j) = |Σ_{k=A_{b−1}}^{A_b−1} X_L(k)·X_R*(k)·e^{−j2πkj/L}|  (4)
The ITD parameter of the b-th subband may then be
ITD_b = argmax_j A_b(j)
that is, the index value j corresponding to the maximum value calculated by equation (4).
430. Quantize the ITD parameters.
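The subband search of step 420 can be sketched as follows. The exact form of equation (4) is reproduced in the original only as an image, so the form used here is an assumption: a cross-spectrum weighted by a complex exponential over the subband's bins, with the exponent sign chosen so that a positive result means the left channel leads.

```python
import cmath

def dft(x):
    """Plain O(L^2) DFT, sufficient for a small illustration."""
    L = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / L) for n in range(L))
            for k in range(L)]

def subband_itd(XL, XR, k_lo, k_hi, t_max, L):
    """argmax over lags j of |sum_k XL(k)*conj(XR(k))*exp(-j*2*pi*k*j/L)|
    for the bins k_lo..k_hi (assumed reading of equation (4))."""
    best_j, best_a = 0, -1.0
    for j in range(-t_max, t_max + 1):
        a = abs(sum(XL[k] * XR[k].conjugate()
                    * cmath.exp(-2j * cmath.pi * k * j / L)
                    for k in range(k_lo, k_hi + 1)))
        if a > best_a:
            best_j, best_a = j, a
    return best_j

L = 32
left = [0.0] * L;  left[4] = 1.0
right = [0.0] * L; right[7] = 1.0   # right lags left by 3 samples
XL, XR = dft(left), dft(right)
print(subband_itd(XL, XR, 1, L // 2 - 1, 5, L))
```

For the delayed impulse above, the weighted cross-spectrum terms all align in phase at the true lag, so the search returns 3.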
It should be understood that fig. 3 and fig. 4 describe the general flow of ITD parameter extraction. In practice, the ITD parameters may be extracted in units of frames, subframes, or subbands as appropriate, which is not particularly limited in the embodiments of the present invention. When the ITD parameters are extracted in units of frames, the current frame has one ITD parameter; when the ITD parameters are extracted in units of subframes or subbands, the current frame has a plurality of ITD parameters, that is, one ITD parameter for each subframe or each subband.
For example, in the time domain-based ITD parameter extraction method, the ITD parameters may be extracted in units of frames or subframes. For a current frame of 20 ms, extraction may be performed by taking the whole current frame (i.e. 20 ms) as a unit, obtaining the ITD parameters of the current frame; when the current frame is divided into 2 subframes, extraction may be performed in units of subframes (i.e. 10 ms), obtaining the ITD parameter corresponding to each subframe; when the current frame is divided into 4 subframes, extraction may be performed in units of subframes (i.e. 5 ms), obtaining the ITD parameter corresponding to each subframe.
For another example, in the frequency domain-based ITD parameter extraction method, the ITD parameters may be extracted in units of frames or subframes. When a frame or subframe is further divided into a plurality of subbands, ITD parameters may also be extracted in units of subbands.
In the prior art, once the extraction mode of the ITD parameters is determined, it is fixed for all frames of the multi-channel signal and cannot be flexibly adjusted according to actual conditions. However, different frames of the multi-channel signal have different characteristics: some frames contain speech signals while others contain background noise signals; the speech signal in some frames is unvoiced while in others it is voiced; some frames have high energy while others have low energy. Different types of frames, or different types of signals, may warrant different ITD parameter extraction modes. For example, for a background noise signal, the ITD parameters generally do not change much within a certain time range, so repeatedly calculating the ITD parameters of the background noise signal frame by frame wastes coding resources and reduces coding efficiency.
In order to improve the flexibility of the ITD parameter extraction, the method for encoding a multi-channel signal according to an embodiment of the present invention is described in detail below with reference to fig. 5.
Fig. 5 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention. The encoding method of the multi-channel signal of fig. 5 includes:
510. Acquire a current frame containing a multi-channel signal.
In some embodiments, the multi-channel signal may be a multi-channel time domain signal; in some embodiments, the multi-channel signal may be a multi-channel frequency domain signal.
520. From the multi-channel signal, feature information is determined.
The specific type of the feature information is not limited in the embodiments of the present invention. In some embodiments, the feature information may be used to indicate a feature of the multi-channel signal. In some embodiments, the feature information may include at least one of the frame type and the signal type of the current frame; the frame type may include speech frames and/or non-speech frames, and the signal type may include unvoiced and/or voiced. In some embodiments, a speech frame is a frame containing a speech signal. In some embodiments, a non-speech frame may also be referred to as a background frame; the signal in a background frame may be, for example, a background noise signal. In addition, the embodiments of the present invention do not limit the specific names of speech frames and non-speech frames. For example, in Voice Activity Detection (VAD), a frame containing a speech signal may be referred to as a voice active frame (or active frame), and a non-speech frame may be referred to as a voice inactive frame (or inactive frame). The following description takes the case where a speech frame is a voice active frame and a non-speech frame is a voice inactive frame as an example.
The embodiment of the present invention does not limit the specific manner of determining the signal type of the multi-channel signal according to the multi-channel signal. In some embodiments, when a Zero Crossing Rate (ZCR) of the multi-channel time domain signal is greater than a preset threshold, the signal type of the multi-channel signal is unvoiced (or the current frame is an unvoiced frame); otherwise, the signal type of the multi-channel signal is voiced (or the current frame is a voiced frame). In other embodiments, the signal type of the multi-channel signal is voiced (or the current frame is a voiced frame) when a correlation value of the multi-channel signal (describing the correlation of the multi-channel signal) is greater than a preset threshold; otherwise, the signal type of the multi-channel signal is unvoiced (or the current frame is an unvoiced frame).
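The zero-crossing-rate rule described above can be sketched as follows; the function names and the threshold value 0.3 are illustrative, since the text leaves the preset threshold open:

```python
import math

def zero_crossing_rate(x):
    """Fraction of adjacent sample pairs whose signs differ."""
    flips = sum(1 for a, b in zip(x, x[1:]) if (a >= 0) != (b >= 0))
    return flips / (len(x) - 1)

def classify_signal_type(x, zcr_threshold=0.3):
    # The preset threshold value is not given in the text; 0.3 is illustrative.
    return "unvoiced" if zero_crossing_rate(x) > zcr_threshold else "voiced"

tone = [math.sin(2 * math.pi * 2 * n / 100) for n in range(100)]  # few sign flips
alternating = [(-1) ** n * 1.0 for n in range(100)]               # flips every sample
print(classify_signal_type(tone), classify_signal_type(alternating))
```

A slowly varying voiced-like tone crosses zero rarely, while a noise-like alternating signal crosses at every sample, so the two land on opposite sides of the threshold.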
530. Determine the ITD parameters of the current frame according to the feature information.
Specifically, the ITD parameter of the current frame may be determined according to the frame type of the current frame. For example, different ITD parameter extraction methods are used for the voice active frame and the voice inactive frame. Alternatively, the ITD parameter of the current frame may be determined according to a signal type of the multi-channel signal. For example, different ITD parameter extraction methods are used for unvoiced and voiced signals. The following detailed description will be given in conjunction with specific examples, which are not described in detail herein.
540. Encode the ITD parameters.
In some embodiments, the method of fig. 5 may further include: and sending the encoded ITD parameters to a decoding end.
In some embodiments, step 520 may include: determining the frame type of the current frame according to the multi-channel signal; step 530 may include: under the condition that the current frame is a non-voice frame, determining an ITD parameter of the current frame by adopting a first ITD parameter extraction mode; and under the condition that the current frame is a voice frame, determining the ITD parameter of the current frame by adopting a second ITD parameter extraction mode.
It should be understood that the embodiment of the present invention does not specifically limit the manner of determining the frame type of the current frame according to the multi-channel signal. For example, the frame type of the current frame may be determined based on the VAD.
It should also be understood that the first ITD parameter extraction manner and the second ITD parameter extraction manner are not particularly limited in the embodiments of the present invention, as long as the first ITD parameter extraction manner and the second ITD parameter extraction manner are different.
In some embodiments, the first ITD parameter extraction manner may be to determine ITD parameters of a previous frame or a previous subframe of the current frame as the ITD parameters of the current frame.
In some embodiments, the second ITD parameter extraction may be to determine the ITD parameters of the current frame according to the multi-channel signal. For example, the ITD parameters of the current frame can be extracted in a time domain and frequency domain based manner in the prior art. Or, the extracted initial ITD parameters may be smoothed on the basis of the prior art to obtain the ITD parameters of the current frame. Alternatively, the ITD parameters may be extracted in a mixed domain (time domain and frequency domain) based manner according to the embodiment of the present invention, and the ITD parameters based on the mixed domain will be described in detail later, and will not be described again here.
The following description takes a multi-channel signal consisting of left and right channel signals as an example, but the embodiments of the present invention are not limited thereto. In practice, the solution in the present application may be applied to any two channels of a two-channel or multi-channel signal; in a multi-channel scenario, the left and right channels below may be any two channels of the multi-channel signal.
Fig. 6 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention. It should be understood that the process steps or operations shown in fig. 6 are merely examples, and other operations, or variations of the operations in fig. 6, may also be performed by embodiments of the present invention. Moreover, the steps in fig. 6 may be performed in a different order than presented in fig. 6, and it is possible that not all of the operations in fig. 6 will be performed. The method of fig. 6 includes:
610. Detect the frame type of the current frame.
Specifically, VAD may be performed on the current frame, and whether the current frame is a voice active frame or a voice inactive frame may be determined according to the detection result.
620. Judge whether the current frame is a voice active frame.
If not, step 630 may be executed; if it is a voice active frame, step 640 may be performed.
630. Determine the ITD parameters of the current frame using the first ITD parameter extraction mode.
In some embodiments, the first ITD parameter extraction manner may include: and determining the ITD parameters of the previous frame or the previous subframe of the current frame as the ITD parameters of the current frame.
640. Determine the ITD parameters of the current frame using the second ITD parameter extraction mode.
Alternatively, in some embodiments, the ITD parameters of the current frame may be extracted in the manner described in fig. 3, that is, in the time domain.
Alternatively, in some embodiments, the ITD parameters of the current frame may be extracted in the manner described in fig. 4, that is, the ITD parameters of the current frame are extracted in the frequency domain.
Alternatively, in some embodiments, the ITD parameters of the current frame may be extracted in the mixed domain, and the manner of extracting the ITD parameters in the mixed domain according to the embodiment of the present invention will be described in detail below with reference to fig. 7 and 8, and will not be described in detail herein.
Optionally, in some embodiments, the initial ITD parameters of the current frame may be extracted first; and then, smoothing the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame.
It should be understood that, in the embodiment of the present invention, a manner of extracting the initial ITD parameter of the current frame is not particularly limited.
Optionally, as an implementation manner, the initial ITD parameters of the current frame may be extracted in a manner described in fig. 3, that is, the initial ITD parameters of the current frame are extracted in a time domain.
Optionally, as an implementation manner, the initial ITD parameters of the current frame may be extracted in the manner described in fig. 4, that is, in the frequency domain.
Optionally, as an implementation manner, initial ITD parameters of the current frame may be extracted in the mixed domain, and a detailed description will be given below with reference to fig. 7 and 8 on a manner of extracting ITD parameters in the mixed domain according to an embodiment of the present invention.
After extracting the initial ITD parameters of the current frame, the following formula may be used for smoothing:
T_sm = w1·T_sm^[−1] + w2·T_1  (5)
where T_sm^[−1] is the smoothed value of the ITD parameter of the frame previous to the current frame and T_1 is the initial ITD parameter of the current frame. The smoothing factors w1 and w2 may be set to constants, e.g. w1 = 0.75, w2 = 0.25; or w1 = 0.8, w2 = 0.2; or w1 = 0.9, w2 = 0.1; or they may be set to different values according to the magnitude relation between T_sm^[−1] and T_1; or different smoothing factors may be used in combination with the signal type in the current frame, for example a smaller smoothing factor for unvoiced frames and a larger one for voiced frames. Furthermore, w1 and w2 satisfy w1 + w2 = 1.
Alternatively, if the current frame is divided into K subframes, each subframe may correspond to an initial ITD parameter (the extraction of a subframe's ITD parameter is similar to that of a frame, and may likewise be based on the time domain, the frequency domain, or the mixed domain, which is not repeated here), and the initial ITD parameter of each subframe may be smoothed using the following formula:
T_sm(j) = w1·T_sm(j−1) + w2·T(j)  (6)
where T_sm(j−1) is the smoothed value of the ITD parameter of the previous subframe and T(j) is the initial ITD parameter of the j-th subframe. The smoothing factors w1 and w2 may be set in the same ways as described for equation (5), again satisfying w1 + w2 = 1.
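The recursive smoothing of equations (5)/(6) can be sketched as follows, here with the constants w1 = 0.8, w2 = 0.2 from the examples above:

```python
def smooth_itd(itd_raw, w1=0.8, w2=0.2, t_sm_prev=0.0):
    """T_sm(j) = w1 * T_sm(j-1) + w2 * T(j), per subframe, with w1 + w2 = 1."""
    smoothed, t_sm = [], t_sm_prev
    for t in itd_raw:
        t_sm = w1 * t_sm + w2 * t
        smoothed.append(t_sm)
    return smoothed

# A raw ITD that jumps to 10 is pulled toward it gradually:
print([round(v, 3) for v in smooth_itd([10, 10, 10, 10])])  # → [2.0, 3.6, 4.88, 5.904]
```

In a codec the smoothed values would subsequently be quantized; only the recursion itself is shown here.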
In some embodiments, the smoothing process may be implemented at the encoding end or at the decoding end.
The mixed domain-based ITD parameter extraction method of the embodiments of the present invention is described in detail below with reference to fig. 7 and fig. 8. The ITD parameter extraction manner described in fig. 7 and fig. 8 can be used to extract the ITD parameters of the current frame; in embodiments that require smoothing, it can also be used to extract the initial ITD parameters of the current frame. The implementations of fig. 7 and fig. 8 construct a target frequency domain signal in the frequency domain whose phase is the IPD of the multi-channel signal, so that when the target frequency domain signal is converted to the time domain to obtain a target time domain signal, the ITD parameter of the current frame is located at the index value of the sampling point with the largest sampling value of the target time domain signal. Fig. 7 and fig. 8 differ in the way the target frequency domain signal is constructed.
Fig. 7 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention. In the embodiment corresponding to fig. 7, the target frequency domain signal is a frequency domain signal constructed by calculating the amplitude of the monaural frequency domain signal and the IPDs of the left and right channel signals by frequency points. It should be understood that the process steps or operations illustrated in fig. 7 are merely examples, and other operations or variations of the various operations in fig. 7 may also be performed by embodiments of the present invention. Moreover, the various steps in FIG. 7 may be performed in a different order than presented in FIG. 7, and it is possible that not all of the operations in FIG. 7 may be performed.
710. Perform time-frequency transformation on the left and right channel time domain signals, respectively, to obtain the left and right channel frequency domain signals.
Specifically, the left and right channel time domain signals may be transformed using the Discrete Fourier Transform (DFT) according to equations (7) and (8):
X_L(k) = Σ_{n=0}^{Length−1} x_L(n)·e^{−j2πnk/L}  (7)
X_R(k) = Σ_{n=0}^{Length−1} x_R(n)·e^{−j2πnk/L}  (8)
where x_L(n) and x_R(n) are the left and right channel time domain signals respectively, Length is the frame length or subframe length, k is the index value of a frequency point of the frequency domain signal, and L is the time-frequency transform length.
In order to improve coding efficiency, the Fast Fourier Transform (FFT) may be used to perform the time-frequency transformation. The frequency domain signal obtained after the time-frequency transformation is a complex signal including a real part and an imaginary part: for the left channel frequency domain signal, the real part is X_L_real(k) and the imaginary part is X_L_image(k); for the right channel frequency domain signal, the real part is X_R_real(k) and the imaginary part is X_R_image(k), where 0 ≤ k ≤ L/2.
Specifically, taking the left channel frequency domain signal as an example, the values of its real part and imaginary part may be calculated as follows:
X_L_real(0) = X_L(0), X_L_image(0) = 0  (9)
X_L_real(k) = X_L(k), 1 ≤ k ≤ L/2  (10)
X_L_image(k) = X_L(L−k), 1 ≤ k < L/2  (11)
or,
X_L_real(0) = X_L(0), X_L_image(0) = 0  (12)
X_L_real(k) = X_L(k), 1 ≤ k ≤ L/2  (13)
X_L_image(k) = −X_L(L−k), 1 ≤ k < L/2  (14)
It should be noted that, after the time-frequency transformation, for a wideband (WB) signal with a time-frequency transform length of 512, the obtained frequency domain signal includes 256 frequency bins, where the 256th bin corresponds to the 8 kHz spectrum, the 128th bin corresponds to the 4 kHz spectrum, and so on.
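The bin-to-frequency mapping implied by this note can be written out directly; the 16 kHz sampling rate below is an assumption, chosen because it makes the 256th bin of a 512-point transform sit at 8 kHz as stated:

```python
def bin_to_hz(k, transform_len, sample_rate):
    """Frequency in Hz represented by bin k of a transform of the given length."""
    return k * sample_rate / transform_len

# Wideband signal, 512-point transform, assuming a 16 kHz sampling rate:
print(bin_to_hz(256, 512, 16000), bin_to_hz(128, 512, 16000))  # → 8000.0 4000.0
```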
720. Perform frequency domain coefficient processing on the left and right channel frequency domain signals to obtain the target frequency domain signal.
In some embodiments, the amplitude A_M(k) of the target frequency domain signal and the inter-channel phase difference IPD(k) may be calculated frequency point by frequency point, where k is the frequency point, 0 ≤ k ≤ L/2, and L is the time-frequency transform length used when transforming the left and right channel time domain signals into the left and right channel frequency domain signals.
Specifically, the amplitude A_M(k) of the target frequency domain signal may be calculated first, for example from the left and right channel amplitudes:
A_M(k) = (A_L(k) + A_R(k))/2  (15)
The amplitude of the left channel frequency domain signal may be:
A_L(k) = sqrt(X_L_real(k)² + X_L_image(k)²)  (16)
The amplitude of the right channel frequency domain signal may be:
A_R(k) = sqrt(X_R_real(k)² + X_R_image(k)²)  (17)
Then, the IPD(k) of the left and right channel signals can be calculated:
IPD(k) = ∠(L(k)·R*(k)), k1 ≤ k ≤ k2  (18)
where k denotes the frequency point, L(k) and R(k) are respectively the k-th frequency point values of the left and right channel frequency domain signals (each frequency point value includes a real part and an imaginary part), and R*(k) denotes the conjugate of the k-th frequency point value of the right channel frequency domain signal; the real and imaginary parts of L(k) and R(k) may be constructed based on X_L(k) and X_R(k). Equation (18) can be further organized as:
IPD(k) = arctan(A″(k)/A′(k))  (19)
where:
A′(k) = X_L_real(k)·X_R_real(k) + X_L_image(k)·X_R_image(k)  (20)
A″(k) = X_L_image(k)·X_R_real(k) − X_L_real(k)·X_R_image(k)  (21)
Then, after obtaining the amplitude of the target frequency domain signal and the phase difference of the left and right channel signals, the target frequency domain signal is obtained by further processing:
X_M_real(k) = A_M(k)·cos(IPD(k))  (22)
X_M_image(k) = A_M(k)·sin(IPD(k))  (23)
In some embodiments, after the amplitude of the target frequency domain signal and the IPD of the left and right channel signals are obtained, a table lookup method may be used to obtain the target frequency domain signal: for example, a sin function table and a cos function table may be set and looked up instead of evaluating the functions, which can effectively reduce the computational complexity of the algorithm.
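The construction in equations (18)-(23) amounts to building one complex bin from a combined amplitude and the IPD. A minimal sketch follows, in which the combination rule for A_M(k) is an assumption (the arithmetic mean of the two amplitudes):

```python
import cmath, math

def target_bin(Lk, Rk):
    """One bin of the target spectrum.

    The amplitude combination rule is an assumption (mean of |L(k)| and
    |R(k)|); the phase is IPD(k) = angle(L(k) * conj(R(k))).
    """
    a_m = 0.5 * (abs(Lk) + abs(Rk))                 # assumed amplitude rule
    ipd = cmath.phase(Lk * Rk.conjugate())          # equations (18)-(21)
    return complex(a_m * math.cos(ipd), a_m * math.sin(ipd))  # (22)-(23)

# Left bin with phase 0.9 and amplitude 2, right bin with phase 0.4 and
# amplitude 1: the result has amplitude 1.5 and phase IPD = 0.5.
bin_val = target_bin(2 * cmath.exp(0.9j), 1 * cmath.exp(0.4j))
print(round(abs(bin_val), 6), round(cmath.phase(bin_val), 6))
```

The printed phase equals the phase difference of the inputs, which is exactly the property the frequency-time transform in the next step exploits.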
730. Perform frequency-time transformation on the target frequency domain signal to obtain the target time domain signal.
In some embodiments, the target frequency domain signal may be windowed and then subjected to an Inverse Discrete Fourier Transform (IDFT).
Specifically, the target frequency domain signal may first be windowed with a window function w(k):
X′_M_real(k) = w(k)·X_M_real(k)  (24)
X′_M_image(k) = w(k)·X_M_image(k)  (25)
where k is the frequency point, 0 ≤ k ≤ L/2, and L is the time-frequency transform length used when transforming the left and right channel time domain signals into the left and right channel frequency domain signals.
Then, the IDFT is performed on the windowed signal to obtain the target time domain signal:
x_M_real(n) = Σ_{k=0}^{L/2} [X′_M_real(k)·cos(2πnk/L) − X′_M_image(k)·sin(2πnk/L)]  (26)
x_M_image(n) = Σ_{k=0}^{L/2} [X′_M_real(k)·sin(2πnk/L) + X′_M_image(k)·cos(2πnk/L)]  (27)
where n is the index value of a sampling point of the time domain signal, 0 ≤ n < L/2.
In some embodiments, step 730 may use IDFT for frequency-time transformation, and may also use Inverse Fast Fourier Transform (IFFT) for frequency-time transformation.
In some embodiments, the frequency-time transformation may be performed only in a specific frequency domain range without performing the frequency-time transformation on all frequency points, so that the computational complexity of the algorithm may be effectively reduced. For example, frequency-time transformation may be performed within a frequency bin range [ k3, k4], where k3>0 and k4< L/2.
740. Smooth the amplitude of the target time domain signal.
Specifically, the amplitude of the target time domain signal can be represented by the following formula:
A(n) = sqrt(x_M_real(n)² + x_M_image(n)²)  (28)
Smoothing the amplitude of the target time domain signal yields the amplitude smoothing value A_sm(n):
A_sm(n) = w1·A_sm^[−1](n) + w2·A(n)  (29)
where A_sm^[−1](n) is the amplitude smoothing value of the n-th point of the previous frame/subframe of the current frame. The smoothing factors w1 and w2 may be set to constants, for example w1 = 0.75, w2 = 0.25; or w1 = 0.8, w2 = 0.2; or w1 = 0.9, w2 = 0.1; or they may be set according to the magnitude relation between A_sm^[−1](n) and A(n). In all cases they satisfy w1 + w2 = 1.
750. Search for the index value corresponding to the sampling point with the largest sampling value of the smoothed target time domain signal to obtain the ITD parameter.
Specifically, the index value index = argmax_n(A_sm(n)) corresponding to the largest sampling point of the smoothed time domain signal is searched for; the ITD parameter is then index.
As can be seen from equations (22) and (23), the phase of the target frequency domain signal obtained after the frequency domain coefficient processing is the IPD of the first channel and the second channel. Further, since there is a linear relationship between IPD and ITD, the target frequency domain signal can be approximately rewritten as:
X_M_real(k) ≈ A_M(k)·cos(2πk·ITD/L)  (30)
X_M_image(k) ≈ A_M(k)·sin(2πk·ITD/L)  (31)
Therefore, after the frequency-time transformation is performed on the target frequency domain signal, the index value corresponding to the sampling point with the largest sampling value of the target time domain signal is located at the ITD.
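This argument can be checked numerically: for two delta-like channels offset by 4 samples, a spectrum whose phase is the IPD transforms back to a time signal that peaks at the delay. The transform sign convention below is chosen so that a positive ITD lands at its own index; the exact transform pair in the original is an equation image not reproduced in this text.

```python
import cmath

def dft(x):
    """Plain O(L^2) DFT for a small demonstration."""
    L = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / L) for n in range(L))
            for k in range(L)]

L = 32
left = [0.0] * L;  left[2] = 1.0
right = [0.0] * L; right[6] = 1.0    # right = left delayed by 4 samples
XL, XR = dft(left), dft(right)

# Target spectrum: unit amplitude, phase set to IPD(k) (amplitude rule simplified).
XM = [cmath.exp(1j * cmath.phase(XL[k] * XR[k].conjugate())) for k in range(L)]

# Back to the time domain; the exponent sign is chosen so a positive ITD
# appears directly at index ITD (an assumed convention).
xm = [abs(sum(XM[k] * cmath.exp(-2j * cmath.pi * n * k / L) for k in range(L)))
      for n in range(L)]
peak = max(range(L), key=lambda n: xm[n])
itd = peak if peak <= L // 2 else peak - L   # fold indices past L/2 to negative lags
print(itd)
```

All bins of XM share the form e^{j·2πk·ITD/L}, so the inverse transform concentrates all its energy at the lag index, exactly as equations (30)/(31) predict.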
Fig. 8 is a schematic flowchart of an encoding method of a multi-channel signal according to an embodiment of the present invention. In the embodiment corresponding to fig. 8, the target frequency domain signal is a frequency domain signal constructed based on the product of the signal of one channel and the conjugate of the signal of the other channel of the left and right channel signals. It should be understood that the process steps or operations shown in fig. 8 are merely examples, and other operations, or variations of the operations in fig. 8, may also be performed by embodiments of the present invention. Moreover, the steps in fig. 8 may be performed in a different order than presented in fig. 8, and it is possible that not all of the operations in fig. 8 will be performed. In addition, each step in fig. 8 corresponds to a step in fig. 7; only the processing of step 820 differs from that of step 720, so for the other steps reference may be made to fig. 7, and details are not repeated here.
810. Perform time-frequency transformation on the left and right channel time domain signals, respectively, to obtain the left and right channel frequency domain signals.
820. Obtain the target frequency domain signal by multiplying the frequency domain signal of one channel of the left and right channel signals by the conjugate of the frequency domain signal of the other channel.
It will be appreciated that the phase of the frequency domain signal obtained by multiplying the frequency domain signal of one channel by the conjugate of the frequency domain signal of the other channel is the IPD of the two channels.
Specifically, the target frequency domain signal X_M(k) can be calculated by the following formula:
X_M(k) = L(k)·R*(k)  (32)
where L(k) and R(k) are respectively the k-th frequency point values of the left and right channel frequency domain signals (each frequency point value includes a real part and an imaginary part), and R*(k) denotes the conjugate of the k-th frequency point value of the right channel frequency domain signal; the real and imaginary parts of L(k) and R(k) may be constructed based on X_L(k) and X_R(k).
Alternatively, X_M(k) = R(k)·L*(k)  (33)
where R(k) is the k-th frequency point value of the right channel frequency domain signal, L*(k) is the conjugate of the k-th frequency point value of the left channel frequency domain signal, and 0 ≤ k ≤ L/2.
In some embodiments, after X_M(k) is obtained, X_M(k) may be further normalized to obtain the target frequency domain signal.
Specifically, the maximum amplitude of X_M(k) may first be calculated:
A_max = max_k sqrt(X_M_real(k)² + X_M_image(k)²)  (34)
and then the amplitude of X_M(k) is normalized:
X_M_real(k) = X_M_real(k)/A_max  (35)
X_M_image(k) = X_M_image(k)/A_max  (36)
830. Perform frequency-time transformation on the target frequency domain signal to obtain the target time domain signal.
840. Smooth the amplitude of the target time domain signal.
850. Search for the index value corresponding to the sampling point with the largest sampling value of the smoothed target time domain signal to obtain the ITD parameter.
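The normalization of equations (34)-(36) divides every bin by the single largest bin amplitude; a minimal sketch:

```python
def normalize_spectrum(XM):
    """Scale every bin of X_M(k) by the largest bin amplitude A_max
    (a sketch of the normalization in equations (34)-(36))."""
    a_max = max(abs(v) for v in XM)
    return [v / a_max for v in XM]

XM = [3 + 4j, 1 + 0j, 0.5j]           # amplitudes 5.0, 1.0, 0.5
Y = normalize_spectrum(XM)
print([round(abs(v), 6) for v in Y])  # → [1.0, 0.2, 0.1]
```

After this step the largest bin has unit amplitude while all phase relationships, and therefore the IPD information, are preserved.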
With continued reference to fig. 6, step 610 in fig. 6 may be implemented in various ways; for example, the frame type of the current frame may be detected in the time domain, or it may be detected in the frequency domain.
In some embodiments, a VAD detection algorithm may be employed to detect the frame type of the current frame. Specifically, the frame type of the current frame may be detected based on the energy of the signal in the current frame. The energy-based frame type detection is illustrated below with reference to fig. 9.
Fig. 9 is an exemplary flowchart of a manner of extracting the ITD parameters of the current frame. In fig. 9, VAD is performed on the current frame based on the energy of the signal in the current frame to determine whether the current frame is a voice active frame or a voice inactive frame. It should be understood that the process steps or operations shown in fig. 9 are merely examples, and other operations, or variations of the operations in fig. 9, may also be performed by embodiments of the present invention. Moreover, the steps in fig. 9 may be performed in a different order than presented in fig. 9, and it is possible that not all of the operations in fig. 9 will be performed.
910. Perform time-frequency transformation on the left and right channel time domain signals.
Specifically, Fast Fourier Transform (FFT) may be performed on the left and right channel time domain signals respectively to obtain the left and right channel frequency domain signals:
X_L(k) = Σ_{n=0}^{Length−1} x_L(n)·e^{−j2πnk/L}  (37)
X_R(k) = Σ_{n=0}^{Length−1} x_R(n)·e^{−j2πnk/L}  (38)
where x_L(n) and x_R(n) respectively denote the left and right channel time domain signals, k is the index value of a frequency point of the frequency domain signal, Length is the frame length, and L is the time-frequency transform length.
The complex signal obtained after the FFT contains a real part and an imaginary part: the real part of the left channel frequency domain signal is X_L_real(k) and its imaginary part is X_L_image(k); the real part of the right channel frequency domain signal is X_R_real(k) and its imaginary part is X_R_image(k), where 0 ≤ k ≤ L/2.
In some embodiments, XL_real(k)、XL_image(k) The value mode (X) described by the following formula can be adoptedR_real(k)、XR_image(k) The same applies to the values, and is not described here again):
X_L_real(0) = X_L(0), X_L_image(0) = 0 (39)
Figure BDA0000985628170000232
Figure BDA0000985628170000233
or:
X_L_real(0) = X_L(0), X_L_image(0) = 0 (42)
Figure BDA0000985628170000234
Figure BDA0000985628170000235
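As an illustrative sketch of step 910 (not the patent's implementation: formulas (37)–(44) are only partially legible in the source, so a plain direct DFT with the standard real/imaginary split is assumed here), the half-spectrum of one channel can be computed as follows:

```python
import cmath

def dft_half_spectrum(x, L):
    """Direct DFT of a real time-domain frame x (len(x) <= L), returning
    (real, imag) lists for frequency bins 0 <= k <= L/2.
    Sketch only: the patent's packing in formulas (39)-(44) may differ."""
    half = L // 2
    real = [0.0] * (half + 1)
    imag = [0.0] * (half + 1)
    for k in range(half + 1):
        acc = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / L)
                  for n in range(len(x)))
        real[k] = acc.real
        imag[k] = acc.imag
    # For a real input frame, the k = 0 bin is purely real,
    # matching X_L_real(0) = X_L(0), X_L_image(0) = 0 in formula (39).
    return real, imag
```

The same routine would be applied to x_L(n) and x_R(n) to obtain both channel spectra used in the following steps.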
920. Calculate the energy of the current frame/subframe.
Specifically, the energy of the current frame/subframe can be calculated according to the following formula:
Figure BDA0000985628170000236
Then, it can be judged whether the energy E_tot of the current frame/subframe is greater than a preset VAD threshold E_VAD, where E_VAD can be set to a fixed value, or can be adaptively adjusted according to the energy of the current frame/subframe.
If E_tot ≤ E_VAD, step 930 may be performed; if E_tot > E_VAD, step 940 may be performed.
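The energy gate in steps 920–940 below can be sketched as follows. This is a sketch only: the exact energy formula (45) is not legible in the source, so the summed squared bin magnitudes of both channels is assumed for E_tot, and E_VAD is taken as a fixed threshold:

```python
def select_itd_mode(left_bins, right_bins, e_vad, prev_itd):
    """Energy-gated ITD extraction dispatch (sketch of fig. 9, steps 920-940).
    left_bins/right_bins: complex half-spectra of the current frame/subframe.
    Assumption: E_tot is the summed squared bin magnitude over both channels."""
    e_tot = (sum(abs(b) ** 2 for b in left_bins)
             + sum(abs(b) ** 2 for b in right_bins))
    if e_tot <= e_vad:
        # Step 930 (first extraction mode): hold the previous frame's ITD.
        return "hold", prev_itd
    # Step 940 (second extraction mode): recompute the ITD from the signal.
    return "recompute", None
```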
930. Extract the ITD parameter of the current frame/subframe using the first ITD parameter extraction mode.
The first ITD parameter extraction mode may be: maintaining the ITD value of the previous frame/subframe of the current frame.
940. Extract the ITD parameter of the current frame/subframe using the second ITD parameter extraction mode.
For an implementation of the second ITD parameter extraction mode, refer to step 640 of fig. 6. The following description still takes mixed-domain-based ITD parameter extraction as an example.
Step 1: frequency domain coefficient processing may be performed in combination with the energy of the current frame/subframe.
Specifically, suppose the energy of the k-th frequency bin of the current frame/subframe is E(k). If E(k)·L ≤ E_tot, the current frequency bin of the target frequency domain signal can be set to 0; otherwise, the amplitude and IPD of the current frequency bin may be calculated and processed in a manner similar to that described in fig. 7 to obtain the target frequency domain signal, where L is the time-frequency transformation length used when the time domain signals of the left and right channels are transformed into the frequency domain signals of the left and right channels.
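The bin-wise gating in Step 1 can be sketched as a keep/zero decision per frequency bin, assuming, as the text suggests, that a bin is zeroed when its energy scaled by the transform length L does not exceed the frame/subframe energy E_tot:

```python
def gate_bins(bin_energy, L, e_tot):
    """Per-bin gating mask for the target spectrum (sketch of Step 1).
    bin_energy: list of E(k) values; a bin k is zeroed (False) in the
    target frequency domain signal when E(k) * L <= E_tot."""
    return [e * L > e_tot for e in bin_energy]
```

Bins flagged False would simply be written as 0 into the target spectrum; only the surviving bins go through the amplitude and IPD computation below.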
Specifically, the amplitude A_M(k) can be calculated using the following formula:
Figure BDA0000985628170000241
where the amplitude of the left channel frequency domain signal at the k-th frequency bin is:
A_L(k) = √(X_L_real(k)² + X_L_image(k)²)
and the amplitude of the right channel frequency domain signal at the k-th frequency bin is:
A_R(k) = √(X_R_real(k)² + X_R_image(k)²)
Next, the inter-channel phase difference IPD(k) of the left and right channel signals can be calculated using the following formula:
IPD(k) = ∠(L(k)·R*(k)) (47)
where L(k) and R(k) are respectively the k-th frequency bin values of the left and right channel frequency domain signals (each bin value contains a real part and an imaginary part), R*(k) represents the conjugate of the k-th frequency bin value of the right channel frequency domain signal, and the real and imaginary parts of L(k) and R(k) may be constructed based on X_L(k) and X_R(k).
Then, the target frequency domain signal is constructed such that its phase is linearly related to the IPD of the left and right channel signals.
Specifically, the target frequency domain signal may be constructed using the following formula:
X_M_real(k) = A_M(k)·cos(IPD(k)), X_M_image(k) = A_M(k)·sin(IPD(k)) (48)
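Putting the amplitude and phase steps together, one bin of the target spectrum can be sketched as follows. Two assumptions are made because the source formulas are only partially legible: the amplitude combination A_M(k) of formula (46) is assumed to be the geometric mean of the two channel amplitudes, and formula (48) is assumed to place A_M(k) at phase IPD(k):

```python
import cmath
import math

def target_bin(l_k, r_k):
    """One bin of the target frequency domain signal (sketch of (46)-(48)).
    l_k, r_k: complex bin values of the left/right channels.
    Assumptions: A_M(k) = sqrt(A_L(k) * A_R(k)); X_M(k) = A_M * e^(j*IPD(k))."""
    ipd = cmath.phase(l_k * r_k.conjugate())      # formula (47)
    a_m = math.sqrt(abs(l_k) * abs(r_k))          # assumed amplitude combination
    return complex(a_m * math.cos(ipd), a_m * math.sin(ipd))
```

By construction the phase of each output bin equals the IPD of that bin, which is the property the text requires of the target frequency domain signal.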
Step 2: perform frequency-time transformation on the target frequency domain signal to obtain a target time domain signal.
In some embodiments, the target time domain signal may be obtained by performing windowing and IDFT transformation on the target frequency domain signal.
Specifically, the target frequency domain signal may be windowed by using the following formula:
Figure BDA0000985628170000251
Further, IDFT transformation may be performed on the windowed frequency domain signal using the following formula to obtain the target time domain signal:
Figure BDA0000985628170000252
where 0 ≤ n < L/2.
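Step 2 can be sketched with a direct inverse DFT. The windowing of formula (49) is not legible in the source and is omitted here; the half-spectrum is assumed to extend with conjugate symmetry, and only the first L/2 samples are produced, as in formula (50):

```python
import cmath

def idft_half(spec, L):
    """Inverse DFT of a conjugate-symmetric half-spectrum spec (len L/2 + 1),
    returning the first L/2 time-domain samples (sketch; window omitted)."""
    # Rebuild the full spectrum from the half-spectrum by conjugate symmetry.
    full = list(spec) + [spec[L - k].conjugate() for k in range(L // 2 + 1, L)]
    out = []
    for n in range(L // 2):
        acc = sum(full[k] * cmath.exp(2j * cmath.pi * k * n / L)
                  for k in range(L))
        out.append(acc.real / L)
    return out
```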
In addition, in some embodiments, the amplitude A(n) of the target time domain signal may be smoothed to obtain an amplitude smoothed value A_sm(n). For example, different smoothing factors may be used depending on the signal type: a smaller smoothing factor for unvoiced frames and a larger smoothing factor for voiced frames.
Specifically, the amplitude A(n) of the target time domain signal may be calculated using the following formula:
Figure BDA0000985628170000253
Then, A(n) is smoothed using the following formula to obtain the amplitude smoothed value A_sm(n):
A_sm(n) = w1·A_sm^[-1](n) + w2·A(n) (52)
where A_sm^[-1](n) represents the amplitude smoothed value of the n-th point of the previous frame/subframe of the current frame. The smoothing factors w1 and w2 can be set as constants, or set to different values according to the magnitude relation between A_sm^[-1](n) and A(n); w1 and w2 satisfy w1 + w2 = 1. For example, one may set w1 = 0.75, w2 = 0.25, or w1 = 0.8, w2 = 0.2, or w1 = 0.9, w2 = 0.1, or
Figure BDA0000985628170000257
Figure BDA0000985628170000258
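The amplitude smoothing above is a first-order recursion over the previous frame/subframe's smoothed values (a two-tap convex combination with w1 + w2 = 1, as the surrounding text indicates). A minimal sketch:

```python
def smooth_amplitude(a, a_prev_sm, w1=0.75, w2=0.25):
    """Recursive smoothing of the target time-domain amplitudes (formula (52)).
    a: list of A(n) for the current frame/subframe.
    a_prev_sm: list of A_sm(n) from the previous frame/subframe.
    w1 + w2 must equal 1 (e.g. 0.75/0.25, 0.8/0.2, 0.9/0.1)."""
    assert abs(w1 + w2 - 1.0) < 1e-9
    return [w1 * p + w2 * c for p, c in zip(a_prev_sm, a)]
```

A larger w2 tracks the current frame faster (useful for unvoiced frames), while a larger w1 smooths more heavily (useful for voiced frames), matching the signal-type-dependent factor choice described above.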
Step 3: determine the ITD parameter of the current frame or the current subframe according to the index value corresponding to the sampling point with the largest sampling value of the target time domain signal.
Specifically, the index value corresponding to the sampling point with the largest sampling value of the target time domain signal may be determined as the ITD parameter of the current frame or the current subframe. For example, the index value index = arg max(A_sm(n)) corresponding to the sampling point with the largest amplitude of the smoothed time domain signal may be searched to obtain the ITD parameter: ITD = index.
Alternatively, the index value corresponding to the sampling point with the largest sampling value of the target time domain signal may be transformed (e.g., normalized, scaled, etc.), and the transformed value may be determined as the ITD parameter of the current frame or the current subframe.
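Step 3 then reduces to an argmax over the smoothed amplitudes (the optional normalization or scaling of the index mentioned above is omitted in this sketch):

```python
def itd_from_amplitudes(a_sm):
    """ITD = index of the sample with the largest smoothed amplitude,
    i.e. ITD = arg max over n of A_sm(n)."""
    return max(range(len(a_sm)), key=lambda n: a_sm[n])
```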
In the implementation described above, if the current frame is a non-speech frame, the ITD parameter of the previous frame or the previous subframe of the current frame may be determined as the ITD parameter of the current frame, but the embodiment of the present invention is not limited thereto. For example, if the current frame is a non-speech frame, the ITD parameter of the current frame may instead be extracted in the time domain, the frequency domain, or the mixed domain. If the current frame is a speech frame and the previous frame of the current frame is also a voice active frame (that is, the current frame is one of consecutive speech frames), the ITD parameters of consecutive speech frames generally do not fluctuate widely; therefore, if the ITD parameter of the previous frame of the current frame is not a preset value but the calculated ITD parameter of the current frame is a preset value (the preset value may be 0, for example), which may be caused by an error in calculating the ITD parameter of the current frame, the ITD parameter of the previous frame or the previous subframe of the current frame may be determined as the ITD parameter of the current frame. This implementation is described in detail below in conjunction with fig. 10.
Fig. 10 is an exemplary flowchart of a manner of extracting the ITD parameter of the current frame. It should be understood that the steps or operations illustrated in fig. 10 are merely examples; embodiments of the present invention may also perform other operations or variations of the operations in fig. 10. Moreover, the steps in fig. 10 may be performed in an order different from that presented in fig. 10, and possibly not all of the operations in fig. 10 need to be performed.
1010. Perform time-frequency transformation on the time domain signals of the left and right channels.
This step is similar to step 910, and reference may be made to step 910, which is not described in detail herein to avoid repetition.
1020. Determine whether the current frame is a voice active frame.
Specifically, VAD detection may be performed based on the frequency domain signals of the left and right channels. If the current frame is a voice inactive frame, step 1030 is performed; if the current frame is a voice active frame, step 1040 is performed.
1030. Extract the ITD parameter of the current frame using the first ITD parameter extraction mode.
Specifically, the ITD parameters of the current frame may be calculated according to a frequency domain cross-correlation algorithm based on the left and right channel frequency domain coefficients. The frequency domain cross-correlation algorithm can be implemented by the following formula:
Figure BDA0000985628170000271
where L(k) and R(k) are respectively the k-th frequency bin values of the left and right channel frequency domain signals (each bin value contains a real part and an imaginary part), R*(k) represents the conjugate of the k-th frequency bin value of the right channel frequency domain signal, and the real and imaginary parts of L(k) and R(k) may be constructed based on X_L(k) and X_R(k).
1040. Extract the ITD parameter of the current frame using the second ITD parameter extraction mode.
Specifically, the ITD parameter calculated for the current frame may be adjusted based on the left and right channel frequency domain signals, in combination with the ITD parameter of the previous frame of the current frame and/or the number of consecutive frames for which the calculated ITD parameter is zero.
Optionally, as an implementation manner, when VAD detection shows that the current frame is one of consecutive speech frames (that is, the previous frame or previous frames of the current frame are all speech frames), if the ITD parameter of the previous frame of the current frame is not a preset value (the preset value may be 0, for example) but the ITD parameter of the current frame is the preset value, the ITD parameter of the previous frame of the current frame may be used as the ITD parameter of the current frame; otherwise, the initial ITD parameter of the current frame may be determined as the ITD parameter of the current frame.
Optionally, as another implementation manner, when VAD detection shows that the current frame is one of consecutive speech frames (that is, the previous frame or previous frames of the current frame are all speech frames), if the ITD parameter of the previous frame of the current frame is not a preset value (the preset value may be 0, for example) but the ITD parameter of the current frame is the preset value, and the number of consecutively calculated ITD parameters that are the preset value (including the ITD parameter of the current frame) is less than a preset threshold, the ITD parameter of the previous frame of the current frame is used as the ITD parameter of the current frame, and the count of ITD parameters that are consecutively the preset value is increased; otherwise, the initial ITD parameter of the current frame may be determined as the ITD parameter of the current frame.
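The adjustment logic of step 1040 for a continuous speech frame (hold the previous frame's ITD when a run of consecutive preset-value results looks like a calculation error, but only up to a threshold) can be sketched as:

```python
def adjust_itd(itd_calc, itd_prev, zero_run, max_zero_run, preset=0):
    """Sketch of the fig. 10 second extraction mode for a continuous speech frame.
    itd_calc: initial ITD computed for the current frame.
    itd_prev: ITD parameter of the previous frame.
    zero_run: how many consecutive frames so far produced the preset value.
    Returns (itd_used, new_zero_run)."""
    if itd_prev != preset and itd_calc == preset:
        if zero_run + 1 < max_zero_run:
            # Likely a calculation error: hold the previous ITD, extend the run.
            return itd_prev, zero_run + 1
    # Otherwise trust the freshly computed initial ITD and reset the counter.
    return itd_calc, 0
```

Once the run reaches the threshold, the preset value is accepted as the true ITD, which prevents an outdated ITD from being held indefinitely.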
The method of encoding a multi-channel signal according to the embodiments of the present invention is described in detail above with reference to fig. 5 to fig. 10. Encoders according to embodiments of the present invention are described in detail below with reference to fig. 11 and fig. 12. It should be understood that the encoder of fig. 11 or fig. 12 can perform the steps in fig. 5 to fig. 10; to avoid repetition, details are not described here again.
Fig. 11 is a schematic configuration diagram of an encoder of the embodiment of the present invention. The encoder 1100 of fig. 11 includes:
an obtaining unit 1110 configured to obtain a current frame including a multi-channel signal;
a first determining unit 1120, configured to determine feature information according to the multi-channel signal, wherein the feature information includes at least one of a frame type and a signal type of the current frame, the frame type includes a speech frame and/or a non-speech frame, and the signal type includes an unvoiced sound and/or a voiced sound;
a second determining unit 1130, configured to determine an inter-channel time difference ITD parameter of the current frame according to the feature information;
an encoding unit 1140 for encoding the ITD parameters.
Optionally, as an embodiment, the first determining unit 1120 is specifically configured to determine the frame type of the current frame according to the multi-channel signal; the second determining unit 1130 is specifically configured to: determine the ITD parameter of the current frame using a first ITD parameter extraction mode when the current frame is a non-speech frame; and determine the ITD parameter of the current frame using a second ITD parameter extraction mode when the current frame is a speech frame.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to determine the ITD parameter of a previous frame or a previous subframe of the current frame as the ITD parameter of the current frame.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to determine the ITD parameter of the current frame according to the multi-channel signal.
The second determining unit 1130 is specifically configured to: generate a target frequency domain signal according to the multi-channel signal; perform frequency-time transformation on the target frequency domain signal to obtain a target time domain signal; and determine the ITD parameter of the current frame according to the target time domain signal.
The second determining unit 1130 is specifically configured to: determine the amplitude of the target frequency domain signal according to the multi-channel signal; determine the IPD parameter of the multi-channel signal of the current frame according to the multi-channel signal; and generate the target frequency domain signal according to the amplitude of the target frequency domain signal and the IPD parameter of the multi-channel signal of the current frame.
The second determining unit 1130 is specifically configured to, according to
Figure BDA0000985628170000291
determine the amplitude of the target frequency domain signal, where A_M(k) represents the amplitude of the target frequency domain signal, A_1(k) and A_2(k) respectively represent the amplitudes of the frequency domain signals of any two channels of the multi-channel signal, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain.
The second determining unit 1130 is specifically configured to, according to
Figure BDA0000985628170000292
generate the target frequency domain signal, where A_M(k) represents the amplitude of the target frequency domain signal, X_M_real(k) represents the real part of the target frequency domain signal, X_M_image(k) represents the imaginary part of the target frequency domain signal, IPD(k) represents the IPD parameter, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain.
The second determining unit 1130 is specifically configured to generate the target frequency domain signal according to X_M(k) = X_1(k)·X*_2(k), where X_M(k) represents the target frequency domain signal, X_1(k) represents the frequency domain signal of a first channel of the multi-channel signal, X*_2(k) represents the conjugate of the frequency domain signal of a second channel of the multi-channel signal, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain.
The second determining unit 1130 is specifically configured to: determine the frequency domain signal X_M(k) according to X_M(k) = X_1(k)·X*_2(k), where X_1(k) represents the frequency domain signal of a first channel of the multi-channel signal, X*_2(k) represents the conjugate of the frequency domain signal of a second channel of the multi-channel signal, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain; and normalize the amplitude of the frequency domain signal X_M(k) to obtain the target frequency domain signal.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to: determine, according to the multi-channel signal, an initial ITD parameter of the current frame; and smooth the initial ITD parameter of the current frame according to the ITD parameter of the previous frame or the previous subframe of the current frame to obtain the ITD parameter of the current frame.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to determine the ITD parameter of the current frame according to T_sm = w1·T_sm^[-1] + w2·T_1, where T_1 represents the initial ITD parameter of the current frame, T_sm represents the ITD parameter of the current frame, T_sm^[-1] represents the ITD parameter of the previous frame or the previous subframe of the current frame, and w1 and w2 represent smoothing factors, where the values of w1 and w2 both lie in [0, 1] and w1 + w2 = 1.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to: determine, according to the multi-channel signal, the initial ITD parameters of K subframes of the current frame, where K is an integer greater than 1; smooth the initial ITD parameter of each of the K subframes according to the ITD parameter of the subframe previous to that subframe to obtain the ITD parameter of each subframe; and determine the ITD parameters of the K subframes as the ITD parameters of the current frame.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to determine the ITD parameter of each subframe according to T_sm(j) = w1·T_sm(j−1) + w2·T(j), where T(j) represents the initial ITD parameter of the j-th subframe of the K subframes, T_sm(j) represents the ITD parameter of the j-th subframe, T_sm(j−1) represents the ITD parameter of the (j−1)-th subframe of the K subframes, w1 and w2 represent smoothing factors, j is an integer and 1 ≤ j ≤ K, and the values of w1 and w2 both lie in [0, 1] with w1 + w2 = 1.
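The per-subframe recursion T_sm(j) = w1·T_sm(j−1) + w2·T(j) can be sketched as follows (the seed T_sm(0), assumed here to be the smoothed ITD carried over from the subframe preceding the first one, is passed in explicitly):

```python
def smooth_subframe_itds(t_init, t_prev_sm, w1=0.8, w2=0.2):
    """Recursively smooth the initial ITDs of K subframes.
    t_init: [T(1), ..., T(K)] initial ITD parameters of the K subframes.
    t_prev_sm: T_sm of the subframe preceding subframe 1 (assumed seed).
    Returns [T_sm(1), ..., T_sm(K)]; requires w1 + w2 = 1."""
    out = []
    prev = t_prev_sm
    for t in t_init:
        prev = w1 * prev + w2 * t   # T_sm(j) = w1*T_sm(j-1) + w2*T(j)
        out.append(prev)
    return out
```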
Optionally, as an embodiment, a value of the smoothing factor is determined based on a signal type of the current frame.
Optionally, as an embodiment, the first determining unit 1120 is specifically configured to: determine the energy of the multi-channel signal; determine the current frame as a non-speech frame when the energy of the multi-channel signal is less than or equal to a preset energy threshold; and determine the current frame as a speech frame when the energy of the multi-channel signal is greater than the energy threshold.
Optionally, as an embodiment, the encoder further includes: a third determining unit, configured to determine an initial ITD parameter of the current frame according to the multi-channel signal; the second determining unit 1130 is specifically configured to determine the initial ITD parameter of the current frame as the ITD parameter of the current frame, or to adjust the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to determine the ITD parameter of the current frame according to the frame type of the previous frame or previous N frames of the current frame and the initial ITD parameter of the current frame, where N is an integer greater than 1.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to determine, when the frame type of the previous frame or previous N frames of the current frame is a speech frame, the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame.
Optionally, as an embodiment, the second determining unit 1130 is specifically configured to: determine, when the ITD parameter of the previous frame of the current frame is not a preset value and the initial ITD parameter of the current frame is the preset value, the ITD parameter of the previous frame of the current frame as the ITD parameter of the current frame; otherwise, determine the initial ITD parameter of the current frame as the ITD parameter of the current frame.
Fig. 12 is a schematic configuration diagram of an encoder of the embodiment of the present invention. The encoder 1200 of fig. 12 includes:
a memory 1210 for storing programs;
a processor 1220 for executing a program in the memory 1210, wherein when the program is executed, the processor 1220 acquires a current frame including a multi-channel signal; determining feature information according to the multi-channel signal, wherein the feature information comprises at least one of a frame type and a signal type of the current frame, the frame type comprises a speech frame and/or a non-speech frame, and the signal type comprises an unvoiced sound and/or a voiced sound; determining the inter-channel time difference ITD parameter of the current frame according to the characteristic information; encoding the ITD parameters.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine a frame type of the current frame according to the multi-channel signal; under the condition that the current frame is a non-voice frame, determining an ITD parameter of the current frame by adopting a first ITD parameter extraction mode; and under the condition that the current frame is a voice frame, determining the ITD parameter of the current frame by adopting a second ITD parameter extraction mode.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine an ITD parameter of a previous frame or a previous subframe of the current frame as the ITD parameter of the current frame.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine the ITD parameter of the current frame according to the multi-channel signal.
The processor 1220 is specifically configured to generate a target frequency domain signal according to the multi-channel signal; performing frequency-time transformation on the target frequency domain signal to obtain a target time domain signal; and determining the ITD parameter of the current frame according to the target time domain signal.
The processor 1220 is specifically configured to determine an amplitude of the target frequency domain signal according to the multi-channel signal; determining IPD parameters of the current frame multi-channel signal according to the multi-channel signal; and generating the target frequency domain signal according to the amplitude of the target frequency domain signal and the IPD parameter of the current frame multi-channel signal.
The processor 1220 is specifically configured to, according to
Figure BDA0000985628170000321
determine the amplitude of the target frequency domain signal, where A_M(k) represents the amplitude of the target frequency domain signal, A_1(k) and A_2(k) respectively represent the amplitudes of the frequency domain signals of any two channels of the multi-channel signal, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain.
The processor 1220 is specifically configured to, according to
Figure BDA0000985628170000322
generate the target frequency domain signal, where A_M(k) represents the amplitude of the target frequency domain signal, X_M_real(k) represents the real part of the target frequency domain signal, X_M_image(k) represents the imaginary part of the target frequency domain signal, IPD(k) represents the IPD parameter, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain.
The processor 1220 is specifically configured to generate the target frequency domain signal according to X_M(k) = X_1(k)·X*_2(k), where X_M(k) represents the target frequency domain signal, X_1(k) represents the frequency domain signal of a first channel of the multi-channel signal, X*_2(k) represents the conjugate of the frequency domain signal of a second channel of the multi-channel signal, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain.
The processor 1220 is specifically configured to: determine the frequency domain signal X_M(k) according to X_M(k) = X_1(k)·X*_2(k), where X_1(k) represents the frequency domain signal of a first channel of the multi-channel signal, X*_2(k) represents the conjugate of the frequency domain signal of a second channel of the multi-channel signal, k represents a frequency bin, 0 ≤ k ≤ L/2, and L represents the time-frequency transformation length used when the multi-channel signal is transformed from the time domain to the frequency domain; and normalize the amplitude of the frequency domain signal X_M(k) to obtain the target frequency domain signal.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine, according to the multi-channel signal, an initial ITD parameter of the current frame; and smoothing the initial ITD parameter of the current frame according to the ITD parameter of the previous frame or the previous subframe of the current frame to obtain the ITD parameter of the current frame.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine the ITD parameter of the current frame according to T_sm = w1·T_sm^[-1] + w2·T_1, where T_1 represents the initial ITD parameter of the current frame, T_sm represents the ITD parameter of the current frame, T_sm^[-1] represents the ITD parameter of the previous frame or the previous subframe of the current frame, and w1 and w2 represent smoothing factors, where the values of w1 and w2 both lie in [0, 1] and w1 + w2 = 1.
Optionally, as an embodiment, the processor 1220 is specifically configured to: determine, according to the multi-channel signal, the initial ITD parameters of K subframes of the current frame, where K is an integer greater than 1; smooth the initial ITD parameter of each of the K subframes according to the ITD parameter of the subframe previous to that subframe to obtain the ITD parameter of each subframe; and determine the ITD parameters of the K subframes as the ITD parameters of the current frame.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine the ITD parameter of each subframe according to T_sm(j) = w1·T_sm(j−1) + w2·T(j), where T(j) represents the initial ITD parameter of the j-th subframe of the K subframes, T_sm(j) represents the ITD parameter of the j-th subframe, T_sm(j−1) represents the ITD parameter of the (j−1)-th subframe of the K subframes, w1 and w2 represent smoothing factors, j is an integer and 1 ≤ j ≤ K, and the values of w1 and w2 both lie in [0, 1] with w1 + w2 = 1.
Optionally, as an embodiment, a value of the smoothing factor is determined based on a signal type of the current frame.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine an energy of the multi-channel signal; determining the current frame as a non-speech frame under the condition that the energy of the multi-channel signal is less than or equal to a preset energy threshold value; and determining the current frame as a speech frame if the energy of the multi-channel signal is greater than the energy threshold.
Optionally, as an embodiment, the processor 1220 is further configured to determine an initial ITD parameter of the current frame according to the multi-channel signal; the processor 1220 is specifically configured to determine the initial ITD parameter of the current frame as the ITD parameter of the current frame, or to adjust the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine the ITD parameter of the current frame according to a frame type of a previous frame or a previous N frame of the current frame and an initial ITD parameter of the current frame, where N is an integer greater than 1.
Optionally, as an embodiment, the processor 1220 is specifically configured to, when the frame type of the previous frame or the previous N frames of the current frame is a speech frame, determine the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame.
Optionally, as an embodiment, the processor 1220 is specifically configured to determine, when the ITD parameter of the previous frame of the current frame is not a preset value and the initial ITD parameter of the current frame is a preset value, the ITD parameter of the previous frame of the current frame as the ITD parameter of the current frame; otherwise, the initial ITD parameters of the current frame may be determined as the ITD parameters of the current frame.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (26)

1. A method of encoding a multi-channel signal, comprising:
acquiring a current frame containing a multi-channel signal;
determining feature information according to the multi-channel signal, wherein the feature information comprises at least one of a frame type and a signal type of the current frame, the frame type comprises a speech frame and/or a non-speech frame, and the signal type comprises an unvoiced sound and/or a voiced sound;
determining the inter-channel time difference ITD parameter of the current frame according to the characteristic information;
and encoding the ITD parameter.
2. The method of claim 1, wherein the determining feature information from the multi-channel signal comprises:
determining the frame type of the current frame according to the multi-channel signal;
the determining the ITD parameter of the current frame according to the characteristic information comprises:
under the condition that the current frame is a non-speech frame, determining the ITD parameter of the current frame by adopting a first ITD parameter extraction mode;
and under the condition that the current frame is a speech frame, determining the ITD parameter of the current frame by adopting a second ITD parameter extraction mode.
3. The method of claim 2, wherein the determining the ITD parameter of the current frame by using the first ITD parameter extraction mode comprises:
determining the ITD parameter of the previous frame or the previous subframe of the current frame as the ITD parameter of the current frame.
4. The method according to claim 2 or 3, wherein said determining the ITD parameters of the current frame by using the second ITD parameter extraction method comprises:
and determining the ITD parameter of the current frame according to the multi-channel signal.
5. The method of claim 4, wherein the determining the ITD parameters of the current frame from the multi-channel signal comprises:
determining an initial ITD parameter of the current frame according to the multi-channel signal;
and smoothing the initial ITD parameter of the current frame according to the ITD parameter of the previous frame or the previous subframe of the current frame to obtain the ITD parameter of the current frame.
6. The method according to claim 5, wherein the smoothing the initial ITD parameter of the current frame according to the ITD parameter of the previous frame or the previous subframe of the current frame to obtain the ITD parameter of the current frame comprises:
determining the ITD parameter of the current frame according to Tsm = w1*Tsm[-1] + w2*T1, wherein T1 represents the initial ITD parameter of the current frame, Tsm represents the ITD parameter of the current frame, Tsm[-1] represents the ITD parameter of the previous frame or the previous subframe of the current frame, and w1 and w2 represent smoothing factors, wherein the values of w1 and w2 are both in [0, 1] and w1 + w2 = 1.
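The smoothing in claim 6 amounts to a one-tap recursive filter. The following is a minimal sketch, with illustrative weights w1 = 0.75 and w2 = 0.25 chosen for the example — the claim only constrains w1 and w2 to [0, 1] with w1 + w2 = 1.

```python
def smooth_itd(itd_prev, itd_init, w1=0.75, w2=0.25):
    """Tsm = w1 * Tsm[-1] + w2 * T1: blend the ITD of the previous
    frame (or subframe) with the current frame's initial ITD."""
    # Enforce the claim's constraints on the smoothing factors.
    assert 0.0 <= w1 <= 1.0 and 0.0 <= w2 <= 1.0
    assert abs(w1 + w2 - 1.0) < 1e-12
    return w1 * itd_prev + w2 * itd_init
```

With itd_prev = 8 and itd_init = 16, the smoothed ITD is 0.75*8 + 0.25*16 = 10: the larger w1 is, the more slowly the ITD tracks the new estimate.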
7. The method of claim 4, wherein the determining the ITD parameters of the current frame from the multi-channel signal comprises:
determining initial ITD parameters of K sub-frames of the current frame according to the multi-channel signal, wherein K is an integer larger than 1;
according to the ITD parameter of the previous subframe of each subframe in the K subframes, carrying out smoothing treatment on the initial ITD parameter of each subframe to obtain the ITD parameter of each subframe;
determining the ITD parameters of the K sub-frames as the ITD parameters of the current frame.
8. The method according to claim 7, wherein the smoothing the initial ITD parameter of each subframe according to the ITD parameter of the previous subframe of each subframe of the K subframes to obtain the ITD parameter of each subframe comprises:
determining the ITD parameter of each subframe according to Tsm(j) = w1*Tsm(j-1) + w2*T(j), wherein T(j) represents the initial ITD parameter of the j-th subframe of the K subframes, Tsm(j) represents the ITD parameter of the j-th subframe, Tsm(j-1) represents the ITD parameter of the (j-1)-th subframe of the K subframes, w1 and w2 represent smoothing factors, j is an integer and 1 ≤ j ≤ K, and the values of w1 and w2 are both in [0, 1] and w1 + w2 = 1.
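Applied per subframe as in claim 8, the same recursion chains through the K subframes. The sketch below seeds the recursion with the smoothed ITD of the subframe preceding the current frame; the weights are again illustrative.

```python
def smooth_subframe_itds(seed_itd, initial_itds, w1=0.5, w2=0.5):
    """Tsm(j) = w1 * Tsm(j-1) + w2 * T(j) for j = 1..K, where
    initial_itds holds T(1)..T(K) and seed_itd plays the role of
    Tsm(0), i.e. the smoothed ITD of the preceding subframe."""
    smoothed = []
    prev = seed_itd
    for t_j in initial_itds:
        # Each subframe's smoothed ITD feeds the next subframe.
        prev = w1 * prev + w2 * t_j
        smoothed.append(prev)
    return smoothed
```

For K = 2 with seed 0 and initial ITDs [8, 8], this yields [4.0, 6.0]: the smoothed values converge toward the raw estimates one subframe at a time.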
9. The method of claim 6 or 8, wherein a value of the smoothing factor is determined based on a signal type of the current frame.
10. The method of claim 2, wherein the method further comprises:
determining an initial ITD parameter of the current frame according to the multi-channel signal;
the determining the ITD parameter of the current frame by adopting the first ITD parameter extraction mode comprises the following steps:
determining the initial ITD parameter of the current frame as the ITD parameter of the current frame;
the determining the ITD parameter of the current frame by using the second ITD parameter extraction method includes:
and adjusting the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame.
11. The method of claim 10, wherein the adjusting initial ITD parameters of the current frame to obtain ITD parameters of the current frame comprises:
and determining the ITD parameter of the current frame according to the frame type of the previous frame or the previous N frames of the current frame and the initial ITD parameter of the current frame, wherein N is an integer greater than 1.
12. The method of claim 11, wherein the determining the ITD parameters of the current frame according to the frame type of the previous frame or the previous N frames of the current frame and the initial ITD parameters of the current frame comprises:
and under the condition that the frame type of the previous frame or the previous N frames of the current frame is a speech frame, determining the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame.
13. The method of claim 12, wherein the determining the ITD parameters of the current frame based on the ITD parameters of the previous frame of the current frame and the initial ITD parameters of the current frame comprises:
determining the ITD parameter of the previous frame of the current frame as the ITD parameter of the current frame under the condition that the ITD parameter of the previous frame of the current frame is not a preset value and the initial ITD parameter of the current frame is the preset value; otherwise, determining the initial ITD parameter of the current frame as the ITD parameter of the current frame.
14. An encoder, comprising:
an acquisition unit configured to acquire a current frame including a multi-channel signal;
a first determining unit, configured to determine feature information according to the multi-channel signal, wherein the feature information includes at least one of a frame type and a signal type of the current frame, the frame type includes a speech frame and/or a non-speech frame, and the signal type includes an unvoiced sound and/or a voiced sound;
a second determining unit, configured to determine, according to the feature information, an inter-channel time difference ITD parameter of the current frame;
and the coding unit is used for coding the ITD parameters.
15. The encoder of claim 14, wherein the first determining unit is specifically configured to determine the frame type of the current frame based on the multi-channel signal; and the second determining unit is specifically configured to determine the ITD parameter of the current frame in a first ITD parameter extraction mode when the current frame is a non-speech frame, and determine the ITD parameter of the current frame in a second ITD parameter extraction mode when the current frame is a speech frame.
16. The encoder of claim 15, wherein the second determining unit is specifically configured to determine ITD parameters of a previous frame or a previous subframe of the current frame as the ITD parameters of the current frame.
17. The encoder according to claim 15 or 16, wherein the second determining unit is specifically configured to determine the ITD parameters of the current frame based on the multi-channel signal.
18. The encoder of claim 17, wherein the second determining unit is specifically configured to determine initial ITD parameters of the current frame based on the multi-channel signal; and smoothing the initial ITD parameter of the current frame according to the ITD parameter of the previous frame or the previous subframe of the current frame to obtain the ITD parameter of the current frame.
19. The encoder of claim 18, wherein the second determining unit is specifically configured to determine the ITD parameter of the current frame according to Tsm = w1*Tsm[-1] + w2*T1, wherein T1 represents the initial ITD parameter of the current frame, Tsm represents the ITD parameter of the current frame, Tsm[-1] represents the ITD parameter of the previous frame or the previous subframe of the current frame, and w1 and w2 represent smoothing factors, wherein the values of w1 and w2 are both in [0, 1] and w1 + w2 = 1.
20. The encoder of claim 17, wherein the second determining unit is specifically configured to determine initial ITD parameters for K subframes of the current frame based on the multi-channel signal, K being an integer greater than 1; according to the ITD parameter of the previous subframe of each subframe in the K subframes, carrying out smoothing treatment on the initial ITD parameter of each subframe to obtain the ITD parameter of each subframe; determining the ITD parameters of the K sub-frames as the ITD parameters of the current frame.
21. The encoder of claim 20, wherein the second determining unit is specifically configured to determine the ITD parameter of each subframe according to Tsm(j) = w1*Tsm(j-1) + w2*T(j), wherein T(j) represents the initial ITD parameter of the j-th subframe of the K subframes, Tsm(j) represents the ITD parameter of the j-th subframe, Tsm(j-1) represents the ITD parameter of the (j-1)-th subframe of the K subframes, w1 and w2 represent smoothing factors, j is an integer and 1 ≤ j ≤ K, and the values of w1 and w2 are both in [0, 1] and w1 + w2 = 1.
22. The encoder of claim 19 or 21, wherein the value of the smoothing factor is determined based on a signal type of the current frame.
23. The encoder of claim 15, wherein the encoder further comprises:
a third determining unit, configured to determine an initial ITD parameter of the current frame according to the multi-channel signal;
the second determining unit is specifically configured to determine the initial ITD parameter of the current frame as the ITD parameter of the current frame, or to adjust the initial ITD parameter of the current frame to obtain the ITD parameter of the current frame.
24. The encoder of claim 23, wherein the second determining unit is specifically configured to determine the ITD parameter of the current frame according to the frame type of the previous frame or the previous N frames of the current frame and the initial ITD parameter of the current frame, wherein N is an integer greater than 1.
25. The encoder of claim 24, wherein the second determining unit is specifically configured to determine the ITD parameter of the current frame according to the ITD parameter of the previous frame of the current frame and the initial ITD parameter of the current frame when the frame type of the previous frame or the previous N frames of the current frame is a speech frame.
26. The encoder of claim 25, wherein the second determining unit is specifically configured to determine the ITD parameter of the previous frame of the current frame as the ITD parameter of the current frame if the ITD parameter of the previous frame of the current frame is not a preset value and the initial ITD parameter of the current frame is the preset value; otherwise, determine the initial ITD parameter of the current frame as the ITD parameter of the current frame.
CN201610303992.4A 2016-05-10 2016-05-10 Coding method and coder for multi-channel signal Active CN107358959B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610303992.4A CN107358959B (en) 2016-05-10 2016-05-10 Coding method and coder for multi-channel signal
PCT/CN2016/103596 WO2017193551A1 (en) 2016-05-10 2016-10-27 Method for encoding multi-channel signal and encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610303992.4A CN107358959B (en) 2016-05-10 2016-05-10 Coding method and coder for multi-channel signal

Publications (2)

Publication Number Publication Date
CN107358959A CN107358959A (en) 2017-11-17
CN107358959B true CN107358959B (en) 2021-10-26

Family

ID=60266105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610303992.4A Active CN107358959B (en) 2016-05-10 2016-05-10 Coding method and coder for multi-channel signal

Country Status (2)

Country Link
CN (1) CN107358959B (en)
WO (1) WO2017193551A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859749A (en) * 2017-11-30 2019-06-07 阿里巴巴集团控股有限公司 A kind of voice signal recognition methods and device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN1428953A (en) * 2002-04-22 2003-07-09 西安大唐电信有限公司 Implement method of multi-channel AMR vocoder and its equipment
CN101517637A (en) * 2006-09-18 2009-08-26 皇家飞利浦电子股份有限公司 Encoding and decoding of audio objects
CN102216983A (en) * 2008-11-19 2011-10-12 摩托罗拉移动公司 Apparatus and method for encoding at least one parameter associated with a signal source
CN103295577A (en) * 2013-05-27 2013-09-11 深圳广晟信源技术有限公司 Analysis window switching method and device for audio signal coding
CN103339670A (en) * 2011-02-03 2013-10-02 瑞典爱立信有限公司 Determining the inter-channel time difference of a multi-channel audio signal

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP4887288B2 (en) * 2005-03-25 2012-02-29 パナソニック株式会社 Speech coding apparatus and speech coding method
GB2470059A (en) * 2009-05-08 2010-11-10 Nokia Corp Multi-channel audio processing using an inter-channel prediction model to form an inter-channel parameter
JP5753540B2 (en) * 2010-11-17 2015-07-22 パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America Stereo signal encoding device, stereo signal decoding device, stereo signal encoding method, and stereo signal decoding method

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN1428953A (en) * 2002-04-22 2003-07-09 西安大唐电信有限公司 Implement method of multi-channel AMR vocoder and its equipment
CN101517637A (en) * 2006-09-18 2009-08-26 皇家飞利浦电子股份有限公司 Encoding and decoding of audio objects
CN102216983A (en) * 2008-11-19 2011-10-12 摩托罗拉移动公司 Apparatus and method for encoding at least one parameter associated with a signal source
CN103339670A (en) * 2011-02-03 2013-10-02 瑞典爱立信有限公司 Determining the inter-channel time difference of a multi-channel audio signal
CN103295577A (en) * 2013-05-27 2013-09-11 深圳广晟信源技术有限公司 Analysis window switching method and device for audio signal coding

Also Published As

Publication number Publication date
WO2017193551A1 (en) 2017-11-16
CN107358959A (en) 2017-11-17

Similar Documents

Publication Publication Date Title
JP7443423B2 (en) Multichannel signal encoding method and encoder
JP6641018B2 (en) Apparatus and method for estimating time difference between channels
JP7273080B2 (en) Method and encoder for encoding multi-channel signals
US8831958B2 (en) Method and an apparatus for a bandwidth extension using different schemes
JP5680755B2 (en) System, method, apparatus and computer readable medium for noise injection
EP3776541B1 (en) Apparatus, method or computer program for estimating an inter-channel time difference
CN108885876B (en) Optimized encoding and decoding of spatialization information for parametric encoding and decoding of a multi-channel audio signal
EP3457402B1 (en) Noise-adaptive voice signal processing method and terminal device employing said method
US11790922B2 (en) Apparatus for encoding or decoding an encoded multichannel signal using a filling signal generated by a broad band filter
JP2015517121A (en) Inter-channel difference estimation method and spatial audio encoding device
CN107358959B (en) Coding method and coder for multi-channel signal
CN107358961B (en) Coding method and coder for multi-channel signal
CN107358960B (en) Coding method and coder for multi-channel signal
US20240185865A1 (en) Method and device for multi-channel comfort noise injection in a decoded sound signal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant