Kawamura et al., 2003 - Google Patents
A noise reduction method based on linear prediction analysis
- Document ID
- 16217742598759977370
- Author
- Kawamura A
- Fujii K
- Itoh Y
- Fukui Y
- Publication year
- 2003
- Publication venue
- Electronics and Communications in Japan (Part III: Fundamental Electronic Science)
Snippet
The linear prediction method works in such a way that the difference (prediction residual) between the objective signal and the output generated as a linear combination of the input signals is uncorrelated with those inputs. Hence, in this linear prediction …
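The snippet above describes the core property that linear-prediction-based noise reduction relies on: the residual of a good linear predictor is (approximately) uncorrelated with the samples used to form the prediction, so the predictable part of the signal and the unpredictable part can be separated. As an illustration only, and not the authors' actual algorithm, the following minimal NumPy sketch estimates prediction coefficients by least squares (akin to the covariance method of LP analysis) and forms the residual; the function name `lp_residual`, the prediction order, and the test signal are assumptions made for the example.

```python
# Minimal sketch (not the method of Kawamura et al.): estimate linear
# prediction coefficients by least squares and compute the prediction
# residual, which ideally is uncorrelated with the past samples used
# to form the prediction.
import numpy as np

def lp_residual(x, order=10):
    """Return LP coefficients and the prediction residual of signal x."""
    N = len(x)
    # Row for target sample x[t] holds the past samples x[t-1], ..., x[t-order].
    X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])
    y = x[order:]
    # Least-squares solution for the prediction coefficients.
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ a
    return a, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(8000) / 8000.0
    clean = np.sin(2 * np.pi * 200 * t)            # highly predictable component
    noisy = clean + 0.1 * rng.standard_normal(t.size)
    a, e = lp_residual(noisy, order=10)
    # The residual should carry mostly the unpredictable (noise-like) part.
    print("residual std / signal std:", e.std() / noisy.std())
```

In this toy setup the sinusoid is well predicted by a short linear combination of past samples, so most of its energy is removed from the residual, while the added white noise remains; a noise reduction scheme can then exploit this separation.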
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02166—Microphone arrays; Beamforming
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0232—Processing in the frequency domain
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/27—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 characterised by the analysis technique
- G10L25/30—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 characterised by the analysis technique using neural networks
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/93—Discriminating between voiced and unvoiced parts of speech signals
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00-G10L21/00 characterised by the type of extracted parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signal, using source filter models or psychoacoustic analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K11/00—Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
- G10K11/16—Methods or devices for protecting against, or damping of, acoustic waves, e.g. sound
- G10K11/175—Methods or devices for protecting against, or damping of, acoustic waves, e.g. sound using interference effects; Masking sound
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
Similar Documents
Publication | Title |
---|---|
CN109065067B (en) | Conference terminal voice noise reduction method based on neural network model |
Kuklasiński et al. | Maximum likelihood PSD estimation for speech enhancement in reverberation and noise |
Habets | Speech dereverberation using statistical reverberation models |
Schwartz et al. | An expectation-maximization algorithm for multimicrophone speech dereverberation and noise reduction with coherence matrix estimation |
Habets et al. | Joint dereverberation and residual echo suppression of speech signals in noisy environments |
Taha et al. | A survey on techniques for enhancing speech |
Yen et al. | Adaptive co-channel speech separation and recognition |
Jaiswal et al. | Implicit wiener filtering for speech enhancement in non-stationary noise |
Kawamura et al. | A noise reduction method based on linear prediction analysis |
Cao et al. | Multichannel speech separation by eigendecomposition and its application to co-talker interference removal |
Kawamura et al. | A new noise reduction method using estimated noise spectrum |
Kawamura et al. | A new noise reduction method using linear prediction error filter and adaptive digital filter |
Yadav et al. | Joint Dereverberation and Beamforming With Blind Estimation of the Shape Parameter of the Desired Source Prior |
Prasad et al. | Two microphone technique to improve the speech intelligibility under noisy environment |
Acero et al. | Towards environment-independent spoken language systems |
KR101537653B1 (en) | Method and system for noise reduction based on spectral and temporal correlations |
Giri et al. | A novel target speaker dependent postfiltering approach for multichannel speech enhancement |
Abutalebi et al. | Speech dereverberation in noisy environments using an adaptive minimum mean square error estimator |
Paul | A robust vocoder with pitch-adaptive spectral envelope estimation and an integrated maximum-likelihood pitch estimator |
Hidri et al. | A multichannel beamforming-based framework for speech extraction |
Rustrana et al. | Spectral Methods for Single Channel Speech Enhancement in Multi-Source Environment |
Prasad | Speech enhancement for multi microphone using kepstrum approach |
Lebart et al. | A new method based on spectral subtraction for the suppression of late reverberation from speech signals |
Sasaoka et al. | A noise reduction system for wideband and sinusoidal noise based on adaptive line enhancer and inverse filter |
Akagi et al. | Speech enhancement and segregation based on human auditory mechanisms |