Over-determined source separation and localization using distributed microphones

Wang et al., 2016

Document ID
546799491858924491
Author
Wang L
Reiss J
Cavallaro A
Publication year
2016
Publication venue
IEEE/ACM Transactions on Audio, Speech, and Language Processing

Snippet

We propose an overdetermined source separation and localization method for a set of M microphones distributed around an unknown number, N < M, of sources. We reformulate the overdetermined acoustic mixing procedure with a new determined mixing model and apply …
Continue reading at ieeexplore.ieee.org (PDF)
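The snippet breaks off before the reformulation itself. As a minimal sketch of the kind of model the abstract refers to, assuming a standard STFT-domain formulation (the symbols x, s, n, A, B and r below are illustrative and not taken from the paper):

    % Over-determined mixing: M microphone observations x(f,t), N < M sources s(f,t),
    % a tall mixing matrix A(f) of size M x N, and additive noise n(f,t).
    \[
      \mathbf{x}(f,t) = \mathbf{A}(f)\,\mathbf{s}(f,t) + \mathbf{n}(f,t), \qquad N < M .
    \]
    % One generic way to obtain a determined model is to pad the source vector with
    % M - N residual components r(f,t), so that the mixing matrix B(f) becomes square
    % (M x M) and the system can be inverted by a demixing filter.
    \[
      \mathbf{x}(f,t) = \mathbf{B}(f)\,\tilde{\mathbf{s}}(f,t), \qquad
      \tilde{\mathbf{s}}(f,t) =
      \begin{bmatrix} \mathbf{s}(f,t) \\ \mathbf{r}(f,t) \end{bmatrix}, \qquad
      \mathbf{B}(f) \in \mathbb{C}^{M \times M} .
    \]

This is only a generic illustration of turning an over-determined mixture into a determined one; the paper's actual model and its localization step are described in the linked full text.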

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R3/005 - Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/80 - Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
    • G01S3/802 - Systems for determining direction or deviation from predetermined direction
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L21/0216 - Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161 - Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02166 - Microphone arrays; Beamforming
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04S - STEREOPHONIC SYSTEMS
    • H04S2400/00 - Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/15 - Aspects of sound capture and related signal processing for recording or reproduction
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272 - Voice signal separating
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 - Details of transducers, loudspeakers or microphones
    • H04R1/20 - Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers

Similar Documents

Wang et al. Over-determined source separation and localization using distributed microphones
Nguyen et al. Robust source counting and DOA estimation using spatial pseudo-spectrum and convolutional neural network
Li et al. Multiple-speaker localization based on direct-path features and likelihood maximization with spatial sparsity regularization
Li et al. Estimation of the direct-path relative transfer function for supervised sound-source localization
Zohourian et al. Binaural speaker localization integrated into an adaptive beamformer for hearing aids
Wang et al. An iterative approach to source counting and localization using two distant microphones
Taseska et al. Informed spatial filtering for sound extraction using distributed microphone arrays
EP3501026B1 (en) Blind source separation using similarity measure
Li et al. Reverberant sound localization with a robot head based on direct-path relative transfer function
Khan et al. Video-aided model-based source separation in real reverberant rooms
Wang et al. Pseudo-determined blind source separation for ad-hoc microphone networks
Wang et al. Spatially informed independent vector analysis for source extraction based on the convolutive transfer function model
Pertilä Online blind speech separation using multiple acoustic speaker tracking and time–frequency masking
Zheng et al. BSS for improved interference estimation for blind speech signal extraction with two microphones
Li et al. Local relative transfer function for sound source localization
Pasha et al. Blind speaker counting in highly reverberant environments by clustering coherence features
Meier et al. Analysis of the performance and limitations of ICA-based relative impulse response identification
Zohny et al. Modelling interaural level and phase cues with Student's t-distribution for robust clustering in MESSL
Drude et al. DOA-estimation based on a complex Watson kernel method
Schwartz et al. Array configuration mismatch in deep DOA estimation: Towards robust training
Yang et al. Independent vector analysis assisted adaptive beamforming for speech source separation with an acoustic vector sensor
Dang et al. Multiple sound source localization based on a multi-dimensional assignment model
Firoozabadi et al. Combination of nested microphone array and subband processing for multiple simultaneous speaker localization
Gburrek et al. On source-microphone distance estimation using convolutional recurrent neural networks
Zohny et al. Variational EM for clustering interaural phase cues in MESSL for blind source separation of speech