Deep neural network techniques for monaural speech enhancement and separation: state of the art analysis

Published: 25 October 2023

Abstract

Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision, where they have achieved great success in tasks such as machine translation and image generation. Owing to this success, these data-driven techniques have also been applied in the audio domain. More specifically, DNN models have been applied to speech enhancement and separation to perform speech denoising, dereverberation, speaker extraction and speaker separation. In this paper, we review the DNN techniques currently employed for speech enhancement and separation. The review covers the whole pipeline, from feature extraction and how DNN-based models capture both global and local features of speech, through model training (supervised and unsupervised), to how the label ambiguity problem is addressed. It also covers the use of domain adaptation techniques and pre-trained models to boost the speech enhancement process. In doing so, we aim to provide an all-inclusive reference for the state-of-the-art DNN-based techniques applied to speech separation and enhancement, and we further discuss future research directions. This survey can be used by both academic researchers and industry practitioners working in the speech separation and enhancement domain.
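
As an illustration of the label ambiguity problem mentioned in the abstract, the sketch below shows the widely used utterance-level permutation invariant training (uPIT) criterion: the separation loss is evaluated under every assignment of estimated sources to reference sources, and the lowest-loss permutation is used for training. This is a minimal NumPy sketch of the general idea, not an implementation from any of the surveyed systems; the function name and toy signals are illustrative.

# Minimal sketch of an utterance-level permutation invariant training (uPIT)
# loss. The loss is computed for every assignment of estimated sources to
# reference sources, and the permutation with the lowest error is used, which
# resolves the label ambiguity between output channels and target speakers.
# Illustrative example only; names and toy signals are hypothetical.
from itertools import permutations

import numpy as np


def pit_mse_loss(estimates, references):
    """estimates, references: arrays of shape (num_speakers, num_samples)."""
    num_speakers = estimates.shape[0]
    best_loss, best_perm = np.inf, None
    for perm in permutations(range(num_speakers)):
        # MSE when estimated source i is assigned to reference source perm[i].
        loss = np.mean((estimates - references[list(perm)]) ** 2)
        if loss < best_loss:
            best_loss, best_perm = loss, perm
    return best_loss, best_perm


# Toy usage: two estimates that match the references in swapped order.
t = np.linspace(0.0, 1.0, 8000)
references = np.stack([np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 3 * t)])
estimates = references[::-1] + 0.01 * np.random.randn(*references.shape)
loss, perm = pit_mse_loss(estimates, references)
print(f"best permutation: {perm}, loss: {loss:.4f}")

In practice the same minimum-over-permutations idea is applied to whichever separation loss the model is trained with (e.g. SI-SDR instead of MSE), and for large numbers of speakers the exhaustive permutation search is typically replaced by an assignment algorithm.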

        Published In

        Artificial Intelligence Review, Volume 56, Issue Suppl 3, Dec 2023, 1028 pages

        Publisher

        Kluwer Academic Publishers, United States

        Publication History

        Published: 25 October 2023
        Accepted: 18 September 2023

        Author Tags

        1. Dereverberation
        2. Denoising
        3. Speaker separation
        4. Deep neural network
        5. Time-domain
        6. Speech enhancement
        7. Speech extraction
