Liang Lu 0001
Person information
- affiliation: Microsoft, Bellevue, WA, USA
- affiliation (former): Toyota Technological Institute at Chicago, Chicago, IL, USA
- affiliation (former): University of Edinburgh, Centre for Speech Technology Research, UK
Other persons with the same name
- Liang Lu — disambiguation page
- Liang Lu 0002 — Beijing University of Posts and Telecommunications, Beijing, China
- Liang Lu 0003 — Sichuan University, College of Electronics and Information Engineering, China
- Liang Lu 0004 — Jilin University, College of Communication Engineering, Changchun, China
- Liang Lu 0005 — Technical University of Madrid, Spain
- Liang Lu 0006 — Tongji University, School of Mechanical Engineering, Shanghai, China
2020 – today
- 2022
  - [c42] Liang Lu, Jinyu Li, Yifan Gong: Endpoint Detection for Streaming End-to-End Multi-Talker ASR. ICASSP 2022: 7312-7316
  - [c41] Desh Raj, Liang Lu, Zhuo Chen, Yashesh Gaur, Jinyu Li: Continuous Streaming Multi-Talker ASR with Dual-Path Transducers. ICASSP 2022: 7317-7321
  - [i27] Liang Lu, Jinyu Li, Yifan Gong: Endpoint Detection for Streaming End-to-End Multi-talker ASR. CoRR abs/2201.09979 (2022)
- 2021
  - [j7] Liang Lu, Naoyuki Kanda, Jinyu Li, Yifan Gong: Streaming End-to-End Multi-Talker Speech Recognition. IEEE Signal Process. Lett. 28: 803-807 (2021)
  - [c40] Eric Sun, Liang Lu, Zhong Meng, Yifan Gong: Sequence-Level Self-Teaching Regularization. ICASSP 2021: 2945-2949
  - [c39] Naoyuki Kanda, Zhong Meng, Liang Lu, Yashesh Gaur, Xiaofei Wang, Zhuo Chen, Takuya Yoshioka: Minimum Bayes Risk Training for End-to-End Speaker-Attributed ASR. ICASSP 2021: 6503-6507
  - [c38] Zhong Meng, Naoyuki Kanda, Yashesh Gaur, Sarangarajan Parthasarathy, Eric Sun, Liang Lu, Xie Chen, Jinyu Li, Yifan Gong: Internal Language Model Training for Domain-Adaptive End-To-End Speech Recognition. ICASSP 2021: 7338-7342
  - [c37] Liang Lu, Naoyuki Kanda, Jinyu Li, Yifan Gong: Streaming Multi-Talker Speech Recognition with Joint Speaker Identification. Interspeech 2021: 1782-1786
  - [c36] Zhong Meng, Yu Wu, Naoyuki Kanda, Liang Lu, Xie Chen, Guoli Ye, Eric Sun, Jinyu Li, Yifan Gong: Minimum Word Error Rate Training with Language Model Fusion for End-to-End Speech Recognition. Interspeech 2021: 2596-2600
  - [c35] Liang Lu, Zhong Meng, Naoyuki Kanda, Jinyu Li, Yifan Gong: On Minimum Word Error Rate Training of the Hybrid Autoregressive Transducer. Interspeech 2021: 3435-3439
  - [c34] Zhong Meng, Sarangarajan Parthasarathy, Eric Sun, Yashesh Gaur, Naoyuki Kanda, Liang Lu, Xie Chen, Rui Zhao, Jinyu Li, Yifan Gong: Internal Language Model Estimation for Domain-Adaptive End-to-End Speech Recognition. SLT 2021: 243-250
  - [i26] Zhong Meng, Naoyuki Kanda, Yashesh Gaur, Sarangarajan Parthasarathy, Eric Sun, Liang Lu, Xie Chen, Jinyu Li, Yifan Gong: Internal Language Model Training for Domain-Adaptive End-to-End Speech Recognition. CoRR abs/2102.01380 (2021)
  - [i25] Liang Lu, Naoyuki Kanda, Jinyu Li, Yifan Gong: Streaming Multi-talker Speech Recognition with Joint Speaker Identification. CoRR abs/2104.02109 (2021)
  - [i24] Zhong Meng, Yu Wu, Naoyuki Kanda, Liang Lu, Xie Chen, Guoli Ye, Eric Sun, Jinyu Li, Yifan Gong: Minimum Word Error Rate Training with Language Model Fusion for End-to-End Speech Recognition. CoRR abs/2106.02302 (2021)
  - [i23] Desh Raj, Liang Lu, Zhuo Chen, Yashesh Gaur, Jinyu Li: Continuous Streaming Multi-Talker ASR with Dual-path Transducers. CoRR abs/2109.08555 (2021)
- 2020
  - [c33] Hirofumi Inaguma, Yashesh Gaur, Liang Lu, Jinyu Li, Yifan Gong: Minimum Latency Training Strategies for Streaming Sequence-to-Sequence ASR. ICASSP 2020: 6064-6068
  - [c32] Hu Hu, Rui Zhao, Jinyu Li, Liang Lu, Yifan Gong: Exploring Pre-Training with Alignments for RNN Transducer Based End-to-End Speech Recognition. ICASSP 2020: 7079-7083
  - [c31] Zhuo Chen, Takuya Yoshioka, Liang Lu, Tianyan Zhou, Zhong Meng, Yi Luo, Jian Wu, Xiong Xiao, Jinyu Li: Continuous Speech Separation: Dataset and Analysis. ICASSP 2020: 7284-7288
  - [c30] Chengyi Wang, Yu Wu, Yujiao Du, Jinyu Li, Shujie Liu, Liang Lu, Shuo Ren, Guoli Ye, Sheng Zhao, Ming Zhou: Semantic Mask for Transformer Based End-to-End Speech Recognition. INTERSPEECH 2020: 971-975
  - [c29] Jeremy Heng Meng Wong, Yashesh Gaur, Rui Zhao, Liang Lu, Eric Sun, Jinyu Li, Yifan Gong: Combination of End-to-End and Hybrid Models for Speech Recognition. INTERSPEECH 2020: 1783-1787
  - [c28] Chengyi Wang, Yu Wu, Liang Lu, Shujie Liu, Jinyu Li, Guoli Ye, Ming Zhou: Low Latency End-to-End Streaming Speech Recognition with a Scout Network. INTERSPEECH 2020: 2112-2116
  - [c27] Liang Lu, Changliang Liu, Jinyu Li, Yifan Gong: Exploring Transformers for Large-Scale Speech Recognition. INTERSPEECH 2020: 5041-5045
  - [i22] Zhuo Chen, Takuya Yoshioka, Liang Lu, Tianyan Zhou, Zhong Meng, Yi Luo, Jian Wu, Jinyu Li: Continuous speech separation: dataset and analysis. CoRR abs/2001.11482 (2020)
  - [i21] Hirofumi Inaguma, Yashesh Gaur, Liang Lu, Jinyu Li, Yifan Gong: Minimum Latency Training Strategies for Streaming Sequence-to-Sequence ASR. CoRR abs/2004.05009 (2020)
  - [i20] Hu Hu, Rui Zhao, Jinyu Li, Liang Lu, Yifan Gong: Exploring Pre-training with Alignments for RNN Transducer based End-to-End Speech Recognition. CoRR abs/2005.00572 (2020)
  - [i19] Liang Lu, Changliang Liu, Jinyu Li, Yifan Gong: Exploring Transformers for Large-Scale Speech Recognition. CoRR abs/2005.09684 (2020)
  - [i18] Liang Lu, Zhong Meng, Naoyuki Kanda, Jinyu Li, Yifan Gong: On Minimum Word Error Rate Training of the Hybrid Autoregressive Transducer. CoRR abs/2010.12673 (2020)
  - [i17] Zhong Meng, Sarangarajan Parthasarathy, Eric Sun, Yashesh Gaur, Naoyuki Kanda, Liang Lu, Xie Chen, Rui Zhao, Jinyu Li, Yifan Gong: Internal Language Model Estimation for Domain-Adaptive End-to-End Speech Recognition. CoRR abs/2011.01991 (2020)
  - [i16] Naoyuki Kanda, Zhong Meng, Liang Lu, Yashesh Gaur, Xiaofei Wang, Zhuo Chen, Takuya Yoshioka: Minimum Bayes Risk Training for End-to-End Speaker-Attributed ASR. CoRR abs/2011.02921 (2020)
  - [i15] Liang Lu, Naoyuki Kanda, Jinyu Li, Yifan Gong: Streaming end-to-end multi-talker speech recognition. CoRR abs/2011.13148 (2020)
2010 – 2019
- 2019
  - [c26] Peidong Wang, Zhuo Chen, Xiong Xiao, Zhong Meng, Takuya Yoshioka, Tianyan Zhou, Liang Lu, Jinyu Li: Speech Separation Using Speaker Inventory. ASRU 2019: 230-236
  - [c25] Jinyu Li, Liang Lu, Changliang Liu, Yifan Gong: Improving Layer Trajectory LSTM with Future Context Frames. ICASSP 2019: 6550-6554
  - [c24] Liang Lu, Eric Sun, Yifan Gong: Self-Teaching Networks. INTERSPEECH 2019: 2798-2802
  - [i14] Liang Lu, Xiong Xiao, Zhuo Chen, Yifan Gong: PyKaldi2: Yet another speech toolkit based on Kaldi and PyTorch. CoRR abs/1907.05955 (2019)
  - [i13] Liang Lu, Eric Sun, Yifan Gong: Self-Teaching Networks. CoRR abs/1909.04157 (2019)
  - [i12] Chengyi Wang, Yu Wu, Yujiao Du, Jinyu Li, Shujie Liu, Liang Lu, Shuo Ren, Guoli Ye, Sheng Zhao, Ming Zhou: Semantic Mask for Transformer based End-to-End Speech Recognition. CoRR abs/1912.03010 (2019)
- 2018
  - [c23] Kalpesh Krishna, Liang Lu, Kevin Gimpel, Karen Livescu: A Study of All-Convolutional Encoders for Connectionist Temporal Classification. ICASSP 2018: 5814-5818
  - [c22] Jinyu Li, Liang Lu, Changliang Liu, Yifan Gong: Exploring Layer Trajectory LSTM with Depth Processing Units and Attention. SLT 2018: 456-462
- 2017
  - [j6] Hao Tang, Liang Lu, Lingpeng Kong, Kevin Gimpel, Karen Livescu, Chris Dyer, Noah A. Smith, Steve Renals: End-to-End Neural Segmental Models for Speech Recognition. IEEE J. Sel. Top. Signal Process. 11(8): 1254-1264 (2017)
  - [j5] Liang Lu, Steve Renals: Small-Footprint Highway Deep Neural Networks for Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 25(7): 1502-1511 (2017)
  - [c21] Liang Lu, Michelle Guo, Steve Renals: Knowledge distillation for small-footprint highway networks. ICASSP 2017: 4820-4824
  - [c20] Ben Krause, Iain Murray, Steve Renals, Liang Lu: Multiplicative LSTM for sequence modelling. ICLR (Workshop) 2017
  - [c19] Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith: Multitask Learning with CTC and Segmental CRF for Speech Recognition. INTERSPEECH 2017: 954-958
  - [c18] Shubham Toshniwal, Hao Tang, Liang Lu, Karen Livescu: Multitask Learning with Low-Level Auxiliary Tasks for Encoder-Decoder Based Speech Recognition. INTERSPEECH 2017: 3532-3536
  - [p1] Xiong Xiao, Shinji Watanabe, Hakan Erdogan, Michael I. Mandel, Liang Lu, John R. Hershey, Michael L. Seltzer, Guoguo Chen, Yu Zhang, Dong Yu: Discriminative Beamforming with Phase-Aware Neural Networks for Speech Enhancement and Recognition. New Era for Robust Speech Recognition, Exploiting Deep Learning 2017: 79-104
  - [i11] Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith: Multi-task Learning with CTC and Segmental CRF for Speech Recognition. CoRR abs/1702.06378 (2017)
  - [i10] Shubham Toshniwal, Hao Tang, Liang Lu, Karen Livescu: Multitask Learning with Low-Level Auxiliary Tasks for Encoder-Decoder Based Speech Recognition. CoRR abs/1704.01631 (2017)
  - [i9] Hao Tang, Liang Lu, Lingpeng Kong, Kevin Gimpel, Karen Livescu, Chris Dyer, Noah A. Smith, Steve Renals: End-to-End Neural Segmental Models for Speech Recognition. CoRR abs/1708.00531 (2017)
  - [i8] Kalpesh Krishna, Liang Lu, Kevin Gimpel, Karen Livescu: A Study of All-Convolutional Encoders for Connectionist Temporal Classification. CoRR abs/1710.10398 (2017)
- 2016
  - [c17] Liang Lu, Xingxing Zhang, Steve Renals: On training the recurrent neural network encoder-decoder for large vocabulary end-to-end speech recognition. ICASSP 2016: 5060-5064
  - [c16] Tian Tan, Yanmin Qian, Dong Yu, Souvik Kundu, Liang Lu, Khe Chai Sim, Xiong Xiao, Yu Zhang: Speaker-aware training of LSTM-RNNS for acoustic modelling. ICASSP 2016: 5280-5284
  - [c15] Xiong Xiao, Shinji Watanabe, Hakan Erdogan, Liang Lu, John R. Hershey, Michael L. Seltzer, Guoguo Chen, Yu Zhang, Michael I. Mandel, Dong Yu: Deep beamforming networks for multi-channel speech recognition. ICASSP 2016: 5745-5749
  - [c14] Liang Lu, Steve Renals: Small-Footprint Deep Neural Networks with Highway Connections for Speech Recognition. INTERSPEECH 2016: 12-16
  - [c13] Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, Steve Renals: Segmental Recurrent Neural Networks for End-to-End Speech Recognition. INTERSPEECH 2016: 385-389
  - [c12] Xingxing Zhang, Liang Lu, Mirella Lapata: Top-down Tree Long Short-Term Memory Networks. HLT-NAACL 2016: 310-320
  - [i7] Liang Lu, Lingpeng Kong, Chris Dyer, Noah A. Smith, Steve Renals: Segmental Recurrent Neural Networks for End-to-end Speech Recognition. CoRR abs/1603.00223 (2016)
  - [i6] Liang Lu, Michelle Guo, Steve Renals: Knowledge Distillation for Small-footprint Highway Networks. CoRR abs/1608.00892 (2016)
  - [i5] Ben Krause, Liang Lu, Iain Murray, Steve Renals: Multiplicative LSTM for sequence modelling. CoRR abs/1609.07959 (2016)
  - [i4] Liang Lu, Steve Renals: Small-footprint Highway Deep Neural Networks for Speech Recognition. CoRR abs/1610.05812 (2016)
- 2015
  - [c11] Liang Lu, Steve Renals: Multi-frame factorisation for long-span acoustic modelling. ICASSP 2015: 4595-4599
  - [c10] Liang Lu, Steve Renals: Feature-space speaker adaptation for probabilistic linear discriminant analysis acoustic models. INTERSPEECH 2015: 2862-2866
  - [c9] Liang Lu, Xingxing Zhang, Kyunghyun Cho, Steve Renals: A study of the recurrent neural network encoder-decoder for large vocabulary speech recognition. INTERSPEECH 2015: 3249-3253
  - [i3] Xingxing Zhang, Liang Lu, Mirella Lapata: Tree Recurrent Neural Networks with Application to Language Modeling. CoRR abs/1511.00060 (2015)
  - [i2] Liang Lu, Steve Renals: Small-footprint Deep Neural Networks with Highway Connections for Speech Recognition. CoRR abs/1512.04280 (2015)
- 2014
  - [j4] Liang Lu, Steve Renals: Probabilistic Linear Discriminant Analysis for Acoustic Modeling. IEEE Signal Process. Lett. 21(6): 702-706 (2014)
  - [j3] Liang Lu, Arnab Ghoshal, Steve Renals: Cross-Lingual Subspace Gaussian Mixture Models for Low-Resource Speech Recognition. IEEE ACM Trans. Audio Speech Lang. Process. 22(1): 17-27 (2014)
  - [c8] Liang Lu, Steve Renals: Probabilistic linear discriminant analysis with bottleneck features for speech recognition. INTERSPEECH 2014: 910-914
  - [i1] Liang Lu, Steve Renals: Tied Probabilistic Linear Discriminant Analysis for Speech Recognition. CoRR abs/1411.0895 (2014)
- 2013
  - [b1] Liang Lu: Subspace Gaussian mixture models for automatic speech recognition. University of Edinburgh, UK, 2013
  - [j2] Liang Lu, K. K. Chin, Arnab Ghoshal, Stephen Renals: Joint Uncertainty Decoding for Noise Robust Subspace Gaussian Mixture Models. IEEE Trans. Speech Audio Process. 21(9): 1791-1804 (2013)
  - [c7] Liang Lu, Arnab Ghoshal, Steve Renals: Acoustic data-driven pronunciation lexicon for large vocabulary speech recognition. ASRU 2013: 374-379
  - [c6] Liang Lu, Arnab Ghoshal, Steve Renals: Noise adaptive training for subspace Gaussian mixture models. INTERSPEECH 2013: 3492-3496
- 2012
  - [c5] Liang Lu, Arnab Ghoshal, Steve Renals: Maximum a posteriori adaptation of subspace Gaussian mixture models for cross-lingual speech recognition. ICASSP 2012: 4877-4880
  - [c4] Liang Lu, Arnab Ghoshal, Steve Renals: Joint uncertainty decoding with unscented transform for noise robust subspace Gaussian mixture models. SAPA@INTERSPEECH 2012: 40-45
  - [c3] Liang Lu, K. K. Chin, Arnab Ghoshal, Steve Renals: Noise Compensation for Subspace Gaussian Mixture Models. INTERSPEECH 2012: 306-309
- 2011
  - [j1] Liang Lu, Arnab Ghoshal, Steve Renals: Regularized Subspace Gaussian Mixture Models for Speech Recognition. IEEE Signal Process. Lett. 18(7): 419-422 (2011)
  - [c2] Liang Lu, Arnab Ghoshal, Steve Renals: Regularized subspace Gaussian mixture models for cross-lingual speech recognition. ASRU 2011: 365-370
- 2010
  - [c1] Ken'ichi Kumatani, Liang Lu, John W. McDonough, Arnab Ghoshal, Dietrich Klakow: Maximum negentropy beamforming with superdirectivity. EUSIPCO 2010: 2067-2071