Vimal Manohar
2020 – today
- 2024
- [c29] Ruizhe Huang, Xiaohui Zhang, Zhaoheng Ni, Li Sun, Moto Hira, Jeff Hwang, Vimal Manohar, Vineel Pratap, Matthew Wiesner, Shinji Watanabe, Daniel Povey, Sanjeev Khudanpur: Less Peaky and More Accurate CTC Forced Alignment by Label Priors. ICASSP 2024: 11831-11835
- [i10] Ruizhe Huang, Xiaohui Zhang, Zhaoheng Ni, Li Sun, Moto Hira, Jeff Hwang, Vimal Manohar, Vineel Pratap, Matthew Wiesner, Shinji Watanabe, Daniel Povey, Sanjeev Khudanpur: Less Peaky and More Accurate CTC Forced Alignment by Label Priors. CoRR abs/2406.02560 (2024)
- 2023
- [c28] Tejas Jayashankar, Jilong Wu, Leda Sari, David Kant, Vimal Manohar, Qing He: Self-Supervised Representations for Singing Voice Conversion. ICASSP 2023: 1-5
- [c27] Mumin Jin, Prashant Serai, Jilong Wu, Andros Tjandra, Vimal Manohar, Qing He: Voice-Preserving Zero-Shot Multiple Accent Conversion. ICASSP 2023: 1-5
- [c26] Matthew Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, Wei-Ning Hsu: Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale. NeurIPS 2023
- [i9] Matthew Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, Wei-Ning Hsu: Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale. CoRR abs/2306.15687 (2023)
- 2022
- [i8] Jason Fong, Yun Wang, Prabhav Agrawal, Vimal Manohar, Jilong Wu, Thilo Köhler, Qing He: Towards zero-shot Text-based voice editing using acoustic context conditioning, utterance embeddings, and reference encoders. CoRR abs/2210.16045 (2022)
- [i7] Mumin Jin, Prashant Serai, Jilong Wu, Andros Tjandra, Vimal Manohar, Qing He: Voice-preserving Zero-shot Multiple Accent Conversion. CoRR abs/2211.13282 (2022)
- 2021
- [c25] Vimal Manohar, Tatiana Likhomanenko, Qiantong Xu, Wei-Ning Hsu, Ronan Collobert, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed: Kaizen: Continuously Improving Teacher Using Exponential Moving Average for Semi-Supervised Speech Recognition. ASRU 2021: 518-525
- [c24] Xiaohui Zhang, Vimal Manohar, David Zhang, Frank Zhang, Yangyang Shi, Nayan Singhal, Julian Chan, Fuchun Peng, Yatharth Saraf, Mike Seltzer: On Lattice-Free Boosted MMI Training of HMM and CTC-Based Full-Context ASR Models. ASRU 2021: 1026-1033
- [i6] Vimal Manohar, Tatiana Likhomanenko, Qiantong Xu, Wei-Ning Hsu, Ronan Collobert, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed: Kaizen: Continuously improving teacher using Exponential Moving Average for semi-supervised speech recognition. CoRR abs/2106.07759 (2021)
- [i5] Xiaohui Zhang, Vimal Manohar, David Zhang, Frank Zhang, Yangyang Shi, Nayan Singhal, Julian Chan, Fuchun Peng, Yatharth Saraf, Mike Seltzer: On lattice-free boosted MMI training of HMM and CTC-based full-context ASR models. CoRR abs/2107.04154 (2021)
- 2020
- [c23] Kritika Singh, Vimal Manohar, Alex Xiao, Sergey Edunov, Ross B. Girshick, Vitaliy Liptchinsky, Christian Fuegen, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed: Large Scale Weakly and Semi-Supervised Learning for Low-Resource Video ASR. INTERSPEECH 2020: 3770-3774
- [i4] Kritika Singh, Vimal Manohar, Alex Xiao, Sergey Edunov, Ross B. Girshick, Vitaliy Liptchinsky, Christian Fuegen, Yatharth Saraf, Geoffrey Zweig, Abdelrahman Mohamed: Large scale weakly and semi-supervised learning for low-resource video ASR. CoRR abs/2005.07850 (2020)
2010 – 2019
- 2019
- [c22] Jinyi Yang, Lucas Ondel, Vimal Manohar, Hynek Hermansky: Towards Automatic Methods to Detect Errors in Transcriptions of Speech Recordings. ICASSP 2019: 3747-3751
- [c21] Vimal Manohar, Szu-Jui Chen, Zhiqi Wang, Yusuke Fujita, Shinji Watanabe, Sanjeev Khudanpur: Acoustic Modeling for Overlapping Speech Recognition: Jhu Chime-5 Challenge System. ICASSP 2019: 6665-6669
- [c20] Ashish Arora, Paola García, Shinji Watanabe, Vimal Manohar, Yiwen Shao, Sanjeev Khudanpur, Chun-Chieh Chang, Babak Rekabdar, Bagher BabaAli, Daniel Povey, David Etter, Desh Raj, Hossein Hadian, Jan Trmal: Using ASR Methods for OCR. ICDAR 2019: 663-668
- [c19] Yiming Wang, David Snyder, Hainan Xu, Vimal Manohar, Phani Sankar Nidadavolu, Daniel Povey, Sanjeev Khudanpur: The JHU ASR System for VOiCES from a Distance Challenge 2019. INTERSPEECH 2019: 2488-2492
- 2018
- [c18] Vimal Manohar, Hossein Hadian, Daniel Povey, Sanjeev Khudanpur: Semi-Supervised Training of Acoustic Models Using Lattice-Free MMI. ICASSP 2018: 4844-4848
- [c17] Matthew Maciejewski, David Snyder, Vimal Manohar, Najim Dehak, Sanjeev Khudanpur: Characterizing Performance of Speaker Diarization Systems on Far-Field Speech Using Standard Methods. ICASSP 2018: 5244-5248
- [c16] Matthew Wiesner, Chunxi Liu, Lucas Ondel, Craig Harman, Vimal Manohar, Jan Trmal, Zhongqiang Huang, Najim Dehak, Sanjeev Khudanpur: Automatic Speech Recognition and Topic Identification from Speech for Almost-Zero-Resource Languages. INTERSPEECH 2018: 2052-2056
- [c15] Gregory Sell, David Snyder, Alan McCree, Daniel Garcia-Romero, Jesús Villalba, Matthew Maciejewski, Vimal Manohar, Najim Dehak, Daniel Povey, Shinji Watanabe, Sanjeev Khudanpur: Diarization is Hard: Some Experiences and Lessons Learned for the JHU Team in the Inaugural DIHARD Challenge. INTERSPEECH 2018: 2808-2812
- [c14] Vimal Manohar, Pegah Ghahremani, Daniel Povey, Sanjeev Khudanpur: A Teacher-Student Learning Approach for Unsupervised Domain Adaptation of Sequence-Trained ASR Models. SLT 2018: 250-257
- [i3] Matthew Wiesner, Chunxi Liu, Lucas Ondel, Craig Harman, Vimal Manohar, Jan Trmal, Zhongqiang Huang, Sanjeev Khudanpur, Najim Dehak: The JHU Speech LOREHLT 2017 System: Cross-Language Transfer for Situation-Frame Detection. CoRR abs/1802.08731 (2018)
- 2017
- [j1] Mark A. Hasegawa-Johnson, Preethi Jyothi, Daniel McCloy, Majid Mirbagheri, Giovanni M. Di Liberto, Amit Das, Bradley Ekin, Chunxi Liu, Vimal Manohar, Hao Tang, Edmund C. Lalor, Nancy F. Chen, Paul Hager, Tyler Kekona, Rose Sloan, Adrian K. C. Lee: ASR for Under-Resourced Languages From Probabilistic Transcription. IEEE ACM Trans. Audio Speech Lang. Process. 25(1): 46-59 (2017)
- [c13] Pegah Ghahremani, Vimal Manohar, Hossein Hadian, Daniel Povey, Sanjeev Khudanpur: Investigation of transfer learning for ASR using LF-MMI trained neural networks. ASRU 2017: 279-286
- [c12] Vimal Manohar, Daniel Povey, Sanjeev Khudanpur: JHU Kaldi system for Arabic MGB-3 ASR challenge using diarization, audio-transcript alignment and transfer learning. ASRU 2017: 346-352
- [c11] Gaofeng Cheng, Vijayaditya Peddinti, Daniel Povey, Vimal Manohar, Sanjeev Khudanpur, Yonghong Yan: An Exploration of Dropout with LSTMs. INTERSPEECH 2017: 1586-1590
- [c10] Xiaohui Zhang, Vimal Manohar, Daniel Povey, Sanjeev Khudanpur: Acoustic Data-Driven Lexicon Learning Based on a Greedy Pronunciation Selection Framework. INTERSPEECH 2017: 2541-2545
- [c9] Jan Trmal, Matthew Wiesner, Vijayaditya Peddinti, Xiaohui Zhang, Pegah Ghahremani, Yiming Wang, Vimal Manohar, Hainan Xu, Daniel Povey, Sanjeev Khudanpur: The Kaldi OpenKWS System: Improving Low Resource Keyword Search. INTERSPEECH 2017: 3597-3601
- [i2] Jan Trmal, Gaurav Kumar, Vimal Manohar, Sanjeev Khudanpur, Matt Post, Paul McNamee: Using of heterogeneous corpora for training of an ASR system. CoRR abs/1706.00321 (2017)
- [i1] Xiaohui Zhang, Vimal Manohar, Daniel Povey, Sanjeev Khudanpur: Acoustic data-driven lexicon learning based on a greedy pronunciation selection framework. CoRR abs/1706.03747 (2017)
- 2016
- [c8] Chunxi Liu, Preethi Jyothi, Hao Tang, Vimal Manohar, Rose Sloan, Tyler Kekona, Mark Hasegawa-Johnson, Sanjeev Khudanpur: Adapting ASR for under-resourced languages using mismatched transcriptions. ICASSP 2016: 5840-5844
- [c7] Vijayaditya Peddinti, Vimal Manohar, Yiming Wang, Daniel Povey, Sanjeev Khudanpur: Far-Field ASR Without Parallel Data. INTERSPEECH 2016: 1996-2000
- [c6] Daniel Povey, Vijayaditya Peddinti, Daniel Galvez, Pegah Ghahremani, Vimal Manohar, Xingyu Na, Yiming Wang, Sanjeev Khudanpur: Purely Sequence-Trained Neural Networks for ASR Based on Lattice-Free MMI. INTERSPEECH 2016: 2751-2755
- [c5] Pegah Ghahremani, Vimal Manohar, Daniel Povey, Sanjeev Khudanpur: Acoustic Modelling from the Signal Domain Using CNNs. INTERSPEECH 2016: 3434-3438
- 2015
- [c4] Vijayaditya Peddinti, Guoguo Chen, Vimal Manohar, Tom Ko, Daniel Povey, Sanjeev Khudanpur: JHU ASpIRE system: Robust LVCSR with TDNNS, iVector adaptation and RNN-LMS. ASRU 2015: 539-546
- [c3] Vimal Manohar, Daniel Povey, Sanjeev Khudanpur: Semi-supervised maximum mutual information training of deep neural network acoustic models. INTERSPEECH 2015: 2630-2634
- 2014
- [c2] Jan Trmal, Guoguo Chen, Daniel Povey, Sanjeev Khudanpur, Pegah Ghahremani, Xiaohui Zhang, Vimal Manohar, Chunxi Liu, Aren Jansen, Dietrich Klakow, David Yarowsky, Florian Metze: A keyword search system using open source software. SLT 2014: 530-535
- 2013
- [c1] Vimal Manohar, Srinivas C. Bhargav, Srinivasan Umesh: Acoustic modeling using transform-based phone-cluster adaptive training. ASRU 2013: 49-54
last updated on 2024-10-07 21:17 CEST by the dblp team
all metadata released as open data under CC0 1.0 license