Lianhong Cai
2020 – today
- 2022
- [j16] Alex Noel Joseph Raj, Lianhong Cai, Wei Li, Zhemin Zhuang, Tardi Tjahjadi: FPGA-based systolic deconvolution architecture for upsampling. PeerJ Comput. Sci. 8: e973 (2022)
- [c119] Liwei Bao, Lianhong Cai, Jinghan Liu, Yongwei Wu: Improving Information Literacy of Engineering Doctorate Based on Team Role Model. ICCSE (3) 2022: 42-50
2010 – 2019
- 2018
- [c118] Jingbei Li, Zhiyong Wu, Runnan Li, Mingxing Xu, Kehua Lei, Lianhong Cai: Multi-modal Multi-scale Speech Expression Evaluation in Computer-Assisted Language Learning. AIMS 2018: 16-28
- [c117] Runnan Li, Zhiyong Wu, Yuchen Huang, Jia Jia, Helen Meng, Lianhong Cai: Emphatic Speech Generation with Conditioned Input Layer and Bidirectional LSTMS for Expressive Speech Synthesis. ICASSP 2018: 5129-5133
- [c116] Shaoguang Mao, Zhiyong Wu, Runnan Li, Xu Li, Helen Meng, Lianhong Cai: Applying Multitask Learning to Acoustic-Phonemic Model for Mispronunciation Detection and Diagnosis in L2 English Speech. ICASSP 2018: 6254-6258
- [c115] Ziwei Zhu, Zhiyong Wu, Runnan Li, Helen Meng, Lianhong Cai: Siamese Recurrent Auto-Encoder Representation for Query-by-Example Spoken Term Detection. INTERSPEECH 2018: 102-106
- [c114] Xi Ma, Zhiyong Wu, Jia Jia, Mingxing Xu, Helen Meng, Lianhong Cai: Emotion Recognition from Variable-Length Speech Segments Using Deep Learning on Spectrograms. INTERSPEECH 2018: 3683-3687
- 2017
- [c113] Yu-Hao Wu, Jia Jia, Feng Lu, Lianhong Cai: A systematic approach to compute perceptual distribution of monosyllables. ICASSP 2017: 1103-1107
- [c112] Runnan Li, Zhiyong Wu, Xunying Liu, Helen M. Meng, Lianhong Cai: Multi-task learning of structured output layer bidirectional LSTMS for speech synthesis. ICASSP 2017: 5510-5514
- [c111] Yishuang Ning, Zhiyong Wu, Runnan Li, Jia Jia, Mingxing Xu, Helen M. Meng, Lianhong Cai: Learning cross-lingual knowledge with multilingual BLSTM for emphasis detection with limited training data. ICASSP 2017: 5615-5619
- [c110] Yuchen Huang, Zhiyong Wu, Runnan Li, Helen Meng, Lianhong Cai: Multi-Task Learning for Prosodic Structure Generation Using BLSTM RNN with Structured Output Layer. INTERSPEECH 2017: 779-783
- [c109] Xi Ma, Zhiyong Wu, Jia Jia, Mingxing Xu, Helen Meng, Lianhong Cai: Speech Emotion Recognition with Emotion-Pair Based Framework Considering Emotion Distribution Information in Dimensional Emotion Space. INTERSPEECH 2017: 1238-1242
- [c108] Runnan Li, Zhiyong Wu, Yishuang Ning, Lifa Sun, Helen Meng, Lianhong Cai: Spectro-Temporal Modelling with Time-Frequency LSTM and Structured Output Layer for Voice Conversion. INTERSPEECH 2017: 3409-3413
- [c107] Ye Ma, Xinxing Li, Mingxing Xu, Jia Jia, Lianhong Cai: Multi-scale Context Based Attention for Dynamic Music Emotion Prediction. ACM Multimedia 2017: 1443-1450
- 2016
- [j15] Quan Guo, Jia Jia, Guangyao Shen, Lei Zhang, Lianhong Cai, Zhang Yi: Learning robust uniform features for cross-media social data by using cross autoencoders. Knowl. Based Syst. 102: 64-75 (2016)
- [c106] Xinxing Li, Haishu Xianyu, Jiashen Tian, Wenxiao Chen, Fanhang Meng, Mingxing Xu, Lianhong Cai: A deep bidirectional long short-term memory based multi-scale approach for music dynamic emotion prediction. ICASSP 2016: 544-548
- [c105] Haishu Xianyu, Xinxing Li, Wenxiao Chen, Fanhang Meng, Jiashen Tian, Mingxing Xu, Lianhong Cai: SVR based double-scale regression for dynamic emotion prediction in music. ICASSP 2016: 549-553
- [c104] Quanjie Yu, Peng Liu, Zhiyong Wu, Shiyin Kang, Helen Meng, Lianhong Cai: Learning cross-lingual information with multilingual BLSTM for speech synthesis of low-resource languages. ICASSP 2016: 5545-5549
- [c103] Xinyu Lan, Xu Li, Yishuang Ning, Zhiyong Wu, Helen Meng, Jia Jia, Lianhong Cai: Low level descriptors based DBLSTM bottleneck feature for speech driven talking avatar. ICASSP 2016: 5550-5554
- [c102] Yaodong Tang, Yuchen Huang, Zhiyong Wu, Helen Meng, Mingxing Xu, Lianhong Cai: Question detection from acoustic features using recurrent neural network with gated recurrent unit. ICASSP 2016: 6125-6129
- [c101] Xinxing Li, Jiashen Tian, Mingxing Xu, Yishuang Ning, Lianhong Cai: DBLSTM-based multi-scale fusion for dynamic emotion prediction in music. ICME 2016: 1-6
- [c100] Linchuan Li, Zhiyong Wu, Mingxing Xu, Helen M. Meng, Lianhong Cai: Recognizing stances in Mandarin social ideological debates with text and acoustic features. ICME Workshops 2016: 1-6
- [c99] Haishu Xianyu, Mingxing Xu, Zhiyong Wu, Lianhong Cai: Heterogeneity-entropy based unsupervised feature learning for personality prediction with cross-media data. ICME 2016: 1-6
- [c98] Yaodong Tang, Zhiyong Wu, Helen M. Meng, Mingxing Xu, Lianhong Cai: Analysis on Gated Recurrent Unit Based Question Detection Approach. INTERSPEECH 2016: 735-739
- [c97] Linchuan Li, Zhiyong Wu, Mingxing Xu, Helen M. Meng, Lianhong Cai: Combining CNN and BLSTM to Extract Textual and Acoustic Features for Recognizing Stances in Mandarin Ideological Debate Competition. INTERSPEECH 2016: 1392-1396
- [c96] Xu Li, Zhiyong Wu, Helen M. Meng, Jia Jia, Xiaoyan Lou, Lianhong Cai: Phoneme Embedding and its Application to Speech Driven Talking Avatar Synthesis. INTERSPEECH 2016: 1472-1476
- [c95] Xu Li, Zhiyong Wu, Helen M. Meng, Jia Jia, Xiaoyan Lou, Lianhong Cai: Expressive Speech Driven Talking Avatar Synthesis with DBLSTM Using Limited Amount of Emotional Bimodal Data. INTERSPEECH 2016: 1477-1481
- [c94] Wai-Kim Leung, Jia Jia, Yu-Hao Wu, Jiayu Long, Lianhong Cai: THear: Development of a mobile multimodal audiometry application on a cross-platform framework. ISCSLP 2016: 1-5
- [c93] Runnan Li, Zhiyong Wu, Helen M. Meng, Lianhong Cai: DBLSTM-based multi-task learning for pitch transformation in voice conversion. ISCSLP 2016: 1-5
- [i3] Xi Ma, Zhiyong Wu, Jia Jia, Mingxing Xu, Helen M. Meng, Lianhong Cai: Study on Feature Subspace of Archetypal Emotions for Speech Emotion Recognition. CoRR abs/1611.05675 (2016)
- 2015
- [j14] Zhiyong Wu, Yishuang Ning, Xiao Zang, Jia Jia, Fanbo Meng, Helen Meng, Lianhong Cai: Generating emphatic speech with hidden Markov model for expressive speech synthesis. Multim. Tools Appl. 74(22): 9909-9925 (2015)
- [j13] Xiaohui Wang, Jia Jia, Jie Tang, Boya Wu, Lianhong Cai, Lexing Xie: Modeling Emotion Influence in Image Social Networks. IEEE Trans. Affect. Comput. 6(3): 286-297 (2015)
- [c92] Xixin Wu, Zhiyong Wu, Yishuang Ning, Jia Jia, Lianhong Cai, Helen M. Meng: Understanding speaking styles of internet speech data with LSTM and low-resource training. ACII 2015: 815-820
- [c91] Peng Liu, Quanjie Yu, Zhiyong Wu, Shiyin Kang, Helen M. Meng, Lianhong Cai: A deep recurrent approach for acoustic-to-articulatory inversion. ICASSP 2015: 4450-4454
- [c90] Yishuang Ning, Zhiyong Wu, Jia Jia, Fanbo Meng, Helen M. Meng, Lianhong Cai: HMM-based emphatic speech synthesis for corrective feedback in computer-aided pronunciation training. ICASSP 2015: 4934-4938
- [c89] Yu-Hao Wu, Jia Jia, Wai-Kim Leung, Yejun Liu, Lianhong Cai: MPHA: A Personal Hearing Doctor Based on Mobile Devices. ICMI 2015: 155-162
- [c88] Yishuang Ning, Zhiyong Wu, Xiaoyan Lou, Helen M. Meng, Jia Jia, Lianhong Cai: Using tilt for automatic emphasis detection with Bayesian networks. INTERSPEECH 2015: 578-582
- 2014
- [j12] Xiaolan Fu, Lianhong Cai, Ye Liu, Jia Jia, Wenfeng Chen, Zhang Yi, Guozhen Zhao, Yong-Jin Liu, Changxu Wu: A computational cognition model of perception, memory, and judgment. Sci. China Inf. Sci. 57(3): 1-15 (2014)
- [j11] Jia Jia, Wai-Kim Leung, Yu-Hao Wu, Xiu-Long Zhang, Hao Wang, Lianhong Cai, Helen M. Meng: Grading the Severity of Mispronunciations in CAPT Based on Statistical Analysis and Computational Speech Perception. J. Comput. Sci. Technol. 29(5): 751-761 (2014)
- [j10] Jia Jia, Zhiyong Wu, Shen Zhang, Helen M. Meng, Lianhong Cai: Head and facial gestures synthesis using PAD model for an expressive talking avatar. Multim. Tools Appl. 73(1): 439-461 (2014)
- [j9] Fanbo Meng, Zhiyong Wu, Jia Jia, Helen M. Meng, Lianhong Cai: Synthesizing English emphatic speech for multimodal corrective feedback in computer-aided pronunciation training. Multim. Tools Appl. 73(1): 463-489 (2014)
- [c87] Yuchao Fan, Mingxing Xu, Zhiyong Wu, Lianhong Cai: Automatic Emotion Variation Detection in continuous speech. APSIPA 2014: 1-5
- [c86] Xin Zheng, Zhiyong Wu, Helen Meng, Lianhong Cai: Learning dynamic features with neural networks for phoneme recognition. ICASSP 2014: 2524-2528
- [c85] Xin Zheng, Zhiyong Wu, Helen Meng, Lianhong Cai: Contrastive auto-encoder for phoneme recognition. ICASSP 2014: 2529-2533
- [c84] Huijie Lin, Jia Jia, Quan Guo, Yuanyuan Xue, Jie Huang, Lianhong Cai, Ling Feng: Psychological stress detection from cross-media microblog data using Deep Sparse Neural Network. ICME 2014: 1-6
- [c83] Zhu Ren, Jia Jia, Quan Guo, Kuo Zhang, Lianhong Cai: Acoustics, content and geo-information based sentiment prediction from large-scale networked voice data. ICME 2014: 1-4
- [c82] Yuchen Liu, Mingxing Xu, Lianhong Cai: Improved keyword spotting system by optimizing posterior confidence measure vector using feed-forward neural network. IJCNN 2014: 2036-2041
- [c81] Xiao Zang, Zhiyong Wu, Helen M. Meng, Jia Jia, Lianhong Cai: Using conditional random fields to predict focus word pair in spontaneous spoken English. INTERSPEECH 2014: 756-760
- [c80] Xixin Wu, Zhiyong Wu, Jia Jia, Helen M. Meng, Lianhong Cai, Weifeng Li: Automatic speech data clustering with human perception based weighted distance. ISCSLP 2014: 216-220
- [c79] Yu-Hao Wu, Jia Jia, Xiu-Long Zhang, Lianhong Cai: Algorithm of pure tone audiometry based on multiple judgment. ISCSLP 2014: 398
- [c78] Huijie Lin, Jia Jia, Quan Guo, Yuanyuan Xue, Qi Li, Jie Huang, Lianhong Cai, Ling Feng: User-level psychological stress detection from social media using deep neural network. ACM Multimedia 2014: 507-516
- [c77] Zhu Ren, Jia Jia, Lianhong Cai, Kuo Zhang, Jie Tang: Learning to Infer Public Emotions from Large-Scale Networked Voice Data. MMM (1) 2014: 327-339
- [c76] Boya Wu, Jia Jia, Xiaohui Wang, Yang Yang, Lianhong Cai: Inferring Emotions from Social Images Leveraging Influence Analysis. SMP 2014: 141-154
- [i2] Xiaohui Wang, Jia Jia, Lianhong Cai, Jie Tang: Modeling Emotion Influence from Images in Social Networks. CoRR abs/1401.4276 (2014)
- 2013
- [j8] Xiaohui Wang, Jia Jia, Lianhong Cai: Affective image adjustment with a single word. Vis. Comput. 29(11): 1121-1133 (2013)
- [c75] Jianbo Jiang, Zhiyong Wu, Mingxing Xu, Jia Jia, Lianhong Cai: Comparing feature dimension reduction algorithms for GMM-SVM based speech emotion recognition. APSIPA 2013: 1-4
- [c74] Huijie Lin, Jia Jia, Xiangjin Wu, Lianhong Cai: TalkingAndroid: An interactive, multimodal and real-time talking avatar application on mobile phones. APSIPA 2013: 1-4
- [c73] Kai Zhao, Zhiyong Wu, Lianhong Cai: A real-time speech driven talking avatar based on deep neural network. APSIPA 2013: 1-4
- [c72] Xin Zheng, Zhiyong Wu, Binbin Shen, Helen M. Meng, Lianhong Cai: Investigation of tandem deep belief network approach for phoneme recognition. ICASSP 2013: 7586-7590
- [c71] Xiaohui Wang, Jia Jia, Jiaming Yin, Lianhong Cai: Interpretable aesthetic features for affective image classification. ICIP 2013: 3230-3234
- [c70] Xiaoqing Liu, Jia Jia, Lianhong Cai: SNR estimation for clipped audio based on amplitude distribution. ICNC 2013: 1434-1438
- [c69] Huijie Lin, Jia Jia, Hanyu Liao, Lianhong Cai: WeCard: a multimodal solution for making personalized electronic greeting cards. ACM Multimedia 2013: 479-480
- [i1] Xin Zheng, Zhiyong Wu, Helen M. Meng, Weifeng Li, Lianhong Cai: Feature Learning with Gaussian Restricted Boltzmann Machine for Robust Speech Recognition. CoRR abs/1309.6176 (2013)
- 2012
- [j7] Xiaohui Wang, Jia Jia, Hanyu Liao, Lianhong Cai: Affective Image Colorization. J. Comput. Sci. Technol. 27(6): 1119-1128 (2012)
- [c68] Jia Jia, Xiaohui Wang, Zhiyong Wu, Lianhong Cai, Helen M. Meng: Modeling the correlation between modality semantics and facial expressions. APSIPA 2012: 1-10
- [c67] Xiaohui Wang, Jia Jia, Hanyu Liao, Lianhong Cai: Image Colorization with an Affective Word. CVM 2012: 51-58
- [c66] Jia Jia, Yongxin Wang, Zhu Ren, Lianhong Cai: Intention understanding based on multi-source information integration for Chinese Mandarin spoken commands. FSKD 2012: 1834-1838
- [c65] Fanbo Meng, Zhiyong Wu, Helen M. Meng, Jia Jia, Lianhong Cai: Hierarchical English Emphatic Speech Synthesis Based on HMM with Limited Training Data. INTERSPEECH 2012: 466-469
- [c64] Tao Jiang, Zhiyong Wu, Jia Jia, Lianhong Cai: Perceptual clustering based unit selection optimization for concatenative text-to-speech synthesis. ISCSLP 2012: 64-68
- [c63] Chunrong Li, Zhiyong Wu, Fanbo Meng, Helen M. Meng, Lianhong Cai: Detection and emphatic realization of contrastive word pairs for expressive text-to-speech synthesis. ISCSLP 2012: 93-97
- [c62] Jia Jia, Wai-Kim Leung, Ye Tian, Lianhong Cai, Helen M. Meng: Analysis on mispronunciations in CAPT based on computational speech perception. ISCSLP 2012: 174-178
- [c61] Xixin Wu, Zhiyong Wu, Jia Jia, Lianhong Cai: Adaptive named entity recognition based on conditional random fields with automatic updated dynamic gazetteers. ISCSLP 2012: 363-367
- [c60] Ye Tian, Jia Jia, Yongxin Wang, Lianhong Cai: A real-time tone enhancement method for continuous Mandarin speeches. ISCSLP 2012: 405-408
- [c59] Jia Jia, Sen Wu, Xiaohui Wang, Peiyun Hu, Lianhong Cai, Jie Tang: Can we understand van gogh's mood?: learning to infer affects from images in social networks. ACM Multimedia 2012: 857-860
- [c58] Xiaohui Wang, Jia Jia, Peiyun Hu, Sen Wu, Jie Tang, Lianhong Cai: Understanding the emotional impact of images. ACM Multimedia 2012: 1369-1370
- [c57] Jianbo Jiang, Zhiyong Wu, Mingxing Xu, Jia Jia, Lianhong Cai: Comparison of adaptation methods for GMM-SVM based speech emotion recognition. SLT 2012: 269-273
- 2011
- [j6] Jia Jia, Shen Zhang, Fanbo Meng, Yongxin Wang, Lianhong Cai: Emotional Audio-Visual Speech Synthesis Based on PAD. IEEE Trans. Speech Audio Process. 19(3): 570-582 (2011)
- [c56] Jinlong Li, Hongwu Yang, Weizhao Zhang, Lianhong Cai: A Lyrics to Singing Voice Synthesis System with Variable Timbre. ICAIC (2) 2011: 186-193
- [c55] Binbin Shen, Zhiyong Wu, Yongxin Wang, Lianhong Cai: Combining Active and Semi-Supervised Learning for Homograph Disambiguation in Mandarin Text-to-Speech Synthesis. INTERSPEECH 2011: 2165-2168
- 2010
- [c54] Yuxiang Liu, Roger B. Dannenberg, Lianhong Cai: The Intelligent Music Editor: Towards an Automated Platform for Music Analysis and Editing. ICIC (2) 2010: 123-131
- [c53] Jia Jia, Shen Zhang, Lianhong Cai: Facial expression synthesis based on motion patterns learned from face database. ICIP 2010: 3973-3976
- [c52] Shen Zhang, Jia Jia, Yingjin Xu, Lianhong Cai: Emotional talking agent: System and evaluation. ICNC 2010: 3573-3577
- [c51] Quansheng Duan, Shiyin Kang, Zhiyong Wu, Lianhong Cai, Zhiwei Shuang, Yong Qin: Comparison of Syllable/Phone HMM Based Mandarin TTS. ICPR 2010: 4496-4499
- [c50] Zhiwei Shuang, Shiyin Kang, Yong Qin, Li-Rong Dai, Lianhong Cai: HMM based TTS for mixed language text. INTERSPEECH 2010: 618-621
- [c49] Zhiyong Wu, Lianhong Cai, Helen M. Meng: Modeling prosody patterns for Chinese expressive text-to-speech synthesis. ISCSLP 2010: 148-152
- [c48] Yongxin Wang, Jianwu Dang, Lianhong Cai: Investigation of the relation between acoustic features and articulation - An application to emotional speech analysis. ISCSLP 2010: 326-329
- [p1] Shen Zhang, Zhiyong Wu, Helen M. Meng, Lianhong Cai: Facial Expression Synthesis Based on Emotion Dimensions for Affective Talking Avatar. Modeling Machine Emotions for Realizing Intelligence 2010: 109-132
2000 – 2009
- 2009
- [j5] Zhiyong Wu, Helen M. Meng, Hongwu Yang, Lianhong Cai: Modeling the Expressivity of Input Text Semantics for Chinese Text-to-Speech Synthesis in a Spoken Dialog System. IEEE Trans. Speech Audio Process. 17(8): 1567-1576 (2009)
- [c47] Yuxiang Liu, Qiaoliang Xiang, Ye Wang, Lianhong Cai: Cultural style based music classification of audio signals. ICASSP 2009: 57-60
- [c46] Jun Xu, Lianhong Cai: Automatic Emphasis Labeling for Emotional Speech by Measuring Prosody Generation Error. ICIC (1) 2009: 177-186
- [c45] Shiyin Kang, Zhiwei Shuang, Quansheng Duan, Yong Qin, Lianhong Cai: Voiced/unvoiced decision algorithm for HMM-based speech synthesis. INTERSPEECH 2009: 412-415
- [c44] Zhiwei Shuang, Shiyin Kang, Qin Shi, Yong Qin, Lianhong Cai: Syllable HMM based Mandarin TTS and comparison with concatenative TTS. INTERSPEECH 2009: 1767-1770
- 2008
- [c43] Xiangcheng Wang, Ying Liu, Lianhong Cai: Entering Tone Recognition in a Support Vector Machine Approach. ICNC (2) 2008: 61-65
- [c42] Honglei Cong, Zhiyong Wu, Lianhong Cai, Helen M. Meng: A New Prosodic Strength Calculation Method for Prosody Reduction Modeling. ISCSLP 2008: 53-56
- [c41] Shen Zhang, Yingjin Xu, Jia Jia, Lianhong Cai: Analysis and Modeling of Affective Audio Visual Speech Based on PAD Emotion Space. ISCSLP 2008: 281-284
- [c40] Yuxiang Liu, Ye Wang, Arun Shenoy, Wei-Ho Tsai, Lianhong Cai: Clustering Music Recordings by Their Keys. ISMIR 2008: 319-324
- 2007
- [j4] Jia Jia, Lianhong Cai, Pinyan Lu, Xuhui Liu: Fingerprint matching based on weighting method and the SVM. Neurocomputing 70(4-6): 849-858 (2007)
- [c39] Shen Zhang, Zhiyong Wu, Helen M. Meng, Lianhong Cai: Facial Expression Synthesis Using PAD Emotional Parameters for a Chinese Expressive Avatar. ACII 2007: 24-35
- [c38] Dandan Cui, Fanbo Meng, Lianhong Cai, Liuyi Sun: Affect Related Acoustic Features of Speech and Their Modification. ACII 2007: 776-777
- [c37] Dandan Cui, Denzhi Huang, Yuan Dong, Lianhong Cai, Haila Wang: Script Design Based on Decision Tree with Context Vector and Acoustic Distance for Mandarin TTS. ICASSP (4) 2007: 713-716
- [c36] Shen Zhang, Zhiyong Wu, Helen M. Meng, Lianhong Cai: Head Movement Synthesis Based on Semantic and Prosodic Features for a Chinese Expressive Avatar. ICASSP (4) 2007: 837-840
- [c35] Jia Jia, Lianhong Cai, Kaifu Zhang, Dawei Chen: A New Approach to Fake Finger Detection Based on Skin Elasticity Analysis. ICB 2007: 309-318
- [c34] Jia Jia, Lianhong Cai: Fake Finger Detection Based on Time-Series Fingerprint Image Analysis. ICIC (1) 2007: 1140-1150
- [c33] Jun Xu, Dezhi Huang, Yongxin Wang, Yuan Dong, Lianhong Cai, Haila Wang: Hierarchical non-uniform unit selection based on prosodic structure. INTERSPEECH 2007: 2861-2864
- 2006
- [j3] Hongwu Yang, Dezhi Huang, Lianhong Cai: Perceptually Weighted Mel-Cepstrum Analysis of Speech Based on Psychoacoustic Model. IEICE Trans. Inf. Syst. 89-D(12): 2998-3001 (2006)
- [j2] Rui Cai, Lie Lu, Alan Hanjalic, HongJiang Zhang, Lian-Hong Cai: A flexible framework for key audio effects detection and auditory context inference. IEEE Trans. Speech Audio Process. 14(3): 1026-1039 (2006)
- [c32] Zhiyong Wu, Lianhong Cai, Helen M. Meng: Multi-level Fusion of Audio and Visual Features for Speaker Identification. ICB 2006: 493-499
- [c31] Dandan Cui, Lianhong Cai: Acoustic and Physiological Feature Analysis of Affective Speech. ICIC (2) 2006: 912-917
- [c30] Zhiyong Wu, Shen Zhang, Lianhong Cai, Helen M. Meng: Real-time synthesis of Chinese visual speech and facial expressions using MPEG-4 FAP features in a three-dimensional avatar. INTERSPEECH 2006
- [c29] Hongwu Yang, Helen M. Meng, Lianhong Cai: Modeling the acoustic correlates of expressive elements in text genres for expressive text-to-speech synthesis. INTERSPEECH 2006
- [c28] Dandan Cui, Lianhong Cai, Yongxin Wang, Xiaozhou Zhang: Investigation on Pleasure Related Acoustic Features of Affective Speech. ISCSLP 2006
- [c27] Jun Xu, Lianhong Cai: Spectral Continuity Measures at Mandarin Syllable Boundaries. ISCSLP 2006
- [c26] Xiaonan Zhang, Jun Xu, Lianhong Cai: Prosodic Boundary Prediction Based on Maximum Entropy Model with Error-Driven Modification. ISCSLP (Selected Papers) 2006: 149-160
- [c25] Hongwu Yang, Helen M. Meng, Zhiyong Wu, Lianhong Cai: Modelling the Global acoustic Correlates of Expressivity for Chinese Text-to-speech Synthesis. SLT 2006: 138-141
- 2005
- [c24] Min Zheng, Qin Shi, Wei Zhang, Lianhong Cai: Grapheme-to-Phoneme Conversion Based on a Fast TBL Algorithm in Mandarin TTS Systems. FSKD (2) 2005: 600-609
- [c23] Dan-Ning Jiang, Wei Zhang, Liqin Shen, Lianhong Cai: Prosody Analysis and Modeling for Emotional Speech Synthesis. ICASSP (1) 2005: 281-284
- [c22] Rui Cai, Lie Lu, Lian-Hong Cai: Unsupervised auditory scene categorization via key audio effects and information-theoretic co-clustering. ICASSP (2) 2005: 1073-1076
- [c21] Min Zheng, Qin Shi, Wei Zhang, Lianhong Cai: Grapheme-to-phoneme conversion based on TBL algorithm in Mandarin TTS system. INTERSPEECH 2005: 1897-1900
- [c20] Jia Jia, Lianhong Cai: A TSVM-Based Minutiae Matching Approach for Fingerprint Verification. IWBRS 2005: 85-94
- 2004
- [c19] Rui Cai, Lie Lu, Hong-Jiang Zhang, Lian-Hong Cai: Improve audio representation by using feature structure patterns. ICASSP (4) 2004: 345-348
- [c18] Dan-Ning Jiang, Lian-Hong Cai: Speech emotion classification with the combination of statistic features and temporal features. ICME 2004: 1967-1970
- [c17] Zhiguang Yang, Haizhou Ai, Bo Wu, Shihong Lao, Lianhong Cai: Face Pose Estimation and its Application in Video Shot Selection. ICPR (1) 2004: 322-325
- [c16] Dan-Ning Jiang, Lian-Hong Cai: Classifying emotion in Chinese speech by decomposing prosodic features. INTERSPEECH 2004: 1325-1328
- 2003
- [j1] Wei Wang, Lianhong Cai: Approach to the Correlation Discovery of Chinese Linguistic Parameters Based on Bayesian Method. J. Comput. Sci. Technol. 18(1): 97-101 (2003)
- [c15] Rui Cai, Lie Lu, Hong-Jiang Zhang, Lian-Hong Cai: Highlight sound effects detection in audio stream. ICME 2003: 37-40
- [c14] Liang Ma, Qunxiu Chen, Lianhong Cai: An adaptive system for online document filtering. SMC 2003: 4712-4717
- [c13] Liang Ma, Qunxiu Chen, Lianhong Cai: An Improved Framework for Online Adaptive Information Filtering. WAIM 2003: 409-420
- 2002
- [c12] Sheng Zhao, Jianhua Tao, Lianhong Cai: Learning Rules for Chinese Prosodic Phrase Prediction. SIGHAN@COLING 2002
- [c11] Dan-Ning Jiang, Lie Lu, Hong-Jiang Zhang, Jianhua Tao, Lian-Hong Cai: Music type classification by spectral contrast feature. ICME (1) 2002: 113-116
- [c10] Jianhua Tao, Lianhong Cai: Clustering and feature learning based F0 prediction for Chinese speech synthesis. INTERSPEECH 2002: 2097-2100
- [c9] Sheng Zhao, Jianhua Tao, Lianhong Cai: Prosodic phrasing with inductive learning. INTERSPEECH 2002: 2417-2420
- [c8] Jianhua Tao, Sheng Zhao, Lian-Hong Cai: Automatic stress prediction of Chinese speech synthesis. ISCSLP 2002
- [c7] Rui Cai, Zhi-Yong Wu, Lian-Hong Cai: Annotation of Chinese prosodic level based on probabilistic model. ISCSLP 2002
- [c6] Dan-Ning Jiang, Jianhua Tao, Lian-Hong Cai: Voice quality analysis under the pitch effect. ISCSLP 2002
- [c5] Liang Ma, Qunxiu Chen, Shaoping Ma, Min Zhang, Lianhong Cai: Incremental Learning for Profile Training in Adaptive Document Filtering. TREC 2002
- 2000
- [c4] Muhua Lv, Lianhong Cai: The design and application of a speech database for Chinese TTS system. INTERSPEECH 2000: 378-381
- [c3] Zhiyong Wu, Lianhong Cai, Tongchun Zhou: Research on dynamic characters of Chinese pitch contours. INTERSPEECH 2000: 686-689
1990 – 1999
- 1998
- [c2] Jianhua Tao, Lian-Hong Cai, Yu-Zuo Zhong: The Statistical Model of Chinese Word Contours Based on Fuzzy. ISCSLP 1998
1980 – 1989
- 1987
- [c1] Xue-Dong Huang, Lian-Hong Cai, Ditang Fang, Bian-Jin Ci, Li Zhou, Li Jian: A large-vocabulary Chinese speech recognition system. ICASSP 1987: 1167-1170
last updated on 2024-11-06 21:30 CET by the dblp team
all metadata released as open data under CC0 1.0 license