We employ a variant of the popular Adaboost algorithm to train multiple acoustic models such that the aggregate system exhibits improved performance.
It is suggested that significant gains can be obtained for small amounts of training data, even after feature- and model-space discriminative training.
Boosting is a popular machine learning technique for incrementally building linear combinations of “weak” models to generate an arbitrarily “strong” model.
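To make that recipe concrete, the following is a minimal, self-contained sketch of discrete AdaBoost with decision stumps on synthetic two-dimensional features. The synthetic data, the stump learner, the number of rounds, and the {-1, +1} labels are all illustrative assumptions, not details taken from the papers quoted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y, w):
    """Exhaustively pick the single-feature threshold with lowest weighted error."""
    best = (0, 0.0, 1, np.inf)  # (feature index, threshold, polarity, weighted error)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost(X, y, n_rounds=30):
    """Discrete AdaBoost: returns a list of (stump, alpha) weak learners."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # start with uniform example weights
    ensemble = []
    for _ in range(n_rounds):
        j, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-12)            # guard against a zero-error stump
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified examples
        w /= w.sum()
        ensemble.append(((j, thr, pol), alpha))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted vote over all weak learners."""
    score = np.zeros(len(X))
    for (j, thr, pol), alpha in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) > 0, 1, -1)
    return np.sign(score)

# Toy two-class problem standing in for frame-level classification.
X = rng.normal(size=(400, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
model = adaboost(X, y)
print("training accuracy:", np.mean(predict(model, X) == y))
```

The same loop structure underlies the acoustic-model variants described above: each round fits a new weak model on reweighted data and adds it to the growing linear combination with weight alpha.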
In this paper, we apply boosting to the problem of frame-level phone classification, and use the resulting system to perform voicemail transcription.
This paper describes our work on applying ensembles of acoustic models to the problem of large vocabulary continuous speech recognition (LVCSR).
This paper proposes a variant of AnyBoost for a large vocabulary continuous speech recognition (LVCSR) task.
This is an adaptation of the classifier combination technique called boosting, and it has been shown to be superior to bagging for LVCSR [29].
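As a hedged illustration of the bagging/boosting contrast, the sketch below compares scikit-learn's stock BaggingClassifier and AdaBoostClassifier on a synthetic classification task. It only shows how the two ensembles are built (independent bootstrap resamples versus sequentially reweighted examples); the dataset, estimator counts, and cross-validation setup are assumptions, and the result does not reproduce the LVCSR comparison of [29].

```python
# Toy contrast between bagging (independent models on bootstrap resamples)
# and boosting (sequentially reweighted models), on synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)

# Bagging: each base tree sees an independent bootstrap sample of the data.
bagging = BaggingClassifier(n_estimators=100, random_state=0)
# Boosting: each base tree is fit to examples reweighted by previous errors.
boosting = AdaBoostClassifier(n_estimators=100, random_state=0)

for name, model in [("bagging", bagging), ("boosting", boosting)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```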