Small energy masking for improved neural network training for end-to-end speech recognition
C. Kim, K. Kim, S. R. Indurthi - ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020 - ieeexplore.ieee.org
In this paper, we present a Small Energy Masking (SEM) algorithm, which masks inputs having values below a certain threshold. More specifically, a time-frequency bin is masked if the filterbank energy in this bin is less than a certain energy threshold. A uniform distribution is employed to randomly generate the ratio of this energy threshold to the peak filterbank energy of each utterance in decibels. The unmasked feature elements are scaled so that the total sum of the feature values remains the same through this masking procedure. This very simple algorithm shows relative 11.2% and 13.5% Word Error Rate (WER) improvements on the standard LibriSpeech test-clean and test-other sets over the baseline end-to-end speech recognition system. Additionally, compared to the input dropout algorithm, the SEM algorithm shows relative 7.7% and 11.6% improvements on the same LibriSpeech test-clean and test-other sets. With a modified shallow-fusion technique with a Transformer LM, we obtained a 2.62% WER on the LibriSpeech test-clean set and a 7.87% WER on the LibriSpeech test-other set.
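The masking and rescaling steps described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation; the bounds of the uniform distribution (`low_db` and `high_db` here) are hypothetical, since the abstract does not state the range used.

```python
import numpy as np

def small_energy_masking(features, low_db=-80.0, high_db=-40.0, rng=None):
    """Sketch of Small Energy Masking (SEM) on one utterance.

    features: (T, F) array of non-negative filterbank energies.
    low_db, high_db: assumed bounds of the uniform distribution over the
    threshold-to-peak ratio in dB (the actual range is not given in the
    abstract).
    """
    if rng is None:
        rng = np.random.default_rng()

    # Randomly draw the ratio of the energy threshold to the utterance's
    # peak filterbank energy, in decibels, from a uniform distribution.
    ratio_db = rng.uniform(low_db, high_db)

    # Convert the dB ratio to a linear energy threshold.
    peak_energy = features.max()
    threshold = peak_energy * 10.0 ** (ratio_db / 10.0)

    # Mask time-frequency bins whose energy falls below the threshold.
    masked = features * (features >= threshold)

    # Rescale the surviving bins so the total feature sum is unchanged.
    total_after = masked.sum()
    if total_after > 0:
        masked *= features.sum() / total_after
    return masked
```

Under these assumptions, the utterance-level random threshold makes the masking act as a data-augmentation step during training, while the rescaling preserves the overall feature energy as the abstract describes.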