Deep convolutional neural networks for predominant instrument recognition in polyphonic music

Y Han, J Kim, K Lee - IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2016 - ieeexplore.ieee.org
Identifying musical instruments in polyphonic music recordings is a challenging but important problem in the field of music information retrieval. It enables music search by instrument, helps recognize musical genres, and can make music transcription easier and more accurate. In this paper, we present a convolutional neural network framework for predominant instrument recognition in real-world polyphonic music. We train our network on fixed-length music excerpts with a single-labeled predominant instrument and estimate an arbitrary number of predominant instruments from an audio signal of variable length. To obtain the excerpt-wise result, we aggregate multiple outputs from sliding windows over the test audio. In doing so, we investigated two different aggregation methods: one takes the class-wise average followed by normalization, and the other performs temporally local class-wise max-pooling on the output probabilities prior to the averaging and normalization steps, so that the averaging process does not suppress the activations of sporadically appearing instruments. In addition, we conducted extensive experiments on several important factors that affect performance, including the analysis window size, the identification threshold, and the activation functions of the neural networks, to find the optimal set of parameters. Our analysis of instrument-wise performance found that the onset type is a critical factor for the recall and precision of each instrument. Using a dataset of 10k audio excerpts from 11 instruments for evaluation, we found that convolutional neural networks are more robust than conventional methods that exploit spectral features and source separation with support vector machines. Experimental results showed that the proposed convolutional network architecture obtained micro and macro F1 measures of 0.619 and 0.513, respectively, improvements of 23.1% and 18.8% over the state-of-the-art algorithm.
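To make the two aggregation strategies concrete, the following is a minimal sketch in Python/NumPy of how sliding-window outputs might be combined and thresholded. This is not the authors' published code; the function names, the pooling length `pool_size`, and the `threshold` value are hypothetical illustrations (the paper treats the identification threshold as a tuned parameter).

```python
import numpy as np

def aggregate_average(window_probs):
    """Strategy 1: class-wise average over all sliding windows,
    then normalize so the aggregated class scores sum to 1.

    window_probs: array of shape (n_windows, n_classes) holding the
    network's per-window output probabilities for one test excerpt.
    """
    avg = window_probs.mean(axis=0)
    return avg / avg.sum()

def aggregate_maxpool_average(window_probs, pool_size=5):
    """Strategy 2: temporally local class-wise max-pooling first, so
    short activations from sporadically appearing instruments survive
    the subsequent averaging; then average and normalize.

    pool_size (in windows) is an assumed illustrative value.
    """
    n_windows, _ = window_probs.shape
    pooled = [window_probs[i:i + pool_size].max(axis=0)
              for i in range(0, n_windows, pool_size)]
    avg = np.stack(pooled).mean(axis=0)
    return avg / avg.sum()

def predict_instruments(window_probs, threshold=0.2,
                        aggregate=aggregate_average):
    """Identify an arbitrary number of predominant instruments by
    thresholding the aggregated class scores; returns class indices."""
    scores = aggregate(window_probs)
    return np.flatnonzero(scores >= threshold)

# Example: 40 windows over a test signal, 11 instrument classes.
probs = np.random.default_rng(0).dirichlet(np.ones(11), size=40)
print(predict_instruments(probs, aggregate=aggregate_maxpool_average))
```

The design intuition, per the abstract, is that plain averaging dilutes instruments that are predominant in only a few windows; local max-pooling preserves those peaks before the average is taken.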