In this work, we present an approach for audio scene classification. First, given the label set of the scenes, a label tree is automatically constructed in which the labels are grouped into meta-classes. This category taxonomy is then used in the feature extraction step, where an audio scene instance is transformed into a label tree embedding image. The elements of the image indicate the likelihoods that the scene instance belongs to the different meta-classes. A class of simple 1-X (i.e. 1-max, 1-mean, and 1-mix) pooling convolutional neural networks, tailored for the task at hand, is finally learned on top of the image features for scene recognition. Experimental results on the DCASE 2013 and DCASE 2016 datasets demonstrate the effectiveness of the proposed method.
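To illustrate the final step, the following is a minimal numpy sketch of 1-max pooling on top of a label tree embedding image: convolution filters slide over the time axis of the embedding, and only the single largest activation per filter is kept as a feature. All sizes (number of frames, meta-classes, filters, filter width) are hypothetical, and the sketch omits training, nonlinearities, and the 1-mean/1-mix variants described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical label tree embedding "image": rows = time frames,
# columns = meta-classes of the label tree (both sizes are assumptions).
embedding = rng.random((50, 8))

# A bank of hypothetical convolution filters, each spanning all
# meta-class columns and `width` consecutive frames.
num_filters, width = 4, 3
filters = rng.standard_normal((num_filters, width, embedding.shape[1]))

def conv_1max(x, w):
    """Valid convolution over the time axis followed by 1-max pooling:
    keep only the single largest activation per filter map."""
    n_positions = x.shape[0] - w.shape[1] + 1
    maps = np.empty((w.shape[0], n_positions))
    for f in range(w.shape[0]):
        for t in range(n_positions):
            maps[f, t] = np.sum(x[t:t + w.shape[1]] * w[f])
    return maps.max(axis=1)  # one pooled value per filter

features = conv_1max(embedding, filters)
print(features.shape)  # one feature per filter: (4,)
```

The pooled feature vector (one value per filter) would then feed a small classifier over the scene labels; 1-mean pooling would simply replace `max` with `mean`, and 1-mix would combine the two.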
Title of host publication: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Number of pages: 5
Place of publication: New Orleans
Publication status: Published - 01.03.2017
Event: 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, United States
Duration: 05.03.2017 - 09.03.2017