Multi-view Audio and Music Classification

Huy Phan*, Huy Le Nguyen, Oliver Y. Chén, Lam Pham, Philipp Koch, Ian McLoughlin, Alfred Mertins

*Corresponding author for this work

Abstract

We propose in this work a multi-view learning approach for audio and music classification. Considering four typical low-level representations (i.e. different views) commonly used for audio and music recognition tasks, the proposed multi-view network consists of four subnetworks, each handling one input type. The embeddings learned by the subnetworks are then concatenated to form a multi-view embedding for classification, similar to a simple concatenation network. However, apart from this joint classification branch, the network also maintains four classification branches on the single-view embeddings of the subnetworks. A novel method is then proposed to keep track of the learning behavior of the classification branches and to adapt their weights so as to proportionally blend their gradients for network training. The weights are adapted such that learning on a branch that is generalizing well is encouraged, whereas learning on a branch that is overfitting is slowed down. Experiments on three different audio and music classification tasks show that the proposed multi-view network not only outperforms the single-view baselines but is also superior to multi-view baselines based on concatenation and late fusion.
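The abstract describes a concrete architecture: four view-specific subnetworks, four single-view classification heads, and one joint head on the concatenated embedding, with the five branch losses blended by adaptive weights. As a rough illustration only, the following is a minimal PyTorch sketch of that structure. All names (MultiViewNet, blended_loss), layer choices, and dimensions are assumptions for the sake of the example, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiViewNet(nn.Module):
    """Sketch of a four-view network: one encoder per input view,
    one classification head per view, plus a joint head on the
    concatenated multi-view embedding. Encoders are stand-ins; the
    paper feeds four low-level audio representations, one per
    subnetwork."""

    def __init__(self, view_dims, embed_dim=128, num_classes=10):
        super().__init__()
        # One subnetwork (encoder) per input view.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, embed_dim), nn.ReLU()) for d in view_dims
        )
        # Four single-view classification branches.
        self.view_heads = nn.ModuleList(
            nn.Linear(embed_dim, num_classes) for _ in view_dims
        )
        # Joint branch on the concatenated multi-view embedding.
        self.joint_head = nn.Linear(embed_dim * len(view_dims), num_classes)

    def forward(self, views):
        embeds = [enc(x) for enc, x in zip(self.encoders, views)]
        view_logits = [head(e) for head, e in zip(self.view_heads, embeds)]
        joint_logits = self.joint_head(torch.cat(embeds, dim=-1))
        return view_logits, joint_logits

def blended_loss(view_logits, joint_logits, targets, weights):
    """Weighted sum of the five branch losses (four views + joint).
    `weights` holds one adaptive coefficient per branch."""
    ce = nn.CrossEntropyLoss()
    losses = [ce(l, targets) for l in view_logits] + [ce(joint_logits, targets)]
    return sum(w * l for w, l in zip(weights, losses))

# Toy usage with random data: four views of different dimensionalities.
net = MultiViewNet(view_dims=[64, 40, 128, 96], embed_dim=128, num_classes=10)
views = [torch.randn(8, d) for d in [64, 40, 128, 96]]
targets = torch.randint(0, 10, (8,))
view_logits, joint_logits = net(views)
weights = [0.2] * 5  # uniform at the start; adapted during training
loss = blended_loss(view_logits, joint_logits, targets, weights)
loss.backward()
```

How the branch weights are adapted is only summarized in the abstract. One plausible reading, assumed here rather than taken from the paper, is to weight each branch by the ratio of its recent validation improvement to the growth of its train/validation gap, so that well-generalizing branches are encouraged and overfitting ones are slowed down:

```python
def adapt_branch_weights(train_losses, val_losses, eps=1e-8):
    """Hypothetical weight update (an assumption, not the paper's formula).
    `train_losses` / `val_losses` each hold, per branch, a pair of losses
    measured at two consecutive checkpoints: (previous, current)."""
    weights = []
    for (tr0, tr1), (va0, va1) in zip(train_losses, val_losses):
        generalization = max(va0 - va1, 0.0)               # validation improvement
        overfitting = max((va1 - tr1) - (va0 - tr0), eps)  # growth of the train/val gap
        weights.append(generalization / overfitting)
    total = sum(weights) + eps
    return [w / total for w in weights]                    # normalize to sum to 1
```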

Original language: English
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Pages (from-to): 611–615
ISSN: 1520-6149
Publication status: Published - 2021
