Abstract
We propose a multi-label multi-task framework based on a convolutional recurrent neural network to unify detection of isolated and overlapping audio events. The framework leverages the strengths of convolutional recurrent neural network architectures: the convolutional layers learn effective features over which the higher recurrent layers perform sequential modelling. Furthermore, the output layer is designed to handle arbitrary degrees of event overlap. At each time step of the recurrent output sequence, an output triple is dedicated to each event category of interest to jointly model event occurrence and temporal boundaries. That is, the network jointly determines whether an event of that category occurs, and when it occurs, by estimating onset and offset positions at each recurrent time step. We then introduce three sequential losses for network training: a multi-label classification loss, a distance estimation loss, and a confidence loss. We demonstrate good generalization on two datasets: ITC-Irst for isolated audio event detection and TUT-SED-Synthetic-2016 for overlapping audio event detection.
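To make the output design concrete, the following is a minimal sketch in PyTorch, not the authors' implementation. It assumes each per-class triple is (event probability, onset distance, offset distance) and combines a binary cross-entropy term for multi-label classification, a masked squared-error term for distance estimation, and a squared-error confidence term; the names (`CRNNEventDetector`, `sequential_losses`, `conf_target`), the layer sizes, and the exact loss targets are illustrative assumptions based only on the abstract.

```python
# Hypothetical sketch of the CRNN with per-class output triples and the
# three sequential losses named in the abstract (assumed forms, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CRNNEventDetector(nn.Module):
    def __init__(self, n_mels=64, n_classes=16, conv_ch=64, rnn_hidden=128):
        super().__init__()
        # Convolutional front end: learn local spectro-temporal features.
        self.conv = nn.Sequential(
            nn.Conv2d(1, conv_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),      # pool frequency only, keep time resolution
            nn.Conv2d(conv_ch, conv_ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        # Recurrent layer: sequential modelling over the time axis.
        self.rnn = nn.GRU(conv_ch * (n_mels // 4), rnn_hidden,
                          batch_first=True, bidirectional=True)
        # Per class and time step: (event probability, onset distance, offset distance).
        self.head = nn.Linear(2 * rnn_hidden, n_classes * 3)
        self.n_classes = n_classes

    def forward(self, spec):                            # spec: (batch, 1, n_mels, time)
        h = self.conv(spec)                             # (batch, ch, n_mels//4, time)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)  # (batch, time, features)
        h, _ = self.rnn(h)
        out = self.head(h).view(b, t, self.n_classes, 3)
        prob = torch.sigmoid(out[..., 0])               # per-class event occurrence
        dist = F.relu(out[..., 1:])                     # non-negative onset/offset distances
        return prob, dist

def sequential_losses(prob, dist, y_class, y_dist, conf_target):
    """Assumed combination of the three losses: classification, distance, confidence."""
    cls_loss = F.binary_cross_entropy(prob, y_class)            # multi-label classification
    mask = y_class.unsqueeze(-1)                                 # regress only on active frames
    dist_loss = (mask * (dist - y_dist) ** 2).sum() / mask.sum().clamp(min=1)
    conf_loss = F.mse_loss(prob, conf_target)                    # confidence vs. overlap target
    return cls_loss + dist_loss + conf_loss
```

Predicted onset/offset distances at each active time step can then be converted into event boundaries during post-processing; the exact decoding scheme is left out here, as the abstract does not specify it.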
Original language | English |
---|---|
Title | ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
Number of pages | 5 |
Publisher | IEEE |
Publication date | 05.2019 |
Pages | 51-55 |
Article number | 8683064 |
ISBN (print) | 978-1-4799-8132-8 |
ISBN (electronic) | 978-1-4799-8131-1 |
DOIs | |
Publication status | Published - 05.2019 |
Event | 44th IEEE International Conference on Acoustics, Speech, and Signal Processing - Brighton Conference Centre, Brighton, United Kingdom. Duration: 12.05.2019 → 17.05.2019. Conference number: 149034 |