Abstract
This paper presents a methodology for early detection of audio events in audio streams. Early detection is the ability to infer an ongoing event during its initial stage. The proposed system consists of a novel inference step coupled with dual parallel tailored-loss deep neural networks (DNNs). The DNNs share a similar architecture but differ in their loss functions, namely a weighted loss and a multitask loss, which are designed to cope efficiently with issues common to audio event detection. The inference step is introduced to exploit the network outputs for recognizing ongoing events. We also prove the monotonicity of the detection function, a property required for reliable early detection. Experiments on the ITC-Irst database show that the proposed system achieves state-of-the-art detection performance. Furthermore, observing only a partial event is sufficient to reach performance comparable to that obtained when the entire event is observed, which enables early event detection.
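To illustrate why monotonicity of the detection function matters, the sketch below shows one simple way to obtain a non-decreasing detection score from frame-level network outputs, by accumulating them with a running maximum. The function names, the max-based accumulation, the threshold, and the example posteriors are illustrative assumptions and are not the inference step proposed in the paper.

```python
import numpy as np

def monotone_detection_score(frame_scores):
    """Turn per-frame event scores into a non-decreasing detection function.

    Accumulating with a running maximum guarantees monotonicity: once the
    score crosses a decision threshold it can never drop back below it, so
    an early firing of the detector is never retracted later in the event.
    NOTE: this max-based accumulation is an illustrative assumption, not
    the paper's actual inference step.
    """
    return np.maximum.accumulate(np.asarray(frame_scores, dtype=float))

def earliest_detection_frame(frame_scores, threshold=0.5):
    """Return the first frame index at which the monotone score reaches the
    threshold, or None if the event is never detected."""
    score = monotone_detection_score(frame_scores)
    hits = np.flatnonzero(score >= threshold)
    return int(hits[0]) if hits.size else None

if __name__ == "__main__":
    # Hypothetical frame-level posteriors for one event class.
    posteriors = [0.1, 0.2, 0.15, 0.6, 0.4, 0.7]
    print(monotone_detection_score(posteriors))   # [0.1 0.2 0.2 0.6 0.6 0.7]
    print(earliest_detection_frame(posteriors))   # 3 -> event flagged early
```

With a monotone score, the detector can commit to a decision as soon as the threshold is crossed, which is the behaviour early event detection relies on.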
| Original language | English |
|---|---|
| Title of host publication | 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
| Number of pages | 5 |
| Volume | 2018-April |
| Publisher | IEEE |
| Publication date | 01.04.2018 |
| Pages | 141-145 |
| ISBN (Print) | 978-1-5386-4659-5 |
| ISBN (Electronic) | 978-1-5386-4658-8 |
| DOIs | |
| Publication status | Published - 01.04.2018 |
| Event | 2018 IEEE International Conference on Acoustics, Speech, and Signal Processing, Calgary Telus Convention Center, Calgary, Canada. Duration: 15.04.2018 → 20.04.2018 |