Beyond equal-length snippets: How long is sufficient to recognize an audio scene?

Huy Phan*, Oliver Y. Chén, Philipp Koch, Lam Pham, Ian McLoughlin, Alfred Mertins, Maarten De Vos

*Corresponding author for this work

Abstract

Due to the variability in characteristics of audio scenes, some scenes can naturally be recognized earlier than others. In this work, rather than using equal-length snippets for all scene categories, as is common in the literature, we study the temporal extent to which an audio scene can be reliably recognized given state-of-the-art models. Moreover, as model fusion with deep network ensembles is prevalent in audio scene classification, we further study whether, and if so when, model fusion is necessary for this task. To achieve these goals, we employ two single-network systems for classification, one relying on a convolutional neural network and the other on a recurrent neural network, as well as early and late fusion of these networks. Experimental results on the LITIS-Rouen dataset show that some scenes can be reliably recognized within a few seconds, while other scenes require significantly longer durations. In addition, model fusion is shown to be most beneficial when the signal length is short.
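The abstract describes two single-network classifiers, one convolutional and one recurrent, combined by early and late fusion. The sketch below illustrates only the late-fusion idea as posterior averaging over a log-mel snippet; the network sizes, feature dimensions, and the averaging rule are illustrative PyTorch assumptions, not the paper's actual systems.

```python
# Illustrative sketch only: model sizes, feature shapes, and the averaging-based
# late fusion below are assumptions for demonstration, not the paper's systems.
import torch
import torch.nn as nn

NUM_CLASSES = 19            # scene categories in LITIS-Rouen
N_MELS, N_FRAMES = 64, 100  # assumed log-mel spectrogram snippet size

class SimpleCNN(nn.Module):
    """Small convolutional classifier over a log-mel spectrogram snippet."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, NUM_CLASSES)

    def forward(self, x):                      # x: (batch, 1, N_MELS, N_FRAMES)
        h = self.features(x).flatten(1)
        return self.classifier(h)              # unnormalized class scores

class SimpleRNN(nn.Module):
    """GRU classifier that reads the snippet frame by frame."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(N_MELS, 64, batch_first=True)
        self.classifier = nn.Linear(64, NUM_CLASSES)

    def forward(self, x):                      # x: (batch, N_FRAMES, N_MELS)
        _, h_last = self.gru(x)
        return self.classifier(h_last[-1])

def late_fusion(cnn_logits, rnn_logits):
    """Late fusion: average the class posteriors of the two networks."""
    probs = (torch.softmax(cnn_logits, dim=-1) +
             torch.softmax(rnn_logits, dim=-1)) / 2
    return probs.argmax(dim=-1), probs

# Usage with random features standing in for a real snippet
spec = torch.randn(1, 1, N_MELS, N_FRAMES)
cnn, rnn = SimpleCNN(), SimpleRNN()
pred, probs = late_fusion(cnn(spec), rnn(spec.squeeze(1).transpose(1, 2)))
```

Early fusion would instead combine the two networks' internal feature representations before a single classification layer; the posterior-averaging rule shown here is only one common choice for the late-fusion stage.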

Original language: English
Pages: 1-8
Number of pages: 8
Publication status: Published - 01.06.2019
Event: 2019 AES International Conference on Audio Forensics - Porto, Portugal
Duration: 18.06.2019 - 20.06.2019
Conference number: 150402

Conference

Conference: 2019 AES International Conference on Audio Forensics
Country/Territory: Portugal
City: Porto
Period: 18.06.19 - 20.06.19
