Representing Nonspeech Audio Signals through Speech Classification Models

Huy Phan, Lars Hertel, Marco Maass, Radoslaw Mazur, Alfred Mertins

Abstract

The human auditory system is very well matched to both human speech and environmental sounds. Therefore, the question arises whether human speech material may provide useful information for training systems for analyzing nonspeech audio signals, such as in a recognition task. To find out how similar nonspeech signals are to speech, we measure the closeness between target nonspeech signals and different basis speech categories via a speech classification model. The speech similarities are finally employed as a descriptor to represent the target signal. We further show that a better descriptor can be obtained by learning to organize the speech categories hierarchically with a tree structure. We conduct experiments for the audio event analysis application by using speech words from the TIMIT dataset to learn the descriptors for the audio events of the Freiburg-106 dataset. Our results on the event recognition task outperform those achieved by the best system even though a simple linear classifier is used. Furthermore, integrating the learned descriptors as an additional source leads to improved performance.
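The core idea of the abstract — scoring a nonspeech signal against basis speech categories with a speech classifier and using those similarity scores as a descriptor — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extraction, the synthetic Gaussian "speech word" data, and the choice of logistic regression as the speech classification model are all stand-in assumptions (the paper uses TIMIT words and audio events from Freiburg-106).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for acoustic feature vectors of speech words from K basis
# categories (in the paper: words drawn from TIMIT; here: synthetic
# Gaussian clusters, one cluster per category).
K, dim, n_per_class = 5, 20, 40
X_speech = np.vstack([
    rng.normal(loc=k, scale=1.0, size=(n_per_class, dim))
    for k in range(K)
])
y_speech = np.repeat(np.arange(K), n_per_class)

# Speech classification model trained on the basis speech categories.
speech_model = LogisticRegression(max_iter=1000).fit(X_speech, y_speech)

# Feature vector of a target nonspeech signal (hypothetical audio event).
x_event = rng.normal(loc=2.0, scale=1.0, size=(1, dim))

# The descriptor: posterior "speech similarity" of the target signal to
# each basis speech category. It can then feed a downstream event
# classifier (the paper reports results with a simple linear one).
descriptor = speech_model.predict_proba(x_event)[0]
print(descriptor.shape)  # one similarity score per speech category
```

The descriptor is a probability vector of length K, so it sums to one; the paper's hierarchical variant additionally organizes the K speech categories in a learned tree structure before deriving the descriptor.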
Original language: English
Title: Proc. 16th Annual Conference of the International Speech Communication Association (INTERSPEECH 2015)
Number of pages: 5
Place of publication: Dresden, Germany
Publisher: International Speech Communication Association (ISCA)
Publication date: 01.09.2015
Pages: 3441-3445
Publication status: Published - 01.09.2015
Event: 16th Annual Conference of the International Speech Communication Association - International Congress Center Dresden, Dresden, Germany
Duration: 06.09.2015 - 10.09.2015
Conference number: 118697
