Cross-Modal Music-Emotion Retrieval Using DeepCCA

Naoki Takashima*, Frédéric Li, Kimiaki Shirahama, Marcin Grzegorzek

*Corresponding author for this work

Abstract

Music-emotion retrieval is important for music-based treatment of mood disorders and depression. Although existing approaches only investigate one-way retrieval from music to emotions, retrieval from emotion to music may be needed for proper application of music-based therapies. This can be achieved by sensor-based music retrieval, which first recognises a specific emotion from physiological data acquired by wearable sensors and then identifies music suitable for that emotion. In this paper, we propose Cross-modal Music-emotion Retrieval (CMR) as the first step towards retrieval of music based on sensor data. Our approach uses Deep Canonical Correlation Analysis (DeepCCA), which projects music samples and their associated emotion sequences into a common space using deep neural networks, and maximises the correlation between projected music samples and emotion sequences using CCA. Our experiments show the superiority of our approach for CMR compared to one-way retrieval.
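The core of the DeepCCA objective described above can be sketched as follows: the CCA part of the loss, computed on the outputs of the two projection networks. This is a minimal illustrative sketch, not the paper's implementation; the function name, the simulated data, and the small ridge regularisation constant are our own assumptions.

```python
import numpy as np

def cca_correlations(X, Y, reg=1e-4):
    """Canonical correlations between two views X (n, dx) and Y (n, dy).

    In DeepCCA, X and Y would be the outputs of the two deep networks
    (projected music samples and emotion sequences), and the training
    loss is the negative sum of these correlations. `reg` is a small
    ridge term for numerical stability (our assumption, not from the paper).
    """
    n = X.shape[0]
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / (n - 1)
    # Whiten each view with its inverse Cholesky factor; the singular
    # values of the whitened cross-covariance are the canonical correlations.
    Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    return np.linalg.svd(M, compute_uv=False)

# Simulated stand-in for network outputs: two views sharing one latent
# factor, so the first canonical correlation should be close to 1.
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 1))
X = np.hstack([z, rng.standard_normal((500, 2))])
Y = np.hstack([z + 0.1 * rng.standard_normal((500, 1)),
               rng.standard_normal((500, 2))])
corrs = cca_correlations(X, Y)
deepcca_loss = -corrs.sum()  # the quantity DeepCCA minimises
```

In the full method, gradients of this loss would be backpropagated through both projection networks so that the learned common space maximally correlates the two modalities.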
Original language: English
Title: Information Technology in Biomedicine
Number of pages: 12
Volume: 1186
Publisher: Springer Verlag
Publication date: 03.09.2020
Pages: 133-145
DOIs
Publication status: Published - 03.09.2020
