Sparse gammatone signal model optimized for English speech does not match the human auditory filters

Stefan Strahl*, Alfred Mertins

*Corresponding author for this work

Abstract

Evidence that neurosensory systems use sparse signal representations, together with the improved performance of signal processing algorithms built on sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals such as speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large-data-set applications. We show that a sparse signal model has general advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each signal class instead of using psychometrically derived parameters. For brain research, this means that care should be taken when directly transferring findings of optimality from technical to biological systems.
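The abstract's core machinery — a gammatone dictionary searched greedily by matching pursuit — can be illustrated with a minimal sketch. This is not the paper's accelerated algorithm or its optimized dictionary parameters; the atom duration, center frequencies, and bandwidths below are illustrative assumptions, and the toy signal is synthetic.

```python
import numpy as np

def gammatone_atom(fs, f_c, b, order=4, duration=0.05):
    """Unit-norm gammatone atom: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f_c*t)."""
    t = np.arange(int(duration * fs)) / fs
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f_c * t)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, atoms, n_iter=10):
    """Greedy matching pursuit: at each step, pick the atom with the largest
    inner product with the residual and subtract its projection."""
    residual = signal.copy()
    coeffs = []
    for _ in range(n_iter):
        inner = atoms @ residual               # correlations with every atom
        k = int(np.argmax(np.abs(inner)))      # best-matching atom index
        coeffs.append((k, inner[k]))
        residual = residual - inner[k] * atoms[k]
    return coeffs, residual

fs = 8000
centers = np.geomspace(100, 3000, 32)          # illustrative center frequencies
atoms = np.stack([gammatone_atom(fs, fc, b=0.1 * fc) for fc in centers])

# Toy "speech-like" signal: a weighted sum of two gammatone atoms plus noise.
rng = np.random.default_rng(0)
signal = 2.0 * atoms[5] + 1.0 * atoms[20] \
         + 0.01 * rng.standard_normal(atoms.shape[1])

coeffs, residual = matching_pursuit(signal, atoms, n_iter=5)
```

Because the two planted atoms are well separated in frequency, the greedy search recovers them in the first iterations and the residual energy drops toward the noise floor — the sparseness criterion the abstract uses to compare gammatone and MDCT models.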

Original language: English
Journal: Brain Research
Volume: 1220
Pages (from-to): 224-233
Number of pages: 10
ISSN: 0006-8993
DOIs
Publication status: Published - 18.07.2008
