The Support Feature Machine: Classification with the Least Number of Features and Application to Neuroimaging Data

Sascha Klement*, Silke Anders, Thomas Martinetz

*Corresponding author for this work

Abstract

By minimizing the zero-norm of the separating hyperplane, the support feature machine (SFM) finds the smallest subspace (the least number of features) of a data set such that within this subspace, two classes are linearly separable without error. In this way, the dimensionality of the data is reduced more efficiently than with support-vector-based feature selection, as can be shown both theoretically and empirically. In this letter, we first provide a new formulation of the previously introduced concept of the SFM. With this new formulation, classification of unbalanced and nonseparable data is straightforward, which allows the SFM to be used for feature selection and classification in a large variety of different scenarios. To illustrate how the SFM can be used to identify both the smallest subset of discriminative features and the total number of informative features in biological data sets, we apply repetitive feature selection based on the SFM to a functional magnetic resonance imaging data set. We suggest that these capabilities qualify the SFM as a universal method for feature selection, especially for high-dimensional, small-sample-size data sets that often occur in biological and medical applications.
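The abstract describes selecting the sparsest separating hyperplane by minimizing its zero-norm. Since the exact formulation from the paper is not reproduced here, the following is only a minimal sketch of the general idea, using the common l1-norm relaxation of the zero-norm solved as a linear program with SciPy; the toy data set, variable names, and selection tolerance are illustrative assumptions, not the authors' SFM implementation.

```python
# Sketch: sparse linear separation via an l1 relaxation of the zero-norm,
# solved as a linear program (assumed illustration, not the paper's SFM).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy data: n samples, d features, only the first 3 features are informative.
n, d = 40, 100
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:3] = [1.0, -1.0, 2.0]
y = np.sign(X @ w_true)                     # labels in {-1, +1}

# Variables z = [w (d), b (1), u (d)]: minimize sum(u) with u >= |w|,
# subject to y_i (w . x_i + b) >= 1 (separation rescaled to a unit margin).
c = np.concatenate([np.zeros(d + 1), np.ones(d)])

# Margin constraints rewritten as  -y_i (x_i . w + b) <= -1
A_margin = np.hstack([-y[:, None] * X, -y[:, None], np.zeros((n, d))])
b_margin = -np.ones(n)

# |w_j| <= u_j  rewritten as  w_j - u_j <= 0  and  -w_j - u_j <= 0
I = np.eye(d)
A_abs = np.vstack([
    np.hstack([ I, np.zeros((d, 1)), -I]),
    np.hstack([-I, np.zeros((d, 1)), -I]),
])
b_abs = np.zeros(2 * d)

A_ub = np.vstack([A_margin, A_abs])
b_ub = np.concatenate([b_margin, b_abs])

# w and b are free, the auxiliary variables u are nonnegative.
bounds = [(None, None)] * (d + 1) + [(0, None)] * d

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w, b = res.x[:d], res.x[d]

# Features with nonzero weight (up to an assumed tolerance) are "selected".
selected = np.flatnonzero(np.abs(w) > 1e-6)
print("selected features:", selected)       # ideally a small set containing 0, 1, 2
```

The l1 relaxation shown here recovers a sparse separating hyperplane but only approximates zero-norm minimization; the SFM paper addresses the zero-norm objective itself, as well as unbalanced and nonseparable data, which this sketch does not cover.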
Original language: English
Journal: Neural Computation
Volume: 25
Issue number: 6
Pages (from-to): 1548-1584
Number of pages: 37
ISSN: 0899-7667
DOIs
Publication status: Published - 21.05.2013
