Abstract
Recently, the Support Feature Machine (SFM) was proposed as a novel approach to feature selection for classification. It approximates the weight vector of a separating hyperplane with minimal zero-norm by minimising its one-norm instead. In contrast to the L1-SVM, it uses an additional constraint based on the average of the data points. In experiments on artificial datasets we observe that the SFM is clearly superior, returning fewer features and a larger percentage of truly relevant features. Here, we derive a necessary condition for the zero-norm and one-norm solutions to coincide. Based on this condition, the superiority can be made plausible.
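The one-norm surrogate described in the abstract amounts to a linear program. The sketch below illustrates one plausible reading of it, not the authors' exact formulation: minimise the one-norm of the weight vector subject to every label-scaled sample lying on the non-negative side of the hyperplane, plus an equality constraint on the mean of the label-scaled data points (my interpretation of the "average of data points" constraint). The function name and the use of `scipy.optimize.linprog` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def sfm_like_l1_hyperplane(X, y):
    """Sketch of an L1-norm surrogate for the zero-norm hyperplane problem.

    Minimises ||w||_1 subject to y_i * (w . x_i) >= 0 for all samples and
    m . w = 1, where m is the mean of the label-scaled data points.
    This is an assumed reading of the SFM constraint, not the paper's
    exact formulation.
    """
    n, d = X.shape
    Z = y[:, None] * X          # label-scaled samples z_i = y_i * x_i
    m = Z.mean(axis=0)          # their mean, used in the equality constraint

    # Split w = w_plus - w_minus with w_plus, w_minus >= 0 so that
    # ||w||_1 = sum(w_plus + w_minus) becomes a linear objective.
    c = np.ones(2 * d)

    # Classification constraints: Z w >= 0  <=>  -[Z, -Z] [w+; w-] <= 0
    A_ub = -np.hstack([Z, -Z])
    b_ub = np.zeros(n)

    # Mean constraint: m . w = 1
    A_eq = np.hstack([m, -m]).reshape(1, -1)
    b_eq = np.array([1.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (2 * d), method="highs")
    w = res.x[:d] - res.x[d:]
    return w                    # sparse weight vector; nonzero entries = selected features
```

Because the equality constraint rules out the trivial solution w = 0, the linear program returns a sparse weight vector whose nonzero entries can be read as the selected features.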
Original language | English |
---|---|
Title of host publication | Artificial Neural Networks and Machine Learning – ICANN 2011 |
Editors | Timo Honkela, Włodzisław Duch, Mark Girolami, Samuel Kaski |
Number of pages | 8 |
Volume | 6791 |
Publisher | Springer Berlin Heidelberg |
Publication date | 2011 |
Pages | 315-322 |
ISBN (Print) | 978-3-642-21734-0 |
ISBN (Electronic) | 978-3-642-21735-7 |
DOIs | |
Publication status | Published - 2011 |
Event | 21st International Conference on Artificial Neural Networks, Espoo, Finland, 14.06.2011 – 17.06.2011 |