Abstract
This paper presents a novel cross-view gait recognition technique based on the spatiotemporal characteristics of human motion. We propose a deep fully connected neural network with unsupervised learning that transfers gait descriptors from multiple views to a single canonical view. The proposed non-linear network learns a single model for all videos captured from different viewpoints and finds a shared high-level virtual path that maps them to a single canonical view; consequently, the model requires no labels or viewpoint information during the learning phase. The network is trained only once, using the spatiotemporal motion features of gait sequences from several viewpoints, and is then used to construct the cross-view gait descriptors for the gallery and probe sets. The descriptors are classified with a simple linear support vector machine. Experiments on the benchmark cross-view gait dataset CASIA-B, together with comparisons against the state of the art, demonstrate that the proposed method outperforms existing cross-view gait recognition algorithms.
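The pipeline described in the abstract (an unsupervised fully connected network that maps descriptors from all viewpoints into one shared representation, whose output then feeds a linear classifier) can be sketched roughly as follows. This is a minimal illustrative sketch only: the descriptor dimension, network depth, activation, and training details are assumptions for demonstration, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each gait sequence yields a D-dimensional spatiotemporal
# descriptor; descriptors from all viewpoints are pooled without labels.
D, H = 16, 8          # descriptor size, shared latent ("virtual path") size
n = 64                # number of pooled training descriptors

X = rng.normal(size=(n, D))          # stand-in for multi-view gait descriptors

# One-hidden-layer fully connected autoencoder trained without labels or
# viewpoint information: the encoder maps any view into a shared latent
# space, the decoder reconstructs a canonical representation.
W1 = rng.normal(scale=0.1, size=(D, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, D)); b2 = np.zeros(D)

lr = 0.01
losses = []
for _ in range(200):
    Z = np.tanh(X @ W1 + b1)         # encode: shared high-level representation
    Y = Z @ W2 + b2                  # decode: canonical-view reconstruction
    err = Y - X                      # unsupervised reconstruction error
    losses.append((err ** 2).mean())
    # Gradient descent on the mean squared reconstruction loss.
    gW2 = Z.T @ err / n; gb2 = err.mean(axis=0)
    dZ = (err @ W2.T) * (1 - Z ** 2)
    gW1 = X.T @ dZ / n;  gb1 = dZ.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def cross_view_descriptor(x):
    """Encode a gallery or probe descriptor into the shared latent space."""
    return np.tanh(x @ W1 + b1)

gallery = cross_view_descriptor(X)   # these encodings would feed a linear SVM
```

After this unsupervised step, the encoded gallery and probe descriptors would be passed to a linear SVM for the final classification, as the abstract describes.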
Original language | German |
---|---|
Title | 25th IEEE International Conference on Image Processing (ICIP) 2018 |
Publisher | IEEE |
Publication date | 29.08.2018 |
Pages | 773 - 777 |
DOIs | |
Publication status | Published - 29.08.2018 |
Event | 25th IEEE International Conference on Image Processing (ICIP) 2018 - Athens, Greece. Duration: 07.10.2018 → 10.10.2018 |