Cross-View Gait Recognition Using Non-Linear View Transformations of Spatiotemporal Features

Muhammad Hassan Khan, Muhammad Shahid Farid, Maryiam Zahoor, Marcin Grzegorzek

Abstract

This paper presents a novel cross-view gait recognition technique based on the spatiotemporal characteristics of human motion. We propose a deep fully-connected neural network with unsupervised learning which transfers gait descriptors from multiple views to a single canonical view. The proposed non-linear network learns a single model for all videos captured from different viewpoints and finds a shared high-level virtual path to map them onto a single canonical view. Therefore, the model does not require any labels or viewpoint information in the learning phase. The network is trained only once using the spatiotemporal motion features of gait sequences from several viewpoints, and is later used to construct the cross-view gait descriptors for the gallery and probe sets. The descriptors are classified using a simple linear support vector machine. Experiments carried out on the benchmark cross-view gait dataset, CASIA-B, and comparisons with the state of the art demonstrate that the proposed method outperforms existing cross-view gait recognition algorithms.
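The core idea in the abstract, an unsupervised fully-connected network that maps multi-view gait descriptors to a shared representation without view labels, can be sketched as a one-hidden-layer autoencoder trained on pooled descriptors from all viewpoints. This is a minimal NumPy sketch, not the authors' implementation: the descriptors here are random stand-ins for the paper's spatiotemporal features, the network size and training schedule are invented, and the downstream linear-SVM classification step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for spatiotemporal gait descriptors pooled across
# several viewpoints (the real features come from gait video sequences).
n_samples, n_features, n_hidden = 200, 64, 16
X = rng.normal(size=(n_samples, n_features))

# One-hidden-layer autoencoder: training is unsupervised (reconstruction
# loss), so no identity or viewpoint labels are needed, matching the
# abstract's description. The hidden activations serve as the shared
# cross-view representation.
W1 = rng.normal(scale=0.1, size=(n_features, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_features))
lr = 0.01

def forward(X, W1, W2):
    H = np.tanh(X @ W1)          # shared hidden representation
    return H, H @ W2             # reconstruction of the input features

losses = []
for _ in range(500):
    H, X_hat = forward(X, W1, W2)
    err = X_hat - X              # reconstruction error
    losses.append(np.mean(err ** 2))
    # Backpropagation through the two weight matrices
    gW2 = H.T @ err / n_samples
    gH = (err @ W2.T) * (1 - H ** 2)   # tanh derivative
    gW1 = X.T @ gH / n_samples
    W1 -= lr * gW1
    W2 -= lr * gW2

# Hidden activations = cross-view descriptors for gallery and probe sets;
# in the paper these would then be classified with a linear SVM.
descriptors, _ = forward(X, W1, W2)
print(descriptors.shape)
```

Because the reconstruction objective is shared across all viewpoints, samples from different views are pushed through the same bottleneck, which is the intuition behind a "shared high-level virtual path" to a canonical view.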

Original language: German
Title: 25th IEEE International Conference on Image Processing (ICIP) 2018
Publisher: IEEE
Publication date: 29.08.2018
Pages: 773 - 777
Publication status: Published - 29.08.2018
Event: 25th IEEE International Conference on Image Processing (ICIP) 2018 - Athens, Greece
Duration: 07.10.2018 - 10.10.2018
