A non-linear view transformations model for cross-view gait recognition

Muhammad Hassan Khan*, Muhammad Shahid Farid, Marcin Grzegorzek

*Corresponding author for this work

Abstract

Gait has emerged as an important biometric feature capable of identifying individuals at a distance without requiring any interaction with the system. Various factors such as clothing, shoes, and the walking surface can affect the performance of gait recognition. However, cross-view gait recognition is particularly challenging, as the appearance of an individual's walk changes drastically with the viewpoint. In this paper, we present a novel view-invariant gait representation for cross-view gait recognition based on the spatiotemporal motion characteristics of human walk. The proposed technique trains a deep fully connected neural network to transform gait descriptors from multiple viewpoints to a single canonical view. It learns a single model for all videos captured from different viewpoints and finds a shared high-level virtual path to project them onto a single canonical view. The network is trained only once on the spatiotemporal gait representations and is then applied to test gait sequences to construct their view-invariant gait descriptors, which are used for cross-view gait recognition. The experimental evaluation is carried out on two large benchmark cross-view gait datasets, CASIA-B and the OU-ISIR Large Population dataset, and the results are compared with current state-of-the-art methods. The results show that the proposed algorithm outperforms the state of the art in cross-view gait recognition.
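To make the described pipeline concrete, the following is a minimal sketch (in PyTorch) of the kind of fully connected view-transformation network the abstract outlines: a single shared model that maps a gait descriptor observed under an arbitrary viewpoint to its canonical-view counterpart. The descriptor dimensionality, layer widths, and MSE reconstruction objective are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a non-linear view-transformation model: one shared fully
# connected network projects descriptors from all viewpoints onto a
# single canonical view. Sizes and loss are illustrative assumptions.
import torch
import torch.nn as nn

class ViewTransformNet(nn.Module):
    def __init__(self, descriptor_dim=1024, hidden_dim=512):
        super().__init__()
        # The hidden layers serve as the shared high-level "virtual
        # path" common to every source viewpoint.
        self.net = nn.Sequential(
            nn.Linear(descriptor_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, descriptor_dim),
        )

    def forward(self, x):
        return self.net(x)

# Training-step sketch: x_view and x_canonical are descriptors of the
# same walking sequence captured from an arbitrary view and from the
# canonical view, respectively.
def train_step(model, optimizer, x_view, x_canonical):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x_view), x_canonical)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once such a model is trained, it is applied unchanged to every probe descriptor regardless of its capture angle, so cross-view matching reduces to comparing descriptors in the canonical view with a standard distance measure.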

Original language: English
Journal: Neurocomputing
Volume: 402
Pages (from-to): 100-111
Number of pages: 12
ISSN: 0925-2312
DOIs
Publication status: Published - 18.08.2020
