Robust Cross-view Gait Identification with Evidence: A Discriminant Gait GAN Approach

Machine Learning

Abstract

Our DiGGAN framework learns view-invariant gait features and significantly outperforms state-of-the-art algorithms on various cross-view gait identification scenarios.

Method

We evaluated our DiGGAN framework on the world's largest multi-view gait dataset, OU-MVLP, which includes more than 10,000 subjects.

Takeaways

This study represents a breakthrough towards reliable cross-view gait recognition at a very large scale, with generated evidence for practical applications.

Gait is an important biometric trait for surveillance and forensic applications, which can be used to identify individuals at a large distance through CCTV cameras. However, it is very difficult to develop robust automated gait recognition systems, since gait may be affected by many covariate factors such as clothing, walking surface, walking speed, camera view angle, etc.

Among these, view angle is deemed the most challenging factor, since a large change in view can alter the overall gait appearance substantially. Recently, deep learning approaches (such as CNNs) have been employed to extract view-invariant features, achieving encouraging results on small datasets.

However, they do not scale well to large datasets: their performance decreases significantly as the number of subjects grows, which makes them impractical for large-scale surveillance applications.

To address this issue, in this work we propose a Discriminant Gait Generative Adversarial Network (DiGGAN) framework, which not only learns view-invariant gait features for cross-view gait recognition tasks, but can also reconstruct the gait templates in all views, serving as important evidence for forensic applications.
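The abstract does not spell out DiGGAN's architecture, so the following is only a hypothetical sketch of the general idea it describes: an encoder maps a gait template (e.g., a gait energy image, GEI) to a view-invariant identity feature used for matching, and a generator conditioned on a target view code reconstructs the template at any of the camera views. All dimensions, the GEI resolution, and the random linear layers below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 128   # dimensionality of the identity embedding (assumed)
NUM_VIEWS = 14   # OU-MVLP provides 14 camera views
H, W = 64, 44    # GEI resolution (assumed for illustration)

# Hypothetical encoder: flattens a GEI and projects it to an identity embedding.
W_enc = rng.normal(0, 0.01, size=(H * W, FEAT_DIM))

def encode(gei):
    """Map a gait template to a (nominally) view-invariant feature vector."""
    return np.tanh(gei.reshape(-1) @ W_enc)

# Hypothetical conditional generator: identity feature + one-hot view -> GEI.
W_gen = rng.normal(0, 0.01, size=(FEAT_DIM + NUM_VIEWS, H * W))

def generate(feature, view_idx):
    """Reconstruct a gait template under the requested view angle."""
    view_code = np.eye(NUM_VIEWS)[view_idx]
    z = np.concatenate([feature, view_code])
    out = 1.0 / (1.0 + np.exp(-(z @ W_gen)))  # sigmoid keeps pixels in [0, 1]
    return out.reshape(H, W)

# Cross-view identification: probe and gallery templates, possibly captured
# under different views, are compared in the shared feature space.
probe_gei = rng.random((H, W))
gallery_gei = rng.random((H, W))
f_probe, f_gallery = encode(probe_gei), encode(gallery_gei)
similarity = f_probe @ f_gallery / (
    np.linalg.norm(f_probe) * np.linalg.norm(f_gallery)
)

# "Evidence" generation: render the probe's template at every view angle.
evidence = [generate(f_probe, v) for v in range(NUM_VIEWS)]
```

In a trained system the encoder and generator would be optimised adversarially with discriminative (identity) supervision; here the weights are random, so only the data flow, not the recognition quality, is meaningful.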

We evaluated our DiGGAN framework on the world's largest multi-view gait dataset, OU-MVLP (which includes more than 10,000 subjects), where it significantly outperforms state-of-the-art algorithms on various cross-view gait identification scenarios (e.g., cooperative and uncooperative modes).

Our DiGGAN framework also achieves the best results on the popular CASIA-B dataset, and it shows strong generalisation capability across different datasets.

Overall, this paper makes a breakthrough towards reliable cross-view gait recognition at a very large scale, with generated evidence for practical applications.