Gait transformation network for gait de-identification with pose preservation

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Gait and face are the two major biometric traits that must be de-identified to preserve an individual's privacy and conceal his/her identity. While face de-identification has been studied extensively, no research has been conducted on gait de-identification to date. The few existing body/silhouette de-identification approaches rely on blurring and other primitive image-processing techniques that are not robust to varying input environments and tend to remove non-biometric features such as appearance and activity. In this paper, we propose a deep learning-based solution to the gait de-identification problem. First, a set of key walking poses is determined from a large gallery set. Given an input sequence, a graph-based path search algorithm then classifies each frame of the sequence into the appropriate key pose. A random frame with a matching key pose, chosen from the subset of gallery sequences, is taken as the target frame. The dense pose features of the input and target frames are then fused by our proposed gait transformation network (GTNet), which is trained with a combination of perceptual loss, L1 loss, and adversarial loss. The model is trained and tested on the RGB sequences of the TUM-GAID and CASIA-B gait datasets, and promising de-identification results are obtained both qualitatively and quantitatively.
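The abstract states that GTNet is trained with a weighted combination of perceptual, L1, and adversarial losses. The sketch below illustrates what such a combined generator objective typically looks like; the loss weights, the non-saturating form of the adversarial term, and the toy feature extractor (standing in for a pretrained network such as VGG) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def l1_loss(generated, target):
    """Mean absolute pixel difference between generated and target frames."""
    return np.mean(np.abs(generated - target))

def perceptual_loss(generated, target, feature_fn):
    """Squared distance between feature representations; in practice the
    features would come from a pretrained network (e.g., VGG activations)."""
    return np.mean((feature_fn(generated) - feature_fn(target)) ** 2)

def gtnet_generator_loss(generated, target, disc_score, feature_fn,
                         w_perc=1.0, w_l1=10.0, w_adv=0.1):
    """Weighted sum of perceptual, L1, and adversarial terms.
    disc_score is the discriminator's estimated probability that the
    generated frame is real; the adversarial term uses the standard
    non-saturating -log D(G(x)) form (an assumption, not from the paper)."""
    adv = -np.log(disc_score + 1e-8)
    return (w_perc * perceptual_loss(generated, target, feature_fn)
            + w_l1 * l1_loss(generated, target)
            + w_adv * adv)

# Toy stand-in for a pretrained feature extractor: per-channel means.
toy_features = lambda img: img.mean(axis=(0, 1))

rng = np.random.default_rng(0)
generated = rng.random((64, 64, 3))   # hypothetical generated frame
target = rng.random((64, 64, 3))      # hypothetical target frame
loss = gtnet_generator_loss(generated, target, disc_score=0.5,
                            feature_fn=toy_features)
```

The L1 term encourages pixel-level fidelity to the target frame, the perceptual term matches higher-level appearance features, and the adversarial term pushes the output toward the distribution the discriminator accepts as real.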
Original language: English
Pages (from-to): 1753-1761
Number of pages: 9
Journal: Signal, Image and Video Processing
Volume: 17
Issue number: 5
DOIs
State: Published - Jul 1 2023

Keywords

  • CGAN
  • GTNet
  • Gait de-identification
  • Key pose mapping
  • Pose and appearance preservation
