TY - JOUR
T1 - Gait transformation network for gait de-identification with pose preservation
AU - Halder, Agrya
AU - Chattopadhyay, Pratik
AU - Kumar, Sathish
PY - 2023/7/1
Y1 - 2023/7/1
N2 - Gait and face are the two major biometric features that must be de-identified to preserve an individual's privacy and conceal his/her identity. Unlike face de-identification, gait de-identification has received no research attention to date. A few existing body/silhouette de-identification approaches use blurring and other primitive image-processing techniques that are not robust to varying input environments and tend to remove non-biometric features such as appearance and activity. In this paper, we propose a plausible deep-learning-based solution to the gait de-identification problem. First, a set of key walking poses is determined from a large gallery set. Next, given an input sequence, a graph-based path-search algorithm is employed to classify each frame of the sequence into the appropriate key pose. Then, a random frame with a matching key pose, chosen from the subset of gallery sequences, is taken as the target frame. The dense-pose features of the input and target frames are then fused using our proposed gait transformation network (GTNet), which is trained using a combination of perceptual loss, L1 loss, and adversarial loss. Training and testing of the model have been conducted using the RGB sequences in the TUM-GAID and CASIA-B gait datasets. Promising de-identification results are obtained both qualitatively and quantitatively.
AB - Gait and face are the two major biometric features that must be de-identified to preserve an individual's privacy and conceal his/her identity. Unlike face de-identification, gait de-identification has received no research attention to date. A few existing body/silhouette de-identification approaches use blurring and other primitive image-processing techniques that are not robust to varying input environments and tend to remove non-biometric features such as appearance and activity. In this paper, we propose a plausible deep-learning-based solution to the gait de-identification problem. First, a set of key walking poses is determined from a large gallery set. Next, given an input sequence, a graph-based path-search algorithm is employed to classify each frame of the sequence into the appropriate key pose. Then, a random frame with a matching key pose, chosen from the subset of gallery sequences, is taken as the target frame. The dense-pose features of the input and target frames are then fused using our proposed gait transformation network (GTNet), which is trained using a combination of perceptual loss, L1 loss, and adversarial loss. Training and testing of the model have been conducted using the RGB sequences in the TUM-GAID and CASIA-B gait datasets. Promising de-identification results are obtained both qualitatively and quantitatively.
KW - CGAN
KW - GTNet
KW - Gait de-identification
KW - Key pose mapping
KW - Pose and appearance preservation
UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85141762507&origin=inward
UR - https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=85141762507&origin=inward
U2 - 10.1007/s11760-022-02386-x
DO - 10.1007/s11760-022-02386-x
M3 - Article
SN - 1863-1703
VL - 17
SP - 1753
EP - 1761
JO - Signal, Image and Video Processing
JF - Signal, Image and Video Processing
IS - 5
ER -