Long-Term Cloth-Changing Person Re-identification

Fudan University     University of Oxford     University of Surrey


Person re-identification (Re-ID) aims to match a target person across camera views at different locations and times. Existing Re-ID studies focus on the short-term cloth-consistent setting, under which a person re-appears in different camera views with the same outfit. A discriminative feature representation learned by existing deep Re-ID models is thus dominated by the visual appearance of clothing. In this work, we focus on a much more difficult yet practical setting where person matching is conducted over a long duration, e.g., days or months, and is therefore inevitably subject to the new challenge of clothing changes. This problem, termed Long-Term Cloth-Changing (LTCC) Re-ID, is understudied due to the lack of large-scale datasets. The first contribution of this work is a new LTCC dataset containing people captured over a long period of time with frequent clothing changes. As a second contribution, we propose a novel Re-ID method specifically designed to address the cloth-changing challenge. Specifically, we consider that under clothing changes, soft biometrics such as body shape are more reliable. We therefore introduce a shape embedding module as well as a cloth-elimination shape-distillation module, aiming to eliminate the now-unreliable clothing appearance features and focus on body shape information. Extensive experiments show that the proposed model achieves superior performance on the new LTCC dataset.


Illustration of the long-term cloth-changing Re-ID task and dataset. The task is to match the same person under clothing changes across different views, and the dataset contains the same identities wearing diverse clothes.

Long-Term Cloth-Changing (LTCC) Dataset

To facilitate the study of Long-Term Cloth-Changing (LTCC) Re-ID, we collect a new LTCC person Re-ID dataset. LTCC contains 17,138 person images of 152 identities, and each identity is captured by at least two cameras. To further explore the cloth-changing Re-ID scenario, we assume that different people will not wear identical outfits (however visually similar they may be), and annotate each image with a cloth label as well. Note that changes in hairstyle or carried items, e.g., a hat, bag or laptop, do not affect the cloth label. Finally, depending on whether there is a clothing change, the dataset can be divided into two subsets: a cloth-changing subset in which 91 persons appear with 417 different outfits across 14,756 images, and a cloth-consistent subset containing the remaining 61 identities in 2,382 images without outfit changes. On average, each cloth-changing person has about 5 different outfits, with the number of outfits per person ranging from 2 to 14.
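The subset statistics above can be cross-checked with a few lines of arithmetic (a quick sanity check written for this page, not part of the released code):

```python
# Reported LTCC statistics.
cloth_change = {"ids": 91, "outfits": 417, "images": 14756}
cloth_consistent = {"ids": 61, "images": 2382}

total_ids = cloth_change["ids"] + cloth_consistent["ids"]       # 91 + 61
total_images = cloth_change["images"] + cloth_consistent["images"]  # 14756 + 2382
avg_outfits = cloth_change["outfits"] / cloth_change["ids"]     # 417 / 91

print(total_ids)                 # 152 identities in total
print(total_images)              # 17138 images in total
print(round(avg_outfits, 1))    # about 5 outfits per cloth-changing person
```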

LTCC Dataset

Examples of people wearing the same and different clothes in the LTCC dataset. There are various illumination, occlusion, camera-view, carrying and pose changes. The dataset will be released soon.

Cloth-Elimination Shape-Distillation (CESD) Module

With clothing changes now commonplace in LTCC Re-ID, existing Re-ID models are expected to struggle because they assume that clothing appearance is consistent and rely on clothing features to distinguish people from each other. Our key idea is to completely remove cloth-appearance-related information and focus only on body shape information, which is insensitive to view and pose changes. To this end, we introduce a Shape Embedding (SE) module to aid shape feature extraction and a Cloth-Elimination Shape-Distillation (CESD) module to eliminate cloth-related information.

Illustration of our framework and the details of the Cloth-Elimination Shape-Distillation (CESD) module. Here, we introduce the Shape Embedding (SE) module to extract structural features from human keypoints, followed by learning identity-sensitive and cloth-insensitive representations using the CESD module.
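The data flow above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the toy dimensions, the two-layer MLP for the shape embedding, and the simple sigmoid gate that splits appearance features into identity and cloth branches are all our own assumptions, meant only to show how a shape feature conditioned on keypoints can suppress cloth-related channels.

```python
import numpy as np

rng = np.random.default_rng(0)

def shape_embedding(keypoints, W1, W2):
    """SE sketch (assumption): flatten 2-D keypoints, pass through a ReLU MLP
    to obtain a body-shape feature vector."""
    h = np.maximum(keypoints.reshape(-1) @ W1, 0.0)
    return h @ W2

def cesd(appearance, shape, Wg):
    """CESD sketch (assumption): a shape-conditioned sigmoid gate keeps
    shape-consistent (identity) channels of the appearance feature and
    routes the cloth-related residual into a branch to be discarded."""
    gate = 1.0 / (1.0 + np.exp(-(np.concatenate([appearance, shape]) @ Wg)))
    identity_feat = gate * appearance        # shape-distilled, kept for matching
    cloth_feat = (1.0 - gate) * appearance   # cloth-related, eliminated
    return identity_feat, cloth_feat

# Toy dimensions (assumptions): 17 keypoints, 64-d shape, 128-d appearance.
K, D_s, D_a = 17, 64, 128
W1 = rng.standard_normal((K * 2, 256)) * 0.1
W2 = rng.standard_normal((256, D_s)) * 0.1
Wg = rng.standard_normal((D_a + D_s, D_a)) * 0.1

kps = rng.standard_normal((K, 2))   # stand-in for detected human keypoints
app = rng.standard_normal(D_a)      # stand-in for a backbone appearance feature

s = shape_embedding(kps, W1, W2)
idf, clf = cesd(app, s, Wg)
assert np.allclose(idf + clf, app)  # the gate splits features, never invents them
```

In the real model the two branches are trained with identity and cloth labels so that the gate learns what to eliminate; here the weights are random and the sketch only demonstrates the routing.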

Paper, code and dataset

Long-Term Cloth-Changing Person Re-identification

Xuelin Qian, Wenxuan Wang, Li Zhang, Fangrui Zhu, Yanwei Fu, Tao Xiang, Yu-Gang Jiang, Xiangyang Xue.

[Paper] [Bibtex] [Code and Dataset] (coming soon)



The website is modified from this template.