TF-CLIP: Learning Text-Free CLIP for Video-Based Person Re-identification

Authors

  • Chenyang Yu, School of Information and Communication Engineering, Dalian University of Technology, Dalian, China
  • Xuehu Liu, School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, China
  • Yingquan Wang, School of Information and Communication Engineering, Dalian University of Technology, Dalian, China
  • Pingping Zhang, School of Future Technology / School of Artificial Intelligence, Dalian University of Technology, Dalian, China
  • Huchuan Lu, School of Information and Communication Engineering, Dalian University of Technology, Dalian, China; School of Future Technology / School of Artificial Intelligence, Dalian University of Technology, Dalian, China; Ningbo Institute, Dalian University of Technology, Ningbo, China

DOI:

https://doi.org/10.1609/aaai.v38i7.28500

Keywords:

CV: Image and Video Retrieval, CV: Representation Learning for Vision

Abstract

Large-scale language-image pre-trained models (e.g., CLIP) have shown superior performance on many cross-modal retrieval tasks. However, transferring the knowledge learned by such models to video-based person re-identification (ReID) has barely been explored. Moreover, current ReID benchmarks lack high-quality text descriptions. To address these issues, we propose TF-CLIP, a novel one-stage text-free CLIP-based learning framework for video-based person ReID. More specifically, we extract identity-specific sequence features as a CLIP-Memory to replace the text features, and we design a Sequence-Specific Prompt (SSP) module to update the CLIP-Memory online. To capture temporal information, we further propose a Temporal Memory Diffusion (TMD) module, which consists of two key components: Temporal Memory Construction (TMC) and Memory Diffusion (MD). Technically, TMC allows the frame-level memories in a sequence to communicate with each other and extracts temporal information from the relations within the sequence. MD then diffuses the temporal memories to each token in the original features, yielding more robust sequence features. Extensive experiments demonstrate that our method outperforms other state-of-the-art methods on MARS, LS-VID and iLIDS-VID.
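
To make the abstract's two-step TMD design concrete, below is a minimal PyTorch-style sketch of the idea: frame-level memories first attend to one another across time (TMC), and the resulting temporal memories are then diffused back into every patch token (MD). All class names, the mean-pooling choice for building frame memories, and the tensor dimensions are illustrative assumptions, not the authors' implementation.

    # Minimal sketch of the Temporal Memory Diffusion (TMD) idea from the
    # abstract. Hypothetical design: TMC = self-attention over per-frame
    # memory tokens; MD = cross-attention from all patch tokens to the
    # temporal memories. NOT the authors' released code.
    import torch
    import torch.nn as nn

    class TemporalMemoryDiffusion(nn.Module):
        def __init__(self, dim: int = 512, heads: int = 8):
            super().__init__()
            # TMC: lets the T frame-level memories of a sequence communicate.
            self.tmc = nn.MultiheadAttention(dim, heads, batch_first=True)
            # MD: diffuses the temporal memories into every token.
            self.md = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # tokens: (B, T, N, D) patch tokens for B sequences of T frames.
            B, T, N, D = tokens.shape
            # One memory per frame (mean pooling is an assumption here).
            frame_mem = tokens.mean(dim=2)                               # (B, T, D)
            temporal_mem, _ = self.tmc(frame_mem, frame_mem, frame_mem)  # (B, T, D)
            flat = tokens.reshape(B, T * N, D)
            # Each token queries the temporal memories.
            diffused, _ = self.md(flat, temporal_mem, temporal_mem)      # (B, T*N, D)
            return self.norm(flat + diffused).reshape(B, T, N, D)

    if __name__ == "__main__":
        x = torch.randn(2, 8, 129, 512)            # 2 sequences, 8 frames, 129 tokens
        print(TemporalMemoryDiffusion()(x).shape)  # torch.Size([2, 8, 129, 512])

The residual connection keeps the original token features intact, so the diffusion step only adds temporal context rather than replacing the per-frame representation.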

Published

2024-03-24

How to Cite

Yu, C., Liu, X., Wang, Y., Zhang, P., & Lu, H. (2024). TF-CLIP: Learning Text-Free CLIP for Video-Based Person Re-identification. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 6764-6772. https://doi.org/10.1609/aaai.v38i7.28500
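
For reference managers, an equivalent BibTeX entry (field values taken from the citation above; the entry key is arbitrary):

    @article{yu2024tfclip,
      author  = {Yu, Chenyang and Liu, Xuehu and Wang, Yingquan and Zhang, Pingping and Lu, Huchuan},
      title   = {TF-CLIP: Learning Text-Free CLIP for Video-Based Person Re-identification},
      journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
      volume  = {38},
      number  = {7},
      pages   = {6764--6772},
      year    = {2024},
      doi     = {10.1609/aaai.v38i7.28500}
    }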

Issue

Vol. 38 No. 7 (2024)

Section

AAAI Technical Track on Computer Vision VI