YouReID is a lightweight research framework that implements state-of-the-art person re-identification algorithms for several ReID tasks and provides strong baseline models.
- Concise and easy: a simple framework that is easy to use and customize. You can get started in 5 minutes.
- Highly efficient: mixed-precision and DistributedDataParallel training are supported. You can train the baseline model in 25 minutes using two 16GB V100 GPUs on the Market-1501 dataset.
- Strong: several baseline methods are supported, including baseline, PCB, and MGN. Notably, the baseline model achieves mAP=87.65% and rank-1=94.80% on the Market-1501 dataset.
- Rich model zoo: state-of-the-art methods for ReID tasks such as Occluded/UDA/Cross-modal are supported.
This project provides the following algorithms and scripts to run them. Please see the details via the link in the Description column.
Field | ABBRV | Algorithms | Description | Status |
---|---|---|---|---|
SL | CACENET | Devil's in the Details: Aligning Visual Clues for Conditional Embedding in Person Re-Identification | CACENET.md | finished |
SL | Pyramid | Pyramidal Person Re-IDentification via Multi-Loss Dynamic Training | CVPR-2019-Pyramid.md | finished |
Text | NAFS | Contextual Non-Local Alignment over Full-Scale Representation for Text-Based Person Search | NAFS.md | finished |
UDA | ACT | Asymmetric Co-Teaching for Unsupervised Cross Domain Person Re-Identification | AAAI-2020-ACT.md | coming soon |
Video | TSF | Rethinking Temporal Fusion for Video-based Person Re-identification on Semantic and Time Aspect | AAAI-2020-TSF.md | coming soon |
3D | Person-ReID-3D | Learning 3D Shape Feature for Texture-insensitive Person Re-identification | CVPR-2021-PR3D.md | waiting |
Occluded | PartNet | Human Pose Information Discretization for Occluded Person Re-Identification | PartNet.md | waiting |
You can also find these models in the model_zoo. Notably, we contributed some ReID samples to the OpenCV community; you can use these models in OpenCV, and you can also find them at ReID_extra_testdata.
Please install Python>=3.6 and PyTorch>=1.6.0.
Download the public datasets (like Market-1501 and DukeMTMC) and organize them in the following format:
File Directory:
├── partitions.pkl
├── images
│ ├── 0000000_0000_000000.png
│ ├── 0000001_0000_000001.png
│ ├── ...
- Rename the images following the convention "000000_000_000000.png", where the first substring (split by underscore) is the person identity; in the second substring, the first digit is the camera id and the rest is the track id; and the third substring is an image offset.
- The "partitions.pkl" file contains a Python dictionary storing metadata of the dataset, with the following key-value pairs:
  - "train_im_names": [list of image names] # names of the training images
  - "train_ids2labels": {"identity": label} # maps each person identity string to an integer label
  - "val_im_names": [list of image names] # names of the validation images
  - "test_im_names": [list of image names] # names of the testing images
  - "test_marks"/"val_marks": [list of 0/1] # 0/1 indicates whether an image is in the gallery
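The naming convention above can be illustrated with a small parser. This is only a sketch; `parse_reid_name` is a hypothetical helper, not part of the framework:

```python
def parse_reid_name(name):
    """Split an image file name like '0000001_0000_000001.png' into its parts.

    Per the convention above: the first underscore-separated substring is the
    person identity; in the second, the first digit is the camera id and the
    rest is the track id; the third is an image offset.
    """
    stem = name.rsplit(".", 1)[0]          # drop the extension
    identity, cam_track, offset = stem.split("_")
    camera_id, track_id = cam_track[0], cam_track[1:]
    return identity, camera_id, track_id, offset
```

For example, `parse_reid_name("0000001_0000_000001.png")` yields identity `"0000001"`, camera id `"0"`, track id `"000"`, and offset `"000001"`.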
You can run tools/transform_format.py to get the formatted dataset, or download the formatted Market-1501.
- git clone this repository
- Configure basic settings in core/config
- Define the network in net and register it in factory.py
- Set the corresponding hyperparameters in the example yaml
- Set the example.yaml path in config.yaml
- Set the port and GPU configuration in cmd.sh
- cd train && ./cmd.sh
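The "register it in factory.py" step above typically follows a name-to-class registry pattern. The sketch below is only an assumption of how such a registry might look; the actual API in this repository's factory.py may differ, and `MyBaseline` is a made-up placeholder network:

```python
# Hypothetical sketch of a model registry, as commonly used in factory.py files.
MODEL_FACTORY = {}

def register_model(name):
    """Decorator that maps a string name to a network class."""
    def wrapper(cls):
        MODEL_FACTORY[name] = cls
        return cls
    return wrapper

@register_model("my_baseline")
class MyBaseline:
    """Placeholder network; a real one would subclass torch.nn.Module."""
    def __init__(self, num_classes):
        self.num_classes = num_classes

def build_model(name, **kwargs):
    """Look up a registered network by the name used in the yaml config."""
    return MODEL_FACTORY[name](**kwargs)
```

With this pattern, the yaml config only needs the registered string name (e.g. `"my_baseline"`), and the training script resolves it to a class via `build_model`.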
If you are interested in our work, please cite our papers.