
Dressing in Order

👕 ICCV'21 Paper | 👖 Project Page | 👚 arXiv | 🎽 Video Talk | 👗 Running This Code


The official implementation of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik (ICCV 2021).

Open In Colab

🔔 Updates

  • [2023/04] Official Colab demo is now available at Open In Colab. Data downloading and environment installation are included.
  • [2021/08] Please check the latest version of our paper for updated and clarified implementation details.
    • Clarification: the facial component was not added to the skin encoding as stated in our CVPR 2021 workshop paper, due to a minor typo. However, this does not affect our conclusions or the comparison with prior work, because it is an independent skin encoding design.
  • [2021/07] To appear in ICCV 2021.
  • [2021/06] Best Paper at the Computer Vision for Fashion, Art and Design Workshop, CVPR 2021.

Supported Try-on Applications

Supported Editing Applications

More results

Play with Open In Colab!


Demo

A directly runnable demo can be found in our Colab notebook: Open In Colab!


Get Started for Bash Scripts

DeepFashion Dataset Setup

The DeepFashion dataset can be found at the DeepFashion MultiModal source.

To set up the dataset in your specified data folder $DATA_ROOT, run:

pip install --upgrade gdown
python tools/download_deepfashion_from_google_drive.py --dataroot $DATA_ROOT

This script will automatically download all the necessary data from Google Drive (images source, parse source, annotation source) to the specified $DATA_ROOT in the desired format.
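
If you want to sanity-check the download before moving on, a minimal sketch like the one below can help. The names in `expected` are placeholders (assumptions), not the script's guaranteed output; adjust them to whatever the download script actually creates under $DATA_ROOT.

```python
import os

# $DATA_ROOT as used in the download command above.
DATA_ROOT = os.environ.get("DATA_ROOT", "./data")

# Placeholder names -- adjust to the folders/files the download script
# actually produces under $DATA_ROOT.
expected = ["train", "test"]

for name in expected:
    path = os.path.join(DATA_ROOT, name)
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status:8s} {path}")
```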

Environment Setup

Please install the environment based on your needs.

1. Environment for Inference or Test (for metrics) Only

Required packages are

torch
torchvision
tensorboardX
scikit-image==0.16.2

The versions of torch/torchvision are not restricted for inference.
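
As a quick sanity check of the inference environment, a snippet along these lines confirms that the packages import and reports their versions (a minimal sketch; only scikit-image is pinned above, and the torch/torchvision versions are up to you for inference):

```python
import torch
import torchvision
import tensorboardX  # noqa: F401 -- just verifying it imports
import skimage

print("torch        ", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchvision  ", torchvision.__version__)
print("scikit-image ", skimage.__version__)  # the instructions above pin 0.16.2
```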

2. Environment for Training

Note that the training process requires custom CUDA functions provided by GFLA, which can only be compiled with torch==1.0.0.

To start training, please follow the installation instructions in GFLA to set up the environment.

Then run pip install -r requirements.txt.

Download pretrained weights

The pretrained weights can be found here. Please unzip them under checkpoints/ directory.

(The checkpoints above are reproduced, so there could be slight differences in the quantitative evaluation from the reported results. To get the original results, please check our released generated images here.)

(DIORv1_64 was trained with a minor difference in code, but it may give better visual results in some applications. To try it, specify --netG diorv1.)
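
To confirm that a downloaded checkpoint unzipped under checkpoints/ loads cleanly, a minimal sketch like this works. The path below is a placeholder (an assumption), not a file name shipped with the repo; point it at whichever .pth file you actually unzipped.

```python
import torch

# Placeholder path -- replace with an actual .pth file under checkpoints/.
ckpt_path = "checkpoints/<exp_name>/latest_net_G.pth"

# Load on CPU so this check works without a GPU.
state_dict = torch.load(ckpt_path, map_location="cpu")
print(f"Loaded {len(state_dict)} entries from {ckpt_path}")
for name, tensor in list(state_dict.items())[:5]:
    print(f"  {name}: {tuple(tensor.shape)}")
```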


Training

Warmup the Global Flow Field Estimator

Note: if you do not want to warm up the Global Flow Field Estimator, you can instead extract its weights from the pretrained GFLA model, which can be downloaded here. (See Issue #23 for how to extract the weights from GFLA.)

Otherwise, run

sh scripts/run_pose.sh

Training

After warming up the flownet, train the full pipeline with

sh scripts/run_train.sh

Run tensorboard --logdir checkpoints/$EXP_NAME/train to monitor training in TensorBoard.

Note: resetting the discriminators may help when training gets stuck in a local minimum.
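
One simple way to implement such a reset is to re-initialize the discriminator's weights in place. The sketch below only illustrates the idea (it is not the repo's exact mechanism); netD stands for whichever discriminator module you want to reset.

```python
import torch.nn as nn

def reset_discriminator(netD: nn.Module) -> None:
    """Re-initialize the conv/linear layers of a discriminator in place.

    Restarting the adversarial game this way can help when training is
    stuck in a local minimum.
    """
    def init_fn(m: nn.Module) -> None:
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            nn.init.normal_(m.weight, mean=0.0, std=0.02)
            if m.bias is not None:
                nn.init.zeros_(m.bias)

    netD.apply(init_fn)
```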


Evaluations

Download Generated Images

Our generated images used for the evaluation reported in the paper (DeepFashion dataset) can be found here.

SSIM, FID and LPIPS

To run the evaluation (SSIM, FID and LPIPS) on the pose transfer task:

sh scripts/run_eval.sh

Please always specify --frozen_flownet for inference.
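
For reference, here is a minimal sketch of how SSIM and LPIPS compare a generated image against its ground truth. It uses the lpips package and scikit-image directly, which is an assumption: the evaluation script above may compute the metrics differently, and FID additionally requires a tool such as pytorch-fid run over the full image sets.

```python
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import structural_similarity

def ssim_and_lpips(generated: np.ndarray, target: np.ndarray):
    """Compare two HxWx3 uint8 images; returns (ssim, lpips) scores."""
    # SSIM on the raw uint8 images (scikit-image 0.16.x API).
    ssim = structural_similarity(generated, target, multichannel=True)

    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    def to_tensor(im: np.ndarray) -> torch.Tensor:
        return torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0

    loss_fn = lpips.LPIPS(net="alex")
    with torch.no_grad():
        dist = loss_fn(to_tensor(generated), to_tensor(target)).item()
    return ssim, dist
```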


Cite us!

If you find this work helpful, please consider starring 🌟 this repo and citing us as

@InProceedings{Cui_2021_ICCV,
    author    = {Cui, Aiyu and McKee, Daniel and Lazebnik, Svetlana},
    title     = {Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-On and Outfit Editing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14638-14647}
}

Acknowledgements

This repository is built on GFLA, pytorch-CycleGAN-and-pix2pix, PATN and MUNIT. Please be aware of their licenses when using the code.

Many thanks to the pioneering researchers for their great work!
