DARP: Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning

This repository contains code for the paper "Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning" (NeurIPS 2020) by Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, and Jinwoo Shin.

Dependencies

  • python3
  • pytorch == 1.1.0
  • torchvision
  • progress
  • scipy
  • RandAugment (PyTorch re-implementation: https://github.com/ildoonet/pytorch-randaugment)

Scripts

See run.sh for scripts that run both the baseline algorithms and ours (DARP).

Training procedure of DARP

Train a network with a baseline algorithm, e.g., MixMatch on CIFAR-10:

python train.py --gpu 0 --semi_method mix --dataset cifar10 --ratio 2 --num_max 1500 --imb_ratio_l 100 --imb_ratio_u 1 \
--epoch 500 --val-iteration 500
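The flags --num_max and --imb_ratio_l control how imbalanced the labeled set is: the head class gets num_max samples, and the class counts shrink until the tail class has num_max / imb_ratio samples. The exponential-decay formula below is the common convention for building such long-tailed splits; the exact formula this repo uses may differ, so treat it (and the helper name) as an illustrative assumption.

```python
def long_tailed_counts(num_max, imb_ratio, num_classes=10):
    """Per-class sample counts decaying exponentially from num_max (head)
    down to num_max / imb_ratio (tail).

    This mirrors the usual long-tailed-split convention; the repository's
    actual formula may differ slightly (e.g., rounding).
    """
    gamma = 1.0 / imb_ratio
    return [int(num_max * gamma ** (k / (num_classes - 1)))
            for k in range(num_classes)]

counts = long_tailed_counts(num_max=1500, imb_ratio=100)
print(counts)  # head class: 1500 samples, tail class: 15 samples
```

With --num_max 1500 and --imb_ratio_l 100 this yields a CIFAR-10 labeled set whose head class is 100x larger than its tail class, while --imb_ratio_u 1 keeps the unlabeled set balanced.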

Applying DARP to the baseline algorithm:

python train.py --gpu 0 --darp --est --alpha 2 --warm 200 --semi_method mix --dataset cifar10 --ratio 2 --num_max 1500 --imb_ratio_l 100 --imb_ratio_u 1 \
--epoch 500 --val-iteration 500
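The core idea behind the --darp flag is to refine the model's biased pseudo-labels so that their aggregate class distribution matches an estimate of the true class distribution of the unlabeled data. The sketch below shows a minimal reweight-and-renormalize version of that alignment step; it is NOT the paper's full method, which solves a constrained convex optimization (with --alpha bounding how far each pseudo-label may move) rather than this simple rescaling, and the function name and toy numbers are illustrative assumptions.

```python
import numpy as np

def align_pseudo_labels(probs, target_dist, eps=1e-8):
    """Reweight each pseudo-label column by target_dist / empirical_dist,
    then renormalize each row to a valid distribution.

    probs:       (N, C) array of model pseudo-label probabilities.
    target_dist: (C,) estimated class distribution of the unlabeled data.
    Simplified illustration only, not the full DARP optimization.
    """
    empirical_dist = probs.mean(axis=0)               # current aggregate distribution
    refined = probs * (target_dist / (empirical_dist + eps))
    return refined / refined.sum(axis=1, keepdims=True)

# Toy example: pseudo-labels heavily biased toward class 0, while the
# unlabeled data is (hypothetically) uniform over 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet([8.0, 1.0, 1.0], size=100)      # biased toward class 0
refined = align_pseudo_labels(probs, np.ones(3) / 3)
print(probs.mean(axis=0))    # skewed toward class 0
print(refined.mean(axis=0))  # pulled closer to uniform
```

After refinement, each row is still a probability distribution, but the batch-level class frequencies are pulled toward the target distribution, which is what mitigates the bias that imbalanced labeled data induces in pseudo-labels.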
