RL-Restore [project page][paper]

Crafting a Toolchain for Image Restoration by Deep Reinforcement Learning (CVPR 2018 Spotlight)

🚩 Support arbitrary input size. Aug 25
🚩 Add Python3 compatibility. Aug 6
🚩 Training code is ready! Jun 15

Overview

  • Framework

  • Synthetic & real-world results

Prerequisite

  • Anaconda is highly recommended, as it makes it easy to adjust the environment settings.

    pip install opencv-python scipy tqdm h5py
    
  • We have tested our code under the following settings (a quick import check is sketched below):

    Python | TensorFlow | CUDA | cuDNN
    2.7    | 1.3        | 8.0  | 5.1
    3.5    | 1.4        | 8.0  | 5.1
    3.6    | 1.10       | 9.0  | 7.0
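
  • A quick import check like the sketch below can confirm the environment is set up. It is not part of the repository; it only uses the packages from the pip command above plus TensorFlow, installed to match one of the tested versions.

    # Minimal environment check (sketch; not part of the repository).
    import cv2
    import h5py
    import scipy
    import tensorflow as tf
    import tqdm

    print('TensorFlow : %s' % tf.__version__)
    print('OpenCV     : %s' % cv2.__version__)
    print('SciPy      : %s' % scipy.__version__)
    print('tqdm       : %s' % tqdm.__version__)
    print('GPU found  : %s' % tf.test.is_gpu_available())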

Test

  • Start testing on the synthetic dataset

    python main.py --dataset moderate
    

    --dataset: choose a test set from mild, moderate, or severe

  • ❗ Start testing on real-world data (supports arbitrary input sizes)

    python main.py --dataset mine
    
    • You may put your own test images in data/test/mine/ (a copying sketch appears at the end of this section).
  • Dataset

    • All test sets can be downloaded from Google Drive or Baidu Cloud.

    • Replace test_images/ with the downloaded data to test on the whole dataset.

  • Naming rules

    • Each saved image's name encodes the selected toolchain; please refer to my second reply in this issue.
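
  • Prepare your own photos

    • The snippet below is a minimal sketch for copying images into data/test/mine/, assuming the test loader reads standard image files of arbitrary size (as noted above); my_photos/ is only an illustrative placeholder for wherever your pictures live.

    # Copy your own photos into data/test/mine/ (sketch; my_photos/ is a placeholder).
    import glob
    import os

    import cv2

    src_dir = 'my_photos'      # illustrative source folder, replace with your own
    dst_dir = 'data/test/mine'
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)

    for path in sorted(glob.glob(os.path.join(src_dir, '*'))):
        img = cv2.imread(path)  # returns None for non-image files
        if img is None:
            continue
        name = os.path.splitext(os.path.basename(path))[0] + '.png'
        cv2.imwrite(os.path.join(dst_dir, name), img)
        print('copied %s' % name)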

Train

  • Download training images

    • Download training images (down-sampled DIV2K images) at Google Drive or Baidu Cloud.

    • Move the downloaded file to data/train/ and unzip it.

  • Generate training data

    • Run data/train/generate_train.m to generate training data in HDF5 format.

    • You may generate multiple .h5 files in data/train/ (an inspection sketch appears at the end of this section).

  • Let's train!

    python main.py --is_train True
    
    • When you observe reward_sum increasing, training is progressing well.

    • You can visualize the reward curves with TensorBoard.
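
  • Inspect the generated training data

    • The sketch below (not part of the repository) lists the datasets stored in each .h5 file under data/train/, which can help verify the output of generate_train.m; it makes no assumption about the key names inside the files.

    # List the datasets in the generated HDF5 training files (inspection sketch).
    import glob

    import h5py

    for path in sorted(glob.glob('data/train/*.h5')):
        with h5py.File(path, 'r') as f:
            print(path)
            for key, item in f.items():
                if isinstance(item, h5py.Dataset):
                    print('  %s: shape=%s, dtype=%s' % (key, item.shape, item.dtype))
                else:
                    print('  %s: (group)' % key)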

Acknowledgement

The DQN algorithm is modified from DQN-tensorflow.

Citation

@inproceedings{yu2018crafting,
  author    = {Yu, Ke and Dong, Chao and Lin, Liang and Loy, Chen Change},
  title     = {Crafting a Toolchain for Image Restoration by Deep Reinforcement Learning},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages     = {2443--2452},
  year      = {2018}
}
