This is a lightweight GAN developed for real-time deblurring. The model has a tiny size and fast inference time. The motivation is to improve marker detection in robotic applications, but it can certainly be used for other applications as well.


Ghost-DeblurGAN [IROS 2022]


This is the repository of our IROS 2022 paper, Application of Ghost-DeblurGAN to Fiducial Marker Detection.
An introduction video is available at https://www.bilibili.com/video/BV1r14y1Y7M3/?spm_id_from=333.999.0.0

Feature extraction or localization based on fiducial markers can fail due to motion blur in real-world robotic applications. To address this problem, we develop a lightweight generative adversarial network, named Ghost-DeblurGAN, for real-time motion deblurring. Furthermore, because no existing deblurring benchmark targets this task, we propose a new large-scale dataset, YorkTag, that provides pairs of sharp/blurred images containing fiducial markers. With the proposed model trained and tested on YorkTag, we demonstrate that, when applied alongside fiducial marker systems to motion-blurred images, Ghost-DeblurGAN significantly improves marker detection.

The implementation is modified from https://github.com/VITA-Group/DeblurGANv2.

Visual Comparison

Visual comparison of marker detection with and without Ghost-DeblurGAN in robotic applications.
A video captured by a downward-facing camera onboard a maneuvering UAV (QDrone from Quanser Inc.)

A video captured by a low-cost CSI camera onboard a moving UGV (QCar from Quanser Inc.)

Why is a new dataset necessary?

Current deblurring benchmarks contain only routine scenes such as pedestrians, cars, buildings, and human faces. To illustrate the need for a deblurring benchmark containing fiducial markers, we test HINet, which achieves state-of-the-art performance on the GoPro dataset, on a blurred image and apply the AprilTag detector to the deblurred result (see Fig. 1(d)). As the figure shows, because HINet is trained on the GoPro dataset, which contains no fiducial markers, the marker detection rate is far from satisfactory, even though HINet is the state-of-the-art method.

To this end, we propose a new large-scale dataset, YorkTag, that provides paired blurred and sharp images containing AprilTags and ArUco markers. To obtain ideal sharp images, we used an iPhone 12 mounted on a DJI OM 4 stabilizer to capture high-resolution videos. A detailed description of how the blurred and sharp image pairs are generated is available in our paper. The training set consists of 1577 image pairs and the test set of 497 image pairs, totaling 2074 blurry-sharp image pairs. We will keep augmenting the YorkTag dataset.
Link to the YorkTag dataset utilized in our paper: https://drive.google.com/file/d/1S3wVptR_mzrntuCtEarkXHE6d1zN6jd3/view?usp=sharing

Training

Command

python train.py

A video tutorial is available at: https://www.youtube.com/watch?v=JSCA2x3NBHs
By default, the training script loads its configuration from config/config.yaml. The files_a parameter points to the blurry images and files_b to the corresponding sharp images. Modify config.yaml to change the generator model. Available model scripts are:

  • Ghostnet + Half Instance Normalization (HIN) + Ghost module (GM)
  • MobilenetV2
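As a rough illustration of how such a configuration is typically structured, the sketch below parses a minimal YAML fragment with the files_a/files_b keys mentioned above. The exact schema of this repo's config/config.yaml likely contains more fields, and the paths and generator name here are hypothetical placeholders.

```python
import yaml  # PyYAML; assumed available, as the repo loads YAML configs

# Illustrative fragment only: files_a = blurry images, files_b = sharp images.
# The real config/config.yaml in the repo may use different keys for the model.
CONFIG_TEXT = """
files_a: /data/YorkTag/train/blur/*.png
files_b: /data/YorkTag/train/sharp/*.png
model:
  g_name: fpn_ghostnet_gm_hin   # hypothetical generator name; check config.yaml
"""

cfg = yaml.safe_load(CONFIG_TEXT)
print(cfg["files_a"])            # glob pattern for the blurry training images
print(cfg["model"]["g_name"])    # which generator the training script builds
```

Editing these two glob patterns is how you would point training at your own blurry/sharp pairs.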

Testing and Inference

For single-image inference:
python predict.py /path/to/image.png --weights_path=/path/to/weights
By default, the output is written to the submit/ directory.
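For readers integrating the model into their own pipeline, the sketch below shows the pre/post-processing that DeblurGAN-v2-style inference scripts commonly perform: scale the image to [-1, 1] and pad each side to a multiple of 32 before the network, then crop and rescale afterwards. This is an illustrative sketch, not this repo's exact predict.py code, and no network is run here.

```python
import numpy as np

def preprocess(img: np.ndarray, multiple: int = 32):
    """Scale a uint8 HxWx3 image to [-1, 1] and zero-pad so each side
    is a multiple of `multiple` (common for FPN-style generators)."""
    x = img.astype(np.float32) / 255.0 * 2.0 - 1.0
    h, w = x.shape[:2]
    ph, pw = (-h) % multiple, (-w) % multiple
    x = np.pad(x, ((0, ph), (0, pw), (0, 0)), mode="constant")
    return x, (h, w)

def postprocess(y: np.ndarray, size) -> np.ndarray:
    """Crop back to the original size and map [-1, 1] back to uint8."""
    h, w = size
    y = y[:h, :w]
    y = (np.clip(y, -1.0, 1.0) + 1.0) / 2.0 * 255.0
    return np.rint(y).astype(np.uint8)

img = np.random.randint(0, 256, (100, 130, 3), dtype=np.uint8)
x, size = preprocess(img)      # here the deblurring network would run on x
out = postprocess(x, size)     # identity round-trip, since no network is applied
```

In a real pipeline, x would be converted to a tensor, passed through the generator, and the result fed to postprocess.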

Note: the 'model' parameters in config.yaml must correspond to the weights.
For testing on a single image:
python test_metrics.py --img_folder=/path/to/image.png --weights_path=/path/to/weights --new_gopro
For testing on the dataset used in this work:
python test_metrics.py --img_folder=/base/directory/of/GOPRO/test/blur --weights_path=/path/to/weights --new_gopro
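test_metrics.py reports PSNR/SSIM, the metrics shown in the table below. As a reference for what those numbers mean, here is the standard PSNR definition implemented in plain NumPy; this sketch is independent of the repo's actual implementation.

```python
import numpy as np

def psnr(ref: np.ndarray, deg: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference (sharp) image
    and a degraded/restored image: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - deg.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example with a known mean squared error of 100:
a = np.zeros((8, 8), dtype=np.uint8)
b = np.full((8, 8), 10, dtype=np.uint8)
print(round(psnr(a, b), 2))  # -> 28.13
```

Higher PSNR means the restored image is closer to the sharp ground truth; the ~0.4 dB gap in the table below is averaged over the GoPro test set.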

Pre-trained models

For a fair comparison, we used the same MobileNetV2 backbone as the original DeblurGAN-v2 and trained all models from scratch on the GoPro dataset. The metrics in the table below illustrate the advantage of Ghost-DeblurGAN over the original DeblurGAN-v2 (MobileNetV2). Note that to obtain the deblurring performance shown in the visual comparison, the weights trained on the mix of YorkTag and GoPro should be used. These weights are coming soon.

| Dataset | Model | FLOPs | PSNR / SSIM | Link |
| --- | --- | --- | --- | --- |
| GoPro Test Dataset | DeblurGAN-v2 (MobileNetV2) | 43.75G | 28.40 / 0.917 | fpn_mobilnet_v2.h5 |
| GoPro Test Dataset | Ghost-DeblurGAN (Ours) | 20.51G | 28.79 / 0.920 | fpn_ghostnet_gm_hin.h5 |

Citation

If you find this work helpful for your research, please cite our paper:

@INPROCEEDINGS{9981701,
  author={Liu, Yibo and Haridevan, Amaldev and Schofield, Hunter and Shan, Jinjun},
  booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, 
  title={Application of Ghost-DeblurGAN to Fiducial Marker Detection}, 
  year={2022},
  volume={},
  number={},
  pages={6827-6832},
  doi={10.1109/IROS47612.2022.9981701}}
