models and code for "Efficient Image Super Resolution using Vast Receptive Field Attention"


VapSR

Efficient Image Super-Resolution using Vast-Receptive-Field Attention (ECCVW 2022). Paper link

Lin Zhou*, Haoming Cai*, Jinjin Gu, Zheyuan Li, Yingqi Liu, Xiangyu Chen, Yu Qiao, Chao Dong (* indicates equal contribution)

The attention mechanism plays a pivotal role in designing advanced super-resolution (SR) networks. In this work, we design an efficient SR network by improving the attention mechanism. We start from a simple pixel attention module and gradually modify it to achieve better super-resolution performance with reduced parameters. The specific approaches include: (1) increasing the receptive field of the attention branch, (2) replacing large dense convolution kernels with depthwise separable convolutions, and (3) introducing pixel normalization. These approaches paint a clear evolutionary roadmap for the design of attention mechanisms. Based on these observations, we propose VapSR, the Vast-receptive-field Pixel attention network. Experiments demonstrate the superior performance of VapSR: it outperforms existing lightweight networks with even fewer parameters, and the light version of VapSR achieves performance similar to IMDN and RFDN with only 21.68% and 28.18% of their parameters, respectively.
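To make approach (3) concrete: pixel normalization standardizes the feature vector at each spatial location across the channel dimension (zero mean, unit variance per pixel). The sketch below illustrates that idea in plain Python on a `[channel][y][x]` nested list; it mirrors the concept from the paper, not the repository's actual PyTorch implementation, and the function name `pixel_norm` is chosen here for illustration.

```python
import math

def pixel_norm(feat, eps=1e-6):
    """Normalize each pixel's channel vector to zero mean, unit variance.

    `feat` is a nested list indexed as [channel][y][x]. This is a
    conceptual sketch of pixel normalization (per-pixel statistics over
    channels), not the repo's PyTorch code.
    """
    C = len(feat)
    H, W = len(feat[0]), len(feat[0][0])
    out = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for y in range(H):
        for x in range(W):
            vec = [feat[c][y][x] for c in range(C)]          # channel vector at (y, x)
            mean = sum(vec) / C
            var = sum((v - mean) ** 2 for v in vec) / C
            inv = 1.0 / math.sqrt(var + eps)
            for c in range(C):
                out[c][y][x] = (feat[c][y][x] - mean) * inv  # standardize per pixel
    return out
```

Unlike batch or layer normalization over whole feature maps, the statistics here are computed independently at every spatial position, which is what distinguishes pixel normalization.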

Quick Intro: the presentation and the poster.

Network architecture: network.jpg

Performance at X4 scale (PSNR / SSIM on the Y channel):

| Model   | Params [K] | Set5         | Set14        | B100         | Urban100     |
|---------|------------|--------------|--------------|--------------|--------------|
| VapSR-S | 155        | 32.14/0.8951 | 28.64/0.7826 | 27.60/0.7373 | 26.05/0.7852 |
| VapSR   | 342        | 32.38/0.8978 | 28.77/0.7852 | 27.68/0.7398 | 26.35/0.7941 |

Code Directions

The code is built on BasicSR. Before testing or reproducing results, make sure BasicSR is installed and the datasets are prepared correctly.

To keep the workspace clean and simple, only test.py, train.py, and your architecture file (your_arch.py) need to be added, and then you are good to go.

The following line in test.py and train.py registers the architecture with BasicSR:

from vapsr import vapsr
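This import works through BasicSR's registry mechanism: importing the module executes a decorator that adds the class to a registry, so the `type:` field in the YAML option file can later resolve the architecture by name. The mock `Registry` below is a minimal illustration of that pattern only; BasicSR's real registry lives in `basicsr.utils.registry` (as `ARCH_REGISTRY`) and the stand-in `vapsr` class here is not the repository's network.

```python
# Minimal sketch of a decorator-based registry, mimicking how BasicSR
# makes architectures discoverable by name from the option file.
class Registry:
    def __init__(self):
        self._archs = {}

    def register(self):
        def deco(cls):
            self._archs[cls.__name__] = cls  # key by class name
            return cls
        return deco

    def get(self, name):
        return self._archs[name]

ARCH_REGISTRY = Registry()

@ARCH_REGISTRY.register()
class vapsr:  # stand-in for the real network class
    def __init__(self, scale=4):
        self.scale = scale

# train.py/test.py only need the import's side effect; afterwards the
# option file's `network_g: type: vapsr` resolves via the registry.
net_cls = ARCH_REGISTRY.get('vapsr')
```

Because registration happens at import time, forgetting the `from vapsr import vapsr` line would leave the registry empty and the option file unable to find the architecture.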

Testing

You can run the testing demo with

CUDA_VISIBLE_DEVICES=0 python code/test.py -opt options/test/VapSR_X4.yml

Training

Reproduce the training with

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 code/train.py -opt options/train/VapSR_X4.yml --launcher pytorch

Contact

Feel free to contact us:

Lin Zhou📫(zhougrace885@gmail.com) 
Haoming Cai📫(hmcai@umd.edu) 
Jinjin Gu📫(jinjin.gu@sydney.edu.au) 
Zheyuan Li📫(zy.li3@siat.ac.cn) 
Yingqi Liu📫(yq.liu3@siat.ac.cn) 
Xiangyu Chen📫(chxy95@gmail.com) 
Yu Qiao📫(yu.qiao@siat.ac.cn) 
Chao Dong📫(chao.dong@siat.ac.cn) 
