IR-MSDNet: Infrared and visible image fusion based on infrared features and multiscale dense network

A. Raza, J. Liu, Y. Liu, J. Liu, Z. Li, X. Chen, H. Huo, T. Fang
IEEE Journal of Selected Topics in Applied Earth Observations and …, 2021 - ieeexplore.ieee.org
Infrared (IR) and visible images are heterogeneous data, and their fusion is an important research topic in the remote sensing field. In the last decade, deep networks have been widely used in image fusion due to their ability to preserve high-level semantic information. However, due to the lower resolution of IR images, deep learning-based methods may fail to retain the salient features of IR images. In this article, a novel IR and visible image fusion method based on IR features and a multiscale dense network (IR-MSDNet) is proposed to preserve the content and key target features from both visible and IR images in the fused image. It comprises an encoder, a multiscale decoder, a traditional processing unit, and a fusion unit, and can capture rich background details from visible images and prominent target details from IR features. When the dense and multiscale features are fused, background details are obtained using an attention strategy and then combined with complementary edge features, while IR features are extracted by traditional quadtree decomposition and Bezier interpolation and further intensified by refinement. Finally, both the decoded multiscale features and the IR features are used to reconstruct the final fused image. Experimental comparison with other state-of-the-art fusion methods validates the superiority of the proposed IR-MSDNet in both subjective and objective evaluation metrics. An additional objective evaluation conducted on the object detection (OD) task further verifies that the proposed IR-MSDNet greatly enhances the details in the fused images, which yield the best OD results.
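The abstract does not detail how quadtree decomposition isolates salient IR regions. A minimal sketch of the general technique is given below: a block is split into four quadrants whenever its intensity range exceeds a threshold, so bright IR targets end up covered by small homogeneous leaf blocks. The threshold, minimum block size, and function name are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def quadtree_decompose(img, threshold=20.0, min_size=8):
    """Recursively split `img` into blocks until each block's intensity
    range falls below `threshold` or the block reaches `min_size`.
    Returns a list of (row, col, height, width) leaf blocks.
    Note: threshold and min_size are illustrative, not the paper's values."""
    leaves = []

    def split(r, c, h, w):
        block = img[r:r + h, c:c + w]
        # Stop when the block is nearly homogeneous or too small to split.
        if (block.max() - block.min()) <= threshold or min(h, w) <= min_size:
            leaves.append((r, c, h, w))
            return
        h2, w2 = h // 2, w // 2
        split(r, c, h2, w2)                      # top-left quadrant
        split(r, c + w2, h2, w - w2)             # top-right quadrant
        split(r + h2, c, h - h2, w2)             # bottom-left quadrant
        split(r + h2, c + w2, h - h2, w - w2)    # bottom-right quadrant

    split(0, 0, img.shape[0], img.shape[1])
    return leaves

# Synthetic IR-like image: dark background with one bright square "target".
ir = np.zeros((64, 64), dtype=np.float32)
ir[24:40, 24:40] = 255.0
blocks = quadtree_decompose(ir)
# Uniformly bright leaves approximate the salient target region.
target_blocks = [b for b in blocks if ir[b[0]:b[0] + b[2], b[1]:b[1] + b[3]].min() == 255.0]
```

In a fusion pipeline such as the one described, the pixels inside these target blocks could then be refined (the paper mentions Bezier interpolation and a refinement step) before being merged with the decoded multiscale features.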