
CurveMEF: Multi-exposure fusion via curve embedding network

Published: 25 September 2024

Abstract

Multi-exposure image fusion (MEF) aims to combine multiple images with different exposures into a single image that improves visual quality and preserves detail. This paper proposes a curve embedding network for MEF (CurveMEF), which formulates the MEF task as estimating optimal fusion curves via state feedback, using pixel intensities as the state variables. The fusion curve is embedded into CurveNet, a lightweight deep network, as a physical prior constraint. Leveraging this physics-informed MEF formulation, the fusion curve can adaptively adjust the pixel distribution of overexposed and underexposed regions based on pixel intensities, and can weigh the relative importance of the source images. CurveMEF supports both RGB and luminance channel inputs, demonstrating its flexibility. Experimental results show that CurveMEF achieves competitive performance against state-of-the-art methods in both qualitative and quantitative analysis. Moreover, CurveNet requires significantly fewer parameters and less training data, enabling fast training and inference. The proposed method delivers high-speed, high-quality fusion while significantly reducing computational cost, providing an efficient and cost-effective solution. The code is publicly available at: https://github.com/PiratePai/CurveMEF.
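The curve-adjustment idea described in the abstract can be illustrated with the quadratic enhancement curve LE(x) = x + α·x·(1−x) popularized by Zero-DCE, a related curve-estimation method. This is a hedged sketch, not CurveMEF's actual formulation: the function name, the fixed scalar alpha (CurveMEF and Zero-DCE estimate per-pixel curve parameters with a network), the iteration count, and the simple averaging step are all illustrative assumptions.

```python
import numpy as np

def apply_fusion_curve(pixels, alpha, iterations=4):
    # Iteratively apply the quadratic curve LE(x) = x + alpha * x * (1 - x).
    # For inputs normalized to [0, 1], positive alpha brightens midtones
    # (helping underexposed regions) and negative alpha darkens them
    # (helping overexposed regions); endpoints 0 and 1 are fixed points.
    x = np.asarray(pixels, dtype=np.float64)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

# Toy fusion of two exposures: brighten the underexposed input, darken the
# overexposed one, then average. A learned method would instead predict
# spatially varying curve parameters and fusion weights.
under = np.array([0.10, 0.20, 0.30])
over = np.array([0.80, 0.90, 0.70])
fused = 0.5 * (apply_fusion_curve(under, 0.5) + apply_fusion_curve(over, -0.5))
```

Because the curve maps [0, 1] into [0, 1] and leaves the endpoints fixed, repeated application redistributes midtone intensities without clipping, which is why this curve family is attractive as a physical prior inside a network.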




Published In

Neurocomputing  Volume 596, Issue C
Sep 2024
611 pages

Publisher

Elsevier Science Publishers B. V.

Netherlands


Author Tags

  1. Multi-exposure fusion
  2. Image fusion
  3. Curve estimation
  4. Physics-informed neural network
  5. Deep learning

Qualifiers

  • Research-article
