
Multi-exposure image fusion based on tensor decomposition

Published in Multimedia Tools and Applications

Abstract

In this paper, a multi-exposure image fusion (MEF) method based on tensor decomposition and a saliency model is proposed. The main innovation of the proposed method is to explore the tensor domain for MEF and to define fusion rules based on the tensor features of higher-order singular value decomposition (HOSVD) and on saliency. Specifically, RGB images are first converted to YCbCr to preserve the stability of color information. For the luminance channel, patches of the luminance images are assembled into 3-order sub-tensors, and HOSVD is used to extract features from these sub-tensors; the sum of absolute coefficients (SAC) of the HOSVD weight coefficients is then defined as an activity measure. Meanwhile, considering the impact of saliency on visual perception, visual saliency maps (VSMs) are used to evaluate the quality of luminance patches and to guide the fusion rule. For the chrominance channels, the VSMs of the chrominance channels are used to define the fusion rule. Experimental results show that the proposed method successfully generates fused images with richer texture details and more saturated colors.
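The luminance pipeline described above (stack co-located luminance patches into a 3-order sub-tensor, extract HOSVD coefficients, then weight each source patch by the sum of its absolute coefficients) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the patch size, the per-patch SAC weighting over mode-2 slices, and all names are assumptions made for the sketch.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matricization: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: orthogonal factor matrices plus an all-orthogonal core."""
    # Factor matrix of each mode = left singular vectors of that mode's unfolding.
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
    # Core tensor: project T onto the factor bases, S = T x_0 U0^T x_1 U1^T x_2 U2^T.
    S = T
    for m, Um in enumerate(U):
        S = np.moveaxis(np.tensordot(Um.T, np.moveaxis(S, m, 0), axes=1), 0, m)
    return S, U

# A stack of K = 4 co-located 8x8 luminance patches forms one 3-order sub-tensor
# (random data stands in for real patches here).
rng = np.random.default_rng(0)
patches = rng.random((8, 8, 4))
S, U = hosvd(patches)

# SAC-style weight per source patch: sum of absolute core coefficients in each
# mode-2 slice (an assumption; the paper's exact SAC definition may differ).
sac = np.abs(S).sum(axis=(0, 1))
weights = sac / sac.sum()
```

Because the factor matrices are orthogonal, the core coefficients carry all of each patch's energy, which is why their absolute sum can serve as an activity measure for ranking exposures.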




Acknowledgements

This work was supported by the Natural Science Foundation of China (Grant No. 61971247), the Zhejiang Natural Science Foundation of China (Grant Nos. LY19F020009, LQ20F010002), and the Natural Science Foundation of Ningbo (Grant Nos. 2019A610100, 2019A610101). It was also sponsored by the K.C. Wong Magna Fund of Ningbo University.

Author information

Correspondence to Haiyong Xu.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wu, S., Luo, T., Song, Y. et al. Multi-exposure image fusion based on tensor decomposition. Multimed Tools Appl 79, 23957–23975 (2020). https://doi.org/10.1007/s11042-020-09131-x

