Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach

Published: 01 May 2017

Abstract

We propose a simple yet effective structural patch decomposition approach for multi-exposure image fusion (MEF) that is robust to the ghosting effect. We decompose an image patch into three conceptually independent components: signal strength, signal structure, and mean intensity. Upon fusing these three components separately, we reconstruct a desired patch and place it back into the fused image. This novel patch decomposition approach benefits MEF in many aspects. First, as opposed to most pixel-wise MEF methods, the proposed algorithm does not require post-processing steps to improve visual quality or to reduce spatial artifacts. Second, it handles RGB color channels jointly, and thus produces fused images with a more vivid color appearance. Third, and most importantly, the direction of the signal structure component in the patch vector space provides ideal information for ghost removal. It allows us to reliably and efficiently reject inconsistent object motions with respect to a chosen reference image without performing computationally expensive motion estimation. We compare the proposed algorithm with 12 MEF methods on 21 static scenes and with 12 deghosting schemes on 19 dynamic scenes (with camera and object motion). Extensive experimental results demonstrate that the proposed algorithm not only outperforms previous MEF algorithms on static scenes but also consistently produces high-quality fused images with few ghosting artifacts for dynamic scenes. Moreover, it maintains a lower computational cost than state-of-the-art deghosting schemes.¹
¹The MATLAB code of the proposed algorithm will be made available online. Preliminary results of Section III-A [1] were presented at the IEEE International Conference on Image Processing, Canada, 2015.
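
To make the patch decomposition concrete, the following NumPy sketch implements the three-component split described in the abstract, along with an illustrative structure-based consistency test for deghosting. It is a minimal reading of the abstract, not the authors' released MATLAB code: the fusion rules (the exponent p, the Gaussian well-exposedness weighting of mean intensity) and the consistency threshold are hypothetical choices made here for illustration.

```python
import numpy as np

def decompose(patch):
    """Split a flattened patch (RGB channels stacked into one vector,
    so color is handled jointly) into the three components named in
    the abstract: mean intensity, signal strength, and signal
    structure (a unit-length direction)."""
    l = patch.mean()                      # mean intensity
    residual = patch - l
    c = np.linalg.norm(residual)          # signal strength
    s = residual / (c + 1e-12)            # signal structure
    return c, s, l

def fuse_patches(patches, p=4):
    """Fuse co-located patches from K exposures by recombining the
    separately fused components. The weighting rules below are
    illustrative, not the paper's tuned formulas."""
    comps = [decompose(x) for x in patches]
    # desired strength: the strongest local contrast among exposures
    c_hat = max(c for c, _, _ in comps)
    # desired structure: strength-weighted average direction, renormalized
    s_bar = sum((c ** p) * s for c, s, _ in comps)
    s_hat = s_bar / (np.linalg.norm(s_bar) + 1e-12)
    # desired mean intensity: well-exposedness-weighted average
    # (hypothetical Gaussian weighting centered at mid-intensity 0.5)
    ls = np.array([l for _, _, l in comps])
    w = np.exp(-0.5 * ((ls - 0.5) / 0.2) ** 2)
    l_hat = float((w * ls).sum() / (w.sum() + 1e-12))
    return c_hat * s_hat + l_hat          # reconstructed patch vector

def is_consistent(patch, ref_patch, threshold=0.8):
    """Ghost test: the inner product of unit structure vectors measures
    structural agreement with the reference exposure; below the
    (illustrative) threshold, the patch is treated as inconsistent."""
    _, s, _ = decompose(patch)
    _, s_ref, _ = decompose(ref_patch)
    return float(np.dot(s, s_ref)) >= threshold
```

In a full pipeline, the consistency test would run per patch against the chosen reference exposure, and only consistent patches would enter the fusion. This is how inconsistent object motion can be rejected cheaply, without explicit motion estimation.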

References

[1]
K. Ma and Z. Wang, “Multi-exposure image fusion: A patch-wise approach,” in Proc. IEEE Int. Conf. Image Process., Sep. 2015, pp. 1717–1721.
[2]
E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. San Mateo, CA, USA: Morgan Kaufmann, 2010.
[3]
M. D. Grossberg and S. K. Nayar, “Determining the camera response from images: What is knowable?” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 11, pp. 1455–1467, 2003.
[4]
J.-Y. Lee, Y. Matsushita, B. Shi, I. S. Kweon, and K. Ikeuchi, “Radiometric calibration by rank minimization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 1, pp. 144–156, 2013.
[5]
K. Ma, H. Yeganeh, K. Zeng, and Z. Wang, “High dynamic range image compression by optimizing tone mapped image quality index,” IEEE Trans. Image Process., vol. 24, no. 10, pp. 3086–3097, 2015.
[6]
P. J. Burt, “The pyramid as a structure for efficient computation,” in Multiresolution Image Processing and Analysis. Springer, 1984.
[7]
K. Ma, K. Zeng, and Z. Wang, “Objective quality assessment for color-to-gray image conversion,” IEEE Trans. Image Process., vol. 24, no. 11, pp. 3345–3356, 2015.
[8]
B. Gu, W. Li, J. Wong, M. Zhu, and M. Wang, “Gradient field multi-exposure images fusion for high dynamic range image visualization,” J. Vis. Commun. Image Represent., vol. 23, no. 4, pp. 604–610, 2012.
[9]
T. Mertens, J. Kautz, and F. Van Reeth, “Exposure fusion: A simple and practical alternative to high dynamic range photography,” Comput. Graph. Forum, vol. 28, no. 1, pp. 161–171, 2009.
[10]
Z. G. Li, J. H. Zheng, and S. Rahardja, “Detail-enhanced exposure fusion,” IEEE Trans. Image Process., vol. 21, no. 11, pp. 4672–4676, 2012.
[11]
S. Raman and S. Chaudhuri, “Bilateral filter based compositing for variable exposure photography,” in Proc. Eurographics, 2009, pp. 1–4.
[12]
S. Li, X. Kang, and J. Hu, “Image fusion with guided filtering,” IEEE Trans. Image Process., vol. 22, no. 7, pp. 2864–2875, 2013.
[13]
S. Li and X. Kang, “Fast multi-exposure image fusion with median filter and recursive filter,” IEEE Trans. Consum. Electron., vol. 58, no. 2, pp. 626–632, 2012.
[14]
Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, 2004.
[15]
B. Zitová and J. Flusser, “Image registration methods: A survey,” Image Vis. Comput., vol. 21, pp. 977–1000, 2003.
[16]
G. Ward, “Fast, robust image registration for compositing high dynamic range photographs from hand-held exposures,” J. Graph. Tools, vol. 8, no. 2, pp. 17–30, 2003.
[17]
D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.
[18]
P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman, “Robust patch-based HDR reconstruction of dynamic scenes,” ACM Trans. Graph., vol. 31, no. 6, p. 203, 2012.
[19]
J. Hu, O. Gallo, and K. Pulli, “Exposure stacks of live scenes with hand-held cameras,” in Proc. Eur. Conf. Comput. Vis., 2012, pp. 499–512.
[20]
J. Hu, O. Gallo, K. Pulli, and X. Sun, “HDR deghosting: How to deal with saturation?” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2013, pp. 1163–1170.
[21]
X. Qin, J. Shen, X. Mao, X. Li, and Y. Jia, “Robust match fusion using optimization,” IEEE Trans. Cybern., vol. 45, no. 8, pp. 1549–1560, 2015.
[22]
C. Lee, Y. Li, and V. Monga, “Ghost-free high dynamic range imaging via rank minimization,” IEEE Signal Process. Lett., vol. 21, no. 9, pp. 1045–1049, 2014.
[23]
T. H. Oh, J. Y. Lee, Y. W. Tai, and I. S. Kweon, “Robust high dynamic range imaging by rank minimization,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 37, no. 6, pp. 1219–1232, 2015.
[24]
P. J. Burt and E. H. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Trans. Commun., vol. 31, no. 4, pp. 532–540, 1983.
[25]
P. J. Burt and R. J. Kolczynski, “Enhanced image capture through fusion,” in Proc. IEEE Int. Conf. Comput. Vis., May 1993, pp. 173–182.
[26]
J. Shen, Y. Zhao, S. Yan, and X. Li, “Exposure fusion using boosting Laplacian pyramid,” IEEE Trans. Cybern., vol. 44, no. 9, pp. 1579–1590, 2014.
[27]
C. Tomasi and R. Manduchi, “Bilateral filtering for gray and color images,” in Proc. IEEE Int. Conf. Comput. Vis., Jan. 1998, pp. 839–846.
[28]
K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, pp. 1397–1409, 2013.
[29]
E. S. L. Gastal and M. M. Oliveira, “Domain transform for edge-aware image and video processing,” ACM Trans. Graph., vol. 30, no. 4, p. 69, 2011.
[30]
W. Zhang and W.-K. Cham, “Gradient-directed multiexposure composition,” IEEE Trans. Image Process., vol. 21, no. 4, pp. 2318–2323, 2012.
[31]
Z. Li, J. Zheng, Z. Zhu, W. Yao, and S. Wu, “Weighted guided image filtering,” IEEE Trans. Image Process., vol. 24, no. 1, pp. 120–129, 2015.
[32]
M. Song, D. Tao, C. Chen, J. Bu, J. Luo, and C. Zhang, “Probabilistic exposure fusion,” IEEE Trans. Image Process., vol. 21, no. 1, pp. 341–357, 2012.
[33]
R. Shen, I. Cheng, and A. Basu, “QoE-based multi-exposure fusion in hierarchical multivariate Gaussian CRF,” IEEE Trans. Image Process., vol. 22, no. 6, pp. 2469–2478, 2013.
[34]
M. Bertalmio and S. Levine, “Variational approach for the fusion of exposure bracketed pairs,” IEEE Trans. Image Process., vol. 22, no. 2, pp. 712–723, 2013.
[35]
K. Hara, K. Inoue, and K. Urahama, “A differentiable approximation approach to contrast-aware image fusion,” IEEE Signal Process. Lett., vol. 21, no. 6, pp. 742–745, 2014.
[36]
A. A. Goshtasby, “Fusion of multi-exposure images,” Image Vis. Comput., vol. 23, no. 6, pp. 611–618, 2005.
[37]
O. Gallo, N. Gelfand, W.-C. Chen, M. Tico, and K. Pulli, “Artifact-free high dynamic range imaging,” in Proc. IEEE Int. Conf. Comput. Photogr., Apr. 2009, pp. 1–7.
[38]
D. Simakov, Y. Caspi, E. Shechtman, and M. Irani, “Summarizing visual data using bidirectional similarity,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2008, pp. 1–8.
[39]
A. Eden, M. Uyttendaele, and R. Szeliski, “Seamless image stitching of scenes with large motions and exposure differences,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., vol. 2, Jun. 2006, pp. 2498–2505.
[40]
Y. HaCohen, E. Shechtman, D. B. Goldman, and D. Lischinski, “Non-rigid dense correspondence with applications for image enhancement,” ACM Trans. Graph., vol. 30, no. 4, pp. 1–10, 2011.
[41]
C. Barnes, E. Shechtman, D. B. Goldman, and A. Finkelstein, “The generalized PatchMatch correspondence algorithm,” in Proc. Eur. Conf. Comput. Vis., 2010, pp. 29–43.
[42]
S. B. Kang, M. Uyttendaele, S. Winder, and R. Szeliski, “High dynamic range video,” ACM Trans. Graph., vol. 22, no. 3, pp. 319–325, 2003.
[43]
Z. Li, J. Zheng, Z. Zhu, and S. Wu, “Selectively detail-enhanced fusion of differently exposed images with moving objects,” IEEE Trans. Image Process., vol. 23, no. 10, pp. 4372–4382, 2014.
[44]
H. Zimmer, A. Bruhn, and J. Weickert, “Freehand HDR imaging of moving scenes with simultaneous resolution enhancement,” Comput. Graph. Forum, vol. 30, no. 2, pp. 405–414, 2011.
[45]
K. Jacobs, C. Loscos, and G. Ward, “Automatic high-dynamic range image generation for dynamic scenes,” IEEE Comput. Graph. Appl., vol. 28, no. 2, pp. 84–93, 2008.
[46]
F. Pece and J. Kautz, “Bitmap movement detection: HDR for dynamic scenes,” in Proc. IEEE Conf. Vis. Media Prod., Nov. 2010, pp. 1–8.
[47]
E. A. Khan, A. O. Akyüz, and E. Reinhard, “Ghost removal in high dynamic range images,” in Proc. IEEE Int. Conf. Image Process., Oct. 2006, pp. 2005–2008.
[48]
L. G. Brown, “A survey of image registration techniques,” ACM Comput. Surv., vol. 24, no. 4, pp. 325–376, 1992.
[49]
(2016). HDR Photography Gallery Samples. [Online]. Available: http://www.easyhdr.com/examples
[50]
(2016). Dani Lischinski HDR Webpage. [Online]. Available: http://www.cs.huji.ac.il/~/hdr/pages/belgium.html
[51]
(2016). HDR Projects-Software. [Online]. Available: http://www.projects-software.com/HDR
[52]
(2016). Martin Čadík HDR Webpage. [Online]. Available: http://cadik.posvete.cz/tmo
[53]
(2016). HDRsoft Gallery. [Online]. Available: http://www.hdrsoft.com/gallery
[54]
(2016). Chaman Singh Verma HDR Webpage. [Online]. Available: http://pages.cs.wisc.edu/~/CS766_09/HDRI/hdr.html
[55]
(2016). MATLAB HDR Webpage. [Online]. Available: http://www.mathworks.com/help/images/ref/makehdr.html
[56]
(2016). HDR Pangeasoft. [Online]. Available: http://pangeasoft.net/pano/bracketeer/
[57]
(2016). [Online]. Available: http://www.photographerstoolbox.com/products/lrenfuse.php
[58]
R. Shen, I. Cheng, J. Shi, and A. Basu, “Generalized random walks for fusion of multi-exposure images,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3634–3646, 2011.
[59]
N. D. Bruce, “ExpoBlend: Information preserving exposure blending based on normalized log-domain entropy,” Comput. Graph., vol. 39, pp. 12–23, 2014.
[60]
K. Zeng, K. Ma, R. Hassen, and Z. Wang, “Perceptual evaluation of multi-exposure image fusion algorithms,” in Proc. 6th Int. Workshop Quality Multimedia Exper., 2014, pp. 27–28.
[61]
(2015). Commercially-Available HDR Processing Software. [Online]. Available: http://www.hdrsoft.com/
[62]
P. E. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proc. Annu. Conf. Comput. Graph. Interact. Techn., 1997, pp. 369–378.
[63]
D. Lischinski, Z. Farbman, M. Uyttendaele, and R. Szeliski, “Interactive local adjustment of tonal values,” ACM Trans. Graph., vol. 25, no. 3, pp. 646–653, 2006.
[64]
J. Zheng, Z. Li, Z. Zhu, S. Wu, and S. Rahardja, “Hybrid patching for a sequence of differently exposed images with moving objects,” IEEE Trans. Image Process., vol. 22, no. 12, pp. 5190–5201, 2013.
[65]
A. Criminisi, P. Perez, and K. Toyama, “Region filling and object removal by exemplar-based image inpainting,” IEEE Trans. Image Process., vol. 13, no. 9, pp. 1200–1212, 2004.
[66]
S. Wang, K. Ma, H. Yeganeh, Z. Wang, and W. Lin, “A patch-structure representation method for quality assessment of contrast changed images,” IEEE Signal Process. Lett., vol. 22, no. 12, pp. 2387–2390, 2015.
[67]
J. Wang, S. Wang, K. Ma, and Z. Wang, “Perceptual depth quality in distorted stereoscopic images,” IEEE Trans. Image Process., vol. 26, no. 3, pp. 1202–1215, 2017.


Published In

IEEE Transactions on Image Processing, Volume 26, Issue 5, May 2017, 492 pages.

Publisher: IEEE Press

