research-article

Detail-Enhanced Multi-Scale Exposure Fusion in YUV Color Space

Published: 01 August 2020

Abstract

It is recognized that existing multi-scale exposure fusion algorithms can be improved using edge-preserving smoothing techniques. However, the complexity of edge-preserving smoothing-based multi-scale exposure fusion is an issue for mobile devices. In this paper, a simpler multi-scale exposure fusion algorithm is designed in the YUV color space. The proposed algorithm preserves details in the brightest and darkest regions of a high dynamic range (HDR) scene as well as the edge-preserving smoothing-based multi-scale exposure fusion algorithm does, while preventing color distortion from appearing in the fused image. Its complexity is about half that of the edge-preserving smoothing-based algorithm, making it better suited to smartphones. In addition, a simple detail-enhancement component is proposed to enhance the fine details of fused images. Experimental results show that the proposed component produces images with visibly enhanced fine details and a higher MEF-SSIM value, which existing detail-enhancement components cannot achieve. The component is therefore also attractive for PC-based applications.
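The paper's exact algorithm is not given on this page, so as context only, the following is a minimal NumPy sketch of the classic Laplacian-pyramid ("Mertens-style") exposure fusion that multi-scale methods of this family build on, applied to the luminance (Y) channel. The BT.601 luma weights are standard; everything else (`fuse_luminance`, the box down/upsampling, the well-exposedness weight) is an illustrative assumption, not the proposed method.

```python
import numpy as np

def rgb_to_luma(rgb):
    # BT.601 luma: Y = 0.299 R + 0.587 G + 0.114 B (channels last, in [0, 1]).
    return rgb @ np.array([0.299, 0.587, 0.114])

def downsample(img):
    # 2x box downsampling (a stand-in for a Gaussian pyramid step).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample(img, shape):
    # Nearest-neighbour upsampling back to `shape`.
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(downsample(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - upsample(gp[i + 1], gp[i].shape) for i in range(levels - 1)]
    lp.append(gp[-1])  # coarsest level is kept as-is
    return lp

def fuse_luminance(ys, levels=3, sigma=0.2):
    # ys: aligned luminance images in [0, 1]; dimensions should be
    # divisible by 2**(levels - 1) for this simplified pyramid.
    # Well-exposedness weight: favour pixels near mid-grey.
    ws = [np.exp(-((y - 0.5) ** 2) / (2 * sigma ** 2)) for y in ys]
    total = np.sum(ws, axis=0) + 1e-12
    ws = [w / total for w in ws]  # weights sum to 1 per pixel
    wp = [gaussian_pyramid(w, levels) for w in ws]
    lp = [laplacian_pyramid(y, levels) for y in ys]
    # Blend each Laplacian band with the matching weight level, then collapse.
    blended = [sum(wp[k][l] * lp[k][l] for k in range(len(ys)))
               for l in range(levels)]
    out = blended[-1]
    for l in range(levels - 2, -1, -1):
        out = upsample(out, blended[l].shape) + blended[l]
    return np.clip(out, 0.0, 1.0)
```

Blending Laplacian bands with smoothed weight pyramids is what avoids the seams that single-scale weighted averaging produces; the paper's contribution, per the abstract, is obtaining comparable detail preservation at roughly half the cost of edge-preserving smoothing-based variants by working in YUV.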




Published In

IEEE Transactions on Circuits and Systems for Video Technology, Volume 30, Issue 8
Aug. 2020, 499 pages

      Publisher

      IEEE Press


      Qualifiers

      • Research-article

      Cited By

• (2024) "OFPF-MEF: An Optical Flow Guided Dynamic Multi-Exposure Image Fusion Network With Progressive Frequencies Learning," IEEE Transactions on Multimedia, vol. 26, pp. 8581–8595. doi: 10.1109/TMM.2024.3379883
• (2024) "Searching a Compact Architecture for Robust Multi-Exposure Image Fusion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 34, no. 7, pp. 6224–6237. doi: 10.1109/TCSVT.2024.3351933
• (2024) "CurveMEF," Neurocomputing, vol. 596. doi: 10.1016/j.neucom.2024.127915
• (2024) "RDGMEF: A multi-exposure image fusion framework based on Retinex decomposition and guided filter," Neural Computing and Applications, vol. 36, no. 20, pp. 12083–12102. doi: 10.1007/s00521-024-09779-8
• (2023) "DeepM²CDL: Deep Multi-Scale Multi-Modal Convolutional Dictionary Learning Network," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 5, pp. 2770–2787. doi: 10.1109/TPAMI.2023.3334624
• (2023) "Deep Multi-Exposure Image Fusion for Dynamic Scenes," IEEE Transactions on Image Processing, vol. 32, pp. 5310–5325. doi: 10.1109/TIP.2023.3315123
• (2023) "Multi-Exposure Image Fusion via Deformable Self-Attention," IEEE Transactions on Image Processing, vol. 32, pp. 1529–1540. doi: 10.1109/TIP.2023.3242824
• (2022) "DSRGAN: Detail Prior-Assisted Perceptual Single Image Super-Resolution via Generative Adversarial Networks," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 11, pp. 7418–7431. doi: 10.1109/TCSVT.2022.3188433
• (2022) "Attention-Guided Global-Local Adversarial Learning for Detail-Preserving Multi-Exposure Image Fusion," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 8, pp. 5026–5040. doi: 10.1109/TCSVT.2022.3144455
• (2022) "Spatial Temporal Video Enhancement Using Alternating Exposures," IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 8, pp. 4912–4926. doi: 10.1109/TCSVT.2021.3135337
