
Research article

Pseudo Light Field Image and 4D Wavelet-Transform-Based Reduced-Reference Light Field Image Quality Assessment

Published: 01 January 2024

Abstract

Reduced-reference light field image (LFI) quality assessment (RR LFIQA) automatically assesses image quality when only partial information about the reference LFI is available. Existing RR LFIQA methods have difficulty extracting effective RR information and perceptual features to represent LFI quality. In this article, we propose an RR LFIQA model based on a pseudo LFI (PLFI) and the four-dimensional (4D) wavelet transform. To extract RR information related to LFI perceptual quality, a PLFI is created from the reference LFI with a view synthesis algorithm and serves as the RR information. Considering the high-dimensional characteristics of the PLFI, a 4D wavelet transform is used to decompose the original and distorted PLFIs. The 4D wavelet transform essentially applies a 1D wavelet transform successively along each dimension of the 4D signal, so that the local 4D structure of the PLFIs can be characterized effectively in the 4D wavelet domain. To further improve performance, a novel spatial-angular weighting strategy is proposed to describe the importance of each location for quality evaluation. Experimental results on four benchmark datasets show that the proposed model outperforms representative 2D IQA and LFIQA models.
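The separable 4D decomposition described in the abstract (a 1D wavelet transform applied along each of the four dimensions in turn) can be sketched in a few lines of NumPy. This is a minimal illustration only: the single-level Haar filter, the array shape, and all names below are assumptions, not the paper's actual wavelet or configuration.

```python
import numpy as np

def haar_1d(x, axis):
    """Single-level orthonormal 1D Haar transform along one axis.
    Returns (lowpass, highpass), each half-length along that axis."""
    lo = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    hi = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)

def haar_4d(x):
    """Separable 4D Haar DWT: apply the 1D transform successively along
    each of the four dimensions, yielding 2^4 = 16 subbands keyed by an
    'a' (approximation) / 'd' (detail) flag per dimension."""
    bands = {'': x}
    for axis in range(4):
        nxt = {}
        for key, band in bands.items():
            lo, hi = haar_1d(band, axis)
            nxt[key + 'a'] = lo
            nxt[key + 'd'] = hi
        bands = nxt
    return bands

# Hypothetical light field with 2 angular + 2 spatial dimensions (u, v, s, t).
lf = np.random.rand(8, 8, 32, 32)
bands = haar_4d(lf)
# 'aaaa' is the all-lowpass band; every dimension is halved to (4, 4, 16, 16),
# and the orthonormal filters preserve total signal energy across subbands.
```

The 'dddd' band isolates structure that is high-frequency in both the angular and spatial dimensions, which is the kind of local 4D structure such a decomposition exposes.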


Cited By

  • "Spatial-angular features based no-reference light field quality assessment," Expert Systems with Applications, vol. 265, Mar. 2025. doi: 10.1016/j.eswa.2024.126061


      Published In

IEEE Transactions on Multimedia, Volume 26, 2024, 11427 pages

      Publisher

      IEEE Press
