Towards robust text-prompted semantic criterion for in-the-wild video quality assessment

H Wu, L Liao, A Wang, C Chen, J Hou, W Sun… - arXiv preprint arXiv:2304.14672, 2023 - arxiv.org
The proliferation of videos captured in the wild has spurred the development of effective Video Quality Assessment (VQA) methodologies. Contemporary supervised, opinion-driven VQA strategies predominantly hinge on training with expensive human quality-score annotations, which limits the scale and distribution of VQA datasets and consequently leads to unsatisfactory generalization in the methods trained on them. On the other hand, although several handcrafted zero-shot quality indices do not require training on human opinions, they cannot account for the semantics of videos, rendering them ineffective at comprehending complex authentic distortions (e.g., white balance, exposure) and at assessing the quality of semantic content within videos. To address these challenges, we introduce the text-prompted Semantic Affinity Quality Index (SAQI) and its localized version (SAQI-Local), which use Contrastive Language-Image Pre-training (CLIP) to measure the affinity between textual prompts and visual features, enabling a comprehensive examination of semantic quality concerns without relying on human quality annotations. By combining SAQI with existing low-level metrics, we propose the unified Blind Video Quality Index (BVQI) and its improved version, BVQI-Local, which demonstrates unprecedented performance, surpassing existing zero-shot indices by at least 24% on all datasets. Moreover, we devise an efficient fine-tuning scheme for BVQI-Local that jointly optimizes the text prompts and the final fusion weights, resulting in state-of-the-art performance and superior generalization compared with prevalent opinion-driven VQA methods. We conduct comprehensive analyses of the distinct quality concerns captured by each index, demonstrating the effectiveness and rationality of our design.
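The core idea described here — scoring quality as the affinity between antonym text prompts and visual features in a CLIP-style joint embedding space, then fusing that semantic index with low-level metrics via learned weights — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt pairing, the softmax temperature, the equal-weight fusion, and the random vectors standing in for real CLIP embeddings are all assumptions made for the sketch.

```python
import numpy as np

def semantic_affinity_score(frame_emb, pos_emb, neg_emb, temperature=100.0):
    """Score one frame by softmax-normalizing its cosine similarity to a
    positive ("high quality") vs. negative ("low quality") prompt embedding,
    in the style of CLIP zero-shot classification. Returns a value in [0, 1]."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = temperature * np.array([cos(frame_emb, pos_emb),
                                     cos(frame_emb, neg_emb)])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return float(probs[0])  # probability mass on the positive prompt

def video_semantic_index(frame_embs, pos_emb, neg_emb):
    """Pool per-frame affinities into a single video-level semantic index."""
    return float(np.mean([semantic_affinity_score(f, pos_emb, neg_emb)
                          for f in frame_embs]))

def fuse_indices(index_scores, weights):
    """Fuse the semantic index with low-level quality indices as a weighted
    sum (illustrative; the actual fusion weights are learned in fine-tuning)."""
    return float(np.dot(index_scores, weights))

# Stand-in embeddings; a real system would use CLIP's image/text encoders.
rng = np.random.default_rng(0)
pos_emb, neg_emb = rng.standard_normal(512), rng.standard_normal(512)
frames = rng.standard_normal((8, 512))  # 8 sampled frames
saqi = video_semantic_index(frames, pos_emb, neg_emb)
# Hypothetical pre-normalized low-level indices (e.g., spatial, temporal).
bvqi = fuse_indices(np.array([saqi, 0.55, 0.40]), np.array([0.5, 0.3, 0.2]))
```

Because the softmax is taken over a positive/negative prompt pair, the per-frame score is naturally bounded in [0, 1] and needs no dataset-specific rescaling, which is what makes the index usable zero-shot.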