Abstract
We study no-reference image and video quality assessment methods, which are of great importance for computational video editing. The focus of our work is image quality assessment (IQA) suitable for fast and robust frame-by-frame, multipurpose video quality assessment (VQA) of short videos.
We present a modular framework for assessing the quality of images and videos. The scoring process consists of several parallel metric-collection steps followed by a final score-aggregation step. Most of the individual scoring models are based on deep convolutional neural networks (CNNs), and the framework can be flexibly extended or reduced by adding or removing steps. Using the Deep CNN-Based Blind Image Quality Predictor (DIQA) as the IQA baseline, we propose improvements based on two patching strategies, uniform patching and object-based patching, and add an intelligent pre-training step with distortion classification.
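To make the parallel metric collection and final aggregation concrete, a minimal Python sketch is given below; the names score_video and FrameMetric, the per-metric weights, and the simple weighted mean are illustrative assumptions rather than the framework's actual interface.

from typing import Callable, Dict, List

import numpy as np

# A frame metric maps a single frame (an H x W x 3 array) to a scalar quality score.
FrameMetric = Callable[[np.ndarray], float]

def score_video(frames: List[np.ndarray],
                metrics: Dict[str, FrameMetric],
                weights: Dict[str, float]) -> float:
    # Hypothetical aggregation: run every metric on every frame,
    # average per metric, then combine the metric means with a weighted mean.
    per_metric = {
        name: float(np.mean([metric(frame) for frame in frames]))
        for name, metric in metrics.items()
    }
    total_weight = sum(weights.get(name, 1.0) for name in per_metric)
    return sum(score * weights.get(name, 1.0)
               for name, score in per_metric.items()) / total_weight

Under this reading, extending or reducing the framework amounts to adding or removing an entry in the metrics dictionary.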
We evaluate our model on three benchmark IQA image datasets (LIVE, TID2008, and TID2013) and on manually collected short YouTube videos. We also consider metrics of interest for automated video editing: video scoring based on the scale of a scene, the presence of faces in the frame, and the compliance of shot transitions with shooting rules. The results of this work are applicable to the development of intelligent video and image processing systems.
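As an illustration of the two patching strategies named in the abstract, the sketch below contrasts uniform grid patching with object-based patching around detector boxes; the patch size, the box format, and the helper names are assumptions made for the example, not the paper's exact settings.

from typing import List, Tuple

import numpy as np

Box = Tuple[int, int, int, int]  # assumed (x0, y0, x1, y1) box format in pixel coordinates

def uniform_patches(frame: np.ndarray, size: int = 112) -> List[np.ndarray]:
    # Uniform strategy: cut the frame into a regular grid of size x size patches.
    h, w = frame.shape[:2]
    return [frame[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def object_patches(frame: np.ndarray, boxes: List[Box],
                   size: int = 112) -> List[np.ndarray]:
    # Object-based strategy: crop one patch centred on each detected object box,
    # clamping the crop so it stays inside the frame.
    h, w = frame.shape[:2]
    patches = []
    for x0, y0, x1, y1 in boxes:
        cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
        x = int(np.clip(cx - size // 2, 0, max(w - size, 0)))
        y = int(np.clip(cy - size // 2, 0, max(h - size, 0)))
        patches.append(frame[y:y + size, x:x + size])
    return patches

The boxes would come from an off-the-shelf object detector; in a DIQA-style setup, the quality model would then score each patch and pool the patch scores into a frame score.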
Ilya Makarov: This research is partially based on work supported by Samsung Research, Samsung Electronics.
References
Amel, A.M., Abdessalem, B.A., Abdellatif, M.: Video shot boundary detection using motion activity descriptor. arXiv preprint arXiv:1004.4605 (April 2010). http://arxiv.org/abs/1004.4605
Bosse, S., Maniry, D., Wiegand, T., Samek, W.: A deep neural network for image quality assessment. In: 2016 IEEE International Conference on Image Processing (ICIP). pp. 3773–3777. IEEE (September 2016). https://doi.org/10.1109/ICIP.2016.7533065, http://ieeexplore.ieee.org/document/7533065/
Brown, B.: Cinematography: Theory and Practice: Image Making for Cinematographers and Directors. Taylor & Francis (2016). https://books.google.ru/books?id=GiQlDwAAQBAJ
Cherif, I., Solachidis, V., Pitas, I.: Shot type identification of movie content. In: 2007 9th International Symposium on Signal Processing and Its Applications, pp. 1–4. IEEE (February 2007). https://doi.org/10.1109/ISSPA.2007.4555491, http://ieeexplore.ieee.org/document/4555491/
Ekin, A., Tekalp, A.: Robust dominant color region detection with applications to sports video analysis. In: Proceedings of the 2003 International Conference on Image Processing (Cat. No. 03CH37429) (2003)
Ferman, A., Tekalp, A.: Two-stage hierarchical video summary extraction to match low-level user browsing preferences. IEEE Trans. Multimedia 5(2), 244–256 (2003). https://doi.org/10.1109/TMM.2003.811617, http://ieeexplore.ieee.org/document/1208494/
Hassanien, A., Elgharib, M., Selim, A., Bae, S.H., Hefeeda, M., Matusik, W.: Large-scale, fast and accurate shot boundary detection through spatio-temporal convolutional neural networks. arXiv preprint arXiv:1705.03281 (May 2017). http://arxiv.org/abs/1705.03281
Kang, L., Ye, P., Li, Y., Doermann, D.: Convolutional neural networks for no-reference image quality assessment. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2014)
Kim, J., Nguyen, A., Lee, S.: Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst. 30(1), 11–24 (2019). https://doi.org/10.1109/TNNLS.2018.2829819
Li, Y., et al.: No-reference image quality assessment with shearlet transform and deep neural networks. Neurocomputing 154, 94–109 (2015). https://doi.org/10.1016/j.neucom.2014.12.015, https://linkinghub.elsevier.com/retrieve/pii/S0925231214016798
Liu, L., et al.: Deep learning for generic object detection: a survey. Int. J. Comput. Vis. 128(2), 261–318 (2020). https://doi.org/10.1007/s11263-019-01247-4, http://link.springer.com/10.1007/s11263-019-01247-4
Mittal, A., Moorthy, A.K., Bovik, A.C.: No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 21(12), 4695–4708 (2012). https://doi.org/10.1109/TIP.2012.2214050
Mittal, A., Soundararajan, R., Bovik, A.C.: Making a “Completely Blind” image quality analyzer. IEEE Signal Process. Lett. 20(3), 209–212 (2013). https://doi.org/10.1109/LSP.2012.2227726
Moorthy, A.K., Bovik, A.C.: Blind image quality assessment: from natural scene statistics to perceptual quality. IEEE Trans. Image Process. 20(12), 3350–3364 (2011)
Pass, G., Zabih, R., Miller, J.: Comparing images using color coherence vectors. In: Proceedings of the Fourth ACM International Conference on Multimedia (MULTIMEDIA 1996), pp. 65–73. ACM Press, New York (1996). https://doi.org/10.1145/244130.244148, http://portal.acm.org/citation.cfm?doid=244130.244148
Ponomarenko, N., et al.: Image database TID2013: peculiarities, results and perspectives. Signal Process. Image Commun. 30, 57–77 (2015). https://doi.org/10.1016/j.image.2014.10.009, https://linkinghub.elsevier.com/retrieve/pii/S0923596514001490
Ponomarenko, N., Lukin, V., Zelensky, A., Egiazarian, K., Carli, M., Battisti, F.: TID2008 - a database for evaluation of full-reference visual quality assessment metrics. Adv. Mod. Radioelectronics 10(4), 30–45 (2009)
Somani, R.: AI For Filmmaking: Recognising Shot Types with ResNets (2019). https://rsomani95.github.io/ai-film-1.html
Saad, M.A., Bovik, A.C., Charrier, C.: Blind image quality assessment: a natural scene statistics approach in the DCT domain. IEEE Trans. Image Process. 21(8), 3339–3352 (2012). https://doi.org/10.1109/TIP.2012.2191563
Sheikh, H.R., Wang, Z., Cormack, L., Bovik, A.C.: LIVE image quality assessment database release 2 (2005). http://live.ece.texas.edu/research/quality
Souček, T., Moravec, J., Lokoč, J.: TransNet: a deep network for fast detection of common shot transitions. arXiv preprint arXiv:1906.03363 (June 2019). http://arxiv.org/abs/1906.03363
Tabii, Y., Djibril, M.O., Hadi, Y., Thami, R.O.H.: A new method for video soccer shot classification. In: VISAPP (1), pp. 221–224 (2007)
Zhang, H., Kankanhalli, A., Smoliar, S.W.: Automatic partitioning of full-motion video. Multimedia Syst. 1(1), 10–28 (1993). https://doi.org/10.1007/BF01210504, http://link.springer.com/10.1007/BF01210504
Zhang, P., Zhou, W., Wu, L., Li, H.: SOM: semantic obviousness metric for image quality assessment. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2015)
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Lomotin, K., Makarov, I. (2021). Automated Image and Video Quality Assessment for Computational Video Editing. In: van der Aalst, W.M.P., et al. Analysis of Images, Social Networks and Texts. AIST 2020. Lecture Notes in Computer Science, vol. 12602. Springer, Cham. https://doi.org/10.1007/978-3-030-72610-2_18
DOI: https://doi.org/10.1007/978-3-030-72610-2_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-72609-6
Online ISBN: 978-3-030-72610-2