Abstract
Improving the performance of patent image retrieval systems is of great significance for intellectual property protection. Design patent images exist in very large volumes, so completing retrieval quickly is one of the main research issues for design patent retrieval systems. Classification is an effective way to speed up retrieval, and several image classification methods have been proposed for this purpose. However, conventional image classification cannot achieve high-level semantic classification, so the resulting speed-up is very limited. To achieve classification at the level of high-level semantics, in this paper we propose a method that uses an image caption model to automatically generate descriptions of design patent images. Experiments show that our method achieves better classification accuracy and better semantic classification performance than previous image classification methods.
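The abstract gives no implementation details; as a purely hypothetical illustration of the kind of encoder-decoder captioning pipeline it describes (a CNN encoder feeding an LSTM decoder, in the common Show-and-Tell style), the following PyTorch sketch shows how a design patent image could be mapped to a generated description whose tokens are then usable for semantic classification. All module names, sizes, and the choice of Inception v3 as the backbone are assumptions, not the authors' code.

```python
# Minimal sketch (assumed architecture, not the paper's implementation):
# a CNN encoder embeds a patent drawing, an LSTM decoder emits a description.
import torch
import torch.nn as nn
from torchvision import models


class CaptionEncoder(nn.Module):
    """CNN encoder; Inception v3 is assumed here as the backbone."""

    def __init__(self, embed_size):
        super().__init__()
        backbone = models.inception_v3(weights=None)
        backbone.fc = nn.Identity()              # keep the 2048-d pooled feature
        self.backbone = backbone
        self.fc = nn.Linear(2048, embed_size)

    def forward(self, images):
        self.backbone.eval()                     # disable the auxiliary head path
        with torch.no_grad():
            feats = self.backbone(images)        # (B, 2048)
        return self.fc(feats)                    # (B, embed_size)


class CaptionDecoder(nn.Module):
    """LSTM decoder that emits a description token by token."""

    def __init__(self, embed_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, img_embed, captions):
        # Prepend the image embedding as the first step of the input sequence.
        inputs = torch.cat([img_embed.unsqueeze(1), self.embed(captions)], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                  # (B, T+1, vocab_size)


# Dummy forward pass with made-up sizes, only to show the data flow.
encoder = CaptionEncoder(embed_size=256)
decoder = CaptionDecoder(embed_size=256, hidden_size=512, vocab_size=5000)
image = torch.randn(1, 3, 299, 299)              # Inception v3 expects 299x299 input
tokens = torch.randint(0, 5000, (1, 12))         # a dummy tokenised description
logits = decoder(encoder(image), tokens)         # (1, 13, 5000)
```

In such a setup, the generated description (or its keywords) would be matched against the semantic class labels of the retrieval system; the sketch above only illustrates the captioning stage, under the stated assumptions.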
Acknowledgements
This work was supported by the projects Research on Optimization Theory and Key Technology of Intelligent Search for Design Patent (1741333) and Design Patent Image Retrieval Method and Application (572020144), and by the Guangdong Provincial Key Laboratory Project (2018B030322016).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, H., Dai, Q., Li, Y., Zhang, C., Yi, S., Yuan, T. (2020). The Design Patent Images Classification Based on Image Caption Model. In: Ren, J., et al. (eds.) Advances in Brain Inspired Cognitive Systems. BICS 2019. Lecture Notes in Computer Science, vol. 11691. Springer, Cham. https://doi.org/10.1007/978-3-030-39431-8_34
DOI: https://doi.org/10.1007/978-3-030-39431-8_34
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-39430-1
Online ISBN: 978-3-030-39431-8