Spatial and contextual aware network based on multi-resolution for human pose estimation

  • Original article
  • Published in The Visual Computer

Abstract

To capture the high-resolution spatial information and rich contextual information needed to localize and infer keypoints accurately in human pose estimation, a multi-resolution Spatial and Contextual Aware Network (SCANet) is proposed. The network builds on HRNet and extends it with three modules: a Spatial Self-Attention Module (SSAM), an Information Supplement Module (ISM), and a Detail Enhancement Module (DEM). The SSAM provides global dependencies for local features by establishing spatial correlations between positions in the feature maps. The ISM further enriches spatial information and refines local representations through skip connections and dilated convolutions. The DEM generates high-resolution features and compensates for lost detail to enable more precise predictions. The proposed method outperforms most state-of-the-art methods, and experiments on two keypoint benchmarks, MPII and COCO, validate the effectiveness of the model.
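As a rough illustration of the kind of spatial self-attention the SSAM relies on, the sketch below implements a generic position-attention block in PyTorch. The channel-reduction factor `reduction`, the learnable residual weight `gamma`, and the layer placement are assumptions made for illustration; they are not taken from the paper's implementation.

```python
import torch
import torch.nn as nn


class SpatialSelfAttention(nn.Module):
    """Relates every spatial position to every other position so that each
    local feature is re-weighted by its affinity with the rest of the map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable residual weight, initialised to zero so the block starts
        # as an identity mapping (an assumption, not taken from the paper).
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.key(x).flatten(2)                     # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)            # B x HW x HW spatial affinity
        v = self.value(x).flatten(2)                   # B x C x HW
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * out + x                    # residual connection


# Illustrative usage on an HRNet-style high-resolution branch (shapes assumed).
feat = torch.randn(1, 32, 64, 48)                      # B x C x H x W
print(SpatialSelfAttention(32)(feat).shape)            # torch.Size([1, 32, 64, 48])
```

In the full SCANet, such a block would sit inside the multi-resolution branches of HRNet and be paired with the ISM (skip connections plus dilated convolutions) and the DEM for detail recovery, neither of which is sketched here.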





Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61573168 and 62173160).

Author information

Corresponding author

Correspondence to Ying Chen.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Zhang, Q., Chen, Y. Spatial and contextual aware network based on multi-resolution for human pose estimation. Vis Comput 39, 651–662 (2023). https://doi.org/10.1007/s00371-021-02364-3

