Abstract
Practical applications require a vision-based face detector that works in real time. Robot applications use face detection as the initial stage of a face analysis system and rely on an edge device to process sensor information. The Jetson Nano is a small, portable computer that synchronizes easily with sensors and actuators. Traditional detectors run fast on this device but perform poorly on occluded faces, varied poses, and small faces, whereas CNN-based detectors with deep layers run slowly on GPUs with limited memory. In this work, an efficient real-time face detector using a simple spatial attention module is developed to localize faces rapidly. The proposed architecture consists of a backbone module that extracts features efficiently, a light connection module that reduces the size of the detection layers, and a multi-scale detection module that predicts faces at various scales. As a result, the proposed detector achieves performance competitive with state-of-the-art fast detectors on several benchmark datasets. In addition, this efficient detector runs at 55 frames per second at VGA (video graphics array) resolution on a Jetson Nano.
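To make the mechanism concrete, the following is a minimal PyTorch sketch of a generic spatial attention block in the style of CBAM-type modules: channel-pooled maps are passed through a small convolution to produce a per-pixel mask that re-weights the feature map. This is only an illustration under stated assumptions, not the authors' exact module; the class name, kernel size, and tensor shapes are hypothetical.

# Hypothetical sketch of a spatial attention block (CBAM-style); the paper's exact
# module design is not given in the abstract, so this is illustrative only.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # A single conv over the channel-pooled maps produces a per-pixel attention mask.
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = torch.mean(x, dim=1, keepdim=True)     # average over channels
        max_map, _ = torch.max(x, dim=1, keepdim=True)   # max over channels
        mask = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask                                  # re-weight spatial locations

if __name__ == "__main__":
    feat = torch.randn(1, 64, 40, 40)        # e.g. one backbone feature map
    print(SpatialAttention()(feat).shape)    # torch.Size([1, 64, 40, 40])

Such a block adds only one small convolution per feature map, which is consistent with the paper's goal of keeping the detector fast enough for a low-memory edge GPU.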
Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the government (MSIT) (No. 2020R1A2C200897212).
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Putro, M.D., Nguyen, D.-L., Jo, K.-H. (2021). Efficient Face Detector Using Spatial Attention Module in Real-Time Application on an Edge Device. In: Huang, D.-S., Jo, K.-H., Li, J., Gribova, V., Bevilacqua, V. (eds.) Intelligent Computing Theories and Application. ICIC 2021. Lecture Notes in Computer Science, vol. 12836. Springer, Cham. https://doi.org/10.1007/978-3-030-84522-3_67
DOI: https://doi.org/10.1007/978-3-030-84522-3_67
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-84521-6
Online ISBN: 978-3-030-84522-3