Abstract
Facial expression recognition plays an important role in human-machine interaction and is therefore an important task in cognitive science and artificial intelligence. In computer vision, facial expression recognition aims to identify facial expressions from images or videos, yet little work targets real-world applications. In this work, we propose a hardware-friendly quantized separable residual network and develop a real-world facial expression recognition system on a field-programmable gate array (FPGA). The proposed network is first trained on devices with graphics processing units, and then quantized to speed up inference. Finally, the quantized model is deployed on a high-performance edge device, the Ultra96-V2 FPGA board. The complete system captures images, detects faces, and recognizes expressions. We conduct exhaustive experiments comparing the performance with various deep learning models and show superior results. The overall system also demonstrates satisfactory performance on the FPGA and can be considered an important milestone for real-world facial expression recognition applications.
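The abstract describes a network built from separable convolutions with residual connections; below is a minimal sketch of one such block in PyTorch. The channel width, kernel size, activation, and block layout are assumptions for illustration only, since the exact architecture and quantization settings are not specified here.

```python
# Sketch of a depthwise-separable residual block (illustrative assumptions,
# not the paper's exact architecture).
import torch
import torch.nn as nn


class SeparableResidualBlock(nn.Module):
    """Residual block built from a depthwise-separable convolution."""

    def __init__(self, channels: int):
        super().__init__()
        # Depthwise conv: one 3x3 filter per input channel (groups=channels).
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels, bias=False)
        # Pointwise conv: 1x1 convolution that mixes information across channels.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x
        out = self.relu(self.bn(self.pointwise(self.depthwise(x))))
        return out + identity  # residual (skip) connection


if __name__ == "__main__":
    block = SeparableResidualBlock(channels=32)
    faces = torch.randn(1, 32, 48, 48)  # hypothetical 48x48 feature maps from a face crop
    print(block(faces).shape)           # torch.Size([1, 32, 48, 48])
```

In a deployment flow like the one the abstract outlines, a float model of this kind would be trained on GPU and then converted to fixed-point (e.g., 8-bit) weights and activations before being compiled for the FPGA accelerator; the specific quantization toolchain is not stated in this excerpt.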
Acknowledgement
This work is supported by the Hong Kong Innovation and Technology Commission and City University of Hong Kong (Project 7005230).
Copyright information
© 2021 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Fan, X., Jiang, M., Zhang, H., Li, Y., Yan, H. (2021). Quantized Separable Residual Network for Facial Expression Recognition on FPGA. In: Sun, F., Liu, H., Fang, B. (eds) Cognitive Systems and Signal Processing. ICCSIP 2020. Communications in Computer and Information Science, vol 1397. Springer, Singapore. https://doi.org/10.1007/978-981-16-2336-3_1
Publisher Name: Springer, Singapore
Print ISBN: 978-981-16-2335-6
Online ISBN: 978-981-16-2336-3