Abstract
Open-set methods are crucial for rejecting unknown facial expressions in real-world scenarios. Traditional open-set methods rely primarily on a single feature vector to construct the centers of known facial expression categories, which limits their ability to discriminate unknown categories. To address this problem, we propose OpenFE. The method introduces an attention mechanism that focuses on critical facial regions to improve the quality of the feature vectors, and it employs a reconstruction network to extract low-dimensional latent features from the images. By enriching the feature representation of the known categories, OpenFE significantly widens the separation between unknown and known facial expression categories. Extensive experiments demonstrate the strong performance of OpenFE on open-set facial expression classification and confirm its robustness.
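The core mechanism described above, building per-class centers from enriched features (attention-refined vectors concatenated with reconstruction-based latents) and rejecting test samples that lie far from every known center, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names, the plain concatenation, the Euclidean nearest-center rule, and the threshold tau are hypothetical stand-ins for OpenFE's actual OpenMax-based pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def enrich(attn_feats, latent_feats):
    # Concatenate attention-refined classifier features with the
    # autoencoder's low-dimensional latent features (assumed scheme).
    return np.concatenate([attn_feats, latent_feats], axis=1)

def class_centers(feats, labels, num_classes):
    # One center per known expression class: the mean enriched feature.
    return np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])

def predict_open_set(feats, centers, tau):
    # Nearest-center assignment; samples farther than tau from every
    # center are rejected as unknown (-1), in the spirit of OpenMax.
    dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
    return np.where(dists.min(axis=1) <= tau, dists.argmin(axis=1), -1)

# Toy usage with random stand-ins for the two feature streams.
train = enrich(rng.normal(size=(40, 2)), rng.normal(size=(40, 2)))
labels = np.repeat([0, 1], 20)
centers = class_centers(train, labels, num_classes=2)
test = enrich(rng.normal(size=(5, 2)), rng.normal(size=(5, 2)))
print(predict_open_set(test, centers, tau=2.5))  # -1 marks rejected samples
```

In the paper's setting, the two feature streams would come from the trained attention network and the reconstruction network rather than random draws, and the rejection rule would use OpenMax's calibrated activation scores rather than a fixed distance threshold.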
Availability of data and materials
All data generated or analyzed during this study are included in this published article and its supplementary information files.
Funding
Not applicable.
Author information
Contributions
All authors contributed equally to this work. All authors reviewed the manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Shao, J., Song, Z., Wu, J. et al. OpenFE: feature-extended OpenMax for open set facial expression recognition. SIViP 18, 1355–1364 (2024). https://doi.org/10.1007/s11760-023-02843-1