Abstract
Deaf and speech-impaired people cannot communicate as easily as hearing people and therefore rely on sign language, a visual and gestural form of communication. Unfortunately, sign language is neither widespread nor easy to learn, so deaf and mute people face many challenges in their daily communication. Hence, we propose an intelligent system that employs a vision-based approach to Arabic sign language recognition (ArSLR). The system aims to recognize Arabic words expressed as dynamic sign language gestures and translate them into textual form. While maintaining natural and flexible translation, the system imposes no special hardware requirements, colored gloves, or background restrictions. A custom dataset is used in the development of the system. Extensive experimental work was conducted to arrive at effective and generalizable algorithms for image processing, feature extraction, feature selection, and classification. Satisfactory results are achieved using linear discriminant analysis (LDA) for feature selection and dimensionality reduction: 100% accuracy is reached with support vector machine (SVM) and logistic regression classifiers, 99.9% with a multilayer perceptron (MLP) feed-forward artificial neural network, and 99.6% with a decision tree classifier. The impact of using principal component analysis (PCA) and LDA for feature selection instead of raw features is also presented. Finally, the system is validated in real time on unseen data. The validation and testing results show that word-level ArSLR can be achieved robustly and accurately.
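To make the recognition pipeline summarized above concrete, the following scikit-learn sketch shows the LDA-plus-SVM stage: per-sample feature vectors are reduced with LDA and then classified. The feature dimensionality, number of word classes, synthetic data, and RBF kernel are assumptions made for illustration only, not the paper's exact configuration.

# Minimal sketch of the word-level pipeline: feature vectors -> LDA reduction -> SVM classifier.
# Shapes, class count, and kernel are assumptions for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 500 sign samples, 120-dimensional feature vectors, 10 word classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 120))       # extracted features (assumed shape)
y = rng.integers(0, 10, size=500)     # word-class labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# LDA projects the features onto at most (n_classes - 1) discriminant axes,
# serving as the feature-selection / dimensionality-reduction step before the SVM.
model = make_pipeline(
    StandardScaler(),
    LinearDiscriminantAnalysis(n_components=9),
    SVC(kernel="rbf"),                # kernel choice is an assumption
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

The same pipeline structure applies if the SVM is swapped for the logistic regression, MLP, or decision tree classifiers compared in the paper.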