Sign language is the native language of deaf and hard-of-hearing people, which they use in their daily lives. Few interpreters are available to facilitate communication between deaf and hearing people, and relying on them is neither practical nor possible in all situations. Advances in information technology have encouraged the development of systems that automatically translate between sign language and spoken language, thus removing barriers to the integration of deaf people into society.
Objective: The main objective of this paper is to present an Arabic Sign Language recognition system that automatically recognizes the 28 letters of the Arabic alphabet using a CNN model that takes RGB images as input.
Methods: In this work, we propose a new framework based on convolutional neural networks (CNNs), trained on a real dataset, that automatically recognizes the letters of Arabic Sign Language. To validate our scheme, we performed a comparative analysis demonstrating the efficacy and robustness of the proposed method compared to conventional approaches.
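The paper does not include source code, so purely as an illustration of the kind of CNN pipeline described (convolution, nonlinearity, pooling, and a 28-way softmax classifier over RGB input), the following NumPy sketch shows a forward pass. All layer sizes, kernel shapes, and weights here are hypothetical and randomly initialized; they are not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    # Valid convolution: x is (H, W, C_in), kernels is (k, k, C_in, C_out).
    k = kernels.shape[0]
    H, W, _ = x.shape
    out = np.zeros((H - k + 1, W - k + 1, kernels.shape[3]))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, kernels, axes=([0, 1, 2], [0, 1, 2]))
    return out

def relu(x):
    return np.maximum(x, 0)

def maxpool(x, s=2):
    # Non-overlapping s x s max pooling; trims edges that do not divide evenly.
    H, W, C = x.shape
    return x[:H // s * s, :W // s * s, :].reshape(H // s, s, W // s, s, C).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical 64x64 RGB input and small random kernels (illustrative only).
image = rng.random((64, 64, 3))
k1 = rng.standard_normal((3, 3, 3, 8)) * 0.1
k2 = rng.standard_normal((3, 3, 8, 16)) * 0.1

h = maxpool(relu(conv2d(image, k1)))   # conv -> ReLU -> pool
h = maxpool(relu(conv2d(h, k2)))       # second conv block
features = h.reshape(-1)               # flatten feature maps

# Fully connected layer mapping features to the 28 Arabic letters.
W_fc = rng.standard_normal((features.size, 28)) * 0.01
probs = softmax(features @ W_fc)       # one probability per letter
```

In a real system these weights would be learned by backpropagation on the sign-image dataset, and a deep-learning framework would replace the hand-written loops; the sketch only makes the data flow of such a classifier concrete.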
Results: We tested the CNN model on 10,810 images, measuring both the accuracy and the error rate of the model. During the training and testing phases, the accuracy increased while the error decreased, and we achieved a recognition accuracy of 92.9%.
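As a quick sanity check on the reported numbers, the accuracy and error rate relate to the test-set counts as follows; only the 92.9% accuracy and the 10,810 test images come from the paper, the derived counts are simple arithmetic.

```python
# Derive counts from the reported figures (accuracy and test-set size from the paper).
total = 10810
accuracy = 0.929

correct = round(total * accuracy)  # number of correctly recognized test images
error_rate = 1 - accuracy          # complementary error rate
print(correct, round(error_rate, 3))
```

This corresponds to roughly 10,042 of the 10,810 test images being classified correctly, with an error rate of about 7.1%.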