Multilingual Translation of Hand Gestures using Machine Learning
Vaishnavi Karanjkar1, Rutuja Bagul2, Rushali Shirke3, Raj Ranjan Singh4, Prof. Pallavi Bhaskare5
1,2,3,4Department of Computer Engineering, STE’S Smt. Kashibai Navale College of Engineering
5Professor, Department of Computer Engineering, STE’S Smt. Kashibai Navale College of Engineering
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Sign language is mainly used by deaf (hard of hearing) and mute people to exchange information within their own community and with others. It is a language in which people communicate through hand gestures because they cannot speak or hear. The goal of sign language recognition (SLR) is to identify acquired hand motions and translate the corresponding hand gestures into text. Sign language gestures can be divided into static and dynamic hand gestures. Both types of recognition are valuable to the community, although static hand gesture recognition is simpler than dynamic hand gesture recognition. We use deep-learning-based computer vision to recognize hand gestures: deep neural network architectures are trained on hand gesture images, learning to detect the gestures over successive epochs. Once the model recognizes a gesture, an English text file is generated that can subsequently be translated into other languages; the user can choose from a variety of target languages for this text. The application runs entirely offline and requires no internet connection. With this model's improved efficiency, communication will be easier for deaf (hard of hearing) and speech-impaired people. In this paper, we discuss the use of deep learning for sign language recognition.
Key Words: sign language, deep neural network, computer vision, hand gesture.