Multi Sign Language Recognition
Dr. Kamini Nalavade1, Dr. Pallavi Baviskar2, Mayank Katiyara3, Hitesh Paighan4, Devendra Chaudhari5, Sanketgir Gosavi6
1 HOD, Department of Computer Engineering, Sandip Institute of Engineering and Management, Nashik, India.
2 Assistant Professor, Department of Computer Engineering, Sandip Institute of Engineering and Management, Nashik, India.
3,4,5,6 Research Scholar, Department of Computer Engineering, Sandip Institute of Engineering and Management, Nashik, India.
---------------------------------------------------------------------***--------------------------------------------------------------------
Abstract:
The deaf and hard-of-hearing community often faces communication barriers because most hearing people are not versed in sign language.[1] This project addresses that problem by developing a comprehensive machine learning (ML) model that interprets and translates the hand movements of American Sign Language (ASL) and Indian Sign Language (ISL) into the corresponding spoken or written language in real time.[2] Using camera input, the system is designed to accurately recognize and translate gestures from both ASL and ISL. The solution integrates Natural Language Processing (NLP) for context-aware translation, letting the system use surrounding context to produce more meaningful and accurate output. The project will also explore advanced gesture recognition techniques, including deep learning models and recurrent neural networks, to improve the system's accuracy and adaptability.[3] Multimodal data input, such as skeletal tracking and hand shape analysis, will further strengthen gesture recognition. A user-friendly interface will let users customize settings and preferences, including language and voice options.[6]
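The skeletal-tracking idea mentioned above can be sketched minimally as follows. The landmark layout (21 hand joints with x, y, z coordinates, as produced by trackers such as MediaPipe Hands), the linear classifier, and the random weights are all illustrative assumptions standing in for a trained model, not the system described in this paper:

```python
import numpy as np

NUM_LANDMARKS = 21   # joints per hand, MediaPipe-style layout (assumption)
NUM_CLASSES = 26     # e.g. ASL fingerspelling letters A-Z (assumption)

def normalize_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Make landmarks translation- and scale-invariant:
    subtract the wrist joint, then divide by the hand's overall span."""
    centered = landmarks - landmarks[0]           # wrist is joint index 0
    span = np.linalg.norm(centered, axis=1).max()
    return centered / (span + 1e-8)

def classify(landmarks: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> int:
    """Linear classifier over the flattened, normalized skeleton."""
    features = normalize_landmarks(landmarks).ravel()  # shape (63,)
    logits = features @ weights + bias                 # shape (26,)
    return int(np.argmax(logits))

# Placeholder weights and a stand-in camera frame; a real system would
# learn these from labeled ASL/ISL gesture data.
rng = np.random.default_rng(0)
weights = rng.normal(size=(NUM_LANDMARKS * 3, NUM_CLASSES))
bias = np.zeros(NUM_CLASSES)
frame = rng.uniform(size=(NUM_LANDMARKS, 3))
label = classify(frame, weights, bias)
print(label)
```

Because the landmarks are centered on the wrist and scaled by hand span, the prediction is unchanged when the hand moves or approaches the camera, which is the main appeal of skeletal features over raw pixels.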
Moreover, the project allows for integration with other assistive technologies, such as speech-to-text devices and augmented reality (AR) equipment, to provide a comprehensive communication assistant.
Ultimately, this project aims to empower the deaf and hard-of-hearing community by enabling effective communication with the hearing population and by fostering social inclusion and access. The solution will be useful in schools, workplaces, and public services, giving everyone a practical tool for increasing accessibility and understanding.