AI-Powered Real-Time Sign Language Detection and Translation System for Inclusive Communication Between Deaf and Hearing Communities Worldwide
1st M. Vasuki1, 2nd Dr. T. Amalraj Victorie2, 3rd R. Rasiga3
1Associate Professor, Department of Computer Applications,
Sri Manakula Vinayagar Engineering College (Autonomous), Puducherry 605008, India dheshna@gmail.com
2Associate Professor, Department of Computer Applications,
Sri Manakula Vinayagar Engineering College (Autonomous), Puducherry 605008, India amalrajvictoire@gmail.com
3Postgraduate Student, Department of Computer Applications,
Sri Manakula Vinayagar Engineering College (Autonomous), Puducherry 605008, India
rasigabca123@gmail.com
Abstract - Sign language is a vital communication tool for individuals who are deaf or hard of hearing, yet it remains largely inaccessible to the wider hearing population. This project addresses that barrier by developing a sign language recognition system that converts hand gestures into text, followed by text-to-speech (TTS) conversion. The system uses Convolutional Neural Networks (CNNs) to recognize static hand gestures and translate them into corresponding textual representations. The text is then processed by a TTS engine, which generates spoken language, making the signs comprehensible to individuals who are not familiar with sign language.
The approach leverages deep learning techniques to improve gesture recognition accuracy, particularly in diverse real-world scenarios. By training the CNN on a comprehensive dataset of sign language gestures, the model is able to learn important features such as hand shape, orientation, and motion, which are critical for identifying specific signs.
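To illustrate the feature extraction underlying the CNN described above, the sketch below applies a single hand-coded vertical-edge kernel to a toy single-channel image with numpy. This is a minimal, framework-free illustration of the convolution operation that lets a trained CNN respond to hand shape and orientation; the kernel, image, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation) of a single-channel image.
    This is the core operation a CNN layer applies with learned kernels."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply the kernel against the local patch and sum the products
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 "hand silhouette": a single bright vertical band (hypothetical input)
img = np.zeros((5, 5))
img[:, 2] = 1.0

# Hand-coded vertical-edge kernel; a real CNN learns kernels like this from data
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

fmap = conv2d(img, kernel)
print(fmap)  # strong positive response on the left edge, negative on the right
```

In a full pipeline, many such learned kernels are stacked into layers, the resulting feature maps feed a classifier that emits a text label for the sign, and that label is handed to a TTS engine for speech output.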
Keywords: Sign Language Recognition; Gesture to Text; Text to Speech (TTS); Convolutional Neural Networks (CNN); Deep Learning; Hand Gesture Recognition; Assistive Technology; Real-Time Translation; Speech Synthesis; Accessibility; Inclusivity; Communication Aid; Deaf and Hard of Hearing; Human-Computer Interaction; Static Hand Gestures