Hand Gesture Recognition and Text Conversion Using Convolutional Neural Networks
1Pooja Bhamare, 2Aishwarya Kumbhakarna, and 3Amruta Shinde
1,2,3Students, CSE, Zeal College of Engineering and Research, Pune, India
4Prof. R. R. Jadhav
4Assistant Professor, Zeal College of Engineering and Research, Pune, India.
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Sign language is a vital means of communication for people with speech disabilities, but it poses significant challenges for non-signers owing to a widespread shortage of interpreters and limited awareness. This paper presents the development of a hand-sign recognition and translation system that uses convolutional neural networks (CNNs) to bridge the communication gap between the hearing and deaf communities. Our research follows a three-step methodology: data collection, model training, and extensive evaluation. Using a custom CNN architecture, the system detects hand gestures and converts them into text in real time, providing a complete communication solution. The methodology includes a dataset curated specifically for this purpose, and the training phase uses the MNIST dataset to initially calibrate the model. The system achieves 95.7% accuracy in recognizing the 26 letters of the American Sign Language (ASL) alphabet, demonstrating its potential to enable seamless communication between signers and non-signers. This advance highlights the promise of deep learning methods for improving accessibility and inclusion across deaf and hearing communities.
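To make the approach concrete, the following is a minimal, hypothetical sketch of the kind of custom CNN classifier the abstract describes, not the authors' exact architecture. It assumes MNIST-style 28x28 grayscale gesture images and 26 output classes (one per ASL letter), implemented here in PyTorch for illustration:

```python
# Hypothetical sketch (assumed architecture, not the paper's exact model):
# a small CNN that maps a 28x28 grayscale gesture image to 26 letter classes.
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 14x14 -> 14x14
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),  # one score per ASL letter
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One dummy camera frame (batch=1, 1 channel, 28x28) -> 26 class scores
model = GestureCNN()
logits = model(torch.zeros(1, 1, 28, 28))
```

In a real pipeline, the predicted class index (`logits.argmax(dim=1)`) would be mapped to a letter and appended to the output text stream.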
Key Words: Sign Language, American Sign Language (ASL), Hearing Disability, Convolutional Neural Network (CNN), Computer Vision, Machine Learning, Gesture Recognition, Sign Language Recognition, Hue-Saturation-Value (HSV) Algorithm